Dataset columns:
repo_name: string, length 6 to 77
path: string, length 8 to 215
license: categorical string, 15 distinct values
content: string, length 335 to 154k
squishbug/DataScienceProgramming
03-NumPy-and-Linear-Algebra/Introduction_class.ipynb
cc0-1.0
%matplotlib inline import math import numpy as np import matplotlib.pyplot as plt ##import seaborn as sbn ##from scipy import * """ Explanation: Introduction to NumPy Topics Basic Synatx creating vectors matrices special: ones, zeros, identity eye add, product, inverse Mechanics: indexing, slicing, concatenating, reshape, zip Numpy load, save data files Random numbers $\rightarrow$ distributions Similarity: Euclidian vs Cosine Example Nearest Neighbor search Example Linear Regression References Quick Start Tutorial https://docs.scipy.org/doc/numpy-dev/user/quickstart.html NumPy Basic https://docs.scipy.org/doc/numpy-dev/user/basics.html NumPy Refernce https://docs.scipy.org/doc/numpy-dev/reference/index.html Basics this section uses content created by Rob Hicks http://rlhick.people.wm.edu/stories/linear-algebra-python-basics.html Loading libraries The python universe has a huge number of libraries that extend the capabilities of python. Nearly all of these are open source, unlike packages like stata or matlab where some key libraries are proprietary (and can cost lots of money). In lots of my code, you will see this at the top: End of explanation """ %ls """ Explanation: This code sets up Ipython Notebook environments (lines beginning with %), and loads several libraries and functions. The core scientific stack in python consists of a number of free libraries. The ones I have loaded above include: sympy: provides for symbolic computation (solving algebra problems) numpy: provides for linear algebra computations matplotlib.pyplot: provides for the ability to graph functions and draw figures scipy: scientific python provides a plethora of capabilities seaborn: makes matplotlib figures even pretties (another library like this is called bokeh). This is entirely optional and is purely for eye candy. End of explanation """ y = 1/2 print y x = .5 print x """ Explanation: Creating arrays, scalars, and matrices in Python Scalars can be created easily like this: End of explanation """ x_list = [1,2,3] print x_list print type(x_list) x_vector = np.array([1,2,3]) print x_vector print type(x_vector) x_list*2 x_vector*2 """ Explanation: Vectors and Lists The numpy library (we will reference it by np) is the workhorse library for linear algebra in python. To creat a vector simply surround a python list ($[1,2,3]$) with the np.array function: End of explanation """ c_list = [1,2] print "The list:",c_list print "Has length:", len(c_list) c_vector = np.array(c_list) print "\nThe vector:", c_vector print "Has shape:",c_vector.shape len(c_vector) z = [5,6] print "This is a list, not an array: z = ",z print '\ntype = '+str(type(z)) A = np.array([[0, 1, 2], [5, 6, 7]]) print 'A='+str(A)+'\n' print 'A.shape='+str(A.shape)+'\n' print 'type = '+str(type(A)) v = np.array([1,2,3,4,5,6,7,8,9,10,11,12]) print v.shape print len(v) v = v.reshape([3,2,2]) print v print v.shape print len(v) for element in v: print element A = np.ones([4,4], dtype='float') print A B = np.zeros([3,2]) print(B) C = np.eye(3) print(C) """ Explanation: We could have done this by defining a python list and converting it to an array: End of explanation """ A = np.array(range(6)).reshape([2,3]) print(A) result = A + 3 #or result = 3 + A print result """ Explanation: Matrix Addition and Subtraction Adding or subtracting a scalar value to a matrix To learn the basics, consider a small matrix of dimension $2 \times 2$, where $2 \times 2$ denotes the number of rows $\times$ the number of columns. 
Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Consider adding a scalar value (e.g. 3) to the A. $$ \begin{equation} A+3=\begin{bmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{bmatrix}+3 =\begin{bmatrix} a_{11}+3 & a_{12}+3 \ a_{21}+3 & a_{22}+3 \end{bmatrix} \end{equation} $$ The same basic principle holds true for A-3: $$ \begin{equation} A-3=\begin{bmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{bmatrix}-3 =\begin{bmatrix} a_{11}-3 & a_{12}-3 \ a_{21}-3 & a_{22}-3 \end{bmatrix} \end{equation} $$ Notice that we add (or subtract) the scalar value to each element in the matrix A. A can be of any dimension. This is trivial to implement, now that we have defined our matrix A: End of explanation """ B = np.random.randn(2,2) print B # A = np.array([[1,0], [0,1]]) A = np.eye(2) A A+B """ Explanation: Adding or subtracting two matrices Consider two small $2 \times 2$ matrices, where $2 \times 2$ denotes the # of rows $\times$ the # of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$ and $B$=$\bigl( \begin{smallmatrix} b_{11} & b_{12} \ b_{21} & b_{22} \end{smallmatrix} \bigr)$. To find the result of $A-B$, simply subtract each element of A with the corresponding element of B: $$ \begin{equation} A -B = \begin{bmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{bmatrix} - \begin{bmatrix} b_{11} & b_{12} \ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}-b_{11} & a_{12}-b_{12} \ a_{21}-b_{21} & a_{22}-b_{22} \end{bmatrix} \end{equation} $$ Addition works exactly the same way: $$ \begin{equation} A + B = \begin{bmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} \ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix} \end{equation} $$ An important point to know about matrix addition and subtraction is that it is only defined when $A$ and $B$ are of the same size. Here, both are $2 \times 2$. Since operations are performed element by element, these two matrices must be conformable- and for addition and subtraction that means they must have the same numbers of rows and columns. I like to be explicit about the dimensions of matrices for checking conformability as I write the equations, so write $$ A_{2 \times 2} + B_{2 \times 2}= \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}_{2 \times 2} $$ Notice that the result of a matrix addition or subtraction operation is always of the same dimension as the two operands. Let's define another matrix, B, that is also $2 \times 2$ and add it to A: End of explanation """ A A * 3 """ Explanation: Matrix Multiplication Multiplying a scalar value times a matrix As before, let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Suppose we want to multiply A times a scalar value (e.g. $3 \times A$) $$ \begin{equation} 3 \times A = 3 \times \begin{bmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} 3a_{11} & 3a_{12} \ 3a_{21} & 3a_{22} \end{bmatrix} \end{equation} $$ is of dimension (2,2). Scalar multiplication is commutative, so that $3 \times A$=$A \times 3$. Notice that the product is defined for a matrix A of any dimension. 
Similar to scalar addition and subtration, the code is simple: End of explanation """ # Let's redefine A and C to demonstrate matrix multiplication: A = np.arange(6).reshape((3,2)) C = np.random.randn(2,2) print A.shape print C.shape """ Explanation: Multiplying two matricies Now, consider the $2 \times 1$ vector $C=\bigl( \begin{smallmatrix} c_{11} \ c_{21} \end{smallmatrix} \bigr)$ Consider multiplying matrix $A_{2 \times 2}$ and the vector $C_{2 \times 1}$. Unlike the addition and subtraction case, this product is defined. Here, conformability depends not on the row and column dimensions, but rather on the column dimensions of the first operand and the row dimensions of the second operand. We can write this operation as follows $$ \begin{equation} A_{2 \times 2} \times C_{2 \times 1} = \begin{bmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{bmatrix}{2 \times 2} \times \begin{bmatrix} c{11} \ c_{21} \end{bmatrix}{2 \times 1} = \begin{bmatrix} a{11}c_{11} + a_{12}c_{21} \ a_{21}c_{11} + a_{22}c_{21} \end{bmatrix}_{2 \times 1} \end{equation} $$ Alternatively, consider a matrix C of dimension $2 \times 3$ and a matrix A of dimension $3 \times 2$ $$ \begin{equation} A_{3 \times 2}=\begin{bmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \ a_{31} & a_{32} \end{bmatrix}{3 \times 2} , C{2 \times 3} = \begin{bmatrix} c_{11} & c_{12} & c_{13} \ c_{21} & c_{22} & c_{23} \ \end{bmatrix}_{2 \times 3} \end{equation} $$ Here, A $\times$ C is $$ \begin{align} A_{3 \times 2} \times C_{2 \times 3}=& \begin{bmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \ a_{31} & a_{32} \end{bmatrix}{3 \times 2} \times \begin{bmatrix} c{11} & c_{12} & c_{13} \ c_{21} & c_{22} & c_{23} \end{bmatrix}{2 \times 3} \ =& \begin{bmatrix} a{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \ a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \ a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23} \end{bmatrix}_{3 \times 3} \end{align} $$ So in general, $X_{r_x \times c_x} \times Y_{r_y \times c_y}$ we have two important things to remember: For conformability in matrix multiplication, $c_x=r_y$, or the columns in the first operand must be equal to the rows of the second operand. The result will be of dimension $r_x \times c_y$, or of dimensions equal to the rows of the first operand and columns equal to columns of the second operand. Given these facts, you should convince yourself that matrix multiplication is not generally commutative, that the relationship $X \times Y = Y \times X$ does not hold in all cases. For this reason, we will always be very explicit about whether we are pre multiplying ($X \times Y$) or post multiplying ($Y \times X$) the vectors/matrices $X$ and $Y$. For more information on this topic, see this http://en.wikipedia.org/wiki/Matrix_multiplication. End of explanation """ print A.dot(C) print np.dot(A,C) # What would happen to C.dot(A) """ Explanation: We will use the numpy dot operator to perform the these multiplications. You can use it two ways to yield the same result: End of explanation """ # A x = y # x = y/A <- NO # x = inv(A) y A = np.zeros([3,3]) print A np.linalg.inv(A) A = np.linspace(0., 2., 6).reshape([3,2]) print(A) np.linalg.inv(A) # note, we need a square matrix (# rows = # cols), use C: C = np.random.randn(3,3) print C, '\n' C_inverse = np.linalg.inv(C) print C_inverse """ Explanation: Matrix Division The term matrix division is actually a misnomer. 
To divide in a matrix algebra world we first need to invert the matrix. It is useful to consider the analog case in a scalar work. Suppose we want to divide the $f$ by $g$. We could do this in two different ways: $$ \begin{equation} \frac{f}{g}=f \times g^{-1}. \end{equation} $$ In a scalar seeting, these are equivalent ways of solving the division problem. The second one requires two steps: first we invert g and then we multiply f times g. In a matrix world, we need to think about this second approach. First we have to invert the matrix g and then we will need to pre or post multiply depending on the exact situation we encounter (this is intended to be vague for now). Inverting a Matrix As before, consider the square $2 \times 2$ matrix $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22}\end{smallmatrix} \bigr)$. Let the inverse of matrix A (denoted as $A^{-1}$) be $$ \begin{equation} A^{-1}=\begin{bmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{bmatrix}^{-1}=\frac{1}{a_{11}a_{22}-a_{12}a_{21}} \begin{bmatrix} a_{22} & -a_{12} \ -a_{21} & a_{11} \end{bmatrix} \end{equation} $$ The inverted matrix $A^{-1}$ has a useful property: $$ \begin{equation} A \times A^{-1}=A^{-1} \times A=I \end{equation} $$ where I, the identity matrix (the matrix equivalent of the scalar value 1), is $$ \begin{equation} I_{2 \times 2}=\begin{bmatrix} 1 & 0 \ 0 & 1 \end{bmatrix} \end{equation} $$ furthermore, $A \times I = A$ and $I \times A = A$. An important feature about matrix inversion is that it is undefined if (in the $2 \times 2$ case), $a_{11}a_{22}-a_{12}a_{21}=0$. If this relationship is equal to zero the inverse of A does not exist. If this term is very close to zero, an inverse may exist but $A^{-1}$ may be poorly conditioned meaning it is prone to rounding error and is likely not well identified computationally. The term $a_{11}a_{22}-a_{12}a_{21}$ is the determinant of matrix A, and for square matrices of size greater than $2 \times 2$, if equal to zero indicates that you have a problem with your data matrix (columns are linearly dependent on other columns). The inverse of matrix A exists if A is square and is of full rank (ie. the columns of A are not linear combinations of other columns of A). For more information on this topic, see this http://en.wikipedia.org/wiki/Matrix_inversion, for example, on inverting matrices. End of explanation """ print C.dot(C_inverse) print "\nIs identical to:\n" print C_inverse.dot(C) A = np.matrix([[1,0,3],[4,5,6],[7,-8,9]]) # or A = np.matrix('[1 0 3; 4,5,6; 7,-8,9]') print A print "trace A = ", A.trace() print "det(A) = ", np.linalg.det(A) print A.I print B B = np.array(A) print "different behaviors of arrays and matrices" print "A*A: \n", A*A print "B*B: \n", B*B print "matrix mult B.dot(B): \n", B.dot(B) A.I*A print A.A print A.A1 """ Explanation: Check that $C\times C^{-1} = I$: End of explanation """ A = np.arange(6).reshape((6,1)) B = np.arange(6).reshape((1,6)) A B A.dot(B) B.dot(A) B.T A.T B.T.dot(A.T) A.T.dot(B.T) A = np.arange(6).reshape((3,2)) B = np.arange(8).reshape((2,4)) print "A is" print A print "The Transpose of A is" print A.T """ Explanation: Transposing a Matrix At times it is useful to pivot a matrix for conformability- that is in order to matrix divide or multiply, we need to switch the rows and column dimensions of matrices. 
Consider the matrix $$ \begin{equation} A_{3 \times 2}=\begin{bmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \ a_{31} & a_{32} \end{bmatrix}{3 \times 2} \end{equation} $$ The transpose of A (denoted as $A^{\prime}$) is $$ \begin{equation} A^{\prime}=\begin{bmatrix} a{11} & a_{21} & a_{31} \ a_{12} & a_{22} & a_{32} \ \end{bmatrix}_{2 \times 3} \end{equation} $$ End of explanation """ print B.T.dot(A.T) print "Is identical to:" print (A.dot(B)).T B.shape print B, '\n' B[0, 3] A = np.arange(12).reshape((3,4)) A A[2,:].shape A[:,1].reshape(1,3).shape """ Explanation: One important property of transposing a matrix is the transpose of a product of two matrices. Let matrix A be of dimension $N \times M$ and let B of of dimension $M \times P$. Then $$ \begin{equation} (AB)^{\prime}=B^{\prime}A^{\prime} \end{equation} $$ For more information, see this http://en.wikipedia.org/wiki/Matrix_transposition on matrix transposition. This is also easy to implement: End of explanation """ a = np.arange(10) s = slice(2,7,2) print a print a[s] a = np.arange(10) b = a[2:7:2] print a print b a = np.arange(10) b = a[5] print a print b a = np.arange(10) print a print a[2:5] print a[2:5] print a[5:8] print np.concatenate([a[2:5],a[5:8]]) print a[2:8] a = np.arange(10) print a print a[-1] print a[-2:-5:-1] a = np.array([[1,2,3],[3,4,5],[4,5,6]]) print a # slice items starting from index print 'Now we will slice the array from the index a[1:]' print a[1:] print "slicing along 2 dimensions" print a[1:,2] print "aside: if a is a matrix, we keep the original shape of the slice..." print np.matrix(a)[1:,2] # array to begin with a = np.array([[1,2,3],[3,4,5],[4,5,6]]) print 'Our array is:' print a print '\n' # this returns array of items in the second column print 'The items in the second column are:' print a[...,1] print '\n' # Now we will slice all items from the second row print 'The items in the second row are:' print a[1,...] print '\n' # Now we will slice all items from column 1 onwards print 'The items column 1 onwards are:' print a[...,1:] # in-class exercise: a = np.matrix('[4 5 6 7; 4 1 0 1; 5 0 1 3; 9 8 3 2]') print a # how can you get the [[1,0],[0,1]] matrix in the middle of a? # answer: a[1:3,1:3] # in-class exercise: a = np.arange(18) # what is the transpose of the 3x3 matrix formed from taking the even elements of a? 
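# (hint: a[0:18:2] takes every second entry of np.arange(18), i.e. the nine even values 0, 2, ..., 16)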
# answer: a[0:18:2].reshape([3,3]).T """ Explanation: Mechanics Indexing and Slicing examples from https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm End of explanation """ A = np.random.rand(5,5)*10 print A, '\n' print (A < 5) np.all(A<5) A.flatten() <5 sum(A.flatten()<5) print A[A < 5] A[A<5] = 0 A A[A>=5] = 1 A A[:2,:2][np.array([[True,False],[1==2,True]])] # in-class exercise A = np.random.randn(4,4) B = np.arange(16).reshape([4,4]) print A, '\n\n', B # find the elements of A that are less than 0, and replace them with corresponding elements of B A[A<0]=B[A<0] A A = np.random.randn(4,4) A A>-1 and A<1 np.logical_and(A>-1,A<1) # in-class exercise: combining logical conditions # USE np.logical_and() A = np.random.randn(4,4) B = np.arange(16).reshape([4,4]) print A, '\n\n', B # find the elements of A that are between -1 and 1, and replace them with elements of B A[np.logical_and(A > -1, A < 1)] = B[np.logical_and(A > -1,A < 1)] A """ Explanation: Logic, Comparison End of explanation """ np.ones((10,5), int) np.zeros((10,5), int) np.eye(5, dtype="int") """ Explanation: Concatenate, Reshape End of explanation """ np.random.seed(100) v1 = np.random.rand(500) v2 = np.random.randn(500) plt.plot(range(v1.shape[0]), v1, '.') plt.scatter(range(len(v2)), v2) plt.xlabel('Index') plt.ylabel('Random Value') plt.title('Some random numbers') plt.show() plt.hist(v1, bins=20); v2 = np.random.randn(10000) plt.hist(v2, bins=100) ; v3 = np.random.beta(3,2, 10000) plt.hist(v3, bins=100) ; # in-class exercise # generate a 1000 points in 2-d uniformly distributed on the rectangle [-1,1]x[0,0.5], then plot the points. points = zip(np.random.rand(1000)*2-1,np.random.rand(1000)*0.5) plt.scatter(np.array(points).T[0],np.array(points).T[1]); # plt.scatter(*zip(*points)); """ Explanation: Random Numbers End of explanation """ %ls -l HW03/ %%sh ./HW03/preprocess_data.sh HW03/Camera.csv HW03/Camera_cleaned.csv head HW03/Camera.csv head HW03/Camera_cleaned.csv DATA = np.genfromtxt('HW03/Camera_cleaned.csv', delimiter=';', names=True, dtype='float') DATA DATA['Max_resolution'].max() np.nanargmin(DATA['Max_resolution']) np.savetxt('Cameras_TMP.csv', DATA, delimiter=',') %%sh ls -l rm Cameras_TMP.csv import pickle with open('Cameras.pkl','wb') as f: pickle.dump(DATA, f) # pickle.dump(DATA, open('Cameras.pkl','wb')) # Shortcut: np.save('Cameras2.pkl', DATA, allow_pickle=True) %ls -l with open('Cameras.pkl','rb') as f: datapkl = pickle.load(f) # Shortcut: datapkl2 = np.load('Cameras2.pkl.npy') datapkl datapkl2 %rm Cameras.pkl %rm Cameras2.pkl.npy # For saving multiple variables in workspace: import shelve """ Explanation: Numpy load, save data files End of explanation """ points = [[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]] qPoint = [4,5,3] # which point in points is closest to qPoint? 
# Euclidean distance between 2 points [x1,y1,z1] and [x2,y2,z2] is: # sqrt((x2-x1)**2+(y2-y1)**2+(z2-z1)**2) ### Pure iterative Python ### points = [[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]] qPoint = [4,5,3] minIdx = -1 minDist = -1 for idx, point in enumerate(points): # iterate over all points print "index is %d, point is %s" % (idx, point) dist = sum([(dp-dq)**2 for dp,dq in zip(point,qPoint)])**0.5 # compute the euclidean distance for each point to q if dist < minDist or minDist < 0: # if necessary, update minimum distance and index of the corresponding point minDist = dist minIdx = idx print 'Nearest point to q: ', points[minIdx] # # # Equivalent NumPy vectorization # # # import numpy as np points = np.array([[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]]) qPoint = np.array([4,5,3]) minIdx = np.argmin(np.linalg.norm(points-qPoint,axis=1)) # compute all euclidean distances at once and return the index of the smallest one print 'Nearest point to q: ', points[minIdx] """ Explanation: Example Nearest Neighbor search Nearest Neighbor search is a common technique in Machine Learning End of explanation """ n = 100 # numeber of samples Xr = np.random.rand(n)*99.0 y = -7.3 + 2.5*Xr + np.random.randn(n)*27.0 plt.plot(Xr, y, "o", alpha=0.5); """ Explanation: Example: Linear Regression Linear regression is an approach for modeling the relationship between a scalar dependent variable $y$ and one or more explanatory variables (or independent variables) denoted $X$. The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression.$^1$ (This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.$^2$ We assume that the equation $y_i = \beta_0 + \beta_1 X_i + \epsilon_i$ where $\epsilon_i \approx N(0, \sigma^2)$ $^1$ David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 26. A simple regression equation has on the right hand side an intercept and an explanatory variable with a slope coefficient. A multiple regression equation has two or more explanatory variables on the right hand side, each with its own slope coefficient $^2$ Rencher, Alvin C.; Christensen, William F. (2012), "Chapter 10, Multivariate regression – Section 10.1, Introduction", Methods of Multivariate Analysis, Wiley Series in Probability and Statistics, 709 (3rd ed.), John Wiley &amp; Sons, p. 19, ISBN 9781118391679. End of explanation """ X = np.vstack((np.ones(n), Xr)).T print X.shape X[0:10,:] """ Explanation: Let's add the bias, i.e. a column of $1$s to the explanatory variables End of explanation """ beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y) yhat = X.dot(beta) yhat.shape plt.plot(X[:,1], y, "o", alpha=0.5) plt.plot(X[:,1], yhat, "-", alpha=1, color="red") beta """ Explanation: Closed-form Linear Regression And compute the parametes $\beta_0$ and $\beta_1$ according to $$ \beta = (X^\prime X)^{-1} X^\prime y $$ Note: This not only looks elegant but can also be written in Julia code. 
However, matrix inversion $M^{-1}$ requires $O(d^3)$ iterations for a $d\times d$ matrix.<br /> https://www.coursera.org/learn/ml-regression/lecture/jOVX8/discussing-the-closed-form-solution End of explanation """ n = 100 # numeber of samples X1 = np.random.rand(n)*99.0 X2 = np.random.rand(n)*51.0 - 26.8 X3 = np.random.rand(n)*5.0 + 6.1 X4 = np.random.rand(n)*1.0 - 0.5 X5 = np.random.rand(n)*300.0 y_m = -7.3 + 2.5*X1 + -7.9*X2 + 1.5*X3 + 10.0*X4 + 0.13*X5 + np.random.randn(n)*7.0 plt.hist(y_m, bins=20) ; X_m = np.vstack((np.ones(n), X1, X2, X3, X4, X5)).T X_m.shape X_m[:5] beta_m = np.linalg.inv(X_m.T.dot(X_m)).dot(X_m.T).dot(y_m) beta_m yhat_m = X_m.dot(beta_m) yhat_m.shape plt.hist(yhat_m, bins=20); """ Explanation: Multiple Linear Regression End of explanation """ import math RSMD = math.sqrt(np.square(yhat_m-y_m).sum()/n) print RSMD """ Explanation: Evaluation: Root-mean-square Deviation The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSD serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. RMSD is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent.$^1$ $^1$ Hyndman, Rob J. Koehler, Anne B.; Koehler (2006). "Another look at measures of forecast accuracy". International Journal of Forecasting. 22 (4): 679–688. doi:10.1016/j.ijforecast.2006.03.001. End of explanation """ p = X.shape[1] ## get number of parameters lam = 10.0 p, lam beta2 = np.linalg.inv(X.T.dot(X) + lam*np.eye(p)).dot(X.T).dot(y) yhat2 = X.dot(beta2) RSMD2 = math.sqrt(np.square(yhat2-y).sum()/n) print RSMD2 ##n = float(X.shape[0]) print " RMSE = ", math.sqrt(np.square(yhat-y).sum()/n) print "Ridge RMSE = ", math.sqrt(np.square(yhat2-y).sum()/n) plt.plot(X[:,1], y, "o", alpha=0.5) plt.plot(X[:,1], yhat, "-", alpha=0.7, color="red") plt.plot(X[:,1], yhat2, "-", alpha=0.7, color="green") """ Explanation: Regularization, Ridge-Regression Regularization, in mathematics and statistics and particularly in the fields of machine learning and inverse problems, is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting. In general, a regularization term $R(f)$ is introduced to a general loss function: for a loss function $V$ that describes the cost of predicting $f(x)$ when the label is $y$, such as the square loss or hinge loss, and for the term $\lambda$ which controls the importance of the regularization term. $R(f)$ is typically a penalty on the complexity of $f$, such as restrictions for smoothness or bounds on the vector space norm.$^1$ A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution, as depicted in the figure. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters. 
Regularization can be used to learn simpler models, induce models to be sparse, introduce group structure into the learning problem, and more. We're going to add the L2 term $\lambda||\beta||_2^2$ to the regression equation, which yields to$^2$ $$ \beta = (X^\prime X + \lambda I)^{-1} X^\prime y $$ $^1$ Bishop, Christopher M. (2007). Pattern recognition and machine learning (Corr. printing. ed.). New York: Springer. ISBN 978-0387310732. $^2$ http://stats.stackexchange.com/questions/69205/how-to-derive-the-ridge-regression-solution End of explanation """
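A side note on the closed-form fits in the notebook above: both the OLS and the ridge cells build an explicit inverse with np.linalg.inv. Numerically it is usually preferable to solve the linear system instead. The sketch below illustrates that variant; it uses Python 3 print syntax and freshly generated data standing in for the notebook's X and y, so treat it as an assumption-laden illustration rather than a drop-in cell.

```python
import numpy as np

def ridge_fit(X, y, lam=0.0):
    """Solve (X'X + lam*I) beta = X'y directly instead of forming an explicit inverse."""
    p = X.shape[1]
    return np.linalg.solve(X.T.dot(X) + lam * np.eye(p), X.T.dot(y))

# Hypothetical stand-in for the single-feature design matrix used above.
rng = np.random.RandomState(0)
n = 100
Xr = rng.rand(n) * 99.0
y = -7.3 + 2.5 * Xr + rng.randn(n) * 27.0
X = np.vstack((np.ones(n), Xr)).T

beta_ols = ridge_fit(X, y, lam=0.0)     # matches the closed-form OLS estimate
beta_ridge = ridge_fit(X, y, lam=10.0)  # ridge estimate with lambda = 10
rmse = np.sqrt(np.mean((X.dot(beta_ridge) - y) ** 2))
print(beta_ols, beta_ridge, rmse)
```

With lam=0.0 this reproduces ordinary least squares; using np.linalg.solve avoids forming (X'X)^{-1} explicitly, which is both cheaper and better conditioned than the inverse-based formula.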
zzsza/TIL
Tensorflow-Extended/TFDV(data validation) example.ipynb
mit
from __future__ import print_function import sys, os import tempfile, urllib, zipfile # Confirm that we're using Python 2 assert sys.version_info.major is 2, 'Oops, not running Python 2' # Set up some globals for our file paths BASE_DIR = tempfile.mkdtemp() DATA_DIR = os.path.join(BASE_DIR, 'data') OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output') TRAIN_DATA = os.path.join(DATA_DIR, 'train', 'data.csv') EVAL_DATA = os.path.join(DATA_DIR, 'eval', 'data.csv') SERVING_DATA = os.path.join(DATA_DIR, 'serving', 'data.csv') # Download the zip file from GCP and unzip it zip, headers = urllib.urlretrieve('https://storage.googleapis.com/tfx-colab-datasets/chicago_data.zip') zipfile.ZipFile(zip).extractall(BASE_DIR) zipfile.ZipFile(zip).close() print("Here's what we downloaded:") !ls -lR {os.path.join(BASE_DIR, 'data')} !pip2 install -q tensorflow_data_validation import tensorflow_data_validation as tfdv print('TFDV version: {}'.format(tfdv.version.__version__)) """ Explanation: Github Python2에서 진행 Python3에서도 되긴 하는데, 몇 기능이 안될듯(Apache Beam이 아직 파이썬2만 지원) End of explanation """ train_stats = tfdv.generate_statistics_from_csv(data_location=TRAIN_DATA) """ Explanation: Compute and visualize statistics tfdv.generate_statistics_from_csv로 데이터 분포 생성 많은 데이터일 경우 내부적으로 Apache Beam을 사용해 병렬처리 Beam의 PTransform과 결합 가능 End of explanation """ tfdv.visualize_statistics(train_stats) """ Explanation: tfdv.visualize_statistics를 사용해 시각화, 내부적으론 Facets을 사용한다 함 numeric, categorical feature들을 나눔 End of explanation """ schema = tfdv.infer_schema(statistics=train_stats) tfdv.display_schema(schema=schema) """ Explanation: Infer a scahema 데이터를 통해 스키마 추론 tfdv.infer_schema tfdv.display_schema End of explanation """ # Compute stats for evaluation data eval_stats = tfdv.generate_statistics_from_csv(data_location=EVAL_DATA) # Compare evaluation data with training data tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats, lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET') """ Explanation: 평가 데이터 에러 체크 train, validation에서 다른 데이터들이 있음 캐글할 때 유용할듯 End of explanation """ # Check eval data for errors by validating the eval data stats using the previously inferred schema. anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema) tfdv.display_anomalies(anomalies) """ Explanation: Check for evaluation anomalies train 데이터엔 없었는데 validation에 생긴 데이터 있는지 확인 End of explanation """ # Relax the minimum fraction of values that must come from the domain for feature company. company = tfdv.get_feature(schema, 'company') company.distribution_constraints.min_domain_mass = 0.9 # Add new value to the domain of feature payment_type. payment_type_domain = tfdv.get_domain(schema, 'payment_type') payment_type_domain.value.append('Prcard') # Validate eval stats after updating the schema updated_anomalies = tfdv.validate_statistics(eval_stats, schema) tfdv.display_anomalies(updated_anomalies) """ Explanation: Fix evaluation anomalies in the schema 수정 End of explanation """ serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA) serving_anomalies = tfdv.validate_statistics(serving_stats, schema) tfdv.display_anomalies(serving_anomalies) """ Explanation: Schema Environments serving할 때도 스키마 체크해야 함 Environments can be used to express such requirements. In particular, features in schema can be associated with a set of environments using default_environment, in_environment and not_in_environment. 
End of explanation """ options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True) serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=options) serving_anomalies = tfdv.validate_statistics(serving_stats, schema) tfdv.display_anomalies(serving_anomalies) # All features are by default in both TRAINING and SERVING environments. schema.default_environment.append('TRAINING') schema.default_environment.append('SERVING') # Specify that 'tips' feature is not in SERVING environment. tfdv.get_feature(schema, 'tips').not_in_environment.append('SERVING') serving_anomalies_with_env = tfdv.validate_statistics( serving_stats, schema, environment='SERVING') tfdv.display_anomalies(serving_anomalies_with_env) """ Explanation: Int value가 있음 => Float으로 수정 End of explanation """ # Add skew comparator for 'payment_type' feature. payment_type = tfdv.get_feature(schema, 'payment_type') payment_type.skew_comparator.infinity_norm.threshold = 0.01 # Add drift comparator for 'company' feature. company=tfdv.get_feature(schema, 'company') company.drift_comparator.infinity_norm.threshold = 0.001 skew_anomalies = tfdv.validate_statistics(train_stats, schema, previous_statistics=eval_stats, serving_statistics=serving_stats) tfdv.display_anomalies(skew_anomalies) """ Explanation: Check for drift and skew Drift Drift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation. Skew Schema Skew 같은 스키마를 가지지 않을 때 Feature Skew Feature 생성 로직이 변경될 때 Dsitribution Skew Train, Serving시 데이터 분포가 다를 경우 End of explanation """ from tensorflow.python.lib.io import file_io from google.protobuf import text_format file_io.recursive_create_dir(OUTPUT_DIR) schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt') tfdv.write_schema_text(schema, schema_file) !cat {schema_file} """ Explanation: Freeze the schema 스키마 저장 End of explanation """
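One natural extension of the workflow above: the schema frozen to schema.pbtxt can be reloaded in a later pipeline run and used to validate statistics computed on newly arrived data. The sketch below assumes schema_file still points at the file written above and that NEW_DATA is a hypothetical path to a fresh CSV slice; it reuses the TFDV calls already shown in the notebook plus tfdv.load_schema_text, the read counterpart of write_schema_text.

```python
import tensorflow_data_validation as tfdv

# Reload the schema frozen above (schema_file is the path written in the previous cell).
schema = tfdv.load_schema_text(schema_file)

# NEW_DATA is a hypothetical path to a newly arrived CSV slice to be checked.
new_stats = tfdv.generate_statistics_from_csv(data_location=NEW_DATA)
anomalies = tfdv.validate_statistics(statistics=new_stats, schema=schema)
tfdv.display_anomalies(anomalies)
```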
mne-tools/mne-tools.github.io
0.14/_downloads/plot_brainstorm_auditory.ipynb
bsd-3-clause
# Authors: Mainak Jas <[email protected]> # Eric Larson <[email protected]> # Jaakko Leppakangas <[email protected]> # # License: BSD (3-clause) import os.path as op import pandas as pd import numpy as np import mne from mne import combine_evoked from mne.minimum_norm import apply_inverse from mne.datasets.brainstorm import bst_auditory from mne.io import read_raw_ctf from mne.filter import notch_filter, filter_data print(__doc__) """ Explanation: Brainstorm auditory tutorial dataset Here we compute the evoked from raw for the auditory Brainstorm tutorial dataset. For comparison, see [1]_ and: http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory Experiment: - One subject, 2 acquisition runs 6 minutes each. - Each run contains 200 regular beeps and 40 easy deviant beeps. - Random ISI: between 0.7s and 1.7s seconds, uniformly distributed. - Button pressed when detecting a deviant with the right index finger. The specifications of this dataset were discussed initially on the FieldTrip bug tracker &lt;http://bugzilla.fcdonders.nl/show_bug.cgi?id=2300&gt;_. References .. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM. Brainstorm: A User-Friendly Application for MEG/EEG Analysis. Computational Intelligence and Neuroscience, vol. 2011, Article ID 879716, 13 pages, 2011. doi:10.1155/2011/879716 End of explanation """ use_precomputed = True """ Explanation: To reduce memory consumption and running time, some of the steps are precomputed. To run everything from scratch change this to False. With use_precomputed = False running time of this script can be several minutes even on a fast computer. End of explanation """ data_path = bst_auditory.data_path() subject = 'bst_auditory' subjects_dir = op.join(data_path, 'subjects') raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory', 'S01_AEF_20131218_01.ds') raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory', 'S01_AEF_20131218_02.ds') erm_fname = op.join(data_path, 'MEG', 'bst_auditory', 'S01_Noise_20131218_01.ds') """ Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass filtered at 600 Hz. Here the data and empty room data files are read to construct instances of :class:mne.io.Raw. End of explanation """ preload = not use_precomputed raw = read_raw_ctf(raw_fname1, preload=preload) n_times_run1 = raw.n_times mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)]) raw_erm = read_raw_ctf(erm_fname, preload=preload) """ Explanation: In the memory saving mode we use preload=False and use the memory efficient IO which loads the data on demand. However, filtering and some other functions require the data to be preloaded in the memory. End of explanation """ raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'}) if not use_precomputed: # Leave out the two EEG channels for easier computation of forward. raw.pick_types(meg=True, eeg=False, stim=True, misc=True, eog=True, ecg=True) """ Explanation: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference sensors and 2 EEG electrodes (Cz and Pz). In addition: 1 stim channel for marking presentation times for the stimuli 1 audio channel for the sent signal 1 response channel for recording the button presses 1 ECG bipolar 2 EOG bipolar (vertical and horizontal) 12 head tracking channels 20 unused channels The head tracking channels and the unused channels are marked as misc channels. Here we define the EOG and ECG channels. 
End of explanation """ annotations_df = pd.DataFrame() offset = n_times_run1 for idx in [1, 2]: csv_fname = op.join(data_path, 'MEG', 'bst_auditory', 'events_bad_0%s.csv' % idx) df = pd.read_csv(csv_fname, header=None, names=['onset', 'duration', 'id', 'label']) print('Events from run {0}:'.format(idx)) print(df) df['onset'] += offset * (idx - 1) annotations_df = pd.concat([annotations_df, df], axis=0) saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int) # Conversion from samples to times: onsets = annotations_df['onset'].values / raw.info['sfreq'] durations = annotations_df['duration'].values / raw.info['sfreq'] descriptions = annotations_df['label'].values annotations = mne.Annotations(onsets, durations, descriptions) raw.annotations = annotations del onsets, durations, descriptions """ Explanation: For noise reduction, a set of bad segments have been identified and stored in csv files. The bad segments are later used to reject epochs that overlap with them. The file for the second run also contains some saccades. The saccades are removed by using SSP. We use pandas to read the data from the csv files. You can also view the files with your favorite text editor. End of explanation """ saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True, reject_by_annotation=False) projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0, desc_prefix='saccade') if use_precomputed: proj_fname = op.join(data_path, 'MEG', 'bst_auditory', 'bst_auditory-eog-proj.fif') projs_eog = mne.read_proj(proj_fname)[0] else: projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(), n_mag=1, n_eeg=0) raw.add_proj(projs_saccade) raw.add_proj(projs_eog) del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory """ Explanation: Here we compute the saccade and EOG projectors for magnetometers and add them to the raw data. The projectors are added to both runs. End of explanation """ raw.plot(block=True) """ Explanation: Visually inspect the effects of projections. Click on 'proj' button at the bottom right corner to toggle the projectors on/off. EOG events can be plotted by adding the event list as a keyword argument. As the bad segments and saccades were added as annotations to the raw data, they are plotted as well. End of explanation """ if not use_precomputed: meg_picks = mne.pick_types(raw.info, meg=True, eeg=False) raw.plot_psd(tmax=np.inf, picks=meg_picks) notches = np.arange(60, 181, 60) raw.notch_filter(notches) raw.plot_psd(tmax=np.inf, picks=meg_picks) """ Explanation: Typical preprocessing step is the removal of power line artifact (50 Hz or 60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the original 60 Hz artifact and the harmonics. The power spectra are plotted before and after the filtering to show the effect. The drop after 600 Hz appears because the data was filtered during the acquisition. In memory saving mode we do the filtering at evoked stage, which is not something you usually would do. End of explanation """ if not use_precomputed: raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s', phase='zero-double') """ Explanation: We also lowpass filter the data at 100 Hz to remove the hf components. End of explanation """ tmin, tmax = -0.1, 0.5 event_id = dict(standard=1, deviant=2) reject = dict(mag=4e-12, eog=250e-6) # find events events = mne.find_events(raw, stim_channel='UPPT001') """ Explanation: Epoching and averaging. 
First some parameters are defined and events extracted from the stimulus channel (UPPT001). The rejection thresholds are defined as peak-to-peak values and are in T / m for gradiometers, T for magnetometers and V for EOG and EEG channels. End of explanation """ sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0] onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0] min_diff = int(0.5 * raw.info['sfreq']) diffs = np.concatenate([[min_diff + 1], np.diff(onsets)]) onsets = onsets[diffs > min_diff] assert len(onsets) == len(events) diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq'] print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms' % (np.mean(diffs), np.std(diffs))) events[:, 0] = onsets del sound_data, diffs """ Explanation: The event timing is adjusted by comparing the trigger times on detected sound onsets on channel UADC001-4408. End of explanation """ raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408'] """ Explanation: We mark a set of bad channels that seem noisier than others. This can also be done interactively with raw.plot by clicking the channel name (or the line). The marked channels are added as bad when the browser window is closed. End of explanation """ picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True, exclude='bads') epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject, preload=False, proj=True) """ Explanation: The epochs (trials) are created for MEG channels. First we find the picks for MEG and EOG channels. Then the epochs are constructed using these picks. The epochs overlapping with annotated bad segments are also rejected by default. To turn off rejection by bad segments (as was done earlier with saccades) you can use keyword reject_by_annotation=False. End of explanation """ epochs.drop_bad() epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)], epochs['standard'][182:222]]) epochs_standard.load_data() # Resampling to save memory. epochs_standard.resample(600, npad='auto') epochs_deviant = epochs['deviant'].load_data() epochs_deviant.resample(600, npad='auto') del epochs, picks """ Explanation: We only use first 40 good epochs from each run. Since we first drop the bad epochs, the indices of the epochs are no longer same as in the original epochs collection. Investigation of the event timings reveals that first epoch from the second run corresponds to index 182. End of explanation """ evoked_std = epochs_standard.average() evoked_dev = epochs_deviant.average() del epochs_standard, epochs_deviant """ Explanation: The averages for each conditions are computed. End of explanation """ if use_precomputed: sfreq = evoked_std.info['sfreq'] notches = [60, 120, 180] for evoked in (evoked_std, evoked_dev): evoked.data[:] = notch_filter(evoked.data, sfreq, notches) evoked.data[:] = filter_data(evoked.data, sfreq, l_freq=None, h_freq=100.) """ Explanation: Typical preprocessing step is the removal of power line artifact (50 Hz or 60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the original 60 Hz artifact and the harmonics. Normally this would be done to raw data (with :func:mne.io.Raw.filter), but to reduce memory consumption of this tutorial, we do it at evoked stage. End of explanation """ evoked_std.plot(window_title='Standard', gfp=True) evoked_dev.plot(window_title='Deviant', gfp=True) """ Explanation: Here we plot the ERF of standard and deviant conditions. 
In both conditions we can see the P50 and N100 responses. The mismatch negativity is visible only in the deviant condition around 100-200 ms. P200 is also visible around 170 ms in both conditions but much stronger in the standard condition. P300 is visible in deviant condition only (decision making in preparation of the button press). You can view the topographies from a certain time span by painting an area with clicking and holding the left mouse button. End of explanation """ times = np.arange(0.05, 0.301, 0.025) evoked_std.plot_topomap(times=times, title='Standard') evoked_dev.plot_topomap(times=times, title='Deviant') """ Explanation: Show activations as topography figures. End of explanation """ evoked_difference = combine_evoked([evoked_dev, -evoked_std], weights='equal') evoked_difference.plot(window_title='Difference', gfp=True) """ Explanation: We can see the MMN effect more clearly by looking at the difference between the two conditions. P50 and N100 are no longer visible, but MMN/P200 and P300 are emphasised. End of explanation """ reject = dict(mag=4e-12) cov = mne.compute_raw_covariance(raw_erm, reject=reject) cov.plot(raw_erm.info) del raw_erm """ Explanation: Source estimation. We compute the noise covariance matrix from the empty room measurement and use it for the other runs. End of explanation """ trans_fname = op.join(data_path, 'MEG', 'bst_auditory', 'bst_auditory-trans.fif') trans = mne.read_trans(trans_fname) """ Explanation: The transformation is read from a file. More information about coregistering the data, see ch_interactive_analysis or :func:mne.gui.coregistration. End of explanation """ if use_precomputed: fwd_fname = op.join(data_path, 'MEG', 'bst_auditory', 'bst_auditory-meg-oct-6-fwd.fif') fwd = mne.read_forward_solution(fwd_fname) else: src = mne.setup_source_space(subject, spacing='ico4', subjects_dir=subjects_dir, overwrite=True) model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3], subjects_dir=subjects_dir) bem = mne.make_bem_solution(model) fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src, bem=bem) inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov) snr = 3.0 lambda2 = 1.0 / snr ** 2 del fwd """ Explanation: To save time and memory, the forward solution is read from a file. Set use_precomputed=False in the beginning of this script to build the forward solution from scratch. The head surfaces for constructing a BEM solution are read from a file. Since the data only contains MEG channels, we only need the inner skull surface for making the forward solution. For more information: CHDBBCEJ, :func:mne.setup_source_space, create_bem_model, :func:mne.bem.make_watershed_bem. End of explanation """ stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM') brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject, surface='inflated', time_viewer=False, hemi='lh', initial_time=0.1, time_unit='s') del stc_standard, brain """ Explanation: The sources are computed using dSPM method and plotted on an inflated brain surface. For interactive controls over the image, use keyword time_viewer=True. Standard condition. End of explanation """ stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM') brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject, surface='inflated', time_viewer=False, hemi='lh', initial_time=0.1, time_unit='s') del stc_deviant, brain """ Explanation: Deviant condition. 
End of explanation """ stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM') brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject, surface='inflated', time_viewer=False, hemi='lh', initial_time=0.15, time_unit='s') """ Explanation: Difference. End of explanation """
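As a small supplement to the evoked plots in this notebook, the global field power that the butterfly plots overlay can also be recomputed by hand from the Evoked arrays, which makes it easy to put the standard, deviant, and difference responses on one axis. This is only a sketch built on the .data and .times attributes and mne.pick_types, all used elsewhere above; the 1e15 factor assumes magnetometer data in Tesla and is purely cosmetic.

```python
import matplotlib.pyplot as plt
import mne

# Global field power = standard deviation across (MEG) channels at each time point.
for ev, label in [(evoked_std, 'standard'),
                  (evoked_dev, 'deviant'),
                  (evoked_difference, 'difference')]:
    picks = mne.pick_types(ev.info, meg=True)   # restrict to MEG channels
    gfp = ev.data[picks].std(axis=0)            # shape (n_times,)
    plt.plot(ev.times, gfp * 1e15, label=label) # T -> fT, cosmetic scaling
plt.xlabel('Time (s)')
plt.ylabel('GFP (fT)')
plt.legend()
plt.show()
```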
maxhutch/thesis-notebooks
Vorticity.ipynb
gpl-3.0
%matplotlib inline import matplotlib matplotlib.rcParams['figure.figsize'] = (10.0, 16.0) import matplotlib.pyplot as plt import numpy as np from scipy.interpolate import interp1d, InterpolatedUnivariateSpline from scipy.optimize import bisect import json from functools import partial class Foo: pass """ Explanation: Figure 1 Start by loading some boiler plate: matplotlib, numpy, scipy, json, functools, and a convenience class. End of explanation """ from chest import Chest from slict import CachedSlict from glopen import glopen, glopen_many """ Explanation: And some more specialized dependencies: 1. Slict provides a convenient slice-able dictionary interface 2. Chest is an out-of-core dictionary that we'll hook directly to a globus remote using... 3. glopen is an open-like context manager for remote globus files End of explanation """ config = Foo() config.name = "Schmidt-Projection/Nu01D01/Nu01D01" #config.arch_end = "maxhutch#alpha-admin/pub" config.arch_end = "alcf#dtn_mira/projects/alpha-nek/" config.frame = 1 config.lower = .25 config.upper = .75 """ Explanation: Configuration for this figure. End of explanation """ c = Chest(path = "{:s}-results".format(config.name), open = partial(glopen, endpoint=config.arch_end), open_many = partial(glopen_many, endpoint=config.arch_end)) sc = CachedSlict(c) with glopen( "{:s}.json".format(config.name), mode='r', endpoint = config.arch_end, ) as f: params = json.load(f) """ Explanation: Open a chest located on a remote globus endpoint and load a remote json configuration file. End of explanation """ T = sc[:,'frame'].keys()[config.frame] frame = sc[T,:] c.prefetch(frame.full_keys()) """ Explanation: We want to grab all the data for the selected frame. End of explanation """ fig = plt.figure() nx = frame['t_yz'].shape[0] ny = frame['t_yz'].shape[1] plt.imshow(frame['vorticity_xy'].transpose(), origin='lower') plt.colorbar(); fig = plt.figure() nx = frame['t_yz'].shape[0] ny = frame['t_yz'].shape[1] plt.imshow(frame['t_xy'].transpose(), origin='lower') plt.colorbar(); def laplace(grid): from numpy.fft import fftn, ifftn, fftfreq kx = fftfreq(grid.shape[0]) ky = fftfreq(grid.shape[1]) kgrid = fftn(grid) for i in range(kgrid.shape[0]): for j in range(kgrid.shape[1]): if kx[i] != 0 or ky[j] != 0: kgrid[i,j] = kgrid[i,j] / (kx[i]**2 + ky[j]**2) kgrid[0,0] = 0. 
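    # The k = 0 (mean) Fourier mode was pinned to zero above, so the field recovered next has zero mean;
    # the float cast on the inverse FFT below keeps only the real part.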
rgrid = np.array(ifftn(kgrid), dtype=float) return rgrid fig = plt.figure() nx = frame['t_yz'].shape[0] ny = frame['t_yz'].shape[1] plt.imshow(frame['vorticity_yz'][:,int(ny*config.lower):int(ny*config.upper)].transpose(), origin='lower') #axs[1].imshow(frame['fz_yz'].transpose(), origin='lower') plt.colorbar(); streamfunction = laplace(frame['vorticity_yz'][:,:]); fig = plt.figure() nx = frame['t_yz'].shape[0] ny = frame['t_yz'].shape[1] print(np.max(streamfunction), np.min(streamfunction)) #background = np.tile(frame['t_yz'][:,int(ny*config.lower):int(ny*config.upper)].transpose(), (1,4)) background = np.tile(streamfunction[:,int(ny*config.lower):int(ny*config.upper)].transpose(), (1,4)) foreground = np.tile(streamfunction[:,int(ny*config.lower):int(ny*config.upper)].transpose(), (1,4)) plt.imshow(background, origin='lower', alpha=0.1) plt.colorbar(); plt.contour(foreground, origin='lower', colors='k') #axs[1].imshow(frame['fz_yz'].transpose(), origin='lower') %install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py %load_ext version_information %version_information numpy, matplotlib, slict, chest, glopen, globussh """ Explanation: Plot the bubble height, the 'H' keys, vs. time. Use a spline to compute the derivative. End of explanation """
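The laplace helper defined above visits every Fourier mode with a double Python loop. The same inverse-Laplacian solve (recovering the streamfunction-like field from the vorticity slice) can be written with a wavenumber meshgrid; the version below mirrors my reading of that loop, including the zeroed mean mode, and is offered as a sketch rather than a verified drop-in replacement.

```python
import numpy as np
from numpy.fft import fftn, ifftn, fftfreq

def laplace_vectorized(grid):
    kx = fftfreq(grid.shape[0])
    ky = fftfreq(grid.shape[1])
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    k2 = KX ** 2 + KY ** 2
    kgrid = fftn(grid)
    out = np.zeros_like(kgrid)        # complex, same shape as kgrid
    nz = k2 != 0
    out[nz] = kgrid[nz] / k2[nz]      # divide every non-zero mode; the k = 0 mode stays 0
    return np.real(ifftn(out))
```

The boolean mask reproduces the loop's `kx[i] != 0 or ky[j] != 0` test, since k2 is zero exactly when both wavenumbers are.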
otavio-r-filho/AIND-Deep_Learning_Notebooks
sentiment-rnn/Sentiment_RNN_Solution.ipynb
mit
import numpy as np import tensorflow as tf with open('../sentiment-network/reviews.txt', 'r') as f: reviews = f.read() with open('../sentiment-network/labels.txt', 'r') as f: labels = f.read() reviews[:2000] """ Explanation: Sentiment Analysis with an RNN In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels. The architecture for this network is shown below. <img src="assets/network_diagram.png" width=400px> Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own. From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function. We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label. End of explanation """ from string import punctuation all_text = ''.join([c for c in reviews if c not in punctuation]) reviews = all_text.split('\n') all_text = ' '.join(reviews) words = all_text.split() all_text[:2000] words[:100] """ Explanation: Data preprocessing The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit. You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string. First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words. End of explanation """ from collections import Counter counts = Counter(words) vocab = sorted(counts, key=counts.get, reverse=True) vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)} reviews_ints = [] for each in reviews: reviews_ints.append([vocab_to_int[word] for word in each.split()]) """ Explanation: Encoding the words The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network. Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0. 
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints. End of explanation """ labels = labels.split('\n') labels = np.array([1 if each == 'positive' else 0 for each in labels]) review_lens = Counter([len(x) for x in reviews_ints]) print("Zero-length reviews: {}".format(review_lens[0])) print("Maximum review length: {}".format(max(review_lens))) """ Explanation: Encoding the labels Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1. Exercise: Convert labels from positive and negative to 1 and 0, respectively. End of explanation """ non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0] len(non_zero_idx) reviews_ints[-1] """ Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters. Exercise: First, remove the review with zero length from the reviews_ints list. End of explanation """ reviews_ints = [reviews_ints[ii] for ii in non_zero_idx] labels = np.array([labels[ii] for ii in non_zero_idx]) """ Explanation: Turns out its the final review that has zero length. But that might not always be the case, so let's make it more general. End of explanation """ seq_len = 200 features = np.zeros((len(reviews_ints), seq_len), dtype=int) for i, row in enumerate(reviews_ints): features[i, -len(row):] = np.array(row)[:seq_len] features[:10,:100] """ Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector. This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data. End of explanation """ split_frac = 0.8 split_idx = int(len(features)*0.8) train_x, val_x = features[:split_idx], features[split_idx:] train_y, val_y = labels[:split_idx], labels[split_idx:] test_idx = int(len(val_x)*0.5) val_x, test_x = val_x[:test_idx], val_x[test_idx:] val_y, test_y = val_y[:test_idx], val_y[test_idx:] print("\t\t\tFeature Shapes:") print("Train set: \t\t{}".format(train_x.shape), "\nValidation set: \t{}".format(val_x.shape), "\nTest set: \t\t{}".format(test_x.shape)) """ Explanation: Training, Validation, Test With our data in nice shape, we'll split it into training, validation, and test sets. Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data. 
End of explanation """ lstm_size = 256 lstm_layers = 1 batch_size = 500 learning_rate = 0.001 """ Explanation: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like: Feature Shapes: Train set: (20000, 200) Validation set: (2500, 200) Test set: (2500, 200) Build the graph Here, we'll build the graph. First up, defining the hyperparameters. lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc. lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting. batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory. learning_rate: Learning rate End of explanation """ n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1 # Create the graph object graph = tf.Graph() # Add nodes to the graph with graph.as_default(): inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs') labels_ = tf.placeholder(tf.int32, [None, None], name='labels') keep_prob = tf.placeholder(tf.float32, name='keep_prob') """ Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability. Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder. End of explanation """ # Size of the embedding vectors (number of units in the embedding layer) embed_size = 300 with graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs_) """ Explanation: Embedding Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights. Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer as 200 units, the function will return a tensor with size [batch_size, 200]. End of explanation """ with graph.as_default(): # Your basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) # Getting an initial state of all zeros initial_state = cell.zero_state(batch_size, tf.float32) """ Explanation: LSTM cell <img src="assets/network_diagram.png" width=400px> Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). 
Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph. To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation: tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;) you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like lstm = tf.contrib.rnn.BasicLSTMCell(num_units) to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) Most of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell: cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list. So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell. Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell. Here is a tutorial on building RNNs that will help you out. End of explanation """ with graph.as_default(): outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state) """ Explanation: RNN forward pass <img src="assets/network_diagram.png" width=400px> Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network. outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state) Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer. Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed. End of explanation """ with graph.as_default(): predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid) cost = tf.losses.mean_squared_error(labels_, predictions) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) """ Explanation: Output We only care about the final output, we'll be using that as our sentiment prediction. 
So we need to grab the last output with outputs[:, -1], the calculate the cost from that and labels_. End of explanation """ with graph.as_default(): correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) """ Explanation: Validation accuracy Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass. End of explanation """ def get_batches(x, y, batch_size=100): n_batches = len(x)//batch_size x, y = x[:n_batches*batch_size], y[:n_batches*batch_size] for ii in range(0, len(x), batch_size): yield x[ii:ii+batch_size], y[ii:ii+batch_size] """ Explanation: Batching This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size]. End of explanation """ epochs = 10 with graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer()) iteration = 1 for e in range(epochs): state = sess.run(initial_state) for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 0.5, initial_state: state} loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed) if iteration%5==0: print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss)) if iteration%25==0: val_acc = [] val_state = sess.run(cell.zero_state(batch_size, tf.float32)) for x, y in get_batches(val_x, val_y, batch_size): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: val_state} batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed) val_acc.append(batch_acc) print("Val acc: {:.3f}".format(np.mean(val_acc))) iteration +=1 saver.save(sess, "checkpoints/sentiment.ckpt") """ Explanation: Training Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists. End of explanation """ test_acc = [] with tf.Session(graph=graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) test_state = sess.run(cell.zero_state(batch_size, tf.float32)) for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: test_state} batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed) test_acc.append(batch_acc) print("Test accuracy: {:.3f}".format(np.mean(test_acc))) """ Explanation: Testing End of explanation """
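The two data-shaping steps used above (left-padding every review to a fixed length, then slicing off only full batches) are easy to sanity-check in isolation. Below is a minimal NumPy-only sketch of the same logic on toy data; the tiny sequences, seq_len=5 and batch_size=2 are assumptions chosen purely for illustration, while the network itself uses seq_len=200 and batch_size=500 on the real reviews.

import numpy as np

toy_reviews = [[11, 7, 3], [5], [2, 9, 4, 8, 6, 1], [12, 13]]   # already integer-encoded
toy_labels = np.array([1, 0, 1, 0])
seq_len = 5

# Left-pad short reviews with zeros, truncate long ones to seq_len.
features = np.zeros((len(toy_reviews), seq_len), dtype=int)
for i, row in enumerate(toy_reviews):
    features[i, -len(row):] = np.array(row)[:seq_len]

def get_batches(x, y, batch_size=2):
    n_batches = len(x) // batch_size                  # drop the last partial batch
    x, y = x[:n_batches * batch_size], y[:n_batches * batch_size]
    for ii in range(0, len(x), batch_size):
        yield x[ii:ii + batch_size], y[ii:ii + batch_size]

for bx, by in get_batches(features, toy_labels):
    print(bx, by)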
WaylonWalker/pyDataVizDay
notebooks/Explore Movie Dataset.ipynb
mit
import os import pandas as pd import settings import etl %matplotlib inline %load_ext watermark %watermark -d -t -v -m -p pea,pandas data = etl.Data() data.load() """ Explanation: Explore Movie Dataset End of explanation """ data.movie.columns """ Explanation: Available Columns End of explanation """ data.movie.dtypes data.movie['net'] = data.movie['gross'] - data.movie['budget'] data.movie.sort_values('budget',ascending=False)[['movie_title', 'title_year', 'budget', 'gross', 'net']] """ Explanation: Add Calulations to etl End of explanation """ from iplotter import C3Plotter c3 = C3Plotter() """ Explanation: plotting with IPlotter This example is using my own branch of IPlotter which builds the dictionary from a pandas DataFrame. Much less verbose, but can be done with the current version on PyPI. End of explanation """ plot_data = data.movie.groupby(['title_year']).min()[['gross', 'net', 'budget']].fillna(0) c3.plot(plot_data, zoom=True) country_group = data.movie.groupby('country').mean()['imdb_score'] values = country_group.values.tolist() countries = country_group.index.values.tolist() from iplotter import PlotlyPlotter from IPython.display import HTML plotly = PlotlyPlotter() c3_plotter = C3Plotter() plotly_chart = [{ "type": 'choropleth', "locationmode": 'country names', "locations": countries, "z": values, "zmin": 0, "zmax": max(values), "colorscale": [ [0, 'rgb(242,240,247)'], [0.2, 'rgb(218,218,235)'], [0.4, 'rgb(188,189,220)'], [0.6, 'rgb(158,154,200)'], [0.8, 'rgb(117,107,177)'], [1, 'rgb(84,39,143)'] ], "colorbar": { "title": 'Count', "thickness": 10 }, "marker": { "line": { "color": 'rgb(255,255,255)', "width": 2 } } }] plotly_layout = { "title": 'Movie Counts by Country', "geo": { "scope": 'country names', } } country_plot = plotly.plot(data=plotly_chart) """ Explanation: Timeseries of mean gross End of explanation """ data.movie.set_index(['budget'])['imdb_score'] score_by_budget = data.movie.set_index(['director_facebook_likes'])[['net']] c3.plot(score_by_budget, kind='scatter', zoom=True, ) from ipywidgets import interact, interactive, fixed, interact_manual def f(country): df = data.movie[data.movie['country'] == country] ax = df.groupby(['director_name']).agg({'director_facebook_likes':'sum', 'gross':'sum'}).plot(kind='scatter', x='director_facebook_likes', y='gross') plt.show() import matplotlib.pyplot as plt interact(f, country=data.movie.country.drop_duplicates().dropna().values.tolist()); """ Explanation: Movies by Country {{ country_plot }} End of explanation """
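The interactive pieces above depend on this repo's etl/settings helpers plus iplotter and ipywidgets. As a rough, library-agnostic sketch of the same director-level aggregation, the version below uses only pandas and matplotlib; the CSV path is a placeholder assumption standing in for etl.Data(), and the column names (gross, budget, director_name, director_facebook_likes) are taken from the frame used above.

import pandas as pd
import matplotlib.pyplot as plt

movies = pd.read_csv('movie_metadata.csv')        # placeholder path, stands in for etl.Data().load()
movies['net'] = movies['gross'] - movies['budget']

by_director = (movies.groupby('director_name')
                     .agg({'director_facebook_likes': 'sum', 'gross': 'sum'}))
by_director.plot(kind='scatter', x='director_facebook_likes', y='gross')
plt.show()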
jphall663/GWU_data_mining
02_analytical_data_prep/src/py_part_2_impute.ipynb
apache-2.0
import pandas as pd # pandas for handling mixed data sets import numpy as np # numpy for basic math and matrix operations """ Explanation: License Copyright (C) 2017 J. Patrick Hall, [email protected] Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Simple imputation - Pandas and numpy Imports End of explanation """ scratch_df = pd.DataFrame({'x1': [0, 1, 2, 3, np.nan, 5, 6, 7, np.nan, 8, 9]}) scratch_df """ Explanation: Create sample data set End of explanation """ scratch_df['x1_impute'] = scratch_df.fillna(scratch_df.mean()) scratch_df """ Explanation: Impute End of explanation """
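Mean imputation is only one option. A couple of closely related variations, assumed here as generic pandas idioms rather than as part of this particular lesson, are median imputation (more robust to outliers) and keeping a flag for rows that were originally missing:

import numpy as np
import pandas as pd

scratch_df = pd.DataFrame({'x1': [0, 1, 2, 3, np.nan, 5, 6, 7, np.nan, 8, 9]})

# Median is less sensitive than the mean when the column is skewed or has outliers.
scratch_df['x1_impute_median'] = scratch_df['x1'].fillna(scratch_df['x1'].median())

# A binary indicator preserves the information that a value was missing at all.
scratch_df['x1_was_missing'] = scratch_df['x1'].isnull().astype(int)

print(scratch_df)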
xtr33me/deep-learning
gan_mnist/Intro_to_GANs_Solution.ipynb
mit
%matplotlib inline import pickle as pkl import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data') """ Explanation: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: Pix2Pix CycleGAN A whole list The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator. The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can foold the discriminator. The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow. End of explanation """ def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z') return inputs_real, inputs_z """ Explanation: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. End of explanation """ def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): with tf.variable_scope('generator', reuse=reuse): # Hidden layer h1 = tf.layers.dense(z, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) # Logits and tanh output logits = tf.layers.dense(h1, out_dim, activation=None) out = tf.tanh(logits) return out """ Explanation: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. 
For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement: python with tf.variable_scope('scope_name', reuse=False): # code here Here's more from the TensorFlow documentation to get another look at using tf.variable_scope. Leaky ReLU TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can use take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x: $$ f(x) = max(\alpha * x, x) $$ Tanh Output The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. End of explanation """ def discriminator(x, n_units=128, reuse=False, alpha=0.01): with tf.variable_scope('discriminator', reuse=reuse): # Hidden layer h1 = tf.layers.dense(x, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) logits = tf.layers.dense(h1, 1, activation=None) out = tf.sigmoid(logits) return out, logits """ Explanation: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. End of explanation """ # Size of input image to discriminator input_size = 784 # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Smoothing smooth = 0.1 """ Explanation: Hyperparameters End of explanation """ tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Build the model g_model = generator(input_z, input_size) # g_model is the generator output d_model_real, d_logits_real = discriminator(input_real) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True) """ Explanation: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). 
End of explanation """ # Calculate losses d_loss_real = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth))) d_loss_fake = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_real))) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake))) """ Explanation: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropys, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. End of explanation """ # Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [var for var in t_vars if var.name.startswith('generator')] d_vars = [var for var in t_vars if var.name.startswith('discriminator')] d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars) """ Explanation: Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables to start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to var_list in the minimize method. 
This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. End of explanation """ !mkdir checkpoints batch_size = 100 epochs = 100 samples = [] losses = [] # Only save generator variables saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) """ Explanation: Training End of explanation """ fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() """ Explanation: Training loss Here we'll check out the training losses for the generator and discriminator. End of explanation """ def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') return fig, axes # Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) """ Explanation: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. End of explanation """ _ = view_samples(-1, samples) """ Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make. End of explanation """ rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[::int(len(sample)/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) """ Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! 
End of explanation """ saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) _ = view_samples(0, [gen_samples]) """ Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! End of explanation """
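Two details of this network that are easy to get wrong, the leaky ReLU and the label smoothing used in the discriminator loss, can be checked in isolation with plain NumPy. This is a standalone toy check, separate from the TensorFlow graph above, and the sample values are arbitrary.

import numpy as np

def leaky_relu(x, alpha=0.01):
    # f(x) = max(alpha * x, x): negative inputs keep a small, non-zero slope.
    return np.maximum(alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(leaky_relu(x))                      # values: -0.02, -0.005, 0.0, 0.5, 2.0

smooth = 0.1
real_labels = np.ones(4) * (1 - smooth)   # real images get 0.9 instead of 1.0
fake_labels = np.zeros(4)                 # fake images keep hard 0 labels
print(real_labels, fake_labels)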
d-li14/CS231n-Assignments
assignment2/Dropout.ipynb
gpl-3.0
# As usual, a bit of setup from __future__ import print_function import time import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.fc_net import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.items(): print('%s: ' % k, v.shape) """ Explanation: Dropout Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout. [1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012 End of explanation """ np.random.seed(231) x = np.random.randn(500, 500) + 10 for p in [0.3, 0.6, 0.75]: out, _ = dropout_forward(x, {'mode': 'train', 'p': p}) out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p}) print('Running tests with p = ', p) print('Mean of input: ', x.mean()) print('Mean of train-time output: ', out.mean()) print('Mean of test-time output: ', out_test.mean()) print('Fraction of train-time output set to zero: ', (out == 0).mean()) print('Fraction of test-time output set to zero: ', (out_test == 0).mean()) print() """ Explanation: Dropout forward pass In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes. Once you have done so, run the cell below to test your implementation. End of explanation """ np.random.seed(231) x = np.random.randn(10, 10) + 10 dout = np.random.randn(*x.shape) dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123} out, cache = dropout_forward(x, dropout_param) dx = dropout_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout) print('dx relative error: ', rel_error(dx, dx_num)) """ Explanation: Dropout backward pass In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation. End of explanation """ np.random.seed(231) N, D, H1, H2, C = 2, 15, 20, 30, 10 X = np.random.randn(N, D) y = np.random.randint(C, size=(N,)) for dropout in [0, 0.25, 0.5]: print('Running check with dropout = ', dropout) model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C, weight_scale=5e-2, dtype=np.float64, dropout=dropout, seed=123) loss, grads = model.loss(X, y) print('Initial loss: ', loss) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5) print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))) print() """ Explanation: Fully-connected nets with Dropout In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. 
Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation. End of explanation """ # Train two identical nets, one with dropout and one without np.random.seed(231) num_train = 500 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } solvers = {} dropout_choices = [0, 0.75] for dropout in dropout_choices: model = FullyConnectedNet([500], dropout=dropout) print(dropout) solver = Solver(model, small_data, num_epochs=25, batch_size=100, update_rule='adam', optim_config={ 'learning_rate': 5e-4, }, verbose=True, print_every=100) solver.train() solvers[dropout] = solver # Plot train and validation accuracies of the two models train_accs = [] val_accs = [] for dropout in dropout_choices: solver = solvers[dropout] train_accs.append(solver.train_acc_history[-1]) val_accs.append(solver.val_acc_history[-1]) plt.subplot(3, 1, 1) for dropout in dropout_choices: plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout) plt.title('Train accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.legend(ncol=2, loc='lower right') plt.subplot(3, 1, 2) for dropout in dropout_choices: plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout) plt.title('Val accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.legend(ncol=2, loc='lower right') plt.gcf().set_size_inches(15, 15) plt.show() """ Explanation: Regularization experiment As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time. End of explanation """
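For reference, here is a generic NumPy sketch of inverted dropout, the scheme the layer above is meant to implement. It is only a sketch: the assignment's dropout_param conventions (in particular whether p means the keep probability or the drop probability) differ between versions, so the code below makes the keep probability explicit as p_keep.

import numpy as np

def dropout_forward_sketch(x, p_keep=0.8, train=True, rng=np.random):
    # Train time: zero units with probability (1 - p_keep) and rescale by 1/p_keep
    # so the expected activation matches test time (inverted dropout).
    if train:
        mask = (rng.rand(*x.shape) < p_keep) / p_keep
        return x * mask, mask
    return x, None                        # test time: identity, no mask needed

def dropout_backward_sketch(dout, mask):
    # Gradient flows only through the kept units, with the same 1/p_keep scaling.
    return dout * mask

x = np.random.randn(3, 4)
out, mask = dropout_forward_sketch(x, p_keep=0.8, train=True)
dx = dropout_backward_sketch(np.ones_like(out), mask)
print(out)
print(dx)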
robertchase/rhc
mock.ipynb
mit
import sys sys.path.append('/opt/rhc') import rhc.micro as micro import rhc.async as async import logging logging.basicConfig(level=logging.DEBUG) """ Explanation: Defining mock connections Start with some setup. End of explanation """ p=micro.load_connection([ 'CONNECTION placeholder http://jsonplaceholder.typicode.com', 'RESOURCE document /posts/{id}', ]) async.wait(micro.connection.placeholder.document(1)) """ Explanation: Create a simple resource End of explanation """ class MyMock(object): def document(self, method, path, headers, body): print('method', method) print('path', path) print('headers', headers) print('body', body) return 'foo' micro.connection.placeholder.mock = MyMock() """ Explanation: Define a mock for the resource Here we define an object with a method named document and assign it to the connection's mock attribute. Note: the method name matches the RESOURCE name. End of explanation """ async.wait(micro.connection.placeholder.document(1)) """ Explanation: Call the mocked resource With a mock in place, we can make the same call as earlier, but instead of making a network connection, the document method on the connection's mock attribute is called. End of explanation """ async.wait(micro.connection.placeholder.document(1, test='value')) """ Explanation: What is going on here? The mock is not called until the arguments provided to the partial are evaluated and prepared for the HTTP connection; this ensures that the mock data matches the actual connection data. The mock is called with: the HTTP method the path, with any substititions headers as a dict content as a dict, or None if no content Notes: The return value from the mock will be used as the partial's response. The final line, "foo", is the return from the mock document RESOURCE as printed by the default async.wait callback handler. If the mock throws an exception, the callback will be called with a non-zero result. The handler, setup and wrapper functions are not called. The example uses a class; it could also be a collection of functions in a module. Here is an example of content created from unused kwargs: End of explanation """
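The note above says the mock could also be a collection of functions in a module. Below is a rough sketch of that variant: it reuses the placeholder connection configured earlier in this notebook and assumes, as the text implies but without verifying rhc internals, that any object exposing an attribute named after the RESOURCE works, so a SimpleNamespace stands in for a module here.

from types import SimpleNamespace

def document(method, path, headers, body):
    # Plain function with the same (method, path, headers, body) arguments shown above.
    print('mocked', method, path)
    return {'id': path.rsplit('/', 1)[-1], 'title': 'mocked document'}

micro.connection.placeholder.mock = SimpleNamespace(document=document)
async.wait(micro.connection.placeholder.document(1))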
woters/ds101
1-pandas.ipynb
mit
import pandas as pd print("Pandas version: {}".format(pd.__version__)) # опции отображения pd.options.display.max_rows = 6 pd.options.display.max_columns = 6 pd.options.display.width = 100 """ Explanation: 1 - Введение в Pandas Pandas это очень мощная библиотека с множеством полезных функций, ею можно пользаться много лет так и не использовав весь ее потенциал. Цель воркшопа ознакомить вас основами, это: Чтение и запись данных. Пониманимание разных типов данных в Pandas. Работа с текстовыми данными и timeseries. Выбор данных. Группировка. Dataset Мы будем использовать датасет Amazon Product с отзывами о продуктах на Амазоне, его собрал Julian McAuley. <br/> Датасет выглядит таким образом: reviewerID - ID of the reviewer, e.g. A2SUAM1J3GNN3B asin - ID of the product, e.g. 0000013714 reviewerName - name of the reviewer helpful - helpfulness rating of the review, e.g. 2/3 reviewText - text of the review overall - rating of the product summary - summary of the review unixReviewTime - time of the review (unix time) reviewTime - time of the review (raw) Импорт pandas End of explanation """ import gzip # датасет на 47 мегабайт, мы возьмем только 10 review_lines = gzip.open('data/reviews/reviews_Clothing_Shoes_and_Jewelry_5.json.gz', 'rt').readlines(10*1024*1024) len(review_lines) """ Explanation: Чтение и запись данных End of explanation """ import json df = pd.DataFrame(list(map(json.loads, review_lines))) """ Explanation: Теперь мы получили list с текстовыми строками, нам нужно преобразовать их в dict и передать в DataFrame. <br/> Здесь json.loads - преобразует текстовые строки в dict. End of explanation """ df """ Explanation: Теперь мы можем взглянуть, что собой представляют наши данные. DataFrame позволяет их вывести в такой наглядной таблице. End of explanation """ df.head() """ Explanation: Данные вначале нашего df End of explanation """ df.tail() df.describe() """ Explanation: Данные вконце df End of explanation """ # ваш код здесь, используйте tab для того, чтобы увидеть список доступных для вызова функций """ Explanation: Упражнение: Сохраните и загрузите датасет в разные форматы (CSV, JSON...) End of explanation """ df.info() df['unixReviewTime'] = pd.to_datetime(df['unixReviewTime'], unit='s') pd.to_datetime? """ Explanation: http://pandas.pydata.org/pandas-docs/stable/io.html Pandas I/O API это набор высокоуровневых функций, которые можно вызвать как pd.read_csv(). to_csv to_excel to_hdf to_sql to_json ... read_csv read_excel read_hdf read_sql read_json ... Типы данных Pandas и их преобразование df.info позволяет нам получить сводную информацию про df: сколько в нем строк, названия и типы столбцов, сколько он занимает памяти... <br/> Мы видим, что столбец unixReviewTime (время, когда ревью было оставленно) имеет тип int64, давайте преобразуем его в datetime64 для более удобной работы с временными данными. End of explanation """ df.info() """ Explanation: Теперь мы видим, что столбец был преобразован в нужный нам тип данных. End of explanation """ df.summary """ Explanation: Работа с текстовыми данными. http://pandas.pydata.org/pandas-docs/stable/text.html .str accessor .str accessor - позволяет вызывать методы для работы с текстовыми строками для всего столбца сразу. 
<br/><br/> Это очень мощная штука, так как она позволяет легко создавать новые features, которые могут как-то описывать ваши данные.<br/> End of explanation """ df.summary.str.len() """ Explanation: Таким простым вызовом мы получаем новый столбец с длинной строки описания товара, который может быть хорошим индикатором для вашей модели. End of explanation """ # Your code here """ Explanation: Упражнение: Попробуйте использовать разные строковые методы: lower(), upper(), strip()... http://pandas.pydata.org/pandas-docs/stable/text.html#method-summary End of explanation """ df.summary.str.lower() """ Explanation: Нижний регистр. End of explanation """ df.summary.str.upper() """ Explanation: Верхний регистр. End of explanation """ pattern = 'durable' df.summary.str.contains(pattern) """ Explanation: Поиск строк, которые содержат определенную подстроку или regex End of explanation """ df.unixReviewTime.dt.dayofweek """ Explanation: Работа с timeseries .dt accessor Также как и .str, .dt позволяет вызывать методы для работы с временными данными для всего столбца. День недели End of explanation """ df.unixReviewTime.dt.weekofyear """ Explanation: Неделя в году End of explanation """ # ваш код """ Explanation: Упражнение: Получите столбец с кварталом года, в котором был оставлен отзыв. (qua...) End of explanation """ df.overall < 5 """ Explanation: Выбор данных DataFrame имеет очень мощный функционал для поиска необходимых данных. <br/> Таким простым вызовом мы можем выбрать индексы всех строк отзывов, у которых оценка ниже 5. End of explanation """ df[df.overall < 5] """ Explanation: Передав их как ключ, мы получим сами строки. End of explanation """ df.loc[df.overall < 5, ['overall', 'reviewText']] """ Explanation: Полученные индексы мы можем передать в метод loc, вторым аргументом он принимает список столбцов, которые мы хотим видеть. End of explanation """ df.loc[((df.overall == 5) & (df.reviewText.str.contains('awesome'))) | ((df.overall == 1) & (df.reviewText.str.contains('terrible'))), ['overall', 'reviewText']] """ Explanation: Также мы можем передать более сложные условия для выборки, например, здесь мы выбираем отзывы с оценкой 5, содержащие слово awesome и отзывы с оценкой 1, содержащие слово terrible. End of explanation """ # Your code here """ Explanation: Упражнение: Выберите строки с оценкой 5, которые были написанны во вторник и содержат слово love в summary. End of explanation """ # возвращает столбец, содержащий количество уникальных значений asin products = df.asin.value_counts() products products[0:3].index """ Explanation: isin isin работает по такому принцип: мы ему передаем набор значений, а он выбирает строки, которые им соответствуют. End of explanation """ df[df.asin.isin(products[0:3].index)] # df[df.asin.isin(['B0000C321X', 'B0001ZNZJM', 'B00012O12A'])] - даст тот же результат """ Explanation: Выбираем строки, которые содержат топ 3 популярные товары. 
End of explanation """ # ваш код days = df.unixReviewTime.value_counts() days df[df.unixReviewTime.isin(days[0:1].index)] """ Explanation: Упражнение: Выберите отзывы, которые были оставленны в дни, когда было оставленно больше всего отзывов :D End of explanation """ df.groupby('asin')['reviewText'].agg('count').sort_values() """ Explanation: Группировка http://pandas.pydata.org/pandas-docs/stable/groupby.html groupby работает по такому принципу: - Таблица делится на группы - К каждой группе применяется определенная функция - Результаты объединяются df.groupby( grouper ).agg('mean') End of explanation """ # ваш код """ Explanation: Упражнение: Вычислите среднюю оценку по каждому уникальному продукту. End of explanation """ # ваш код """ Explanation: Упражение: Вычислите среднюю оценку, которую оставил каждый уникальный пользователь. End of explanation """ df.groupby([pd.Grouper(key='unixReviewTime',freq='D')])['reviewerID'].count() df.groupby([pd.Grouper(key='unixReviewTime',freq='M')])['reviewerID'].count() """ Explanation: pd.Grouper End of explanation """ %matplotlib inline import seaborn as sns; sns.set() df.groupby([pd.Grouper(key='unixReviewTime',freq='A')])['reviewerID'].count().plot(figsize=(6,6)) """ Explanation: Plotting End of explanation """ # Your code here """ Explanation: EXERCISE: Plot the number of reviews timeseries by month, year End of explanation """ # Your code here import matplotlib.pyplot as plt by_weekday = df.groupby([df.unixReviewTime.dt.year, df.unixReviewTime.dt.dayofweek]).mean() by_weekday.columns.name = None # remove label for plot fig, ax = plt.subplots(1, 2, figsize=(16, 6), sharey=True) by_weekday.loc[2013].plot(title='Average Reviews Rating by Day of Week (2013)', ax=ax[0]); by_weekday.loc[2014].plot(title='Average Reviews Rating by Day of Week (2014)', ax=ax[1]); for axi in ax: axi.set_xticklabels(['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun']) """ Explanation: EXERCISE: Draw two plots to compare average review rating per day of the week between 2013 and 2014 End of explanation """ import matplotlib.pyplot as plt by_month = df.groupby([df.unixReviewTime.dt.year, df.unixReviewTime.dt.day])['reviewerID'].count() fig, ax = plt.subplots(1, 2, figsize=(16, 6), sharey=True) by_month.loc[2012].plot(title='Average Reviews by Month (2012)', ax=ax[0]); by_month.loc[2013].plot(title='Average Reviews by Month (2013)', ax=ax[1]); """ Explanation: EXERCISE: Draw two plots to compare number of reviews per day of the month between 2012 and 2013 End of explanation """
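To see what pd.Grouper is doing without the full review dataset, here is a small self-contained sketch on synthetic timestamps; the random data is purely illustrative and the column names simply mirror the ones used above.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
toy = pd.DataFrame({
    'unixReviewTime': pd.to_datetime(rng.integers(1_300_000_000, 1_400_000_000, size=100), unit='s'),
    'overall': rng.integers(1, 6, size=100),
})

# pd.Grouper buckets the datetime column by the requested frequency ('M' = month end).
monthly_counts = toy.groupby(pd.Grouper(key='unixReviewTime', freq='M'))['overall'].count()
monthly_mean = toy.groupby(pd.Grouper(key='unixReviewTime', freq='M'))['overall'].mean()
print(monthly_counts)
print(monthly_mean)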
thehackerwithin/berkeley
code_examples/spring17_survey/survey.ipynb
bsd-3-clause
import pandas as pd import matplotlib.pyplot as plt %matplotlib inline """ Explanation: The Hacker Within Spring 2017 survey by R. Stuart Geiger, freely licensed CC-BY 4.0, MIT license Importing and processing data Importing libraries End of explanation """ df = pd.read_csv("survey.tsv",sep="\t") df[0:4] """ Explanation: Importing data and previewing End of explanation """ df_topics = df df_topics = df_topics.drop(['opt_out', 'Skill level', 'Personal experience', 'Presentation style'], axis=1) df_meta = df df_meta = df[['Skill level', 'Personal experience', 'Presentation style']] """ Explanation: Creating two dataframes: df_topics for interest/experience about topics and df_meta for questions about THW End of explanation """ topic_interest = {} topic_teaching = {} for topic in df_topics: topic_interest[topic] = 0 topic_teaching[topic] = 0 for row in df_topics[topic]: # if row contains only value 1, increment interest dict by 1 if str(row).find('1')>=0 and str(row).find('2')==-1: topic_interest[topic] += 1 # if row contains value 2, increment interest dict by 3 if str(row).find('2')>=0: topic_interest[topic] += 3 if str(row).find('3')>=0: topic_teaching[topic] += 1 """ Explanation: Topic interest Each topic (e.g. Python, R, GitHub) has one cell, with a list based on the items checked. If someone clicked "I want this at THW", there will be a 1. If someone clicked "I really want this at THW," there will be a 2. If someone clicked "I know something about this..." there will be a 3. These are mutually independent -- if someone clicked all of them, the value would be "1, 2, 3" and so on. Assumptions for calculating interest: If someone clicked that they just wanted a topic, add 1 to the topic's score. If someone clicked that they really wanted it, add 3 to the topic's score. If they clicked both, just add 3, not 4. 
End of explanation """ topic_interest_df = pd.DataFrame.from_dict(topic_interest, orient="index") topic_interest_df.sort_values([0], ascending=False) topic_interest_df = topic_interest_df.sort_values([0], ascending=True) topic_interest_df.plot(figsize=[8,14], kind='barh', fontsize=20) """ Explanation: Results End of explanation """ topic_teaching_df = pd.DataFrame.from_dict(topic_teaching, orient="index") topic_teaching_df = topic_teaching_df[topic_teaching_df[0] != 0] topic_teaching_df.sort_values([0], ascending=False) topic_teaching_df = topic_teaching_df.sort_values([0], ascending=True) topic_teaching_df.plot(figsize=[8,10], kind='barh', fontsize=20) """ Explanation: Topic expertise End of explanation """ df_meta['Personal experience'].replace([1, 2, 3], ['1: Beginner', '2: Intermediate', '3: Advanced'], inplace=True) df_meta['Skill level'].replace([1, 2, 3], ['1: Beginner', '2: Intermediate', '3: Advanced'], inplace=True) df_meta['Presentation style'].replace([1,2,3,4,5], ["1: 100% presentation / 0% hackathon", "2: 75% presentation / 25% hackathon", "3: 50% presentation / 50% hackathon", "4: 25% presentation / 75% hackathon", "5: 100% hackathon"], inplace = True) df_meta = df_meta.dropna() df_meta[0:4] """ Explanation: Meta questions about THW End of explanation """ pe_df = df_meta['Personal experience'].value_counts(sort=False).sort_index(ascending=False) pe_plot = pe_df.plot(kind='barh', fontsize=20, figsize=[8,4]) plt.title("What is your personal experience with scientific computing?", size=20) """ Explanation: Personal experience with scientific computing End of explanation """ skill_df = df_meta['Skill level'].value_counts(sort=False).sort_values(ascending=False) skill_plot = skill_df.plot(kind='barh', fontsize=20, figsize=[8,4]) plt.title("What skill level should we aim for?", size=20) """ Explanation: What skill level should we aim for? End of explanation """ style_df = df_meta['Presentation style'].value_counts(sort=False).sort_index(ascending=False) style_plot = style_df.plot(kind='barh', fontsize=20, figsize=[8,4]) plt.title("Session format", size=20) """ Explanation: What should our sessions look like? End of explanation """
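The scoring rules described above (a bare '1' adds one point, any '2' adds three, and both together still add three, never four) can be factored into a small helper, which makes them easy to test on made-up cells before applying them to the survey columns. The toy strings below are assumptions for illustration only.

import pandas as pd

def interest_score(cell):
    s = str(cell)
    if '2' in s:          # "I really want this" dominates: worth 3
        return 3
    if '1' in s:          # plain "I want this"
        return 1
    return 0

toy = pd.Series(['1', '1, 2', '3', '1, 2, 3', float('nan')])
print(toy.apply(interest_score).sum())    # 1 + 3 + 0 + 3 + 0 = 7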
relopezbriega/mi-python-blog
content/notebooks/MyPy-Python-Tipado-estatico.ipynb
gpl-2.0
def saludo(nombre): return 'Hola {}'.format(nombre) """ Explanation: MyPy - Python y un sistema de tipado estático Esta notebook fue creada originalmente como un blog post por Raúl E. López Briega en Mi blog sobre Python. El contenido esta bajo la licencia BSD. Una de las razones por la que solemos amar a Python, es por su sistema de tipado dinámico, el cual lo convierte en un lenguaje de programación sumamente flexible y fácil de aprender; al no tener que preocuparnos por definir los tipos de los objetos, ya que Python los infiere por nosotros, podemos escribir programas en una forma mucho más productiva, sin verbosidad y utilizando menos líneas de código. Ahora bien, este sistema de tipado dinámico también puede convertirse en una pesadilla en proyectos de gran escala, requiriendo varias horas de pruebas unitarias para evitar que los objetos adquieran un tipo de datos que no deberían y complicando el su mantenimiento o futura refactorización. Por ejemplo, en un código tan trivial como el siguiente: End of explanation """ print (saludo('Raul')) print (saludo(1)) """ Explanation: Esta simple función nos va a devolver el texto 'Hola' seguido del nombre que le ingresemos; pero como no contiene ningún control sobre el tipo de datos que pude admitir la variable nombre, los siguientes casos serían igualmente válidos: End of explanation """ def saludo(nombre): if type(nombre) != str: return "Error: el argumento debe ser del tipo String(str)" return 'Hola {}'.format(nombre) print(saludo('Raul')) print(saludo(1)) """ Explanation: En cambio, si pusiéramos un control sobre el tipo de datos que admitiera la variable nombre, para que siempre fuera un string, entonces el segundo caso ya no sería válido y lo podríamos detectar fácilmente antes de que nuestro programa se llegara a ejecutar. Obviamente, para poder detectar el segundo error y que nuestra función saludo solo admita una variable del tipo string como argumento, podríamos reescribir nuestra función, agregando un control del tipo de datos de la siguiente manera: End of explanation """ %%writefile typeTest.py import typing def saludo(nombre: str) -> str: return 'Hola {}'.format(nombre) print(saludo('Raul')) print(saludo(1)) """ Explanation: Pero una solución más sencilla a tener que ir escribiendo condiciones para controlar los tipos de las variables o de las funciones es utilizar MyPy MyPy MyPy es un proyecto que busca combinar los beneficios de un sistema de tipado dinámico con los de uno de tipado estático. Su meta es tener el poder y la expresividad de Python combinada con los beneficios que otorga el chequeo de los tipos de datos al momento de la compilación. Algunos de los beneficios que proporciona utilizar MyPy son: Chequeo de tipos al momento de la compilación: Un sistema de tipado estático hace más fácil detectar errores y con menos esfuerzo de debugging. Facilita el mantenimiento: Las declaraciones explícitas de tipos actúan como documentación, haciendo que nuestro código sea más fácil de entender y de modificar sin introducir nuevos errores. Permite crecer nuestro programa desde un tipado dinámico hacia uno estático: Nos permite comenzar desarrollando nuestros programas con un tipado dinámico y a mediada que el mismo vaya madurando podríamos modificarlo hacia un tipado estático de forma muy sencilla. De esta manera, podríamos beneficiarnos no solo de la comodidad de tipado dinámico en el desarrollo inicial, sino también aprovecharnos de los beneficios de los tipos estáticos cuando el código crece en tamaño y complejidad. 
Tipos de datos Estos son algunos de los tipos de datos más comunes que podemos encontrar en Python: int: Número entero de tamaño arbitrario float: Número flotante. bool: Valor booleano (True o False) str: Unicode string bytes: 8-bit string object: Clase base del que derivan todos los objecto en Python. List[str]: lista de objetos del tipo string. Dict[str, int]: Diccionario de string hacia enteros Iterable[int]: Objeto iterable que contiene solo enteros. Sequence[bool]: Secuencia de valores booleanos Any: Admite cualquier valor. (tipado dinámico) El tipo Any y los constructores List, Dict, Iterable y Sequence están definidos en el modulo typing que viene junto con MyPy. Ejemplos Por ejemplo, volviendo al ejemplo del comienzo, podríamos reescribir la función saludo utilizando MyPy de forma tal que los tipos de datos sean explícitos y puedan ser chequeados al momento de la compilación. End of explanation """ !mypy typeTest.py """ Explanation: En este ejemplo estoy creando un pequeño script y guardando en un archivo con el nombre 'typeTest.py', en la primer línea del script estoy importando la librería typing que viene con MyPy y es la que nos agrega la funcionalidad del chequeo de los tipos de datos. Luego simplemente ejecutamos este script utilizando el interprete de MyPy y podemos ver que nos va a detectar el error de tipo de datos en la segunda llamada a la función saludo. End of explanation """ !python3 typeTest.py """ Explanation: Si ejecutáramos este mismo script utilizando el interprete de Python, veremos que obtendremos los mismos resultados que al comienzo de este notebook; lo que quiere decir, que la sintaxis que utilizamos al reescribir nuestra función saludo es código Python perfectamente válido! End of explanation """ %%writefile typeTest.py from typing import Undefined, List, Dict # Declaro los tipos de las variables texto = Undefined(str) entero = Undefined(int) lista_enteros = List[int]() dic_str_int = Dict[str, int]() # Asigno valores a las variables. texto = 'Raul' entero = 13 lista_enteros = [1, 2, 3, 4] dic_str_int = {'raul': 1, 'ezequiel': 2} # Intento asignar valores de otro tipo. texto = 1 entero = 'raul' lista_enteros = ['raul', 1, '2'] dic_str_int = {1: 'raul'} !mypy typeTest.py """ Explanation: Tipado explicito para variables y colecciones En el ejemplo anterior, vimos como es la sintaxis para asignarle un tipo de datos a una función, la cual utiliza la sintaxis de Python3, annotations. Si quisiéramos asignarle un tipo a una variable, podríamos utilizar la función Undefined que viene junto con MyPy. End of explanation """ %%writefile typeTest.py from typing import List, Dict # Declaro los tipos de las variables texto = '' # type: str entero = 0 # type: int lista_enteros = [] # type: List[int] dic_str_int = {} # type: Dict[str, int] # Asigno valores a las variables. texto = 'Raul' entero = 13 lista_enteros = [1, 2, 3, 4] dic_str_int = {'raul': 1, 'ezequiel': 2} # Intento asignar valores de otro tipo. texto = 1 entero = 'raul' lista_enteros = ['raul', 1, '2'] dic_str_int = {1: 'raul'} !mypy typeTest.py """ Explanation: Otra alternativa que nos ofrece MyPy para asignar un tipo de datos a las variables, es utilizar comentarios; así, el ejemplo anterior lo podríamos reescribir de la siguiente forma, obteniendo el mismo resultado: End of explanation """
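On current versions of Python (3.6+) and mypy, the same variable typing can be written with inline annotations (PEP 526) instead of the older Undefined() and type-comment forms shown above. This is a sketch of that modern equivalent rather than anything from the original post:

from typing import Dict, List

texto: str = ''
entero: int = 0
lista_enteros: List[int] = []
dic_str_int: Dict[str, int] = {}

def saludo(nombre: str) -> str:
    return 'Hola {}'.format(nombre)

saludo('Raul')
saludo(1)   # mypy flags this call: an int is not compatible with the declared str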
benwaugh/NuffieldProject2016
notebooks/InvariantMassCalcExample.ipynb
mit
from ROOT import TLorentzVector """ Explanation: How to calculate the invariant mass of a pair of particles We will use the TLorentzVector class from ROOT, which has useful functions for converting coordinates, adding together four-momenta, and calculating the invariant mass. End of explanation """ pt1 = 25.0 eta1 = 1.0 phi1 = 0.25 energy1 = 50.0 pt2 = 29.0 eta2 = 1.5 phi2 = -0.35 energy2 = 62.0 """ Explanation: Some arbitrary numbers for illustration End of explanation """ p1 = TLorentzVector() p1.SetPtEtaPhiE(pt1, eta1, phi1, energy1) p2 = TLorentzVector() p2.SetPtEtaPhiE(pt2, eta2, phi2, energy2) """ Explanation: Put these numbers into two TLorentzVector objects representing the four-momenta of the two particles. End of explanation """ total = p1 + p2 m = total.M() print(m) """ Explanation: Add together the four-momenta of the two particles, and calculate the invariant mass of the combined system: End of explanation """
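As a cross-check that does not need ROOT at all, the same invariant mass can be computed by hand from the standard conversion px = pt*cos(phi), py = pt*sin(phi), pz = pt*sinh(eta) and m^2 = E^2 - (px^2 + py^2 + pz^2). Only the math module is used, and the numbers are the same arbitrary illustration values as above.

import math

def to_cartesian(pt, eta, phi, energy):
    # Convert (pt, eta, phi, E) into Cartesian four-vector components.
    px = pt * math.cos(phi)
    py = pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    return px, py, pz, energy

def invariant_mass(p1, p2):
    px, py, pz = p1[0] + p2[0], p1[1] + p2[1], p1[2] + p2[2]
    e = p1[3] + p2[3]
    m2 = e**2 - (px**2 + py**2 + pz**2)
    return math.sqrt(max(m2, 0.0))   # guard against tiny negative round-off

p1 = to_cartesian(25.0, 1.0, 0.25, 50.0)
p2 = to_cartesian(29.0, 1.5, -0.35, 62.0)
print(invariant_mass(p1, p2))        # should agree with total.M() above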
nmih/ssbio
docs/notebooks/SeqProp - Protein Sequence Properties.ipynb
mit
import sys import logging import os.path as op # Import the SeqProp class from ssbio.protein.sequence.seqprop import SeqProp # Printing multiple outputs per cell from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" """ Explanation: SeqProp - Protein Sequence Properties This notebook gives an overview the available calculations for properties of a single protein sequence. <div class="alert alert-info"> **Input:** Amino acid sequence </div> <div class="alert alert-info"> **Output:** Amino acid sequence properties </div> Imports End of explanation """ # Create logger logger = logging.getLogger() logger.setLevel(logging.INFO) # SET YOUR LOGGING LEVEL HERE # # Other logger stuff for Jupyter notebooks handler = logging.StreamHandler(sys.stderr) formatter = logging.Formatter('[%(asctime)s] [%(name)s] %(levelname)s: %(message)s', datefmt="%Y-%m-%d %H:%M") handler.setFormatter(formatter) logger.handlers = [handler] """ Explanation: Logging Set the logging level in logger.setLevel(logging.&lt;LEVEL_HERE&gt;) to specify how verbose you want the pipeline to be. Debug is most verbose. CRITICAL Only really important messages shown ERROR Major errors WARNING Warnings that don't affect running of the pipeline INFO (default) Info such as the number of structures mapped per gene DEBUG Really detailed information that will print out a lot of stuff End of explanation """ # SET IDS HERE PROTEIN_ID = 'YIAJ_ECOLI' PROTEIN_SEQ = 'MGKEVMGKKENEMAQEKERPAGSQSLFRGLMLIEILSNYPNGCPLAHLSELAGLNKSTVHRLLQGLQSCGYVTTAPAAGSYRLTTKFIAVGQKALSSLNIIHIAAPHLEALNIATGETINFSSREDDHAILIYKLEPTTGMLRTRAYIGQHMPLYCSAMGKIYMAFGHPDYVKSYWESHQHEIQPLTRNTITELPAMFDELAHIRESGAAMDREENELGVSCIAVPVFDIHGRVPYAVSISLSTSRLKQVGEKNLLKPLRETAQAISNELGFTVRDDLGAIT' # Create the SeqProp object my_seq = SeqProp(id=PROTEIN_ID, seq=PROTEIN_SEQ) # Write temporary FASTA file for property calculations that require FASTA file as input import tempfile ROOT_DIR = tempfile.gettempdir() my_seq.write_fasta_file(outfile=op.join(ROOT_DIR, 'tmp.fasta'), force_rerun=True) my_seq.sequence_path """ Explanation: Initialization of the project Set these two things: PROTEIN_ID Your protein ID PROTEIN_SEQ Your protein sequence End of explanation """ # Global properties using the Biopython ProteinAnalysis module my_seq.get_biopython_pepstats() {k:v for k,v in my_seq.annotations.items() if k.endswith('-biop')} # Global properties from the EMBOSS pepstats program my_seq.get_emboss_pepstats() {k:v for k,v in my_seq.annotations.items() if k.endswith('-pepstats')} # Aggregation propensity - the predicted number of aggregation-prone segments on an unfolded protein sequence my_seq.get_aggregation_propensity(outdir=ROOT_DIR, email='[email protected]', password='ssbiotest', cutoff_v=5, cutoff_n=5, run_amylmuts=False) {k:v for k,v in my_seq.annotations.items() if k.endswith('-amylpred')} # Kinetic folding rate - the predicted rate of folding for this protein sequence secstruct_class = 'mixed' my_seq.get_kinetic_folding_rate(secstruct=secstruct_class) {k:v for k,v in my_seq.annotations.items() if k.endswith('-foldrate')} # Thermostability - prediction of free energy of unfolding dG from protein sequence # Stores (dG, Keq) my_seq.get_thermostability(at_temp=32.0) my_seq.get_thermostability(at_temp=37.0) my_seq.get_thermostability(at_temp=42.0) {k:v for k,v in my_seq.annotations.items() if k.startswith('thermostability_')} """ Explanation: Computing and storing protein properties A SeqProp object is simply an extension of the Biopython SeqRecord object. 
Global properties which describe or summarize the entire protein sequence are stored in the annotations attribute, while local residue-specific properties are stored in the letter_annotations attribute. Basic global properties End of explanation """
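If you only want a quick look at a few of the Biopython-derived global properties without going through ssbio, the underlying Bio.SeqUtils.ProtParam module can be used directly on the same sequence string. This is just a sketch of that idea; it assumes Biopython is installed, and the exact set of annotations that ssbio stores under the '-biop' keys may differ.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

analysis = ProteinAnalysis(PROTEIN_SEQ)

# A few global, sequence-only properties
print(analysis.molecular_weight())
print(analysis.isoelectric_point())
print(analysis.instability_index())
print(analysis.gravy())  # grand average of hydropathy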
davidrpugh/pyAM
examples/negative-assortative-matching.ipynb
mit
# define some workers skill x, loc1, mu1, sigma1 = sym.var('x, loc1, mu1, sigma1') skill_cdf = 0.5 + 0.5 * sym.erf((sym.log(x - loc1) - mu1) / sym.sqrt(2 * sigma1**2)) skill_params = {'loc1': 1e0, 'mu1': 0.0, 'sigma1': 1.0} workers = pyam.Input(var=x, cdf=skill_cdf, params=skill_params, bounds=(1.2, 1e1), # guesses for the alpha and (1 - alpha) quantiles! alpha=0.005, measure=1.0 ) # define some firms y, loc2, mu2, sigma2 = sym.var('y, loc2, mu2, sigma2') productivity_cdf = 0.5 + 0.5 * sym.erf((sym.log(y - loc2) - mu2) / sym.sqrt(2 * sigma2**2)) productivity_params = {'loc2': 1e0, 'mu2': 0.0, 'sigma2': 1.0} firms = pyam.Input(var=y, cdf=productivity_cdf, params=productivity_params, bounds=(1.2, 1e1), # guesses for the alpha and (1 - alpha) quantiles! alpha=0.005, measure=1.0 ) """ Explanation: Defining inputs Need to define some heterogenous factors of production... End of explanation """ xs = np.linspace(workers.lower, workers.upper, 1e4) plt.plot(xs, workers.evaluate_pdf(xs)) plt.xlabel('Worker skill, $x$', fontsize=20) plt.show() """ Explanation: Note that we are shifting the distributions of worker skill and firm productivity to the right by 1.0 in order to try and avoid issues with having workers (firms) with near zero skill (productivity). End of explanation """ # define symbolic expression for CES between x and y omega_A, sigma_A = sym.var('omega_A, sigma_A') A = ((omega_A * x**((sigma_A - 1) / sigma_A) + (1 - omega_A) * y**((sigma_A - 1) / sigma_A))**(sigma_A / (sigma_A - 1))) # define symbolic expression for CES between x and y r, l, omega_B, sigma_B = sym.var('r, l, omega_B, sigma_B') B = ((omega_B * r**((sigma_B - 1) / sigma_B) + (1 - omega_B) * l**((sigma_B - 1) / sigma_B))**(sigma_B / (sigma_B - 1))) F = A * B # negative assortativity requires that sigma_A * sigma_B > 1 F_params = {'omega_A':0.25, 'omega_B':0.5, 'sigma_A':2.0, 'sigma_B':1.0 } """ Explanation: Defining a production process Next need to define some production process... End of explanation """ problem = pyam.AssortativeMatchingProblem(assortativity='negative', input1=workers, input2=firms, F=sym.limit(F, sigma_B, 1), F_params=F_params) """ Explanation: Define a boundary value problem End of explanation """ solver = pycollocation.OrthogonalPolynomialSolver(problem) """ Explanation: Pick some collocation solver End of explanation """ initial_guess = pyam.OrthogonalPolynomialInitialGuess(solver) initial_polys = initial_guess.compute_initial_guess("Chebyshev", degrees={'mu': 40, 'theta': 70}, f=lambda x, alpha: x**alpha, alpha=1.0) # quickly plot the initial conditions xs = np.linspace(workers.lower, workers.upper, 1000) plt.plot(xs, initial_polys['mu'](xs)) plt.plot(xs, initial_polys['theta'](xs)) plt.grid('on') """ Explanation: Compute some decent initial guess Currently I guess that $\mu(x)$ is has the form... $$ \hat{\mu}(x) = \beta_0 + \beta_1 f(x) $$ (i.e., a linear translation) of some function $f$. Using my $\hat{\mu}(x)$, I can then back out a guess for $\theta(x)$ implied by the model... $$ \hat{\theta}(x) = \frac{H(x)}{\hat{\mu}'(x)} $$ End of explanation """ domain = [workers.lower, workers.upper] initial_coefs = {'mu': initial_polys['mu'].coef, 'theta': initial_polys['theta'].coef} solver.solve(kind="Chebyshev", coefs_dict=initial_coefs, domain=domain, method='hybr') solver.result.success """ Explanation: Solve the model! 
End of explanation """ viz = pyam.Visualizer(solver) viz.interpolation_knots = np.linspace(workers.lower, workers.upper, 1000) viz.residuals.plot() plt.show() viz.normalized_residuals[['mu', 'theta']].plot(logy=True) plt.show() viz.solution.tail() viz.solution[['mu', 'theta']].plot(subplots=True) plt.show() viz.solution[['Fxy', 'Fyl']].plot() plt.show() """ Explanation: Plot some results End of explanation """ viz.solution[['factor_payment_1', 'factor_payment_2']].plot(subplots=True) plt.show() """ Explanation: Plot factor payments Note the factor_payment_1 is wages and factor_payment_2 is profits... End of explanation """ fig, axes = plt.subplots(1, 2, sharey=True) axes[0].scatter(viz.solution.factor_payment_1, viz.solution.theta, alpha=0.5, edgecolor='none') axes[0].set_ylim(0, 1.05 * viz.solution.theta.max()) axes[0].set_xlabel('Wages, $w$') axes[0].set_ylabel(r'Firm size, $\theta$') axes[1].scatter(viz.solution.factor_payment_2, viz.solution.theta, alpha=0.5, edgecolor='none') axes[1].set_xlabel(r'Profits, $\pi$') plt.show() # to get correlation just use pandas! viz.solution.corr() # or a subset viz.solution[['theta', 'factor_payment_1']].corr() # or actual values! viz.solution.corr().loc['theta']['factor_payment_1'] """ Explanation: Plot firm size against wages and profits End of explanation """ fig, axes = plt.subplots(1, 3) theta_pdf = viz.compute_pdf('theta', normalize=True) theta_pdf.plot(ax=axes[0]) axes[0].set_xlabel(r'Firm size, $\theta$') axes[0].set_title(r'pdf') theta_cdf = viz.compute_cdf(theta_pdf) theta_cdf.plot(ax=axes[1]) axes[1].set_title(r'cdf') axes[1].set_xlabel(r'Firm size, $\theta$') theta_sf = viz.compute_sf(theta_cdf) theta_sf.plot(ax=axes[2]) axes[2].set_title(r'sf') axes[2].set_xlabel(r'Firm size, $\theta$') plt.tight_layout() plt.show() """ Explanation: Plot the density for firm size As you can see, the theta function is hump-shaped. Nothing special, but when calculating the pdf some arrangements have to be done for this: sort the thetas preserving the order (so we can relate them to their xs) and then use carefully the right x for calculating the pdf. The principle of Philipp's trick is: $pdf_x(x_i)$ can be interpreted as number of workers with ability x. $\theta_i$ is the size of the firms that employs workers of kind $x_i$. As all firms that match with workers type $x_i$ choose the same firm size, $pdf_x(x_i)/\theta_i$ is the number of firms of size $\theta_i$. Say there are 100 workers with ability $x_i$, and their associated firm size $\theta_i$ is 2. 
Then there are $100/2 = 50$ $ \theta_i$ firms End of explanation """ fig, axes = plt.subplots(1, 3) factor_payment_1_pdf = viz.compute_pdf('factor_payment_1', normalize=True) factor_payment_1_pdf.plot(ax=axes[0]) axes[0].set_title(r'pdf') factor_payment_1_cdf = viz.compute_cdf(factor_payment_1_pdf) factor_payment_1_cdf.plot(ax=axes[1]) axes[1].set_title(r'cdf') factor_payment_1_sf = viz.compute_sf(factor_payment_1_cdf) factor_payment_1_sf.plot(ax=axes[2]) axes[2].set_title(r'sf') plt.tight_layout() plt.show() fig, axes = plt.subplots(1, 3) factor_payment_2_pdf = viz.compute_pdf('factor_payment_2', normalize=True) factor_payment_2_pdf.plot(ax=axes[0]) axes[0].set_title(r'pdf') factor_payment_2_cdf = viz.compute_cdf(factor_payment_2_pdf) factor_payment_2_cdf.plot(ax=axes[1]) axes[1].set_title(r'cdf') factor_payment_2_sf = viz.compute_sf(factor_payment_2_cdf) factor_payment_2_sf.plot(ax=axes[2]) axes[2].set_title(r'sf') plt.tight_layout() plt.show() """ Explanation: Distributions of factor payments Can plot the distributions of average factor payments... End of explanation """ from IPython.html import widgets def interactive_plot(viz, omega_A=0.25, omega_B=0.5, sigma_A=0.5, sigma_B=1.0, loc1=1.0, mu1=0.0, sigma1=1.0, loc2=1.0, mu2=0.0, sigma2=1.0): # update new parameters as needed new_F_params = {'omega_A': omega_A, 'omega_B': omega_B, 'sigma_A': sigma_A, 'sigma_B': sigma_B} viz.solver.problem.F_params = new_F_params new_input1_params = {'loc1': loc1, 'mu1': mu1, 'sigma1': sigma1} viz.solver.problem.input1.params = new_input1_params new_input2_params = {'loc2': loc2, 'mu2': mu2, 'sigma2': sigma2} viz.solver.problem.input2.params = new_input2_params # solve the model using a hotstart initial guess domain = [viz.solver.problem.input1.lower, viz.solver.problem.input1.upper] initial_coefs = viz.solver._coefs_array_to_dict(viz.solver.result.x, viz.solver.degrees) viz.solver.solve(kind="Chebyshev", coefs_dict=initial_coefs, domain=domain, method='hybr') if viz.solver.result.success: viz._Visualizer__solution = None # should not need to access this! viz.interpolation_knots = np.linspace(domain[0], domain[1], 1000) viz.solution[['mu', 'theta']].plot(subplots=True) viz.normalized_residuals[['mu', 'theta']].plot(logy=True) else: print "Foobar!" 
viz_widget = widgets.fixed(viz) # widgets for the model parameters eps = 1e-2 omega_A_widget = widgets.FloatSlider(value=0.25, min=eps, max=1-eps, step=eps, description=r"$\omega_A$") sigma_A_widget = widgets.FloatSlider(value=0.5, min=eps, max=1-eps, step=eps, description=r"$\sigma_A$") omega_B_widget = widgets.FloatSlider(value=0.5, min=eps, max=1-eps, step=eps, description=r"$\omega_B$") sigma_B_widget = widgets.fixed(1.0) # widgets for input distributions loc_widget = widgets.fixed(1.0) mu_1_widget = widgets.FloatSlider(value=0.0, min=-1.0, max=1.0, step=eps, description=r"$\mu_1$") mu_2_widget = widgets.FloatSlider(value=0.0, min=-1.0, max=1.0, step=eps, description=r"$\mu_2$") sigma_1_widget = widgets.FloatSlider(value=1.0, min=eps, max=2-eps, step=eps, description=r"$\sigma_1$") sigma_2_widget = widgets.FloatSlider(value=1.0, min=eps, max=2-eps, step=eps, description=r"$\sigma_2$") widgets.interact(interactive_plot, viz=viz_widget, omega_A=omega_A_widget, sigma_A=sigma_A_widget, omega_B=omega_B_widget, sigma_B=sigma_B_widget, sigma1=sigma_1_widget, loc1=loc_widget, mu1 = mu_1_widget, loc2=loc_widget, sigma2=sigma_2_widget, mu2 = mu_2_widget) # widget is changing the parameters of the underlying solver solver.result.x """ Explanation: Widget End of explanation """
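The firm-size density calculation described above ("Philipp's trick") amounts to dividing the worker density by the matched firm size at each ability level. Below is a rough standalone sketch of that bookkeeping; it assumes the solution table is aligned with the interpolation grid set earlier (viz.interpolation_knots), which is an assumption rather than something guaranteed by the pyam API.
import numpy as np

# Worker-ability grid and the worker density on it
xs = np.linspace(workers.lower, workers.upper, 1000)
worker_pdf = workers.evaluate_pdf(xs)

# theta(x): matched firm size at each ability level (assumed aligned with xs)
theta_of_x = viz.solution['theta'].values

# Each firm of size theta employs theta workers, so the (unnormalized)
# density of firms of size theta(x) is the worker density divided by theta
firm_density = worker_pdf / theta_of_x
firm_density /= np.trapz(firm_density, xs)  # normalize so it integrates to one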
intel-analytics/analytics-zoo
apps/dogs-vs-cats/transfer-learning.ipynb
apache-2.0
import re from bigdl.nn.criterion import CrossEntropyCriterion from pyspark.ml import Pipeline from pyspark.sql.functions import col, udf from pyspark.sql.types import DoubleType, StringType from zoo.common.nncontext import * from zoo.feature.image import * from zoo.pipeline.api.keras.layers import Dense, Input, Flatten from zoo.pipeline.api.keras.models import * from zoo.pipeline.api.net import * from zoo.pipeline.nnframes import * sc = init_nncontext("ImageTransferLearningExample") """ Explanation: Transfer Learning Using the high level transfer learning APIs, you can easily customize pretrained models for feature extraction or fine-tuning. In this notebook, we will use a pre-trained Inception_V1 model. But we will operate on the pre-trained model to freeze first few layers, replace the classifier on the top, then fine tune the whole model. And we use the fine-tuned model to solve the dogs-vs-cats classification problem, Preparation 1. Get the dogs-vs-cats datasets Download the training dataset from https://www.kaggle.com/c/dogs-vs-cats and extract it. The following commands copy about 1100 images of cats and dogs into demo/cats and demo/dogs separately. shell mkdir -p demo/dogs mkdir -p demo/cats cp train/cat.7* demo/cats cp train/dog.7* demo/dogs 2. Get the pre-trained Inception-V1 model Download the pre-trained Inception-V1 model from Zoo Alternatively, user may also download pre-trained caffe/Tensorflow/keras model. End of explanation """ model_path = "path/to/model/bigdl_inception-v1_imagenet_0.4.0.model" image_path = "file://path/to/data/dogs-vs-cats/demo/*/*" imageDF = NNImageReader.readImages(image_path, sc) getName = udf(lambda row: re.search(r'(cat|dog)\.([\d]*)\.jpg', row[0], re.IGNORECASE).group(0), StringType()) getLabel = udf(lambda name: 1.0 if name.startswith('cat') else 2.0, DoubleType()) labelDF = imageDF.withColumn("name", getName(col("image"))) \ .withColumn("label", getLabel(col('name'))) (trainingDF, validationDF) = labelDF.randomSplit([0.9, 0.1]) labelDF.select("name","label").show(10) """ Explanation: manually set model_path and image_path for training model_path = path to the pre-trained models. (E.g. path/to/model/bigdl_inception-v1_imagenet_0.4.0.model) image_path = path to the folder of the training images. (E.g. path/to/data/dogs-vs-cats/demo/*/*) End of explanation """ transformer = ChainedPreprocessing( [RowToImageFeature(), ImageResize(256, 256), ImageCenterCrop(224, 224), ImageChannelNormalize(123.0, 117.0, 104.0), ImageMatToTensor(), ImageFeatureToTensor()]) """ Explanation: Fine-tune a pre-trained model We fine-tune a pre-trained model by removing the last few layers, freezing the first few layers, and adding some new layers. End of explanation """ full_model = Net.load_bigdl(model_path) """ Explanation: Load a pre-trained model We use the Net API to load a pre-trained model, including models saved by Analytics Zoo, BigDL, Torch, Caffe and Tensorflow. Please refer to Net API Guide. End of explanation """ for layer in full_model.layers: print (layer.name()) model = full_model.new_graph(["pool5/drop_7x7_s1"]) """ Explanation: Remove the last few layers Here we print all the model layers and you can choose which layer(s) to remove. When a model is loaded using Net, we can use the newGraph(output) api to define a Model with the output specified by the parameter. End of explanation """ model.freeze_up_to(["pool4/3x3_s2"]) """ Explanation: The returning model's output layer is "pool5/drop_7x7_s1". 
Freeze some layers We freeze layers from input to pool4/3x3_s2 inclusive. End of explanation """ inputNode = Input(name="input", shape=(3, 224, 224)) inception = model.to_keras()(inputNode) flatten = Flatten()(inception) logits = Dense(2)(flatten) lrModel = Model(inputNode, logits) classifier = NNClassifier(lrModel, CrossEntropyCriterion(), transformer) \ .setLearningRate(0.003).setBatchSize(40).setMaxEpoch(1).setFeaturesCol("image") \ .setCachingSample(False) pipeline = Pipeline(stages=[classifier]) """ Explanation: Add a few new layers End of explanation """ catdogModel = pipeline.fit(trainingDF) predictionDF = catdogModel.transform(validationDF).cache() predictionDF.select("name","label","prediction").sort("label", ascending=False).show(10) predictionDF.select("name","label","prediction").show(10) correct = predictionDF.filter("label=prediction").count() overall = predictionDF.count() accuracy = correct * 1.0 / overall print("Test Error = %g " % (1.0 - accuracy)) """ Explanation: Train the model The transfer learning can finish in a few minutes. End of explanation """ samplecat=predictionDF.filter(predictionDF.prediction==1.0).limit(3).collect() sampledog=predictionDF.filter(predictionDF.prediction==2.0).sort("label", ascending=False).limit(3).collect() from IPython.display import Image, display for cat in samplecat: print ("prediction:"), cat.prediction display(Image(cat.image.origin[5:])) for dog in sampledog: print ("prediction:"), dog.prediction display(Image(dog.image.origin[5:])) """ Explanation: As we can see, the model from transfer learning can achieve over 95% accuracy on the validation set. Visualize result We randomly select some images to show, and print the prediction results here. cat: prediction = 1.0 dog: prediction = 2.0 End of explanation """
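Beyond the single accuracy number, it can help to see how the errors split across the two classes. Here is a small sketch using the same Spark DataFrame; these are plain DataFrame operations, nothing specific to Analytics Zoo.
# Counts for each (label, prediction) pair -- a simple confusion matrix
predictionDF.groupBy("label", "prediction").count().orderBy("label", "prediction").show()

# Per-class accuracy (cat: label 1.0, dog: label 2.0)
for lab in [1.0, 2.0]:
    total_lab = predictionDF.filter(predictionDF.label == lab).count()
    correct_lab = predictionDF.filter((predictionDF.label == lab) &
                                      (predictionDF.prediction == lab)).count()
    print(lab, correct_lab / float(total_lab))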
mzszym/oedes
examples/light-emitting/doping-dynamics-orgel12.ipynb
agpl-3.0
%matplotlib inline from matplotlib import colors import matplotlib.pylab as plt from oedes.fvm import mesh1d from oedes import context,init_notebook,testing,models import numpy as np from oedes.functions import Aux2 init_notebook() """ Explanation: Transient simulation of organic light emitting electrochemical cell This is sample simulation of electrochemical doping in light-emitting electrochemical cell, visualizing the effect of assumed electronic mobility model. End of explanation """ class CustomMobility(models.MobilityModel): def mu_func(self, T, E, c): mu0 = 5e-11 mu1 = 5e-9 W = 0.04 f0 = 0.3 f = c / 0.3e27 return (mu1 - mu0) * Aux2((f0 - f) / W) + mu0 def mobility(self, parent, ctx, eq): mu_cell = self.mu_func(ctx.varsOf(eq.thermal)['T'], ctx.varsOf(eq.poisson)['Ecellm'], ctx.varsOf(eq)['c']) mu_face = eq.mesh.faceaverage(mu_cell) ctx.varsOf(eq).update(mu_face = mu_face, mu_cell = mu_cell) mesh = mesh1d(2e-6) def solve(mu_ions, mobility_model, additional_params=None, voltage=5.): model = models.BaseModel() models.std.electronic_device(model, mesh, 'pn', mobility_model = mobility_model) cation, anion, initial_salt = models.std.add_ions(model, mesh, zc=1, za=-1) model.setUp() xinit = initial_salt(0.1e27) params = {'T': 300., 'electron.energy': 0., 'electron.N0': 0.3e27, 'hole.energy': -2., 'hole.N0': 0.3e27, 'electrode0.workfunction': 2., 'electrode1.workfunction': 0., 'electrode0.voltage': voltage, 'electrode1.voltage': 0, 'cation.mu': mu_ions, 'anion.mu': mu_ions, 'npi': 0, 'epsilon_r': 3. } if additional_params is not None: params.update(additional_params) c = context(model,x=xinit) c.transient(params, 1, 1e-9) return c def transientplot(data): N0 = 5e27 n = 20 for it, t in enumerate(10**np.linspace(-5, -1, n + 1)): out = data.attime(t).output() c = 1 - (1. - it / n) ncolor = colors.rgb2hex((1,1 - c,1 - c)) pcolor = colors.rgb2hex((1 - c,1 - c,1)) plt.plot(mesh.cells['center'] * 1e9,out['electron.c'] / N0,ncolor) plt.plot(mesh.cells['center'] * 1e9,out['hole.c'] / N0,pcolor) testing.store(out['electron.c'], rtol=1e-7, atol=1e-3 * N0) testing.store(out['hole.c'], rtol=1e-7, atol=1e-3 * N0) plt.yscale('log') plt.ylim([1e-5, 1.]) plt.xlabel('$x$ [nm]') plt.ylabel('$c/N_0$') """ Explanation: Model and parameters End of explanation """ mu_params = {'electron.mu':5e-11,'hole.mu':5e-11} c=solve(5e-11, models.MobilityFromParams(), mu_params) transientplot(c) """ Explanation: Results In the plots below, electrons are shown in red, and holes are shown in blue. Concentration independent mobility End of explanation """ c=solve(mu_ions=5e-11, mobility_model=CustomMobility()) transientplot(c) """ Explanation: Concentration dependent mobility End of explanation """
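To see what the concentration-dependent mobility above actually looks like, the expression in mu_func can be evaluated on its own over a range of carrier concentrations. In the sketch below, Aux2 is approximated by a logistic smooth step, which is an assumption about the oedes helper (check oedes.functions for its exact definition); mu0, mu1, W and f0 are copied from the class above.
import numpy as np

# Stand-in for oedes.functions.Aux2: assumed ~0 for arguments << 0 and ~1 for arguments >> 0
def smooth_step(u):
    return 1.0 / (1.0 + np.exp(-u))

mu0, mu1, W, f0 = 5e-11, 5e-9, 0.04, 0.3
c = np.logspace(24, 27, 200)   # carrier concentration
f = c / 0.3e27                 # filling fraction relative to N0
mu = (mu1 - mu0) * smooth_step((f0 - f) / W) + mu0

plt.semilogx(c, mu)
plt.xlabel('carrier concentration [$m^{-3}$]')
plt.ylabel('mobility')
plt.show()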
jburos/survivalstan
example-notebooks/Test pem_survival_model_timevarying with simulated data.ipynb
apache-2.0
survivalstan.utils.print_stan_summary([testfit], pars='lp__') survivalstan.utils.plot_stan_summary([testfit], pars='log_baseline') """ Explanation: superficial check of convergence End of explanation """ survivalstan.utils.plot_coefs([testfit], element='baseline') survivalstan.utils.plot_coefs([testfit]) """ Explanation: summarize coefficient estimates End of explanation """ survivalstan.utils.plot_pp_survival([testfit], fill=False) survivalstan.utils.plot_observed_survival(df=d, event_col='event', time_col='t', color='green', label='observed') plt.legend() survivalstan.utils.plot_pp_survival([testfit], by='sex') survivalstan.utils.plot_pp_survival([testfit], by='sex', pal=['red', 'blue']) """ Explanation: posterior-predictive checks End of explanation """ survivalstan.utils.plot_coefs([testfit], element='beta_time', ylim=[-1, 2.5]) """ Explanation: summarize time-varying effect of sex on survival Standard behavior is to plot estimated betas at each timepoint, for each coefficient in the model. End of explanation """ survivalstan.utils.plot_time_betas(models=[testfit], by=['coef'], y='exp(beta)', ylim=[0, 10]) """ Explanation: accessing lower-level functions for plotting effects over time End of explanation """ testfit['time_beta'] = survivalstan.utils.extract_time_betas([testfit]) testfit['time_beta'].head() """ Explanation: Alternatively, you can extract the beta-estimates for each timepoint & plot them yourself. End of explanation """ first_beta = survivalstan.utils.extract_time_betas([testfit], coefs=['sex[T.male]']) first_beta.head() import seaborn as sns sns.boxplot(data=first_beta, x='timepoint_id', y='beta') survivalstan.utils.plot_time_betas(models=[testfit], y='beta', x='end_time', coefs=['sex[T.male]']) """ Explanation: You can also extract and/or plot data for single coefficients of interest at a time. End of explanation """ survivalstan.utils.plot_time_betas(df=first_beta, by=['coef'], y='beta', x='end_time') """ Explanation: Note that this same plot can be produced by passing data to plot_time_betas directly. End of explanation """
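Because extract_time_betas returns an ordinary pandas DataFrame, the posterior draws can also be summarized numerically instead of plotted, for example the median and a rough 95% interval of beta at each timepoint. This sketch assumes the column names shown in the tables above ('coef', 'timepoint_id', 'beta').
beta_summary = (testfit['time_beta']
                .groupby(['coef', 'timepoint_id'])['beta']
                .quantile([0.025, 0.5, 0.975])
                .unstack())
beta_summary.head()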
google/applied-machine-learning-intensive
content/00_prerequisites/01_intermediate_python/02-lambdas.ipynb
apache-2.0
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/00_prerequisites/01_intermediate_python/02-lambdas.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Copyright 2019 Google LLC. End of explanation """ def my_function(): print("Hello ML") my_function() """ Explanation: Intermediate Python - Lambdas In this Colab, we will move on to the concept of lambda functions. You'll learn what a lambda function is and how to use them in your code. Lambdas You have previously seen how to create named functions and call them. End of explanation """ def say_hello(): print("Hello") greeting = say_hello greeting() """ Explanation: What you might not know is that functions are first-class objects in Python, so you can treat them as objects. For example, you can assign functions to variables: End of explanation """ def say_hello(): print("Hello") def call_another_function(f): print("Calling function {}".format(f)) f() call_another_function(say_hello) """ Explanation: And you can pass functions to other functions: End of explanation """ my_lambda_func = lambda: print("This is a lambda function") my_lambda_func() """ Explanation: Often, functions can be defined in one line, so it would be nice to have a shorthand rather than having to use the def notation. This is exactly the use case for a lambda function. Lambda functions start with the lambda keyword. A colon signals the start of the function body. End of explanation """ add_one = lambda x: x + 1 print(add_one(2)) add = lambda x, y: x + y print(add(1, 2)) """ Explanation: Lambda functions can accept arguments. Just put the variable names between the lambda keyword and the colon: End of explanation """ def call_another_function(f): print("Calling function {}".format(f)) f() call_another_function(lambda: print("This is a lambda function")) """ Explanation: Sometimes lambda functions do not even need to be named. For example, when you are passing a function to another object (e.g. another function), you can pass a lambda function directly. End of explanation """ my_list = [4, 7, 9, 12, 34, 67] def add_one(x): return x + 1 list(map(add_one, my_list)) """ Explanation: There are many places where it is Pythonic to use lambdas instead of named functions. Some examples are the map, filter, and sorted built-in functions. One of the most standard use cases of a lambda function is to apply a function to every element of a list. You can do this using map. The code below adds one to every element in the list. (We end up needing the list function to make the return value of map become a list again.) End of explanation """ my_list = [4, 7, 9, 12, 34, 67] list(map(lambda x: x + 1, my_list)) """ Explanation: With a lambda function, this can be done in one line. 
End of explanation """ my_strings = ["I", "love", "LaMbDa", "Functions"] ### YOUR CODE HERE ### """ Explanation: Exercises Exercise 1 Use the map function and a lambda to change all of the following strings to uppercase. Student Solution End of explanation """ my_list = [ {"value": 123, "sort_order": 1}, {"value": 543, "sort_order": 0}, {"value": 101, "sort_order": 4}, {"value": 654, "sort_order": 3}, ] ### YOUR CODE HERE ### my_list = [ {"value": 123, "sort_order": 1}, {"value": 543, "sort_order": 0}, {"value": 101, "sort_order": 4}, {"value": 654, "sort_order": 3}, ] sorted(my_list, key=lambda x: x["sort_order"]) """ Explanation: Exercise 2 Use the sorted function and a lambda to sort the values in my_list by the "sort_order" value in each dictionary contained within the list. Student Solution End of explanation """
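For reference, one possible one-line solution to Exercise 1 (the notebook intentionally leaves the cell blank for students), using map with a lambda exactly as in the earlier example:
my_strings = ["I", "love", "LaMbDa", "Functions"]
list(map(lambda s: s.upper(), my_strings))  # ['I', 'LOVE', 'LAMBDA', 'FUNCTIONS']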
specdb/specdb
docs/nb/Query_Meta.ipynb
gpl-3.0
# imports from astropy import units as u from astropy.coordinates import SkyCoord import specdb from specdb.specdb import SpecDB from specdb import specdb as spdb_spdb from specdb.cat_utils import flags_to_groups """ Explanation: Query Meta data in database Groups [v1.1] End of explanation """ db_file = specdb.__path__[0]+'/tests/files/IGMspec_DB_v02_debug.hdf5' reload(spdb_spdb) sdb = spdb_spdb.SpecDB(db_file=db_file) """ Explanation: Setup End of explanation """ ggg_meta = sdb['GGG'].meta ggg_meta[0:4] """ Explanation: Check one of the meta tables End of explanation """ qdict = {'TELESCOPE': 'Gemini-North', 'NPIX': (1580,1583), 'DISPERSER': ['B600', 'R400']} qmeta = sdb.query_meta(qdict) qmeta """ Explanation: Query meta with Query dict A simple example End of explanation """ qdict = {'R': (4000.,1e9), 'WV_MIN': (0., 4000.)} qmeta = sdb.query_meta(qdict) qmeta """ Explanation: Another example End of explanation """ qdict = {'R': (1800.,2500), 'WV_MIN': (0., 4000.)} qmeta = sdb.query_meta(qdict) qmeta['GROUP'].data """ Explanation: One more End of explanation """ meta = sdb.meta_from_position((0.0019,17.7737), 1*u.arcsec) meta """ Explanation: Query meta at position As with query catalog, the position coordinates can have a range of formats One simple source End of explanation """ meta = sdb.meta_from_position('001115.23+144601.8', 1*u.arcsec) meta['WV_MIN'].data """ Explanation: Multiple meta entries (GGG) End of explanation """ meta = sdb.meta_from_position((2.813500,14.767200), 20*u.deg) meta[0:3] meta['GROUP'].data """ Explanation: Multiple sources End of explanation """ meta = sdb.meta_from_position((2.813500,14.767200), 20*u.deg, groups=['GGG','HD-LLS_DR1']) meta['GROUP'].data """ Explanation: Restrict on groups End of explanation """ coord = SkyCoord(ra=0.0019, dec=17.7737, unit='deg') matches, meta = sdb.meta_from_coords(coord) meta """ Explanation: Query Meta with Coordinates list When querying with a coordinate list, there are two approaches to the data returned. The default is to return the meta data for the first spectrum matched to each coordinate. The other option is to retrieve all of the meta data for each coordinate input. The returned object is then a list of bool arrays and a Table of all the meta data. We provide examples for each. Meta for first match for each coordinate Returns a Table with a single meta data entry per coordinate even if multiple exist. If there is no match, the row is empty in the Table. If there are zero matches, return None. 
Single source (which matches) End of explanation """ coord = SkyCoord(ra=0.0019, dec=-17.7737, unit='deg') matches, meta = sdb.meta_from_coords(coord) print(meta) """ Explanation: Single source which fails to match End of explanation """ coord = SkyCoord(ra=2.813458, dec=14.767167, unit='deg') _, meta = sdb.meta_from_coords(coord) meta """ Explanation: Source where multiple spectra exist, but only the first record is returned End of explanation """ coords = SkyCoord(ra=[0.0028,2.813458], dec=[14.9747,14.767167], unit='deg') matches, meta = sdb.meta_from_coords(coords) print(matches) meta """ Explanation: Multiple coordinates, each matched End of explanation """ coords = SkyCoord(ra=[0.0028,9.99,2.813458], dec=[14.9747,-9.99,14.767167], unit='deg') matches, meta = sdb.meta_from_coords(coords) print(matches) meta """ Explanation: Multiple coordinates, one fails to match by coordinate End of explanation """ coords = SkyCoord(ra=[0.0028,2.813458], dec=[14.9747,14.767167], unit='deg') matches, meta = sdb.meta_from_coords(coords, groups=['GGG']) print(matches) print(meta['IGM_ID']) meta """ Explanation: Multiple coordiantes, one fails to match input group list End of explanation """ coords = SkyCoord(ra=[0.0028,2.813458], dec=[14.9747,14.767167], unit='deg') matches, list_of_meta, meta_stack = sdb.meta_from_coords(coords, first=False) print('Matches = ', matches) list_of_meta, meta_stack[list_of_meta[0]] """ Explanation: All Meta Data for each input coordinate Here, a list of bool arrays relative to stacked meta table is returned. This is a bit convoluted, but is *much* faster for large coordinate lists. If there is no match to a given coordinate, the entry in the list is None Two sources. The second one has two spectra in the database End of explanation """ matches, list_of_meta, meta_stack = sdb.meta_from_coords(coords, first=False, groups=['GGG']) list_of_meta, meta_stack[list_of_meta[1]] """ Explanation: Two sources, limit by groups End of explanation """ coords = SkyCoord(ra=[0.0028,9.99,2.813458], dec=[14.9747,-9.99,14.767167], unit='deg') matches, list_of_meta, meta_stack = sdb.meta_from_coords(coords, first=False) print('Matches = ', matches) meta_stack[list_of_meta[0]] """ Explanation: Three sources; second one has no match End of explanation """
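As a reminder of the coordinate formats accepted by the position and coordinate queries above, the same sky position can be built either from decimal degrees or from a sexagesimal string; the string below is just the earlier '001115.23+144601.8' example rewritten with explicit separators.
from astropy import units as u
from astropy.coordinates import SkyCoord

coord_deg = SkyCoord(ra=2.813458, dec=14.767167, unit='deg')
coord_str = SkyCoord('00:11:15.23 +14:46:01.8', unit=(u.hourangle, u.deg))
print(coord_deg.separation(coord_str).to('arcsec'))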
JJINDAHOUSE/deep-learning
embeddings/Skip-Gram_word2vec.ipynb
mit
import time import numpy as np import tensorflow as tf import utils """ Explanation: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
End of explanation """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() """ Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. End of explanation """ words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) """ Explanation: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. End of explanation """ vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] """ Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. End of explanation """ ## Your code here from collections import Counter import random threshold = 1e-5 word_counts = Counter(int_words) total_count = len(int_words) freqs = {word: count / total_count for word, count in word_counts.items()} p_drop = {word: 1 - np.sqrt(threshold / freqs[word]) for word in word_counts} train_words = [word for word in int_words if random.random() < (1 - p_drop[word])] """ Explanation: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it. Exercise: Implement subsampling for the words in int_words. 
That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words. End of explanation """ def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' # Your code here R = np.random.randint(1, window_size + 1) start = idx - R if (idx - R) > 0 else 0 stop = idx + R target_words = set(words[start: idx] + words[idx + 1: stop + 1]) return list(target_words) """ Explanation: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window. End of explanation """ def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y """ Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. End of explanation """ train_graph = tf.Graph() with train_graph.as_default(): inputs = tf.placeholder(tf.int32, [None], name = 'inputs') labels = tf.placeholder(tf.int32, [None, None], name = 'labels') """ Explanation: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. 
To make things work later, you'll need to set the second dimension of labels to None or 1. End of explanation """ n_vocab = len(int_to_vocab) n_embedding = 200 # Number of embedding features with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output """ Explanation: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. End of explanation """ # Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev = 0.1)) # create softmax weight matrix here softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) """ Explanation: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works. End of explanation """ with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. 
lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints """ Explanation: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. End of explanation """ epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: ## From Thushan Ganegedara's implementation # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) """ Explanation: Training Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words. End of explanation """ with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) """ Explanation: Restore the trained network if you need to: End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7) """ Explanation: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. End of explanation """
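Once embed_mat is available, nearest neighbours can also be looked up for any particular word with a few lines of NumPy, outside the TensorFlow graph, as the same kind of sanity check as the validation words above. The query word 'january' is just an example; any key of vocab_to_int will do.
def nearest_words(word, top_k=8):
    """Return the top_k words closest to `word` by cosine similarity."""
    vec = embed_mat[vocab_to_int[word]]
    sims = embed_mat @ vec / (np.linalg.norm(embed_mat, axis=1) * np.linalg.norm(vec))
    closest = np.argsort(-sims)[1:top_k + 1]  # skip the word itself
    return [int_to_vocab[i] for i in closest]

print(nearest_words('january'))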
Phylliade/poppy-inverse-kinematics
tutorials/Hand follow.ipynb
gpl-2.0
import time
import numpy as np

from pypot.creatures import PoppyTorso
""" Explanation: Hand Following example In this notebook, you will use Pypot and an Inverse Kinematics toolbox to make Torso's hands follow each other. Your Torso has two arms, and you can use simple methods to get and set the position of each hand. Requirements You will need a fully functioning torso, either in real life or in the V-REP simulator. More info here. The experiment To be more precise, we will tell the right hand to keep a constant distance from the moving left hand, as in the picture above: The left arm will be compliant, so you can move it and watch the right arm follow it. Setting up the robot We begin by configuring the robot to fit our needs for the experiment. Begin with some useful imports: End of explanation """
poppy = PoppyTorso()
""" Explanation: Then, create your Pypot robot: End of explanation """
for m in poppy.motors:
    m.goto_position(0, 2)
""" Explanation: Initialize your robot positions to 0: End of explanation """
# Left arm is compliant, right arm is active
for m in poppy.l_arm:
    m.compliant = False

for m in poppy.r_arm:
    m.compliant = False

# The torso itself must not be compliant
for m in poppy.torso:
    m.compliant = False
""" Explanation: The left arm must be compliant (so you can move it), and the right arm must be active End of explanation """
def follow_hand(poppy, delta):
    """Tell the right hand to follow the left hand"""
    right_arm_position = poppy.l_arm_chain.end_effector + delta
    poppy.r_arm_chain.goto(right_arm_position, 0.5, wait=True)
""" Explanation: Following the left hand To follow the left hand, the script will do the following steps: * Find the 3D position of the left hand, with Forward Kinematics * Assign this position (+ a gap to avoid collision) as the target of the right hand * Tell the right hand to reach this target That's exactly what we do in the follow_hand function: End of explanation """
try:
    while True:
        follow_hand(poppy, target_delta)
        time.sleep(delay_time)

# Close properly the object when finished
except KeyboardInterrupt:
    poppy.close()
""" Explanation: Now, do this repeatedly in a loop; note that target_delta and delay_time must be defined before the loop runs (one way to choose them is sketched below): End of explanation """
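The loop above needs target_delta (the offset to keep between the hands) and delay_time to be defined. One simple choice, shown here only as an example, is to freeze whatever gap the hands start with, so the right hand preserves its initial position relative to the left one; the delay value is an arbitrary example.
# Measure the initial offset between the two hands once, before starting the loop
target_delta = poppy.r_arm_chain.end_effector - poppy.l_arm_chain.end_effector

delay_time = 0.02  # seconds between updates (example value)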
kfollette/AST337-Fall2017
Labs/Lab6/Unix_Programming_Refresher.ipynb
mit
ls pwd cd 2017oct04 """ Explanation: Appendix 1: Optional Refresher on the Unix Environment A1.1) A Quick Unix Overview In Jupyter, many of the same Unix commands we use to navigate in the regular terminal can be used. (However, this is not true when we write standalone code outside Jupyter.) As a quick refresher, try each of the following: End of explanation """ ls pwd """ Explanation: We're in a new folder now, so issue commands in the next two cells to look at the folder content and list your current path: End of explanation """ ls M52*fit ls M52-001*fit ls *V* """ Explanation: Now test out a few more things. In the blank cells below, try the following and discuss in your group what each does: End of explanation """ cd .. """ Explanation: What does the asterisk symbol * do? Answer: Is a placeholder (wildcard) for some text in a file/folder name. Now, return to where you started, by moving up a directory: (one directory up from where you are is denoted with .., while the current directory is denoted with .) End of explanation """ # Make a new directory, "temporary" # Move into temporary # Move the test_file.txt into this current location # Create a copy of the test_file.txt, name the copy however you like # Delete the original test_file.txt # Change directories to original location of notebook. """ Explanation: A1.2) A few more helpful commands mkdir to make a new directory: mkdir new_project_name cp to copy a file: cp existingfile newfilename or cp existingfile newlocation mv to move or rename a file: mv old_filename_oldlocation old_filename_newlocation or mv old_filename_oldlocation new_filename_oldlocation rm to PERMANENTLY delete (remove) a file... (use with caution): rm file_I_will_never_see_again In the six cells below: (1) Make a new directory, called temporary (2) Go into that new directory (3) Move the file test_file.txt from the original directory above (../test_file.txt) into your current location using the . (4) Create a copy of test_file.txt with a new, different filename of your choice. (5) Delete the original test_file.txt (6) Go back up into the original location where this notebook is located. End of explanation """ ls """ Explanation: If all went according to plan, the following command should show three directories, a zip file, a .png file, this notebook, and the Lab6 notebook: End of explanation """ ls ./temporary/ """ Explanation: And the following command should show the contents of the temporary folder, so only your new text file (a copy of test_file.txt, which is now gone forever) within it: End of explanation """ 2 < 5 3 > 7 x = 11 x > 10 2 * x < x 3.14 <= 3.14 # <= means less than or equal to; >= means greater than or equal to 42 == 42 3e8 != 3e9 # != means "not equal to" type(True) """ Explanation: Appendix 2: Optional Refresher on Conditional Statements and Iteration A2.1) Conditional Statements The use of tests or conditions to evaluate variables, values, etc., is a fundamental programming tool. Try executing each of the cells below: End of explanation """ temperature = float(input('What is the temperature in Fahrenheit? ')) if temperature > 70: print('Wear shorts.') else: print('Wear long pants.') """ Explanation: You see that conditions are either True or False (with no quotes!) These are the only possible Boolean values (named after 19th century mathematician George Boole). In Python, the name Boolean is shortened to the type bool. It is the type of the results of true-false conditions or tests. 
Now try executing the following two cells at least twice over, with inputs 50 and then 80. End of explanation """ names = ['Henrietta', 'Annie', 'Jocelyn', 'Vera'] for n in names: print('There are ' + str(len(n)) + ' letters in ' + n) """ Explanation: The four lines in the previous cell are an if-else statement. There are two indented blocks: One comes right after the if heading in line 1 and is executed when the condition in the if heading is true. This is followed by an else: in line 3, followed by another indented block that is only execued when the original condition is false. In an if-else statement, exactly one of the two possible indented blocks is executed. A2.2) Iteration Another important component in our arsenal of programming tools is iteration. Iteration means performing an operation repeatedly. We can execute a very simple example at the command line. Let's make a list of objects as follows: End of explanation """ for i in range(5): print(i) """ Explanation: This is an example of a for loop. The way a for loop works is a follows. We start with a list of objects -- in this example a list of strings, but it could be anything -- and then we say for variable in list:, followed by a block of code. The code inside the block will be executed once for every item in the list, and when it is executed the variable will be set equal to the appropriate list item. In this example, the list names had four objects in it, each a string. Thus the print statement inside the loop was executed four times. The first time it was executed, the variable n was set equal to Henrietta. The second time n was set equal to Annie, then Jocelyn, then Vera. One of the most common types of loop is where you want to loop over numbers: 0, 1, 2, 3, .... To handle loops of this sort, python provides a simple command to construct a list of numbers to iterate over, called range. The command range(n) produces a list of numbers from 0 to n-1. For example: End of explanation """ i = 0 # This starts the initial value off at zero while i < 11: print(i) i = i + 3 # This adds three to the value of i, then goes back to the line #3 to check if the condition is met """ Explanation: There are also other ways of iterating, which may be more convenient depending on what you're trying to do. A very common one is the while loop, which does exactly what it sounds like it should: it loops until some condition is met. For example: End of explanation """
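Putting the two appendix topics together: a loop body can itself contain a conditional, so the temperature test from section A2.1 can be applied to a whole list of temperatures instead of a single typed-in value. For example:
temperatures = [50, 68, 80, 95]  # example values in Fahrenheit
for temperature in temperatures:
    if temperature > 70:
        print(str(temperature) + ': Wear shorts.')
    else:
        print(str(temperature) + ': Wear long pants.')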
jdhp-docs/python_notebooks
nb_misc/misc_read_ca_csv_fr.ipynb
mit
%matplotlib inline #%matplotlib notebook from IPython.display import display import matplotlib matplotlib.rcParams['figure.figsize'] = (9, 9) import pandas as pd import numpy as np !head -n30 /Users/jdecock/Downloads/CA20170725_1744.CSV #df = pd.read_csv("/Users/jdecock/Downloads/CA20170725_1744.CSV") df = pd.read_csv("/Users/jdecock/Downloads/CA20170725_1744.CSV", sep=';', index_col=0, usecols=range(4), # the last column is empty... skiprows=9, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, thousands=None, decimal=',', escapechar=None, encoding='iso-8859-1') df df.columns df['Débit Euros'].plot() """ Explanation: Read CA CSV Import directives End of explanation """ data_array = np.array([[1, 2, 3], [4, 5, 6]]) df = pd.DataFrame(data_array, index=[10, 20], columns=[100, 200, 300]) df """ Explanation: Export/import data (write/read files) See http://pandas.pydata.org/pandas-docs/stable/io.html Reader functions are accessibles from the top level pd object. Writer functions are accessibles from data objects (i.e. Series, DataFrame or Panel objects). End of explanation """ df.to_csv(path_or_buf="python_pandas_io_test.csv") !cat python_pandas_io_test.csv """ Explanation: CSV files See http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files Write CSV files See http://pandas.pydata.org/pandas-docs/stable/io.html#io-store-in-csv Simplest version: End of explanation """ # FYI, many other options are available df.to_csv(path_or_buf="python_pandas_io_test.csv", sep=',', columns=None, header=True, index=True, index_label=None, compression=None, # allowed values are 'gzip', 'bz2' or 'xz' date_format=None) !cat python_pandas_io_test.csv """ Explanation: Setting more options: End of explanation """ df = pd.read_csv("python_pandas_io_test.csv") df """ Explanation: Read CSV files See http://pandas.pydata.org/pandas-docs/stable/io.html#io-read-csv-table Simplest version: End of explanation """ df = pd.read_csv("python_pandas_io_test.csv", sep=',', delimiter=None, header='infer', names=None, index_col=0, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=False, error_bad_lines=True, warn_bad_lines=True, skipfooter=0, skip_footer=0, doublequote=True, delim_whitespace=False, as_recarray=False, compact_ints=False, use_unsigned=False, low_memory=True, buffer_lines=None, memory_map=False, float_precision=None) df !rm python_pandas_io_test.csv """ Explanation: Setting more options: End of explanation """ import io """ Explanation: JSON files See http://pandas.pydata.org/pandas-docs/stable/io.html#json End of explanation """ df.to_json(path_or_buf="python_pandas_io_test.json") !cat python_pandas_io_test.json """ Explanation: Write JSON files See http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-writer Simplest version End of explanation """ df.to_json(path_or_buf="python_pandas_io_test_split.json", orient="split") !cat python_pandas_io_test_split.json """ Explanation: Setting orient="split" End of 
explanation """ df.to_json(path_or_buf="python_pandas_io_test_records.json", orient="records") !cat python_pandas_io_test_records.json """ Explanation: Setting orient="records" End of explanation """ df.to_json(path_or_buf="python_pandas_io_test_index.json", orient="index") !cat python_pandas_io_test_index.json """ Explanation: Setting orient="index" (the default option for Series) End of explanation """ df.to_json(path_or_buf="python_pandas_io_test_columns.json", orient="columns") !cat python_pandas_io_test_columns.json """ Explanation: Setting orient="columns" (the default option for DataFrame) (for DataFrame only) End of explanation """ df.to_json(path_or_buf="python_pandas_io_test_values.json", orient="values") !cat python_pandas_io_test_values.json """ Explanation: Setting orient="values" (for DataFrame only) End of explanation """ # FYI, many other options are available df.to_json(path_or_buf="python_pandas_io_test.json", orient='columns', # For DataFrame: 'split','records','index','columns' or 'values' date_format=None, # None, 'epoch' or 'iso' double_precision=10, force_ascii=True, date_unit='ms') !cat python_pandas_io_test.json """ Explanation: Setting more options End of explanation """ !cat python_pandas_io_test_split.json df = pd.read_json("python_pandas_io_test_split.json", orient="split") df """ Explanation: Read JSON files See http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-reader Using orient="split" Dict like data {index -&gt; [index], columns -&gt; [columns], data -&gt; [values]} End of explanation """ !cat python_pandas_io_test_records.json df = pd.read_json("python_pandas_io_test_records.json", orient="records") df """ Explanation: Using orient="records" List like [{column -&gt; value}, ... , {column -&gt; value}] End of explanation """ !cat python_pandas_io_test_index.json df = pd.read_json("python_pandas_io_test_index.json", orient="index") df """ Explanation: Using orient="index" Dict like {index -&gt; {column -&gt; value}} End of explanation """ !cat python_pandas_io_test_columns.json df = pd.read_json("python_pandas_io_test_columns.json", orient="columns") df """ Explanation: Using orient="columns" Dict like {column -&gt; {index -&gt; value}} End of explanation """ !cat python_pandas_io_test_values.json df = pd.read_json("python_pandas_io_test_values.json", orient="values") df """ Explanation: Using orient="values" (for DataFrame only) Just the values array End of explanation """ df = pd.read_json("python_pandas_io_test.json", orient=None, typ='frame', dtype=True, convert_axes=True, convert_dates=True, keep_default_dates=True, numpy=False, precise_float=False, date_unit=None, encoding=None, lines=False) df !rm python_pandas_io_test*.json """ Explanation: Setting more options End of explanation """ data_array = np.array([np.arange(1, 10, 1), np.arange(10, 100, 10), np.arange(100, 1000, 100)]).T df = pd.DataFrame(data_array, index=np.arange(1, 10, 1), columns=['A', 'B', 'C']) df df.B df["B"] df.loc[:,"B"] df.loc[:,['A','B']] """ Explanation: Other file formats Many other file formats can be used to import or export data with JSON. See the following link for more information: http://pandas.pydata.org/pandas-docs/stable/io.html Select columns End of explanation """ data_array = np.array([np.arange(1, 10, 1), np.arange(10, 100, 10), np.arange(100, 1000, 100)]).T df = pd.DataFrame(data_array, index=np.arange(1, 10, 1), columns=['A', 'B', 'C']) df df.B < 50. df[df.B < 50.] 
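
# A small aside (not part of the original notebook): boolean masks like the one above can be
# combined with & (and) and | (or); each condition needs its own parentheses, e.g.:
df[(df.B < 50.) & (df.A >= 2)]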
""" Explanation: Select rows End of explanation """ df.iloc[:5] """ Explanation: Select over index: select the 5 first rows End of explanation """ data_array = np.array([np.arange(1, 10, 1), np.arange(10, 100, 10), np.arange(100, 1000, 100)]).T df = pd.DataFrame(data_array, index=np.arange(1, 10, 1), columns=['A', 'B', 'C']) df df[df.B < 50][df.A >= 2].loc[:,['A','B']] """ Explanation: Select rows and columns End of explanation """ data_array = np.array([np.arange(1, 10, 1), np.arange(10, 100, 10), np.arange(100, 1000, 100)]).T df = pd.DataFrame(data_array, index=np.arange(1, 10, 1), columns=['A', 'B', 'C']) df df.B *= 2. df df.B = pow(df.B, 2) df """ Explanation: Apply a function to selected colunms values End of explanation """ data_array = np.array([np.arange(1, 10, 1), np.arange(10, 100, 10), np.arange(100, 1000, 100)]).T df = pd.DataFrame(data_array, index=np.arange(1, 10, 1), columns=['A', 'B', 'C']) df df[df.B < 50.] *= -1. df df[df.B < 50.] = pow(df[df.B < 50.], 2) df """ Explanation: Apply a function to selected rows values End of explanation """ a1 = np.array([np.arange(1, 5, 1), np.arange(10, 50, 10), np.arange(100, 500, 100)]).T df1 = pd.DataFrame(a1, columns=['ID', 'B', 'C']) a2 = np.array([np.arange(1, 5, 1), np.arange(1000, 5000, 1000), np.arange(10000, 50000, 10000)]).T df2 = pd.DataFrame(a2, columns=['ID', 'B', 'C']) display(df1) display(df2) df = pd.merge(df1, df2, on="ID", suffixes=('_1', '_2')) #.dropna(how='any') display(df) """ Explanation: Merge See: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html#pandas.merge End of explanation """ a1 = np.array([np.arange(1, 5, 1), np.arange(10, 50, 10), np.arange(100, 500, 100)]).T df1 = pd.DataFrame(a1, columns=['ID', 'B', 'C']) a2 = np.array([np.arange(1, 5, 1), np.arange(1000, 5000, 1000), np.arange(10000, 50000, 10000)]).T df2 = pd.DataFrame(a2, columns=['ID', 'B', 'C']) df1.iloc[0,2] = np.nan df1.iloc[1,1] = np.nan df1.iloc[2,2] = np.nan df1.iloc[3,1] = np.nan df2.iloc[0,1] = np.nan df2.iloc[1,2] = np.nan df2.iloc[2,1] = np.nan df2.iloc[3,2] = np.nan df = pd.merge(df1, df2, on="ID", suffixes=('_1', '_2')) #.dropna(how='any') display(df1) display(df2) display(df) """ Explanation: Merge with NaN End of explanation """ a1 = np.array([np.arange(1, 5, 1), np.arange(10, 50, 10), np.arange(100, 500, 100)]).T df1 = pd.DataFrame(a1, columns=['ID', 'B', 'C']) a2 = np.array([np.arange(1, 3, 1), np.arange(1000, 3000, 1000), np.arange(10000, 30000, 10000)]).T df2 = pd.DataFrame(a2, columns=['ID', 'B', 'C']) display(df1) display(df2) print("Left: use only keys from left frame (SQL: left outer join)") df = pd.merge(df1, df2, on="ID", how="left", suffixes=('_1', '_2')) #.dropna(how='any') display(df) print("Right: use only keys from right frame (SQL: right outer join)") df = pd.merge(df1, df2, on="ID", how="right", suffixes=('_1', '_2')) #.dropna(how='any') display(df) print("Inner: use intersection of keys from both frames (SQL: inner join) [DEFAULT]") df = pd.merge(df1, df2, on="ID", how="inner", suffixes=('_1', '_2')) #.dropna(how='any') display(df) print("Outer: use union of keys from both frames (SQL: full outer join)") df = pd.merge(df1, df2, on="ID", how="outer", suffixes=('_1', '_2')) #.dropna(how='any') display(df) """ Explanation: Merge with missing rows End of explanation """ a = np.array([[3, 5, 5, 5, 7, 7, 7, 7], [2, 4, 4, 3, 1, 3, 3, 2], [3, 4, 5, 6, 1, 8, 9, 8]]).T df = pd.DataFrame(a, columns=['A', 'B', 'C']) df """ Explanation: GroupBy See: http://pandas.pydata.org/pandas-docs/stable/groupby.html End 
of explanation """ df.groupby(["A"]).count() df.groupby(["A"]).sum().B df.groupby(["A"]).mean().B """ Explanation: GroupBy with single key End of explanation """ df.groupby(["A","B"]).count() """ Explanation: GroupBy with multiple keys End of explanation """ df.A.value_counts() df.A.value_counts().plot.bar() """ Explanation: Count the number of occurrences of a column value End of explanation """ a = np.array([[3, np.nan, 5, np.nan, 7, 7, 7, 7], [2, 4, 4, 3, 1, 3, 3, 2], [3, 4, 5, 6, 1, 8, 9, 8]]).T df = pd.DataFrame(a, columns=['A', 'B', 'C']) df df.A.isnull().sum() """ Explanation: Count the number of NaN values in a column End of explanation """ #help(df.plot) """ Explanation: Plot See https://pandas.pydata.org/pandas-docs/stable/visualization.html End of explanation """ x = np.arange(0, 6, 0.1) y1 = np.cos(x) y2 = np.sin(x) Y = np.array([y1, y2]).T df = pd.DataFrame(Y, columns=['cos(x)', 'sin(x)'], index=x) df.iloc[:10] df.plot(legend=True) """ Explanation: Line plot End of explanation """ df.plot.line(legend=True) """ Explanation: or End of explanation """ x = np.arange(0, 6, 0.5) y1 = np.cos(x) y2 = np.sin(x) Y = np.array([y1, y2]).T df = pd.DataFrame(Y, columns=['cos(x)', 'sin(x)'], index=x) df """ Explanation: Bar plot End of explanation """ df.plot.bar(legend=True) df.plot.bar(legend=True, stacked=True) """ Explanation: Vertical End of explanation """ df.plot.barh(legend=True) """ Explanation: Horizontal End of explanation """ x1 = np.random.normal(size=(10000)) x2 = np.random.normal(loc=3, scale=2, size=(10000)) X = np.array([x1, x2]).T df = pd.DataFrame(X, columns=[r'$\mathcal{N}(0,1)$', r'$\mathcal{N}(3,2)$']) df.plot.hist(alpha=0.2, bins=100, legend=True) """ Explanation: Histogram End of explanation """ x1 = np.random.normal(size=(10000)) x2 = np.random.normal(loc=3, scale=2, size=(10000)) X = np.array([x1, x2]).T df = pd.DataFrame(X, columns=[r'$\mathcal{N}(0,1)$', r'$\mathcal{N}(3,2)$']) df.plot.box() """ Explanation: Box plot End of explanation """ df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b']) df['b'] = df['b'] + np.arange(1000) df.plot.hexbin(x='a', y='b', gridsize=25) """ Explanation: Hexbin plot End of explanation """ x1 = np.random.normal(size=(10000)) x2 = np.random.normal(loc=3, scale=2, size=(10000)) X = np.array([x1, x2]).T df = pd.DataFrame(X, columns=[r'$\mathcal{N}(0,1)$', r'$\mathcal{N}(3,2)$']) df.plot.kde() """ Explanation: Kernel Density Estimation (KDE) plot End of explanation """ df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd']) df.plot.area() """ Explanation: Area plot End of explanation """ x = np.random.randint(low=0, high=6, size=(50)) df = pd.DataFrame(x, columns=["A"]) df.A.value_counts() df.A.value_counts().plot.pie(y="A") """ Explanation: Pie chart End of explanation """ x1 = np.random.normal(size=(10000)) x2 = np.random.normal(loc=3, scale=2, size=(10000)) X = np.array([x1, x2]).T df = pd.DataFrame(X, columns=[r'$\mathcal{N}(0,1)$', r'$\mathcal{N}(3,2)$']) df.plot.scatter(x=r'$\mathcal{N}(0,1)$', y=r'$\mathcal{N}(3,2)$', alpha=0.2) """ Explanation: Scatter plot End of explanation """
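
# A hedged aside (not part of the original notebook): each pandas plot call above returns a
# matplotlib Axes object, so any of these figures can also be written to disk, e.g.:
ax = df.plot.scatter(x=r'$\mathcal{N}(0,1)$', y=r'$\mathcal{N}(3,2)$', alpha=0.2)
ax.get_figure().savefig('scatter_example.png', dpi=150)  # 'scatter_example.png' is an arbitrary filename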
ceos-seo/data_cube_notebooks
notebooks/animation/3D/GA_Water_3D_Reservoir/GA_Water_3DReservoir.ipynb
apache-2.0
import sys import os sys.path.append(os.environ.get('NOTEBOOK_ROOT')) # Supress Warning import warnings warnings.filterwarnings('ignore') import datacube import glob import rasterio import scipy import xarray as xr import numpy as np import pandas as pd import geopandas as gpd from skimage import filters from skimage import exposure import matplotlib.pyplot as plt from rasterstats import zonal_stats import time from utils.data_cube_utilities.clean_mask import \ landsat_qa_clean_mask, landsat_clean_mask_invalid from utils.data_cube_utilities.dc_display_map import display_map from utils.data_cube_utilities.import_export import export_xarray_to_netcdf from ga_utils import contour_extract from ga_utils import contours_to_arrays from ga_utils import interpolate_timeseries from datacube.utils.aws import configure_s3_access configure_s3_access(requester_pays=True) dc = datacube.Datacube(app = 'my_app') import os sub_dir = 'example' output_dir = f'output/{sub_dir}' if not os.path.exists(output_dir): os.makedirs(output_dir) no_data = -9999 %load_ext autoreload %autoreload 2 """ Explanation: Derive waterbody relative topography using Landsat This notebook demonstrates how to load Landsat time series data, compute a water index, generate a rolling median water index composites, extract contours along the land-water boundary, and finally interpolate between contours to produce a 3D relative topographic surface. This relative topography could be easily calibrated to obtain absolute bathymetry (and accordingly, volume estimates) with a simple GPS transect from the highest to the deepest part of the lake during a dry period. Original Author: Robbi Bishop-Taylor Original Date: 30 October 2018 Original Notebook: https://github.com/digitalearthafrica/deafrica-sandbox-notebooks/blob/master/RCMRD_Demo/colombo_workshop/GA_Water_3DReservoir.ipynb Chunking Modifications Author: John Rattz Chunking Modification Date: 4 October 2019 End of explanation """ # Lake Sulunga # platform = "LANDSAT_8" # product = "ls8_usgs_sr_scene" # collection = 'c1' # level = 'l2' # lat = (-5.86, -6.27) # lon = (34.97, 35.38) # time_extents = ('2018-01-01', '2018-12-31') # Lake Balangida # platform = "LANDSAT_8" # product = "ls8_usgs_sr_scene" # collection = 'c1' # level = 'l2' # lat = (-4.60, -4.76) # lon = (35.135, 35.295) # time_extents = ('2013-01-01', '2018-11-01') # Lake Chala (small) platform = "LANDSAT_8" product = "ls8_usgs_sr_scene" collection = 'c1' level = 'l2' lat = (-3.3282, -3.3065) lon = (37.6871, 37.7140) time_extents = ('2014-01-01', '2014-06-30') # Lake Nakuru # platform = "LANDSAT_8" # product = "ls8_usgs_sr_scene" # collection = 'c1' # level = 'l2' # lat = (-0.30, -0.42) # lon = (36.05, 36.13) # time_extents = ('2013-01-01', '2018-11-01') # Lake Volta, Ghana # platform = "LANDSAT_8" # product = "ls8_usgs_sr_scene" # collection = 'c1' # level = 'l2' # Full # lat = (6.1914, 8.9334) # lon = (-1.4526, 0.8276) # time_extents = ('2013-01-01', '2018-12-31') # small subset in Eastern Region # lat = (6.7219, 6.8092) # lon = (-0.6406, -0.5033) # time_extents = ('2016-01-01', '2018-12-31') # Lake Naivasha # platform = "LANDSAT_8" # product = "ls8_usgs_sr_scene" # collection = 'c1' # level = 'l2' # lat = (-0.8350, -0.6700) # lon = (36.2600, 36.4300) # time_extents = ('2013-01-01', '2013-12-31') # Lake in Singida, Tanzania # platform = "LANDSAT_8" # product = "ls8_usgs_sr_scene" # collection = 'c1' # level = 'l2' # lat = (-4.665, -4.751) # lon = (34.80, 34.885) # time_extents = ('2013-01-01', '2018-11-01') # Marigot de Bignone 
# platform = "LANDSAT_8" # product = "ls8_lasrc_senegal" # collection = 'c1' # level = 'l2' # lon = (-16.43, -16.21) # lat = (12.83, 12.65) # time_extents = ('2013-01-01', '2018-11-01') display_map(lat, lon) """ Explanation: Set up analysis End of explanation """ ## Settings ## # Contour extraction and interpolation parameters min_vertices = 5 # This can be used to remove noise by dropping contours with less than X vertices guassian_sigma = 0 # Controls amount of smoothing to apply to interpolated raster. Higher = smoother # The water index to use as a proxy of water extent. water_index = 'mndwi' # Can be any of ['mndwi', 'ndwi', 'awei'] ## End Settings ## water_index_req_bands = {'mndwi': ['green', 'swir1'], 'ndwi': ['green', 'nir'], 'awei': ['green', 'swir1','nir','swir2']} measurements = list(set(water_index_req_bands[water_index] + ['pixel_qa'] +\ ['red', 'green', 'blue'])) data = dc.load(latitude = lat, longitude = lon, platform = platform, time = time_extents, product = product, measurements = measurements, group_by='solar_day', dask_chunks={'time':5, 'latitude':1000, 'longitude':1000}) """ Explanation: Obtain data for outputs End of explanation """ from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full ## Clean the data. ## clean_mask = landsat_clean_mask_full(dc, data, product=product, platform=platform, collection=collection, level=level) cleaned_data = data.where(clean_mask) ## Compute water index. ## if water_index == 'mndwi': cleaned_data[water_index] = (cleaned_data.green - cleaned_data.swir1) / \ (cleaned_data.green + cleaned_data.swir1) elif water_index == 'ndwi': cleaned_data[water_index] = (cleaned_data.green - cleaned_data.nir) / \ (cleaned_data.green + cleaned_data.nir) else: # AWEI cleaned_data[water_index] = 4 * (cleaned_data.green * 0.0001 - cleaned_data.swir1 * 0.0001) - \ (0.25 * cleaned_data.nir * 0.0001 + 2.75 * cleaned_data.swir2 * 0.0001) ## Obtain the max water mask. ## max_water_mask = (cleaned_data[water_index].fillna(-1) > 0).max('time').persist() """ Explanation: Checkpointing Get max water extent mask End of explanation """ ## Compute percentages of valid data and inundation. ## # Create mask of max extent of water (land = 0, water = 1) and set all pixels # outside max extent area to NaN. water_masked = cleaned_data[water_index].where(max_water_mask) # Calculate the valid data percentage for each time step by dividing the number of # non-NaN pixels in timestep by the total number of pixels in the max extent water layer data_perc = water_masked.count(['latitude', 'longitude']) /\ max_water_mask.sum() cleaned_data['data_perc'] = data_perc ## Calculate inundation percent. ## inundation_perc = (water_masked > 0).sum(['latitude', 'longitude']) \ / max_water_mask.sum() cleaned_data['inundation_perc'] = inundation_perc # Restrict to scenes with greater than 20% valid data and select variables for further analysis cleaned_data = cleaned_data.sortby('inundation_perc', ascending=False) times_to_keep = cleaned_data.data_perc > 0.2 cleaned_subset = cleaned_data.sel(time=times_to_keep) """ Explanation: Get data and inundation percents for each time End of explanation """ # Determine the minimum and maximum time indices for the median water composites. 
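
# A small aside (not part of the original notebook): MNDWI and NDWI above are both instances of
# the same normalized-difference pattern, which could be factored into a helper like this:
def normalized_difference(band_a, band_b):
    # e.g. green/swir1 reproduces the MNDWI, green/nir the NDWI
    return (band_a - band_b) / (band_a + band_b)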
min_max_indices_comp = [] for i, time_ind in enumerate(np.arange(0, len(cleaned_subset.time), 5)): # identify min and max index to extract rolling median min_index = max(time_ind - 15, 0) max_index = min(time_ind + 15, len(cleaned_subset.time)-1) min_max_indices_comp.append((min_index, max_index)) num_rolling_composites = len(min_max_indices_comp) rolling_water_composites = [] time_range_strs = [] # Used to label output files. for comp_ind, time_inds in enumerate(min_max_indices_comp): time_range_strs.append( '_'.join([np.datetime_as_string(t, unit='D') for t in cleaned_subset.time.values[list(time_inds)]])) rolling_water_composite = cleaned_subset[[water_index, 'inundation_perc']].isel(time=slice(*time_inds)).mean('time') rolling_water_composites.append(rolling_water_composite) combined = xr.concat(rolling_water_composites, dim='time_period').sortby('time_period') """ Explanation: Obtain rolling median water composites End of explanation """ # Plot only observations with greater than 20% valid data timeseries_subset = cleaned_data.inundation_perc.sel(time = times_to_keep) # # Interpolate to one point per week, then take a rolling mean to smooth line for plotting timeseries_subset = interpolate_timeseries(timeseries_subset.sortby('time').chunk({'time': -1}), freq='3D', method='linear') timeseries_subset = timeseries_subset.rolling(time=5, min_periods=1).mean() timeseries_subset.plot(size=5) # Export to text file name = 'inundation_perc' timeseries_subset_df = timeseries_subset.to_dataframe(name=name) timeseries_subset_df['date'] = timeseries_subset_df.index.floor('d') timeseries_subset_df.set_index('date') timeseries_subset_df.to_csv(output_dir + '/{}_timeseries.csv'.format(name)) """ Explanation: Create visualizations Export time series End of explanation """ observations = combined.inundation_perc for i, observation in enumerate(observations): output_shp = f"{output_dir}/{name}_{time_range_strs[i]}.shp" if os.path.exists(output_shp): continue cleaned_subset_i = combined.isel(time_period=i) # Compute area area = float(cleaned_subset_i.inundation_perc.values) * 100 # Prepare attributes as input to contour extract attribute_data = {'in_perc': [area]} attribute_dtypes = {'in_perc': 'float'} # Set threshold thresh = 0 # Extract contours with custom attribute fields: contour_dict = contour_extract(z_values=[thresh], ds_array=cleaned_subset_i[water_index].values, ds_crs='epsg:4326', ds_affine=data.geobox.transform, output_shp=output_shp, min_vertices=min_vertices, attribute_data=attribute_data, attribute_dtypes=attribute_dtypes) # Combine all shapefiles into one file shapefiles = glob.glob(output_dir + '/{}_*.shp'.format(name)) gdf = pd.concat([gpd.read_file(shp) for shp in shapefiles], sort=False).pipe(gpd.GeoDataFrame) # Set CRS gdf['crs'] = 'EPSG:4326' # Plot contours fig, ax = plt.subplots(figsize=(16, 16)) gdf.plot(ax=ax, column='in_perc', cmap='viridis', linewidth=0.8) plt.show() """ Explanation: Combine the contours and plot them End of explanation """ # Extract x, y and z points for interpolation all_contours = contours_to_arrays(gdf=gdf, col='in_perc') points_xy = all_contours[:, [1, 0]] values_elev = all_contours[:, 2] # Create grid to interpolate into x_size, _, upleft_x, _, y_size, upleft_y = data.geobox.transform[0:6] xcols = len(data.longitude) yrows = len(data.latitude) bottomright_x = upleft_x + (x_size * xcols) bottomright_y = upleft_y + (y_size * yrows) grid_y, grid_x = np.mgrid[upleft_y:bottomright_y:1j * yrows, upleft_x:bottomright_x:1j * xcols] # Interpolate x, y and z 
values using linear/TIN interpolation out = scipy.interpolate.griddata(points_xy, values_elev, (grid_y, grid_x), method='linear') # Set areas outside of water composite to highest inundation percentage test = (combined[water_index] > 0).max(dim='time_period') out[~test] = np.nanmax(out) out[np.isnan(out)] = np.nanmax(out) # Apply guassian blur to smooth transitions between z values (optional) out = filters.gaussian(out, sigma=guassian_sigma) out = exposure.rescale_intensity(out, out_range=(timeseries_subset.min().values - 0.001, timeseries_subset.max().values + 0.001)) # Plot interpolated surface fig, ax = plt.subplots(figsize=(16, 16)) ax.imshow(out, cmap='magma_r', extent=[upleft_x, bottomright_x, bottomright_y, upleft_y]) gdf.plot(ax=ax, edgecolor='white', linewidth=0.5, alpha=0.5) plt.show() """ Explanation: Interpolate DEM values End of explanation """ kwargs = {'driver': 'GTiff', 'width': xcols, 'height': yrows, 'count': 1, 'dtype': rasterio.float64, 'crs': 'EPSG:4326', 'transform': data.geobox.transform, 'nodata': no_data} with rasterio.open(output_dir + '/{}_dem.tif'.format(name), 'w', **kwargs) as target: target.write_band(1, out) # Ensure that only one of the options below is uncommented. # Option 1: # Select nearly cloud-free images with low inundation. # You may need to tune the thresholds for `data_perc` and # `inundation_perc`. There may be no suitable data if # (1) the water body does not recede much during the selected time, or # (2) there is too much cloud cover for the selected time. # rgb_times = time_coords[(data.data_perc > 0.9) & (data.inundation_perc < 0.6)] # rgb_times = cleaned_data.time.values\ # [(cleaned_data.data_perc > 0.9).values & \ # (cleaned_data.inundation_perc < 0.6).values] # Option 2 (if Option 1 is untenable): # Get rgb values from a composite of all the data. rgb_times = cleaned_data.time.values[[0,-1]] # Obtain a mean composite of the RGB values. rgb_composite = \ cleaned_data[['red', 'green', 'blue']]\ .sel(time = rgb_times)\ .mean('time') data_array = rgb_composite.to_array().values # Optimise colours using a percentile stretch rgb_array = np.transpose(data_array, [1, 2, 0]) p_low, p_high = np.nanpercentile(rgb_array, [2, 98]) img_toshow = exposure.rescale_intensity(rgb_array, in_range=(p_low, p_high), out_range=(0, 1)) # Change dtype to int16 scaled between 0 and 10000 to save disk space img_toshow = (img_toshow * 10000).astype(rasterio.int16) kwargs = {'driver': 'GTiff', 'width': xcols, 'height': yrows, 'count': 3, 'dtype': rasterio.int16, 'crs': 'EPSG:4326', 'transform': data.geobox.transform, 'nodata': no_data} with rasterio.open(output_dir + '/{}_rgb.tif'.format(name), 'w', **kwargs) as target: target.write(np.transpose(img_toshow, [2, 0, 1])) """ Explanation: Export DEM and RGB arrays to file End of explanation """
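
# A hedged aside (not part of the original notebook): the exported GeoTIFFs can be sanity-checked
# by reading them back with rasterio, e.g. for the relative DEM written above:
with rasterio.open(output_dir + '/{}_dem.tif'.format(name)) as src:
    dem_check = src.read(1)  # the single band holding the relative topography
    print(src.crs, src.shape, dem_check.min(), dem_check.max())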
hunterherrin/phys202-2015-work
assignments/assignment06/DisplayEx01.ipynb
mit
from IPython.display import Image from IPython.display import HTML from IPython.display import display assert True # leave this to grade the import statements """ Explanation: Display Exercise 1 Imports Put any needed imports needed to display rich output the following cell: End of explanation """ Image(url='http://images.mentalfloss.com/sites/default/files/styles/insert_main_wide_image/public/einstein1_7.jpg', embed=True, width=600, height=600) assert True # leave this to grade the image display """ Explanation: Basic rich display Find a Physics related image on the internet and display it in this notebook using the Image object. Load it using the url argument to Image (don't upload the image to this server). Make sure the set the embed flag so the image is embedded in the notebook data. Set the width and height to 600px. End of explanation """ %%html <caption>Quarks</caption> <table> <thead> <tr> <th>Name</th> <th>Symbol</th> <th>Antiparticle</th> <th>Charge (e)</th> <th>Mass ($MeV/c^2$)</th> </tr> </thead> <tbody> <tr> <td>up</td> <td>$u$</td> <td>$\overline{u}$</td> <td>$+\frac{2}{3}$</td> <td>$1.5-3.3$</td> </tr> <tr> <td>down</td> <td>$d$</td> <td>$\overline{d}$</td> <td>$-\frac{1}{3}$</td> <td>$3.5-6.0$</td> </tr> <tr> <td>charm</td> <td>$c$</td> <td>$\overline{c}$</td> <td>$+\frac{2}{3}$</td> <td>$1,160-1,340$</td> </tr> <tr> <td>strange</td> <td>$s$</td> <td>$\overline{s}$</td> <td>$-\frac{1}{3}$</td> <td>$70-130$</td> </tr> <tr> <td>top</td> <td>$t$</td> <td>$\overline{t}$</td> <td>$+\frac{2}{3}$</td> <td>$169,100-173,300$</td> </tr> <tr> <td>bottom</td> <td>$b$</td> <td>$\overline{b}$</td> <td>$-\frac{1}{3}$</td> <td>$4,130-4,370$</td> </tr> </tbody> assert True # leave this here to grade the quark table """ Explanation: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate. End of explanation """
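
# A minimal sketch (an aside, not the graded answer): an HTML table can be built as a plain
# Python string and rendered with the HTML display object used above, e.g.:
rows = ("<tr><th>Name</th><th>Symbol</th></tr>"
        "<tr><td>up</td><td>$u$</td></tr>"
        "<tr><td>down</td><td>$d$</td></tr>")
HTML("<table>" + rows + "</table>")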
karst87/ml
01_openlibs/tensorflow/01_examples/0_prerequisite/mnist_dataset_intro.ipynb
mit
# Import MNIST
from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset ("MNIST_data/" and one_hot=True are illustrative settings)
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Load the data
X_train = mnist.train.images
Y_train = mnist.train.labels
X_test = mnist.test.images
Y_test = mnist.test.labels

print(X_train.shape)
print(Y_train.shape)
print(X_test.shape)
print(Y_test.shape)
"""
Explanation: Introduction to the MNIST dataset
Most of the examples use the MNIST dataset of handwritten digits. It contains 60,000 training samples and 10,000 test samples. The digits have been size-normalized and centered, so each sample can be represented as a 28 * 28 matrix of values between 0 and 1.
Preview
Usage
In the examples, we use TF's input_data.py script to load the dataset. It is quite convenient for managing the data; concretely, it will:
Download the dataset
Load the whole dataset into numpy arrays
End of explanation
"""

# Get the next batch of 64 images and their labels
batch_X, batch_Y = mnist.train.next_batch(64)

print(batch_X.shape)
print(batch_Y.shape)
"""
Explanation: The 'next_batch' method iterates over the whole dataset and returns only the portion of the data that is needed (this saves memory and avoids loading the entire dataset at once)
End of explanation
"""
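
# A hedged sketch (not part of the original notebook): next_batch is typically called inside a
# training loop, drawing one batch per optimization step, e.g.:
batch_size = 64
num_batches = mnist.train.num_examples // batch_size
for step in range(num_batches):
    batch_X, batch_Y = mnist.train.next_batch(batch_size)
    # ... feed batch_X / batch_Y to the model's training operation here ...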
Radiomics/pyradiomics
notebooks/helloFeatureClass.ipynb
bsd-3-clause
from __future__ import print_function import os import collections import SimpleITK as sitk import numpy import six import radiomics from radiomics import firstorder, glcm, imageoperations, shape, glrlm, glszm """ Explanation: Hello Feature Class example: using the feature classes to calculate features This example shows how to use the Radiomics package to directly instantiate the feature classes for feature extraction. Note that this is not the intended standard use. For an example on the standard use with feature extractor, see the helloRadiomics example. End of explanation """ imageName, maskName = radiomics.getTestCase('brain1') if imageName is None or maskName is None: # Something went wrong, in this case PyRadiomics will also log an error raise Exception('Error getting testcase!') # Raise exception to prevent cells below from running in case of "run all" image = sitk.ReadImage(imageName) mask = sitk.ReadImage(maskName) """ Explanation: Getting the test case Test cases can be downloaded to temporary files. This is handled by the radiomics.getTestCase() function, which checks if the requested test case is available and if not, downloads it. It returns a tuple with the location of the image and mask of the requested test case, or (None, None) if it fails. Alternatively, if the data is available somewhere locally, this directory can be passed as a second argument to radiomics.getTestCase(). If that directory does not exist or does not contain the testcase, functionality reverts to default and tries to download the test data. If getting the test case fails, PyRadiomics will log an error explaining the cause. End of explanation """ settings = {} settings['binWidth'] = 25 settings['resampledPixelSpacing'] = None # settings['resampledPixelSpacing'] = [3, 3, 3] # This is an example for defining resampling (voxels with size 3x3x3mm) settings['interpolator'] = 'sitkBSpline' settings['verbose'] = True """ Explanation: Preprocess the image Extraction Settings End of explanation """ # Resample if necessary interpolator = settings.get('interpolator') resampledPixelSpacing = settings.get('resampledPixelSpacing') if interpolator is not None and resampledPixelSpacing is not None: image, mask = imageoperations.resampleImage(image, mask, **settings) """ Explanation: If enabled, resample the image End of explanation """ # Crop the image # bb is the bounding box, upon which the image and mask are cropped bb, correctedMask = imageoperations.checkMask(image, mask, label=1) if correctedMask is not None: mask = correctedMask croppedImage, croppedMask = imageoperations.cropToTumorMask(image, mask, bb) """ Explanation: Calculate features using original image End of explanation """ firstOrderFeatures = firstorder.RadiomicsFirstOrder(croppedImage, croppedMask, **settings) # Set the features to be calculated firstOrderFeatures.enableFeatureByName('Mean', True) # firstOrderFeatures.enableAllFeatures() # Print out the docstrings of the enabled features print('Will calculate the following first order features: ') for f in firstOrderFeatures.enabledFeatures.keys(): print(f) print(getattr(firstOrderFeatures, 'get%sFeatureValue' % f).__doc__) # Calculate the features and print(out result) print('Calculating first order features...',) result = firstOrderFeatures.execute() print('done') print('Calculated first order features: ') for (key, val) in six.iteritems(result): print(' ', key, ':', val) """ Explanation: Calculate Firstorder features End of explanation """ shapeFeatures = shape.RadiomicsShape(croppedImage, 
croppedMask, **settings) # Set the features to be calculated # shapeFeatures.enableFeatureByName('Volume', True) shapeFeatures.enableAllFeatures() # Print out the docstrings of the enabled features print('Will calculate the following shape features: ') for f in shapeFeatures.enabledFeatures.keys(): print(f) print(getattr(shapeFeatures, 'get%sFeatureValue' % f).__doc__) # Calculate the features and print(out result) print('Calculating shape features...',) result = shapeFeatures.execute() print('done') print('Calculated shape features: ') for (key, val) in six.iteritems(result): print(' ', key, ':', val) """ Explanation: Calculate Shape Features End of explanation """ glcmFeatures = glcm.RadiomicsGLCM(croppedImage, croppedMask, **settings) # Set the features to be calculated # glcmFeatures.enableFeatureByName('SumEntropy', True) glcmFeatures.enableAllFeatures() # Print out the docstrings of the enabled features print('Will calculate the following GLCM features: ') for f in glcmFeatures.enabledFeatures.keys(): print(f) print(getattr(glcmFeatures, 'get%sFeatureValue' % f).__doc__) # Calculate the features and print(out result) print('Calculating GLCM features...',) result = glcmFeatures.execute() print('done') print('Calculated GLCM features: ') for (key, val) in six.iteritems(result): print(' ', key, ':', val) """ Explanation: Calculate GLCM Features End of explanation """ glrlmFeatures = glrlm.RadiomicsGLRLM(croppedImage, croppedMask, **settings) # Set the features to be calculated # glrlmFeatures.enableFeatureByName('ShortRunEmphasis', True) glrlmFeatures.enableAllFeatures() # Print out the docstrings of the enabled features print('Will calculate the following GLRLM features: ') for f in glrlmFeatures.enabledFeatures.keys(): print(f) print(getattr(glrlmFeatures, 'get%sFeatureValue' % f).__doc__) # Calculate the features and print(out result) print('Calculating GLRLM features...',) result = glrlmFeatures.execute() print('done') print('Calculated GLRLM features: ') for (key, val) in six.iteritems(result): print(' ', key, ':', val) """ Explanation: Calculate GLRLM Features End of explanation """ glszmFeatures = glszm.RadiomicsGLSZM(croppedImage, croppedMask, **settings) # Set the features to be calculated # glszmFeatures.enableFeatureByName('LargeAreaEmphasis', True) glszmFeatures.enableAllFeatures() # Print out the docstrings of the enabled features print('Will calculate the following GLSZM features: ') for f in glszmFeatures.enabledFeatures.keys(): print(f) print(getattr(glszmFeatures, 'get%sFeatureValue' % f).__doc__) # Calculate the features and print(out result) print('Calculating GLSZM features...',) result = glszmFeatures.execute() print('done') print('Calculated GLSZM features: ') for (key, val) in six.iteritems(result): print(' ', key, ':', val) """ Explanation: Calculate GLSZM Features End of explanation """ logFeatures = {} sigmaValues = [1.0, 3.0, 5.0] for logImage, imageTypename, inputSettings in imageoperations.getLoGImage(image, mask, sigma=sigmaValues): logImage, croppedMask = imageoperations.cropToTumorMask(logImage, mask, bb) logFirstorderFeatures = firstorder.RadiomicsFirstOrder(logImage, croppedMask, **inputSettings) logFirstorderFeatures.enableAllFeatures() logFeatures[imageTypename] = logFirstorderFeatures.execute() # Show result for sigma, features in six.iteritems(logFeatures): for (key, val) in six.iteritems(features): laplacianFeatureName = '%s_%s' % (str(sigma), key) print(' ', laplacianFeatureName, ':', val) """ Explanation: Calculate Features using Laplacian of 
Gaussian Filter Calculating features on filtered images is very similar to calculating features on the original image. All filters in PyRadiomics have the same input and output signature, and there is even one for applying no filter. This enables to loop over a list of requested filters and apply them in the same piece of code. It is applied like this in the execute function in feature extractor. The input for the filters is the image, with additional keywords. If no additional keywords are supplied, the filter uses default values where applicable. It returns a generator object, allowing to define the generators to be applied before the filters functions are actually called. Calculate Firstorder on LoG filtered images End of explanation """ waveletFeatures = {} for decompositionImage, decompositionName, inputSettings in imageoperations.getWaveletImage(image, mask): decompositionImage, croppedMask = imageoperations.cropToTumorMask(decompositionImage, mask, bb) waveletFirstOrderFeaturs = firstorder.RadiomicsFirstOrder(decompositionImage, croppedMask, **inputSettings) waveletFirstOrderFeaturs.enableAllFeatures() print('Calculate firstorder features with ', decompositionName) waveletFeatures[decompositionName] = waveletFirstOrderFeaturs.execute() # Show result for decompositionName, features in six.iteritems(waveletFeatures): for (key, val) in six.iteritems(features): waveletFeatureName = '%s_%s' % (str(decompositionName), key) print(' ', waveletFeatureName, ':', val) """ Explanation: Calculate Features using Wavelet filter Calculate Firstorder on filtered images End of explanation """
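
# A hedged aside (not part of the original notebook): the per-decomposition dictionaries computed
# above can be flattened into a single dictionary, which is convenient for exporting results:
flatWaveletFeatures = {}
for decompositionName, features in six.iteritems(waveletFeatures):
    for key, val in six.iteritems(features):
        flatWaveletFeatures['%s_%s' % (decompositionName, key)] = val
print('Collected', len(flatWaveletFeatures), 'wavelet-derived feature values')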
wanderer2/pymc3
docs/source/notebooks/stochastic_volatility.ipynb
apache-2.0
import numpy as np import pymc3 as pm from pymc3.distributions.timeseries import GaussianRandomWalk from scipy import optimize %pylab inline """ Explanation: Stochastic Volatility model End of explanation """ n = 400 returns = np.genfromtxt("../data/SP500.csv")[-n:] returns[:5] plt.plot(returns) """ Explanation: Asset prices have time-varying volatility (variance of day over day returns). In some periods, returns are highly variable, while in others very stable. Stochastic volatility models model this with a latent volatility variable, modeled as a stochastic process. The following model is similar to the one described in the No-U-Turn Sampler paper, Hoffman (2011) p21. $$ \sigma \sim Exponential(50) $$ $$ \nu \sim Exponential(.1) $$ $$ s_i \sim Normal(s_{i-1}, \sigma^{-2}) $$ $$ log(\frac{y_i}{y_{i-1}}) \sim t(\nu, 0, exp(-2 s_i)) $$ Here, $y$ is the daily return series and $s$ is the latent log volatility process. Build Model First we load some daily returns of the S&P 500. End of explanation """ model = pm.Model() with model: sigma = pm.Exponential('sigma', 1./.02, testval=.1) nu = pm.Exponential('nu', 1./10) s = GaussianRandomWalk('s', sigma**-2, shape=n) r = pm.StudentT('r', nu, lam=pm.math.exp(-2*s), observed=returns) """ Explanation: Specifying the model in pymc3 mirrors its statistical specification. End of explanation """ with model: trace = pm.sample(2000) figsize(12,6) pm.traceplot(trace, model.vars[:-1]); figsize(12,6) title(str(s)) plot(trace[s][::10].T,'b', alpha=.03); xlabel('time') ylabel('log volatility') """ Explanation: Fit Model For this model, the full maximum a posteriori (MAP) point is degenerate and has infinite density. To get good convergence with NUTS we use ADVI (autodiff variational inference) for initialization. This is done under the hood by the sample_init() function. End of explanation """ plot(np.abs(returns)) plot(np.exp(trace[s][::10].T), 'r', alpha=.03); sd = np.exp(trace[s].T) xlabel('time') ylabel('absolute returns') """ Explanation: Looking at the returns over time and overlaying the estimated standard deviation we can see how the model tracks the volatility over time. End of explanation """
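
# A hedged aside (not part of the original notebook): a single summary of the volatility path can
# be obtained by averaging exp(s) over the posterior samples in the trace:
vol_mean = np.exp(trace[s]).mean(axis=0)
plot(vol_mean, 'k', lw=2)
xlabel('time')
ylabel('posterior mean volatility')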
gtzan/mir_book
data_mining_random_variables.ipynb
cc0-1.0
%matplotlib inline import matplotlib.pyplot as plt from scipy import stats import numpy as np class Random_Variable: def __init__(self, name, values, probability_distribution): self.name = name self.values = values self.probability_distribution = probability_distribution if all(type(item) is np.int64 for item in values): self.type = 'numeric' self.rv = stats.rv_discrete(name = name, values = (values, probability_distribution)) elif all(type(item) is str for item in values): self.type = 'symbolic' self.rv = stats.rv_discrete(name = name, values = (np.arange(len(values)), probability_distribution)) self.symbolic_values = values else: self.type = 'undefined' def sample(self,size): if (self.type =='numeric'): return self.rv.rvs(size=size) elif (self.type == 'symbolic'): numeric_samples = self.rv.rvs(size=size) mapped_samples = [values[x] for x in numeric_samples] return mapped_samples """ Explanation: Discrete Random Variables and Sampling George Tzanetakis, University of Victoria In this notebook we will explore discrete random variables and sampling. After defining a helper class and associated functions we will be able to create both symbolic and numeric random variables and generate samples from them. Define a helper random variable class based on the scipy discrete random variable functionality providing both numeric and symbolic RVs End of explanation """ values = ['H', 'T'] probabilities = [0.9, 0.1] coin = Random_Variable('coin', values, probabilities) samples = coin.sample(20) print(samples) values = ['1', '2', '3', '4', '5', '6'] probabilities = [1/6.] * 6 dice = Random_Variable('dice', values, probabilities) samples = dice.sample(10) print(samples); [100] * 10 [1 / 6.] * 3 """ Explanation: Let's first create some random samples of symbolic random variables corresponding to a coin and a dice End of explanation """ values = np.arange(1,7) probabilities = [1/6.] * 6 dice = Random_Variable('dice', values, probabilities) samples = dice.sample(100) plt.stem(samples, markerfmt= ' ') """ Explanation: Now let's look at a numeric random variable corresponding to a dice so that we can more easily make plots and histograms End of explanation """ plt.figure() plt.hist(samples,bins=[1,2,3,4,5,6,7],normed=1, rwidth=0.5,align='left'); """ Explanation: Let's now look at a histogram of these generated samples. Notice that even with 500 samples the bars are not equal length so the calculated frequencies are only approximating the probabilities used to generate them End of explanation """ plt.hist(samples,bins=[1,2,3,4,5,6,7],normed=1, rwidth=0.5,align='left', cumulative=True); """ Explanation: Let's plot the cumulative histogram of the samples End of explanation """ # we can also write the predicates directly using lambda notation est_even = len([x for x in samples if x%2==0]) / len(samples) est_2 = len([x for x in samples if x==2]) / len(samples) est_4 = len([x for x in samples if x==4]) / len(samples) est_6 = len([x for x in samples if x==6]) / len(samples) print(est_even) # Let's print some estimates print('Estimates of 2,4,6 = ', (est_2, est_4, est_6)) print('Direct estimate = ', est_even) print('Sum of estimates = ', est_2 + est_4 + est_6) print('Theoretical value = ', 0.5) """ Explanation: Let's now estimate the frequency of the event roll even number in different ways. First let's count the number of even numbers in the generated samples. Then let's take the sum of the counts of the individual estimated probabilities. End of explanation """
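
# A hedged aside (not part of the original notebook): the estimate of the 'even roll' probability
# approaches the theoretical value of 0.5 as the number of samples grows:
for size in [100, 1000, 10000]:
    rolls = dice.sample(size)
    print(size, len([x for x in rolls if x % 2 == 0]) / len(rolls))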
jsvine/spectra
docs/walkthrough.ipynb
mit
import spectra """ Explanation: Spectra Walkthrough This notebook provides basic documentation of the spectra Python library, which aims to simplify the process of creating color scales and converting colors from one "color space" to another. End of explanation """ from IPython.display import HTML swatch_template = """ <div style="float: left;"> <div style="width: 50px; height: 50px; background: {0};"></div> <div>{0}</div> </div> """ swatch_outer = """ <div style='width: 500px; overflow: auto; font-size: 10px; font-weight: bold; text-align: center; line-height: 1.5;'>{0}</div> """ def swatches(colors): hexes = (c.hexcode.upper() for c in colors) html = swatch_outer.format("".join(map(swatch_template.format, hexes))) return HTML(html) """ Explanation: Note: To visually display the colors we create, let's define and use this swatches function: End of explanation """ swatches([ spectra.html("tomato") ]) swatches([ spectra.rgb(1, 0.39, 0.28) ]) """ Explanation: Creating Colors The easiest way to create a color is to use these shortcuts, one for each "color space" spectra supports: spectra.rgb(r, g, b) spectra.hsl(h, s, l) spectra.hsv(h, s, v) spectra.lab(l, a, b) spectra.lch(l, c, h) spectra.cmy(c, m, y) spectra.cmyk(c, m, y, k) spectra.xyz(x, y, z) You can also pass a WC3 color name (e.g., "papayawhip") or hexcode (e.g., "#fefefe") to spectra.html(color), which will create the corresponding rgb color. For example: End of explanation """ tomato = spectra.lab(62.28, 57.67, 46.29) tomato.values tomato.rgb tomato.clamped_rgb tomato.hexcode """ Explanation: Getting Color Values Instances of spectra.Color have four main properties: .values: An array representation of the color's values in its own color space, e.g. (L, a, b) for an lab color. .hexcode: The hex encoding of this color, e.g. #ffffff for rgb(255, 255, 255)/html("white"). .rgb: The (r, g, b) values for this color in the rgb color space; these are allowed to go out of gamut. .clamped_rgb: The "clamped" (r, g, b) values for this color in the rgb color space. Note on .rgb and .rgb_clamped: Spectra follows colormath's convention: RGB spaces tend to have a smaller gamut than some of the CIE color spaces. When converting to RGB, this can cause some of the coordinates to end up being out of the acceptable range (0.0-1.0 or 1-255, depending on whether your RGB color is upscaled). [...] Rather than clamp these for you, we leave them as-is. End of explanation """ tomato.to("lch").values """ Explanation: Converting Colors Any spectra.Color can be converted to any supported color space, using the .to(colorspace) method. E.g.,: End of explanation """ yellow = spectra.html("yellow").to("lab") tomato.blend(yellow, 0.25).hexcode swatches([ tomato, tomato.blend(yellow, 0.25), tomato.blend(yellow, 0.75), yellow ]) swatches([ tomato.brighten(30), tomato, tomato.darken(30) ]) swatches([ tomato.saturate(40), tomato, tomato.desaturate(40) ]) """ Explanation: Color Operations The following spectra.Color methods return new colors: .blend(other_color, ratio=0.5) .brighten(amount=10) .darken(amount=10) .saturate(amount=10) .desaturate(amount=10) The parameter for .brighten/.darken is a positive/negative linear adjustment to the L(ightness) value of the color's Lab representation. (Spectra converts the color to Lab, makes the change, and then converts back to the original color space.) Likewise, the parameter for .saturate/.desaturate is a positive/negative linear adjustment to the c(hroma) value of the color's Lch representation. 
End of explanation """ start = spectra.html("#21313E") end = spectra.html("#EFEE69") swatches([ start, end ]) scale = spectra.scale([ start, end ]) scale(0.5) scale(0.5).hexcode swatches([ scale(0), scale(0.5), scale(1) ]) """ Explanation: Color Scales Color scales translate numbers into colors, based on a set of colors and a domain (default: 0->1). End of explanation """ ten_twenty_scale = scale.domain([ 10, 20 ]) swatches([ ten_twenty_scale(10), ten_twenty_scale(15), ten_twenty_scale(20) ]) """ Explanation: To set a custom domain, call .domain([ start_num, end_num ]): End of explanation """ my_range = ten_twenty_scale.range(10) [ x.hexcode for x in my_range ] swatches(my_range) """ Explanation: The .range(count) method produces an evenly-spaced list of colors: End of explanation """ swatches(spectra.range([ start, end ], 10)) """ Explanation: spectra.range(colors, count) provides a shortcut to the same results: End of explanation """ swatches(spectra.range([ "#21313E", "#EFEE69" ], 10)) """ Explanation: You can also pass plain hexcode or web-color strings to range and scale: End of explanation """ ranges_html = "" for space in sorted(spectra.COLOR_SPACES.keys()): converted_scale = scale.colorspace(space) ranges_html += "<div style='margin-top: 0.5em;'>" + space + "</div>" ranges_html += swatches(converted_scale.range(10)).data HTML(ranges_html) """ Explanation: The colors produced by scales and ranges depend on the color space you're using. You can change the color space by calling .colorspace(space). To wit: End of explanation """ red, gray, green = [ spectra.html(x).to("lab") for x in ("red", "#CCC", "green") ] polylinear_scale = spectra.scale([ red, gray, green ]) swatches(polylinear_scale.range(9)) """ Explanation: (Credit to Gregor Aisch for that example.) Polylinear Scales You can construct a scale along any number of colors. Constructing a scale along three colors can be handy for divergent color schemes. For example, here's a scale that goes from red -> gray -> green instead of directly from red -> green: End of explanation """ polylinear_negpos = polylinear_scale.domain([ -1, 0, 1 ]) swatches([ polylinear_negpos(-0.75), polylinear_negpos(0.2), polylinear_negpos(1) ]) """ Explanation: Note: If you want to customize a polylinear scale's domain, the domain must be the same length as the scale itself. For example: End of explanation """
tpin3694/tpin3694.github.io
machine-learning/bernoulli_naive_bayes_classifier.ipynb
mit
# Load libraries import numpy as np from sklearn.naive_bayes import BernoulliNB """ Explanation: Title: Bernoulli Naive Bayes Classifier Slug: bernoulli_naive_bayes_classifier Summary: How to train a Bernoulli naive bayes classifer in Scikit-Learn Date: 2017-09-22 12:00 Category: Machine Learning Tags: Naive Bayes Authors: Chris Albon The Bernoulli naive Bayes classifier assumes that all our features are binary such that they take only two values (e.g. a nominal categorical feature that has been one-hot encoded). Preliminaries End of explanation """ # Create three binary features X = np.random.randint(2, size=(100, 3)) # Create a binary target vector y = np.random.randint(2, size=(100, 1)).ravel() """ Explanation: Create Binary Feature And Target Data End of explanation """ # View first ten observations X[0:10] """ Explanation: View Feature Data End of explanation """ # Create Bernoulli Naive Bayes object with prior probabilities of each class clf = BernoulliNB(class_prior=[0.25, 0.5]) # Train model model = clf.fit(X, y) """ Explanation: Train Bernoulli Naive Bayes Classifier End of explanation """
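
# A hedged aside (not part of the original notebook): the fitted model can now score new
# observations; the feature vector below is purely illustrative:
new_observation = [[1, 0, 1]]
print(model.predict(new_observation))        # predicted class
print(model.predict_proba(new_observation))  # per-class probabilities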
mikekestemont/lot2016
Chapter 1 - Variables.ipynb
mit
print("Mike") """ Explanation: Chapter 1: Variables -- A Python Course for the Humanities by Folgert Karsdorp and Maarten van Gompel, with modifications by Mike Kestemont and Lars Wieneke First steps Everyone can learn how to program and the best way to learn it is by doing it. This tutorial on the Python programming language for people from the Humanities is extremely hands-on: you will have to write a lot of programming code yourself from the very beginning onwards. For writing the Python code in this tutorial, you can use the many 'code blocks' you will encounter, such as the grey block immediately below. Place your cursor inside this block and press ctrl+enter to "run" or execute the code. Let's begin right away: run your first little program! End of explanation """ # insert your own code here! """ Explanation: Can you describe what this code did? Can you adapt the code in this box yourself and make it print your own name? Apart from printing words to your screen, you can also use Python to do calculations. Use the code block below to calculate how many minutes there are in seven weeks? (Hint: multiplication is done using the * symbol in Python.) End of explanation """ x = 5 print(x) """ Explanation: Excellent! You have just written and executed your very first program! Please make sure to run every single one of the following code blocks in the same manner - otherwise a lot of the examples won't properly work. So far, we used only Python as a pretty minimalistic calculator, but there is more to discover. Variables and values Imagine that we want to store the number we just calculated so we can use it later. To do this we need to 'assign' a name to a value using the = symbol. End of explanation """ x = 2 print(x) print(x * x) print(x + x) print(x - 6) """ Explanation: If you vaguely remember your math-classes in school, this should look familiar. It is basically the same notation with the name of the variable on the left, the value on the right, and the = sign in the middle. In the code block above, two things happen. First, we fill x with a value, in our case 2. This variable x behaves pretty much like a box on which we write an x with a thick, black marker to find it back later. We then print the contents of this box, using the print() command. Now copy the outcome of your code calculating the number of minutes in seven weeks and assign this number to x. Run the code again. The box metaphor for a variable goes a long way: in such a box you can put whatever value you want, e.g. the number of minutes in seven weeks. When you re-assign a variable, you remove the content of the box and put something new in it. In Python, the term 'variable' refers to such a box, whereas the term 'value' refers to what is inside this box. When we have stored values inside variables, we can do interesting things with these variables. You can, for instance, run the calculations in the block below to see the effect of the following five lines of code. Symbols like =, +, - and * are called 'operators' in programming: they all provide a very basic functionality such as assigning values to variables or doing multiplication and subtraction. End of explanation """ seconds_in_seven_weeks = 70560 print(seconds_in_seven_weeks) """ Explanation: So far, we have only used a variable called x. Nevertheless, we are entirely free to change the names of our variables, as long as these names do not contain strange characters, such as spaces, numbers or punctuation marks. (Underscores, however, are allowed inside names!) 
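For example (a brief aside that is not part of the original notebook), assignments such as
word_count = 70560
x2 = 5
are perfectly fine, whereas a name that starts with a digit (such as 2x) or contains a space is rejected by Python with a SyntaxError.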
In the following block, we assign the outcome of our calculation to a variable that has a more meaningful name than the abstract name x. End of explanation """ first_number = 5 second_number = first_number first_number = 3 print(first_number) print(second_number) """ Explanation: In Python we can also copy the contents of a variable into another variable, which is what happens in the block below. You should of course watch out in such cases: make sure that you keep track of the value of each individual variable in your code. Each variable will always contain the value that you last assigned to it: End of explanation """ # not recommended... months = 70560 print(months) """ Explanation: Remember: as with boxes in real life, it is always a good idea to give the box a clear, yet short name, with your black marker - the name should accurately reflect what is inside the box. Just like you don't write cookies on a box that in reality contains bananas, it is important to always give your Python variables a sensible name. In the code block below, for instance, we make the stupid mistake of calling a variable months, while it actually contains seconds... End of explanation """ print(months) print(Months) """ Explanation: Variables are also case sensitive, accessing months is not the same as Months End of explanation """ some_float = 23.987 print(some_float) some_float = -4.56 print(some_float) """ Explanation: So far we have only assigned numbers such as 2 or 70560 to our variables. Such whole numbers are called 'integers' in programming, because they don't have anymore digits 'after the dot'. Numbers that do have digits after the dot (e.g. 67.278 or 191.200), are called 'floating-point numbers' in programming or simply 'floats'. Note that Python uses dots in floats, whereas some European languages use a comma here. Both integers and floats can be positive numbers (e.g. 70 or 4.36) as well as negative numbers (e.g. -70 or 4.36). You can just as easily assign floats to variables: End of explanation """ x = 5/2 print(x) """ Explanation: On the whole, the difference between integers and floats is of course important for divisions where you often end up with floats: End of explanation """ nr1 = 10-2/4 nr2 = (10-2)/4 nr3 = 10-(2/4) print(nr1) print(nr2) print(nr3) """ Explanation: You will undoubtedly remember from your math classes in high school that there is something called 'operator precedence', meaning that multiplication, for instance, will always be executed before subtraction. In Python you can explicitly set the order in which arithmetic operations are executed, using round brackets. Compare the following lines of code: End of explanation """ number_of_books = 100 """ Explanation: Using the operators we have learned about above, we can change the variables in our code as many times as we want. We can assign new values to old variables, just like we can put new or more things in the boxes which we already had. Say, for instance, that yesterday we counted how many books we have in our office and that we stored this count in our code as follows: End of explanation """ number_of_books = number_of_books + 1 print(number_of_books) """ Explanation: Suppose that we buy a new book for the office today: we can now update our book count accordingly, by adding one to our previous count: End of explanation """ number_of_books += 5 print(number_of_books) """ Explanation: Updates like these happen a lot. Python therefore provides a shortcut and you can write the same thing using +=. 
End of explanation """ number_of_books -= 5 print(number_of_books) number_of_books *= 2 print(number_of_books) number_of_books /= 2 print(number_of_books) """ Explanation: This special shortcut (+=) is called an operator too. Apart from multiplication (+=), the operator has variants for subtraction (-=), multiplication (*=) and division (/=) too: End of explanation """ book = "The Lord of the Flies" print(book) """ Explanation: What we have learnt To finish this section, here is an overview of the concepts you have learnt. Go through the list and make sure you understand all the concepts. variable value assignment operator (=) difference between variables and values integers vs. floats operators for multiplication (*), subtraction (-), addition (+), division (/) the shortcut operators: +=, -=, *=, /= print() Text strings So far, we have only worked with variables that contain numbers (integers like -5 or 72 or floats like 45.89 or -5.609). Note, however, that variables can also contain other things than numbers. Many disciplines within the humanities work on texts. Quite naturally, programming skills for the humanities will have to focus a lot on manipulating texts. Have a look at the code block below, for instance. Here we put text, namely the title of a book, as a value inside the variable book. Then, we print what is inside the book variable. End of explanation """ name = "Bonny" Bonny = "name" Clyde = "Clyde" print(name) print (Bonny) print(Clyde) """ Explanation: Such a piece of text ("The Lord of the Flies") is called a 'string' in Python (cf. a string of characters). Strings in Python must always be enclosed with 'quotes' (either single or double quotes). Without those quotes, Python will think it's dealing with the name of some variable that has been defined earlier, because variable names never take quotes. The following distinction is confusing, but extremely important: variable names (without quotes) and string values (with quotes) look similar, but they serve a completely different purpose. Compare: End of explanation """ original_string = "bla" new_string = 2*original_string print(new_string) new_string = new_string+"h" print(new_string) """ Explanation: Some of the arithmetic operators we saw earlier can also be used to do useful things with strings. Both the multiplication operator (*) and the addition operator (+) provide interesting functionality for dealing with strings, as the block below illustrates. End of explanation """ original_string = "blabla" # add an 'h'... print(original_string) """ Explanation: Adding strings together is called 'string concatenation' or simply 'concatenation' in programming. Use the block below to find out whether you could can also use the shortcut += operator for adding an 'h' to the variable original_string. Don't forget to check the result by printing it! End of explanation """ # your name code goes here... """ Explanation: We now would like you to write some code that defines a variable, name, and assign to it a string that is your name. If your first name is shorter than 5 characters, use your last name. If your last name is also shorter than 5 characters, use the combination of your first and last name. Now print the variable containing your name to the screen. End of explanation """ first_letter = name[0] print(first_letter) """ Explanation: Strings are called strings because they consist of a series (or 'string') of individual characters. 
We can access these individual characters in Python with the help of 'indexing', because each character in a string has a unique 'index'. To print the first letter of your name, you can type: End of explanation """ last_letter = name[# fill in the last index of your name (tip indexes start at 0)] print(last_letter) """ Explanation: Take a look at the string "Mr White". We use the index 0 to access the first character in the string. This might seem odd, but remember that all indexes in Python start at zero. Whenever you count in Python, you start at 0 instead of 1. Note that the space character gets an index too, namely 2. This is something you will have to get used to! Because you know the length of your name you can ask for the last letter of your name: End of explanation """ last_letter = name[-1] print(last_letter) """ Explanation: It is rather inconvenient having to know how long our strings are if we want to find out what its last letter is. Python provides a simple way of accessing a string 'from the rear': End of explanation """ print(len(name)) """ Explanation: To access the last character in a string you have to use the index [-1]. Alternatively, there is the len() command which returns the length of a string: End of explanation """ print(name[len(name)-1]) """ Explanation: Do you understand the following code block? Can you explain what is happening? End of explanation """ but_last_letter = name[# insert your code here] print(but_last_letter) """ Explanation: Now can you write some code that defines a variable but_last_letter and assigns to this variable the one but last letter of your name? End of explanation """ first_two_letters = name[0:2] print(first_two_letters) """ Explanation: You're starting to become a real expert in indexing strings. Now what if we would like to find out what the last two or three letters of our name are? In Python we can use so-called 'slice-indexes' or 'slices' for short. To find the first two letters of our name we type in: End of explanation """ without_first_two_letters = name[2:] print(without_first_two_letters) """ Explanation: The 0 index is optional, so we could just as well type in name[:2]. This says: take all characters of name until you reach index 2 (i.e. up to the third letter, but not including the third letter). We can also start at index 2 and leave the end index unspecified: End of explanation """ last_two_letters = name[-2:] print(last_two_letters) """ Explanation: Because we did not specify the end index, Python continues until it reaches the end of our string. If we would like to find out what the last two letters of our name are, we can type in: End of explanation """ # insert your middle_letters code here """ Explanation: DIY Can you define a variable middle_letters and assign to it all letters of your name except for the first two and the last two? End of explanation """ word1 = "human" word2 = "opportunities" """ Explanation: Given the following two words, can you write code that prints out the word humanities using only slicing and concatenation? (So, no quotes are allowed in your code.) Can you print out how many characters the word humanities counts? End of explanation """ x = "5" y = 2 print(x + y) """ Explanation: "Casting" variables Above, we have already learned that each variable as a data type: variables can be strings, floats, integers, etc. Sometimes it is necessary to convert one type into the other. 
Consider this: End of explanation """ x = "5" y = 2 print(x + str(y)) print(int(x) + y) """ Explanation: This should raise an error on your machine: does the error message gives you a hint as to why this doesn't work? x is a string, and y is an integer. Because of this, you cannot sum them. Luckily there exist ways to 'cast' variables from one type of variable into another type of variable. Do you understand the outcome of the following code? Can you comment in your own words on the effect of applying int() and str() to variables? End of explanation """ # comment: insert your code here. # BTW: Have you noticed that everything behind the hashtag print("Something...") # on a line is ignored by your python interpreter? print("and something else..") # this is really helpful to comment on your code! """Another way of commenting on your code is via triple quotes -- these can be distributed over multiple """ # lines print("Done.") """ Explanation: Other types of conversions are possible as well, and we will see a couple of them in the next chapters. Because variables can change data type anytime they want, we say that Python uses 'dynamic typing', as opposed to other more 'strict' languages that use 'strong typing'. You can check a variable's type using the type()command. DIY When you exchange code with fellow programmers (as you will often have to do in the real world), it is really helpful if you include some useful information about your scripts. Have a look at the code block below and read about commenting on Python code in the comments: End of explanation """ # your code goes here """ Explanation: So, how many ways are there to comment on your code in Python? What we have learnt To finish this section, here is an overview of what we have learnt. Go through the list and make sure you understand all the concepts. concatenation index slicing zero-indexed numbering len() type casting: int() and str() type() code commenting via hashtags and triple double quotes Final Exercises Chapter 1 Inspired by Think Python by Allen B. Downey (http://thinkpython.com), Introduction to Programming Using Python by Y. Liang (Pearson, 2013). Some exercises below have been taken from: http://www.ling.gu.se/~lager/python_exercises.html. Ex. 1: Suppose the cover price of a book is 24.95 EUR, but bookstores get a 40 percent discount. Shipping costs 3 EUR for the first copy and 75 cents for each additional copy. What is the total wholesale cost for 60 copies? Print the result in a pretty fashion, using casting where necessary! End of explanation """ print("A message"). print("A message') print('A messagef"') """ Explanation: Ex. 2: Can you identify and explain the errors in the following lines of code? Correct them please! End of explanation """ # ZeroDivisionError """ Explanation: Ex. 3: When something is wrong with your code, Python will raise errors. Often these will be 'syntax errors' that signal that something is wrong with the form of your code (i.e. a SyntaxError like the one thrown in the previous exercice). There are also 'runtime errors' that signal that your code was in itself formally correct, but that something went wrong during the code's execution. A good example is the ZeroDivisionError. Try to make Python throw such a ZeroDivisionError! End of explanation """ # insert your code here """ Explanation: Ex. 4: Write a program that assigns the result of 9.5 * 4.5 - 2.5 * 345.5 - 3.5 to a variable. Print this variable. 
Use round brackets to indicate 'operator precedence' and make sure that subtractions are performed before multiplications. When you convert the outcome to a string, how many characters does it count? End of explanation """ # numbers """ Explanation: Ex. 5: Define the variables a=2, b=20007 and c=5. Using only the operations you learned about above, can you now print the following numbers: 2005, 252525252, 2510, -60025 and 2002507? (Hint: use type casting and string slicing to access parts of the original numbers!) End of explanation """ # average """ Explanation: Ex. 6: Define three variables var1, var2 and var3. Calculate the average of these variables and assign it to average. Print the result in a fancy manner. Add three comments to this piece of code using three different ways. End of explanation """ # circle code """ Explanation: Ex. 7: Write a little program that can compute the surface of circle, using the variables radius and pi=3.14159. The formula is of course radius, multiplied by radius, multiplied by pi. Print the outcome of your program as follows: 'The surface area of a circle with radius ... is: ...'. End of explanation """ # try out the modulus operator! """ Explanation: Ex. 8: There is one operator (like the ones for multiplication and subtraction) that we did not mention yet, namely the modulus operator %. Could you figure by yourself what it does when you place it between two numbers (e.g. 113 % 9)? (PS: It's OK to get help online...) You don't need this operator all that often, but when you do, it comes in really handy! End of explanation """ # cashier code """ Explanation: Ex. 9: Can you use the modulus operator you just learned about to solve the following task? Write a code block that classifies a given amount of money into smaller monetary units. Set the amount variable to 11.56. You code should outputs a report listing the monetary equivalent in dollars, quarters, dimes, nickels, and pennies. Your program should report the maximum number of dollars, then the number of quarters, dimes, nickels, and pennies, in this order, to result in the minimum number of coins. Here are the steps in developing the program: Convert the amount (11.56) into cents (1156). Divide the cents by 100 to find the number of dollars, but first subtract the rest using the modulus operator! Divide the remaining cents by 25 to find the number of quarters, but, again, first subtract the rest using the modulus operator! Divide the remaining cents by 10 to find the number of dimes, etc. Divide the remaining cents by 5 to find the number of nickels, etc. The remaining cents are the pennies. Now display the result for your cashier! End of explanation """ from IPython.core.display import HTML def css_styling(): styles = open("styles/custom.css", "r").read() return HTML(styles) css_styling() """ Explanation: You've reached the end of Chapter 1! You can safely ignore the code block below -- it's only there to make the page prettier. End of explanation """
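For the cashier exercise (Ex. 9), one possible solution sketch is given below. It is not the only way to do it: it follows the steps described in the exercise, uses only the operators introduced in this chapter, and assumes the amount of 11.56 from the prompt.

# one possible cashier sketch, following the steps described in Ex. 9
amount = 11.56
cents = int(round(amount * 100))          # step 1: convert the amount into 1156 cents

rest = cents % 100                        # step 2: dollars
dollars = int((cents - rest) / 100)
cents = rest

rest = cents % 25                         # step 3: quarters
quarters = int((cents - rest) / 25)
cents = rest

rest = cents % 10                         # step 4: dimes
dimes = int((cents - rest) / 10)
cents = rest

rest = cents % 5                          # step 5: nickels
nickels = int((cents - rest) / 5)
pennies = rest                            # whatever is left over

print("Dollars:", dollars)
print("Quarters:", quarters)
print("Dimes:", dimes)
print("Nickels:", nickels)
print("Pennies:", pennies)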
dereneaton/ipyrad
tests/quickguide_API.ipynb
gpl-3.0
import ipyrad as ip """ Explanation: Quick guide to the ipyrad API Getting Started Welcome! This tutorial will introduce you to the basics of working with ipyrad to assemble RADseq data. Note: this tutorial was created in a Jupyter Notebook and assumes that you’re following-along in a notebook of your own. If you installed ipyrad then you will also have jupyter installed by default. For a little background on how jupyter notebooks see Notebooks. All of the code below is written in Python. Follow along in your own notebook. To begin, we're going to import the ipyrad module and name it ip so that we can refer to it more easily: End of explanation """ ## create an Assembly object named data1. data1 = ip.Assembly("data1") """ Explanation: Assembly objects Assembly objects are used by ipyrad to access data stored on disk and to manipulate it. Each biological sample in a data set is represented as a Sample object, and a set of Samples is stored inside an Assembly object. The Assembly object has functions to assemble data, and to view or plot the resulting assembled files and statistics. Assembly objects can be copied or merged to allow branching events where different parameters can subsequently be applied to different Assembly objects. Examples of this are shown in the workflow. Every analysis begins by creating at least one Assembly object. Here we will name it "data1". End of explanation """ ## create an Assembly object linked to 8 engines using MPI data1 = ip.Assembly("data1", N=4, controller="MPI") """ Explanation: The printout tells us that we created the object data1, and also that it found 4 engines on our system that can be used for computation. An engine is simply a CPU. When working on a single machine it will usually be easiest to simply let the Assembly object connect to all available local engines. However, on HPC clusters you may need to modify the controller or the number of engines, as shown below: End of explanation """ ## setting/modifying parameters for this Assembly object data1.set_params('project_dir', "./test_rad") data1.set_params('raw_fastq_path', "./data/sim_rad_test_R1_.fastq.gz") data1.set_params('barcodes_path', "./data/sim_rad_test_barcodes.txt") data1.set_params('filter_adapters', 0) data1.set_params('datatype', 'rad') ## print the parameters for `data` data1.get_params() """ Explanation: For more information about connecting CPUs for parallelization see ipyparallel setup. Modifying Assembly parameters The arguments get_params() and set_params() are used to view and modify parameter settings of an Assembly object, respectively. End of explanation """ ip.get_params_info(10) """ Explanation: To get more detailed information about each parameter use ip.get_params_info(), or look up their funcion in the documentation (Parameters). To quickly look up the proper formatting for a parameter, you can use ip.get_params_info(N), where N is the number of a parameter. Example: End of explanation """ ## This would link fastq files from the 'sorted_fastq_path' if present ## Here it does nothing b/c there are no files in the sorted_fastq_path data1.link_fastqs() """ Explanation: Sample Objects Each biological sample in a data set is represented as a Sample object. Sample objects are created during step1() of an analysis, at which time they are linked to an Assembly object. When getting started raw data should be in one of two forms: + Non-demultiplexed data files (accompanied by a barcode file). + Demultiplexed data files. 
Note: For additional ways to add raw data files to a data set see link_fastqs. If the data are already demultiplexed then fastq files can be linked directly to the Assembly object, which in turn will create new Sample objects from them, or link them to existing Sample objects based on the file names (or pair of fastq files for paired data files). The files may be gzip compressed. If the data are not demultiplexed then you will have to run the step1 function below to demultiplex the raw data. End of explanation """ ## run step 1 to demultiplex the data data1.step1() ## print the results for each Sample in data1 print data1.stats ## remove the lane control sequence #data1.samples.pop("FGXCONTROL") """ Explanation: Step 1: Demultiplexing raw data files Step1 uses barcode information to demultiplex data files found in param 2 ['raw_fastq_path']. It will create a Sample object for each barcoded sample. Below we use the step1() function to demultiplex. The stats attribute of an Assembly object is returned as a pandas data frame. End of explanation """ ## example of ways to run step 2 to filter and trim reads #data1.step2(["1A_0"]) ## run on a single sample #data1.step2(["1B_0", "1C_0"]) ## run on one or more samples data1.step2(force=True) ## run on all samples, overwrite finished ## print the results print data1.stats #data1.samples["veitchii"].files """ Explanation: Step 2: Filter reads If for some reason we wanted to execute on just a subsample of our data, we could do this by selecting only certain samples to call the step2 function on. Because step2 is a function of data, it will always execute with the parameters that are linked to data. End of explanation """ ## create a copy of our Assembly object data2 = data1.branch(newname="data2") ## set clustering threshold to 0.90 data2.set_params(11, 0.90) ## look at inherited parameters data2.get_params() """ Explanation: Branching Assembly objects Let's imagine at this point that we are interested in clustering our data at two different clustering thresholds. We will try 0.90 and 0.85. First we need to make a copy/branch of the Assembly object. This will inherit the locations of the data linked in the first object, but diverge in any future applications to the object. Thus, the two Assembly objects can share the same working directory, and inherit shared files, but will diverge in creating new files linked to only one or the other. You can view the directories linked to an Assembly object with the .dirs argument, shown below. The prefix_outname (param 14) of the new object is automatically set to the Assembly object name. End of explanation """ import ipyrad as ip data1 = ip.load_assembly("test_rad/data1") ## run step 3 to cluster reads within samples using vsearch data1.step3(force=True) ## print the results print data1.stats ## run step 3 to cluster reads in data2 at 0.90 sequence similarity data2.step3(force=True) ## print the results print data2.stats """ Explanation: Step 3: clustering within-samples End of explanation """ print "data1 directories:" for (i,j) in data1.dirs.items(): print "{}\t{}".format(i, j) print "\ndata2 directories:" for (i,j) in data2.dirs.items(): print "{}\t{}".format(i, j) ## TODO, just make a [name]_stats directory in [work] for each data obj data1.statsfiles """ Explanation: Branched Assembly objects And you can see below that the two Assembly objects are now working with several shared directories (working, fastq, edits) but with different clust directories (clust_0.85 and clust_0.9). 
End of explanation """ data1.stats.to_csv("data1_results.csv", sep="\t") data1.stats.to_latex("data1_results.tex") """ Explanation: Saving stats outputs Example: two simple ways to save the stats data frame to a file. End of explanation """ import ipyrad.plotting as iplot ## plot for one or more selected samples #iplot.depthplot(data1, ["1A_0", "1B_0"]) ## plot for all samples in data1 iplot.depthplot(data1) ## save plot as pdf and html #iplot.depthplot(data1, outprefix="testfig") """ Explanation: Example of plotting with ipyrad There are a a few simple plotting functions in ipyrad useful for visualizing results. These are in the module ipyrad.plotting. Below is an interactive plot for visualizing the distributions of coverages across the 12 samples in the test data set. End of explanation """ ## run step 4 data1.step4() ## print the results print data1.stats """ Explanation: Step 4: Joint estimation of heterozygosity and error rate End of explanation """ #import ipyrad as ip ## reload autosaved data. In case you quit and came back #data1 = ip.load_dataobj("test_rad/data1.assembly") ## run step 5 #data1.step5() ## print the results #print data1.stats """ Explanation: Step 5: Consensus base calls End of explanation """ ip.get_params_info(10) """ Explanation: Quick parameter explanations are always on-hand End of explanation """ for i in data1.log: print i print "\ndata 2 log includes its pre-branching history with data1" for i in data2.log: print i """ Explanation: Log history A common problem at the end of an analysis, or while troubleshooting it, is that you find you've completely forgotten which parameters you used at what point, and when you changed them. Documenting or executing code inside Jupyter notebooks (like the one you're reading right now) is a great way to keep track of all of this. In addition, ipyrad also stores a log history which time stamps all modifications to Assembly objects. End of explanation """ ## save assembly object #ip.save_assembly("data1.p") ## load assembly object #data = ip.load_assembly("data1.p") #print data.name """ Explanation: Saving Assembly objects Assembly objects can be saved and loaded so that interactive analyses can be started, stopped, and returned to quite easily. The format of these saved files is a serialized 'dill' object used by Python. Individual Sample objects are saved within Assembly objects. These objects to not contain the actual sequence data, but only link to it, and so are not very large. The information contained includes parameters and the log of Assembly objects, and the statistics and state of Sample objects. Assembly objects are autosaved each time an assembly step function is called, but you can also create your own checkpoints with the save command. End of explanation """
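As a recap, the steps above can be strung together in one short block. This sketch reuses only the calls demonstrated in this notebook (Assembly creation, set_params, steps 1-4, branching, and saving stats); the file paths are the same example paths used above and should be replaced with your own data.

## condensed recap of the workflow above (example paths; replace with your own)
import ipyrad as ip

data1 = ip.Assembly("data1")
data1.set_params('project_dir', "./test_rad")
data1.set_params('raw_fastq_path', "./data/sim_rad_test_R1_.fastq.gz")
data1.set_params('barcodes_path', "./data/sim_rad_test_barcodes.txt")
data1.set_params('datatype', 'rad')

## demultiplex, filter, and cluster within samples
data1.step1()
data1.step2(force=True)
data1.step3(force=True)

## branch to test a second clustering threshold, as shown above
data2 = data1.branch(newname="data2")
data2.set_params(11, 0.90)
data2.step3(force=True)

## joint estimation of heterozygosity and error rate, then save the stats table
data1.step4()
data1.stats.to_csv("data1_results.csv", sep="\t")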
rastala/mmlspark
notebooks/samples/101 - Adult Census Income Training.ipynb
mit
import numpy as np import pandas as pd import mmlspark # help(mmlspark) """ Explanation: 101 - Training and Evaluating Classifiers with mmlspark In this example, we try to predict incomes from the Adult Census dataset. First, we import the packages (use help(mmlspark) to view contents), End of explanation """ dataFile = "AdultCensusIncome.csv" import os, urllib if not os.path.isfile(dataFile): urllib.request.urlretrieve("https://mmlspark.azureedge.net/datasets/"+dataFile, dataFile) data = spark.createDataFrame(pd.read_csv(dataFile, dtype={" hours-per-week": np.float64})) data = data.select([" education", " marital-status", " hours-per-week", " income"]) train, test = data.randomSplit([0.75, 0.25], seed=123) train.limit(10).toPandas() """ Explanation: Now let's read the data and split it to train and test sets: End of explanation """ from mmlspark import TrainClassifier from pyspark.ml.classification import LogisticRegression model = TrainClassifier(model=LogisticRegression(), labelCol=" income", numFeatures=256).fit(train) model.write().overwrite().save("adultCensusIncomeModel.mml") """ Explanation: TrainClassifier can be used to initialize and fit a model, it wraps SparkML classifiers. You can use help(mmlspark.TrainClassifier) to view the different parameters. Note that it implicitly converts the data into the format expected by the algorithm: tokenize and hash strings, one-hot encodes categorical variables, assembles the features into vector and so on. The parameter numFeatures controls the number of hashed features. End of explanation """ from mmlspark import ComputeModelStatistics, TrainedClassifierModel predictionModel = TrainedClassifierModel.load("adultCensusIncomeModel.mml") prediction = predictionModel.transform(test) metrics = ComputeModelStatistics().transform(prediction) metrics.limit(10).toPandas() """ Explanation: After the model is trained, we score it against the test dataset and view metrics. End of explanation """ model.write().overwrite().save("AdultCensus.mml") """ Explanation: Finally, we save the model so it can be used in a scoring program. End of explanation """
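Because TrainClassifier simply wraps SparkML estimators, other classifiers can be dropped in with the same pattern. The sketch below swaps in a gradient-boosted tree model; it assumes GBTClassifier is supported by the mmlspark version you are running, and it reuses the train/test split from above.

from pyspark.ml.classification import GBTClassifier
from mmlspark import TrainClassifier, TrainedClassifierModel, ComputeModelStatistics

# same pattern as above, with a different SparkML estimator (assumed to be supported)
gbt_model = TrainClassifier(model=GBTClassifier(), labelCol=" income", numFeatures=256).fit(train)
gbt_model.write().overwrite().save("adultCensusIncomeGBT.mml")

gbt_prediction = TrainedClassifierModel.load("adultCensusIncomeGBT.mml").transform(test)
ComputeModelStatistics().transform(gbt_prediction).limit(10).toPandas()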
tpin3694/tpin3694.github.io
python/pandas_create_column_with_loop.ipynb
mit
import pandas as pd import numpy as np """ Explanation: Title: Create A Pandas Column With A For Loop Slug: pandas_create_column_with_loop Summary: Create A Pandas Column With A For Loop Date: 2016-05-01 12:00 Category: Python Tags: Data Wrangling Authors: Chris Albon Preliminaries End of explanation """ raw_data = {'student_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'], 'test_score': [76, 88, 84, 67, 53, 96, 64, 91, 77, 73, 52, np.NaN]} df = pd.DataFrame(raw_data, columns = ['student_name', 'test_score']) """ Explanation: Create an example dataframe End of explanation """ # Create a list to store the data grades = [] # For each row in the column, for row in df['test_score']: # if more than a value, if row > 95: # Append a letter grade grades.append('A') # else, if more than a value, elif row > 90: # Append a letter grade grades.append('A-') # else, if more than a value, elif row > 85: # Append a letter grade grades.append('B') # else, if more than a value, elif row > 80: # Append a letter grade grades.append('B-') # else, if more than a value, elif row > 75: # Append a letter grade grades.append('C') # else, if more than a value, elif row > 70: # Append a letter grade grades.append('C-') # else, if more than a value, elif row > 65: # Append a letter grade grades.append('D') # else, if more than a value, elif row > 60: # Append a letter grade grades.append('D-') # otherwise, else: # Append a failing grade grades.append('Failed') # Create a column from the list df['grades'] = grades # View the new dataframe df """ Explanation: Create a function to assign letter grades End of explanation """
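The same column can also be built without an explicit loop. The sketch below uses pd.cut with bin edges that mirror the thresholds in the loop above; note that, unlike the loop, a missing test_score comes out as NaN here rather than 'Failed'.

# Vectorized alternative: bin the scores with pd.cut using the same thresholds as the loop above
bins = [-np.inf, 60, 65, 70, 75, 80, 85, 90, 95, np.inf]
letters = ['Failed', 'D-', 'D', 'C-', 'C', 'B-', 'B', 'A-', 'A']

# Create the column in one step
df['grades_cut'] = pd.cut(df['test_score'], bins=bins, labels=letters)

# View the new dataframe
df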
ucsd-ccbb/jupyter-genomics
notebooks/networkAnalysis/network_differential_expression_viz/network_differential_expression_viz.ipynb
mit
# import some useful packages import numpy as np import pandas as pd import networkx as nx import matplotlib.pyplot as plt % matplotlib inline """ Explanation: Visualize and analyze differential expression data in a network In analysis of differential expression data, it is often useful to analyze properties of the local neighborhood of specific genes. I developed a simple interactive tool for this purpose, which takes as input diferential expression data, and gene interaction data (from http://www.genemania.org/). The network is then plotted in an interactive widget, where the node properties, edge properties, and layout can be mapped to different network properties. The interaction type (of the 6 options from genemania) can also be selected. This tool will also serve as an example for how to create, modify, visualize and analyze weighted and unweighted gene interaction networks using the highly useful and flexible python package NetworkX (https://networkx.github.io/) This tool is most useful if you have a reasonably small list of genes (~100) with differential expression data, and want to explore properties of their interconnections and their local neighborhoods. End of explanation """ dataDE = pd.read_csv('DE_data/DE_ayyagari_data_genename_foldchange.csv',sep='\t') print(dataDE.head()) # genes in dataDE gene_list = list(dataDE['IDENTIFIER']) # only use the average fold-change (because there are multiple entries for some genes #dataDE_mean = dataDE.DiffExp.groupby(dataDE['IDENTIFIER']).mean() dataDE_mean = dataDE['fold_change'] dataDE_mean.index=gene_list print(dataDE_mean) # load the gene-gene interactions (from genemania) #filename = 'DE_data/DE_experiment_interactions.txt' filename = 'DE_data/DE_ayyagari_interactions.txt' DE_network = pd.read_csv(filename, sep='\t', header=6) DE_network.columns = ['Entity 1','Entity 2', 'Weight','Network_group','Networks'] # create the graph, and add some edges (and nodes) G_DE = nx.Graph() idxCE = DE_network['Network_group']=='Co-expression' edge_list = zip(list(DE_network['Entity 1'][idxCE]),list(DE_network['Entity 2'][idxCE])) G_DE.add_edges_from(edge_list) print('number of edges = ' + str(len(G_DE.edges()))) print('number of nodes = '+ str(len(G_DE.nodes()))) # create version with weighted edges G_DE_w = nx.Graph() edge_list_w = zip(list(DE_network['Entity 1']),list(DE_network['Entity 2']),list(DE_network['Weight'])) G_DE_w.add_weighted_edges_from(edge_list_w) """ Explanation: Import a real network (from this experiment http://www.ncbi.nlm.nih.gov/sites/GDSbrowser?acc=GDS4419) This experiment contains fold change information for genes in an experiment studying 'alveolar macrophage response to bacterial endotoxin lipopolysaccharide exposure in vivo'. We selected a list of genes from the experiment which had high differential expression, and were enriched for 'immune response' and 'response to external biotic stimulus' in the gene ontology. This experiment and gene list were selected purely as examples for how to use this tool for an initial exploration of differential expression data. NOTE: change paths/filenames in this cell to apply network visualizer to other datasets. Network format comes from genemania (e.g. columns are 'Entity 1', 'Entity 2', 'Weight', 'Network_group', 'Networks') NOTE: File format is tsv, and needs to contain columns for 'IDENTIFIER', 'DiffExp', and 'absDiffExp'. 
Other columns are optional End of explanation """ import imp import plot_network imp.reload(plot_network) """ Explanation: Import plotting tool (and reload it if changes have been made) End of explanation """ from IPython.html.widgets import interact from IPython.html import widgets import matplotlib.colorbar as cb import seaborn as sns import community # import network plotting module from plot_network import * # temporary graph variable Gtest = nx.Graph() # check whether you have differential expression data diff_exp_analysis=True # replace G_DE_w with G_DE in these two lines if unweighted version is desired Gtest.add_nodes_from(G_DE_w.nodes()) Gtest.add_edges_from(G_DE_w.edges(data=True)) # prep border colors nodes = Gtest.nodes() #gene_list = gene_list if diff_exp_analysis: diff_exp = dataDE_mean genes_intersect = np.intersect1d(gene_list,nodes) border_cols = Series(index=nodes) for i in genes_intersect: if diff_exp[i]=='Unmeasured': border_cols[i] = np.nan else: border_cols[i] = diff_exp[i] else: # if no differential expression data border_cols = [None] numnodes = len(Gtest) # make these three global to feed into widget global Gtest global boder_cols global DE_network def plot_network_shell(focal_node_name,edge_thresh=.5,network_algo='spl', map_degree=True, plot_border_col=False, draw_shortest_paths=True, coexpression=True, colocalization=True, other=False,physical_interactions=False, predicted_interactions=False,shared_protein_domain=False): # this is the main plotting function, called from plot_network module fig = plot_network(Gtest, border_cols, DE_network, focal_node_name, edge_thresh, network_algo, map_degree, plot_border_col, draw_shortest_paths, coexpression, colocalization, other, physical_interactions, predicted_interactions, shared_protein_domain) return fig # threshold slider parameters min_thresh = np.min(DE_network['Weight']) max_thresh = np.max(DE_network['Weight']/10) thresh_step = (max_thresh-min_thresh)/1000.0 interact(plot_network_shell, focal_node_name=list(np.sort(nodes)), edge_thresh=widgets.FloatSliderWidget(min=min_thresh,max=max_thresh,step=thresh_step,value=min_thresh,description='edge threshold'), network_algo = ['community','clustering_coefficient','pagerank','spl']); """ Explanation: Run the plotting tool on prepared data Description of options: focal_node_name: Select gene to focus on (a star will be drawn on this node) edge_threshold: Change the number of edges included in the network by moving the edge_threshold slider. Higher values of edge_threshold means fewer edges will be included in the graph (and may improve interpretability). The threshold is applied to the 'Weight' column of DE_network, so the less strongly weighted edges are not included as the threshold increases network_algo: Select the network algorithm to apply to the graph. Choices are: 'spl' (shortest path length): Plot the network in a circular tree layout, with the focal gene at the center, with nodes color-coded by log fold-change. 'clustering coefficient': Plot the network in a circular tree layout, with nodes color-coded by the local clustering coefficient (see https://en.wikipedia.org/wiki/Clustering_coefficient). 
'pagerank': Plot the network in a spring layout, with nodes color-coded by page rank score (see https://en.wikipedia.org/wiki/PageRank for algorithm description) 'community': Group the nodes in the network into communities, using the Louvain modularity maximization algorithm, which finds groups of nodes optimizing for modularity (a metric which measures the number of edges within communities compared to number of edges between communities, see https://en.wikipedia.org/wiki/Modularity_(networks) for more information). The nodes are then color-coded by these communities, and the total modularity of the partition is printed above the graph (where the maximal value for modularity is 1 which indicates a perfectly modular network so that there are no edges connecting communities). Below the network the average fold-change in each community is shown with box-plots, where the focal node's community is indicated by a white star, and the colors of the boxes correspond to the colors of the communities above. map_degree: Choose whether to map the node degree to node size plot_border_col: Choose whether to plot the log fold-change as the node border color draw_shortest_paths: If checked, draw the shortest paths between the focal node and all other nodes in blue transparent line. More opaque lines indicate that section of path was traveled more often. coexpression, colocalization, other, physical_interactions, predicted_interactions, shared_protein_domain: Select whether to include interactions of these types (types come from GeneMania- http://pages.genemania.org/data/) End of explanation """
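For readers who want the numbers behind these options outside of the widget, the sketch below computes the same per-node summaries directly on the co-expression graph G_DE built earlier, assuming the community package imported above is python-louvain. The focal gene is just the first node in the graph here; substitute any gene of interest.

# per-node summaries corresponding to the network_algo options above
focal = list(G_DE.nodes())[0]  # placeholder focal gene; replace with a gene of interest

spl = nx.shortest_path_length(G_DE, source=focal)   # 'spl': hops from the focal gene
clust = nx.clustering(G_DE)                         # 'clustering_coefficient'
pr = nx.pagerank(G_DE)                              # 'pagerank'
partition = community.best_partition(G_DE)          # 'community' (Louvain)

print('focal gene: ' + str(focal))
print('modularity of Louvain partition: ' + str(community.modularity(partition, G_DE)))

# collect everything into one table; genes unreachable from the focal gene get NaN path lengths
summary = pd.DataFrame({'spl_from_focal': pd.Series(spl),
                        'clustering': pd.Series(clust),
                        'pagerank': pd.Series(pr),
                        'community': pd.Series(partition)})
summary.head()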
GoogleCloudPlatform/professional-services
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
apache-2.0
import argparse import logging import joblib import sys import pandas as pd from sklearn.metrics import roc_auc_score from sklearn.model_selection import train_test_split from sklearn.impute import SimpleImputer from xgboost import XGBClassifier logging.basicConfig(format='%(message)s') logging.getLogger().setLevel(logging.INFO) import os import fairing # Setting up google container repositories (GCR) for storing output containers # You can use any docker container registry istead of GCR # For local notebook, GCP_PROJECT should be set explicitly GCP_PROJECT = fairing.cloud.gcp.guess_project_name() GCP_Bucket = os.environ['GCP_BUCKET'] # e.g., 'gs://kubeflow-demo-g/' # This is for local notebook instead of that in kubeflow cluster # os.environ['GOOGLE_APPLICATION_CREDENTIALS']= """ Explanation: Train and deploy Xgboost (Scikit-learn) on Kubeflow from Notebooks This notebook introduces you the usage of Kubeflow Fairing to train and deploy a model to Kubeflow on Google Kubernetes Engine (GKE), and Google Cloud AI Platform training. This notebook demonstrate how to: Train an XGBoost model in a local notebook, Use Kubeflow Fairing to train an XGBoost model remotely on Kubeflow cluster, Use Kubeflow Fairing to train an XGBoost model remotely on AI Platform training, Use Kubeflow Fairing to deploy a trained model to Kubeflow, and Call the deployed endpoint for predictions. You need Python 3.6 to use Kubeflow Fairing. Setups Pre-conditions Deployed a kubeflow cluster through https://deploy.kubeflow.cloud/ Have the following environment variable ready: PROJECT_ID # project host the kubeflow cluster or for running AI platform training DEPLOYMENT_NAME # kubeflow deployment name, the same the cluster name after delpoyed GCP_BUCKET # google cloud storage bucket Create service account bash export SA_NAME = [service account name] gcloud iam service-accounts create ${SA_NAME} gcloud projects add-iam-policy-binding ${PROJECT_ID} \ --member serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \ --role 'roles/editor' gcloud iam service-accounts keys create ~/key.json \ --iam-account ${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com Authorize for Source Repository bash gcloud auth configure-docker Update local kubeconfig (for submiting job to kubeflow cluster) bash export CLUSTER_NAME=${DEPLOYMENT_NAME} # this is the deployment name or the kubenete cluster name export ZONE=us-central1-c gcloud container clusters get-credentials ${CLUSTER_NAME} --region ${ZONE} Set the environmental variable: GOOGLE_APPLICATION_CREDENTIALS bash export GOOGLE_APPLICATION_CREDENTIALS = .... python os.environ['GOOGLE_APPLICATION_CREDENTIALS']=... Install the lastest version of fairing python pip install git+https://github.com/kubeflow/fairing@master Upload training file ```bash upload the train.csv to GCS bucket that can be accessed from both CMLE and Kubeflow cluster gsutil cp ./train.csv ${GCP_Bucket}/train.csv ``` Please not that the above configuration is required for notebook service running outside Kubeflow environment. And the examples demonstrated in the notebook is fully tested on notebook service outside Kubeflow cluster also. The environemt variables, e.g. service account, projects and etc, should have been pre-configured while setting up the cluster. Set up your notebook for training an XGBoost model Import the libraries required to train this model. 
End of explanation """ def gcs_copy(src_path, dst_path): import subprocess print(subprocess.run(['gsutil', 'cp', src_path, dst_path], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8')) def gcs_download(src_path, file_name): import subprocess print(subprocess.run(['gsutil', 'cp', src_path, file_name], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8')) def read_input(source_path, test_size=0.25): """Read input data and split it into train and test.""" file_name = source_path.split('/')[-1] gcs_download(source_path, file_name) data = pd.read_csv(file_name) data.dropna(axis=0, inplace=True) y = data.Class X = data.drop(['Class', 'Amount', 'Time'], axis=1).select_dtypes(exclude=['object']) train_X, test_X, train_y, test_y = train_test_split(X.values, y.values, test_size=test_size, shuffle=True) imputer = SimpleImputer() train_X = imputer.fit_transform(train_X) test_X = imputer.transform(test_X) return (train_X, train_y), (test_X, test_y) """ Explanation: Define the model logic Define a function to split the input file into training and testing datasets. End of explanation """ def train_model(train_X, train_y, test_X, test_y, n_estimators, learning_rate): """Train the model using XGBRegressor.""" model = XGBClassifier(n_estimators=n_estimators, learning_rate=learning_rate) model.fit(train_X, train_y, early_stopping_rounds=40, eval_set=[(test_X, test_y)]) print("Best loss on eval: %.2f with %d rounds", model.best_score, model.best_iteration+1) return model def eval_model(model, test_X, test_y): """Evaluate the model performance.""" predictions = model.predict_proba(test_X) logging.info("auc=%.2f", roc_auc_score(test_y, predictions[:,1])) def save_model(model, model_file): """Save XGBoost model for serving.""" joblib.dump(model, model_file) gcs_copy(model_file, GCP_Bucket + model_file) logging.info("Model export success: %s", model_file) """ Explanation: Define functions to train, evaluate, and save the trained model. End of explanation """ class FraudServe(object): def __init__(self): self.train_input = GCP_Bucket + "train_fraud.csv" self.n_estimators = 50 self.learning_rate = 0.1 self.model_file = "trained_fraud_model.joblib" self.model = None def train(self): (train_X, train_y), (test_X, test_y) = read_input(self.train_input) model = train_model(train_X, train_y, test_X, test_y, self.n_estimators, self.learning_rate) eval_model(model, test_X, test_y) save_model(model, self.model_file) def predict(self, X, feature_names): """Predict using the model for given ndarray.""" if not self.model: self.model = joblib.load(self.model_file) # Do any preprocessing prediction = self.model.predict(data=X) # Do any postprocessing return [[prediction.item(0), prediction.item(0)]] """ Explanation: Define a class for your model, with methods for training and prediction. End of explanation """ FraudServe().train() """ Explanation: Train an XGBoost model in a notebook Call FraudServe().train() to train your model, and then evaluate and save your trained model. End of explanation """ # In this demo, I use gsutil, therefore i compile a special image to install GoogleCloudSDK as based image base_image = 'gcr.io/{}/fairing-predict-example:latest'.format(GCP_PROJECT) !docker build --build-arg PY_VERSION=3.6.4 . 
-t {base_image} !docker push {base_image} DOCKER_REGISTRY = 'gcr.io/{}/fairing-job-xgboost'.format(GCP_PROJECT) BASE_IMAGE = base_image """ Explanation: Make Use of Fairing Spicify a image registry that will hold the image built by fairing End of explanation """ from fairing import TrainJob from fairing.backends import GKEBackend train_job = TrainJob(FraudServe, BASE_IMAGE, input_files=["requirements.txt"], docker_registry=DOCKER_REGISTRY, backend=GKEBackend()) train_job.submit() """ Explanation: Train an XGBoost model remotely on Kubeflow Import the TrainJob and GKEBackend classes. Kubeflow Fairing packages the FraudServe class, the training data, and the training job's software prerequisites as a Docker image. Then Kubeflow Fairing deploys and runs the training job on Kubeflow. End of explanation """ from fairing import TrainJob from fairing.backends import GCPManagedBackend train_job = TrainJob(FraudServe, BASE_IMAGE, input_files=["requirements.txt"], docker_registry=DOCKER_REGISTRY, backend=GCPManagedBackend()) train_job.submit() """ Explanation: Train an XGBoost model remotely on Cloud ML Engine Import the TrainJob and GCPManagedBackend classes. Kubeflow Fairing packages the FraudServe class, the training data, and the training job's software prerequisites as a Docker image. Then Kubeflow Fairing deploys and runs the training job on Cloud ML Engine. End of explanation """ from fairing import PredictionEndpoint from fairing.backends import KubeflowGKEBackend # The trained_ames_model.joblib is exported during the above local training endpoint = PredictionEndpoint(FraudServe, BASE_IMAGE, input_files=['trained_fraud_model.joblib', "requirements.txt"], docker_registry=DOCKER_REGISTRY, backend=KubeflowGKEBackend()) endpoint.create() """ Explanation: Deploy the trained model to Kubeflow for predictions Import the PredictionEndpoint and KubeflowGKEBackend classes. Kubeflow Fairing packages the FraudServe class, the trained model, and the prediction endpoint's software prerequisites as a Docker image. Then Kubeflow Fairing deploys and runs the prediction endpoint on Kubeflow. This part only works for fairing version >=0.5.2 End of explanation """ # Deploy model to gcp # from fairing.deployers.gcp.gcpserving import GCPServingDeployer # deployer = GCPServingDeployer() # deployer.deploy(VERSION_DIR, MODEL_NAME, VERSION_NAME) """ Explanation: Deploy to GCP End of explanation """ (train_X, train_y), (test_X, test_y) = read_input(GCP_Bucket + "train_fraud.csv") endpoint.predict_nparray(test_X) """ Explanation: Call the prediction endpoint Create a test dataset, then call the endpoint on Kubeflow for predictions. End of explanation """ endpoint.delete() """ Explanation: Clean up the prediction endpoint Delete the prediction endpoint created by this notebook. End of explanation """
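As a purely local sanity check, independent of Fairing, the helper functions defined above can be reused to look at a few metrics beyond AUC; fraud data is highly imbalanced, so per-class numbers are worth inspecting. This sketch assumes the GCS training file used above is readable with the notebook's credentials.

from sklearn.metrics import classification_report, confusion_matrix

# retrain locally on the same split and inspect per-class behaviour
(train_X, train_y), (test_X, test_y) = read_input(GCP_Bucket + "train_fraud.csv")
local_model = train_model(train_X, train_y, test_X, test_y, n_estimators=50, learning_rate=0.1)

test_pred = local_model.predict(test_X)
print(confusion_matrix(test_y, test_pred))
print(classification_report(test_y, test_pred))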
flyinactor91/Find-Me
FindMe.ipynb
mit
import cv2 import numpy as np CASCADE = cv2.CascadeClassifier('findme/haar_cc_front_face.xml') def find_faces(img: np.ndarray, sf=1.16, mn=5) -> np.array([[int]]): """Returns a list of bounding boxes for every face found in an image""" return CASCADE.detectMultiScale( cv2.cvtColor(img, cv2.COLOR_RGB2GRAY), scaleFactor=sf, minNeighbors=mn, minSize=(45, 45), flags=cv2.CASCADE_SCALE_IMAGE ) """ Explanation: Find Me Michael duPont - CodeCamp 2017 Find Faces The first thing we need to do is pick out faces from a larger image. Because the model for this is not user or case specific, we can use an existing model, load it with OpenCV, and tune the hyperparameters instead of building one from scratch, which we will have to do later. End of explanation """ import matplotlib.pyplot as plt from matplotlib.image import imread, imsave %matplotlib inline plt.imshow(imread('test_imgs/initial/group0.jpg')) from glob import glob def draw_boxes(bboxes: [[int]], img: 'np.array', line_width: int=2) -> 'np.array': """Returns an image array with the bounding boxes drawn around potential faces""" for x, y, w, h in bboxes: cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), line_width) return img #Find faces for each test image for fname in glob('test_imgs/initial/group*.jpg'): img = imread(fname) bboxes = find_faces(img) print(bboxes) imsave(fname.replace('initial', 'find_faces'), draw_boxes(bboxes, img)) plt.imshow(imread('test_imgs/find_faces/group0.jpg')) """ Explanation: That's really all we need. Now let's test it by drawing rectangles around a few images of groups. Here's one example: End of explanation """ #Creates cropped faces for imgs matching 'test_imgs/group*.jpg' def crop(img: np.ndarray, x: int, y: int, width: int, height: int) -> np.ndarray: """Returns an image cropped to a given bounding box of top-left coords, width, and height""" return img[y:y+height, x:x+width] def pull_faces(glob_in: str, path_out: str) -> int: """Pulls faces out of images found in glob_in and saves them as path_out Returns the total number of faces found """ i = 0 for fname in glob(glob_in): print(fname) img = imread(fname) bboxes = find_faces(img) for bbox in bboxes: cropped = crop(img, *bbox) imsave(path_out.format(i), cropped) i += 1 return i found = pull_faces('test_imgs/initial/group*.jpg', 'test_imgs/corpus/face{}.jpg') print('Total number of base corpus faces found:', found) plt.imshow(imread('test_imgs/corpus/face0.jpg')) """ Explanation: After tuning the hyperparameters, we're getting good face identification over our test images. Build Dataset Base Corpus Now let's use this to build a base corpus of "these faces are not mine" so we can augment it later with the face we want to target. End of explanation """ from pickle import dump #Creates base_corpus.pkl from face imgs in test_imgs/corpus imgs = [imread(fname) for fname in glob('test_imgs/corpus/face*.jpg')] dump(imgs, open('findme/base_corpus.pkl', 'wb')) """ Explanation: Now that we have some faces to work with, let's save them to a pickle file for use later on. End of explanation """ found = pull_faces('test_imgs/initial/me*.jpg', 'test_imgs/corpus/me{}.jpg') print('Total number of target faces found:', found) plt.imshow(imread('test_imgs/corpus/me0.jpg')) """ Explanation: Target Corpus Now we need to add our target data. Since this is going to power a personal project, I'm going to train it to recognize my face. Other than adding some new images, we can reuse the code from before but just supplying a different glob string. 
End of explanation """ #Load the two sets of images from pickle import load notme = load(open('findme/base_corpus.pkl', 'rb')) me = [imread(fname) for fname in glob('test_imgs/corpus/me*.jpg')] #Create features and labels features = notme + me labels = [0] * len(notme) + [1] * len(me) #Preprocess images for the model def preprocess(img: np.ndarray) -> np.ndarray: """Resizes a given image and remove alpha channel""" img = cv2.resize(img, (45, 45), interpolation=cv2.INTER_AREA)[:,:,:3] return img features = [preprocess(face) for face in features] """ Explanation: That was easy enough. In order to have a large enough corpus of target faces, I included pictures of myself with other people and deleted their faces after the code block ran. It ended up having eleven target faces. Model Training Data Now that we have our faces, we need to create the features and labels that will be used to train our facial recognition model. We've already classified our data based on the face's filename; all we need to do is assign a 1 or 0 to each group for our labels. We'll also need to scale each image to a standard size. Thankfully the output for each bounding box is a square, so we don't have to worry about introducing distortions. End of explanation """ print('Is the target:', labels[0] == 1) plt.imshow(features[0], cmap='gray') """ Explanation: Simple enough. Let's do a quick check before shuffling. The first image should be part of the base corpus: End of explanation """ print('Is the target:', labels[-1] == 1) plt.imshow(features[-1], cmap='gray') """ Explanation: And the last image should be of the target: End of explanation """ #Convert into numpy arrays features = np.array(features) labels = np.array(labels) dump(features, open('test_imgs/features.pkl', 'wb')) dump(labels, open('test_imgs/labels.pkl', 'wb')) """ Explanation: Looks good. Let's create a quick data and file checkpoint. This means we'll be able to load the file in from this point on without having to run most of the above code. End of explanation """ # DATA/FILE CHECKPOINT from pickle import load import numpy as np import matplotlib.pyplot as plt from matplotlib.image import imread, imsave %matplotlib inline from findme.imageutil import crop, draw_boxes, preprocess from findme.models import find_faces features = load(open('findme/features.pkl', 'rb')) labels = load(open('findme/labels.pkl', 'rb')) features = features[-24:] labels = labels[-24:] """ Explanation: DATA/FILE CHECKPOINT The notebook can be run from scratch from this point onward. End of explanation """ from sklearn.preprocessing import OneHotEncoder enc = OneHotEncoder() labels = enc.fit_transform(labels.reshape(-1, 1)).toarray() print('Not target label:', labels[0]) print('Is target label:', labels[-1]) """ Explanation: That's it for our data. You'll notice that we only loaded a subset of our dataset. This ensures that the number of target and non-target images matches, which leads to a better model even though it has less data overall. We'll split our data in the next section. Am I in This? We've already created all of our data. Now for the model we're going to train. First, we need to convert our labels to one-hot encoding for use in the model. This means our output layer will have two nodes: True and False. 
End of explanation """ from keras.layers import Activation, Convolution2D, Dense, Dropout, Flatten, MaxPooling2D from keras.metrics import binary_accuracy from keras.models import Sequential SHAPE = features[0].shape NB_FILTER = 16 def make_model() -> Sequential: """Create a Sequential Keras model to boolean classify faces""" model = Sequential() #First Convolution model.add(Convolution2D(NB_FILTER, (3, 3), input_shape=SHAPE)) model.add(Activation('relu')) model.add(MaxPooling2D()) model.add(Dropout(0.1)) # Second Convolution model.add(Convolution2D(NB_FILTER*2, (2, 2))) model.add(Activation('relu')) model.add(MaxPooling2D()) model.add(Dropout(0.2)) # Third Convolution model.add(Convolution2D(NB_FILTER*4, (2, 2))) model.add(Activation('relu')) model.add(MaxPooling2D()) model.add(Dropout(0.3)) # Flatten for Fully Connected model.add(Flatten()) # First Fully Connected model.add(Dense(1024)) model.add(Activation('relu')) model.add(Dropout(0.4)) # Second Fully Connected model.add(Dense(1024)) model.add(Activation('relu')) model.add(Dropout(0.5)) # Output model.add(Dense(2)) model.compile(loss = 'mean_squared_error', optimizer = 'rmsprop', metrics=[binary_accuracy]) return model print(make_model().summary()) """ Explanation: Now we need to define our model architecture one layer at a time. We'll create three convolutional layers, two fully-connected layers, and the output layer. End of explanation """ from keras.wrappers.scikit_learn import KerasClassifier from sklearn.utils import shuffle model = KerasClassifier(build_fn=make_model, epochs=500, batch_size=len(labels), verbose=0) X, Y = shuffle(features, labels, random_state=42) model.fit(X, Y) """ Explanation: Now we need to train the model. Even though we have a large model in terms of its parameters, we can still let the model train for many epochs because our feature set is so small. On a MacBook Air, it takes around 30 seconds to train the model with 500 epochs. To save space, I've disabled the full training printout that Keras provides, but you can watch the accuracy progress yourself by changing verbose from 0 to 1. We also need to shuffle our data because feeding all of the non-target and target faces into the model in order will lead to a biased model. Scikit-Learn has a convenient function to do this for us. Rather than just calling random, this function preserves the relationship between the feature and label indexes. End of explanation """ preds = model.predict(features) print('Non-target faces predicted correctly:', np.all(preds[:12] == 0)) print('Non-target faces predicted correctly:', preds[-12:] == 1)) """ Explanation: Let's quickly see how well it trained to the given data. Because the dataset is so small, we didn't want to keep any for a test or validation set. We'll test it on a new image later. End of explanation """ test_img = imread('test_imgs/evaluate/me1.jpg') plt.imshow(test_img) """ Explanation: That's it. While Keras has its own mechanisms for training and validating models, we're using a wrapper around our Keras model so it conforms to the Scikit-Learn model API. We can use fit and predict when working with the model in our code, and it let's us train and use our model with the other helper modules sk-learn provides. 
For example, we could have evaluated the model using StratifiedKFold and cross_val_score, which would look like this:
```python
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
model = KerasClassifier(build_fn=make_model, epochs=5, batch_size=len(labels), verbose=0)
# evaluate using stratified k-fold cross validation
kfold = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
result = cross_val_score(model, features, labels, cv=kfold)
print(result.mean())
```
This method allows us to determine how effective our model is but does not return a trained model for us to use.
Putting It Together
Lastly, let's create a single function that takes in an image and returns whether the target was found and where. First we'll load in our test image. Keep in mind that the model we just trained has never seen this image before and it contains multiple people (and a manatee statue).
End of explanation
"""

test_img = imread('test_imgs/evaluate/me1.jpg')
plt.imshow(test_img)

"""
Explanation: Now for the function itself. Because we've already made functions around the core parts of our data pipeline, this function is going to be incredibly short yet powerful.
End of explanation
"""

def target_in_img(img: np.ndarray) -> (bool, np.array([int])):
    """Returns whether the target is in a given image and where"""
    for bbox in find_faces(img):
        face = preprocess(crop(img, *bbox))
        if model.predict(np.array([face])) == 1:
            return True, bbox
    return False, None

"""
Explanation: Yeah. That's it. Let's break down the steps:
find_faces returns a list of bounding boxes containing faces
We prepare each face by cropping the image to the bounding box, scaling to 45x45, and removing the alpha channel
The model predicts whether the face is or is not the target
If the target is found (pred == 1), return True and the current bounding box
If there aren't any faces or none of the faces belongs to the target, return False and None
Now let's test it. If it works properly, we should see a bounding box appear around the target's face.
End of explanation
"""

found, bbox = target_in_img(test_img)
print('Target face found in test image:', found)
if found:
    plt.imshow(draw_boxes([bbox], test_img, line_width=20))

"""
Explanation:
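One natural follow-up is to scan a whole folder of images rather than a single file. The sketch below assumes target_in_img and the imports above are still in scope, and the glob pattern is only a placeholder for your own evaluation images.
End of explanation
"""

from glob import glob

# hypothetical batch scan over a folder of evaluation images (path pattern is an assumption)
for fname in glob('test_imgs/evaluate/*.jpg'):
    img = imread(fname)
    found, bbox = target_in_img(img)
    if found:
        print(fname, '-> target found at', bbox)
    else:
        print(fname, '-> target not found')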
ThunderShiviah/code_guild
wk3/notebooks/wk3.0.ipynb
mit
def turn_clockwise(direction): compass = {"N":"E" , "E": "S", "S":"W", "W":"N"} return compass[direction] assert turn_clockwise("N") == "E" assert turn_clockwise("W") == "N" turn_clockwise("N") """ Explanation: wk3.0 Warm-up The four compass points can be abbreviated by single-letter strings as “N”, “E”, “S”, and “W”. Write a function turn_clockwise that takes one of these four compass points as its parameter, and returns the next compass point in the clockwise direction. Here are some tests that should pass: assert turn_clockwise("N") == "E" assert turn_clockwise("W") == "N" End of explanation """ def turn_clockwise(direction): wk3.0 Warm-up The four compass points can be abbreviated by single-letter strings as “N”, “E”, “S”, and “W”. Write a function turn_clockwise that takes one of these four compass points as its parameter, and returns the next compass point in the clockwise direction. Here are some tests that should pass: assert turn_clockwise("N") == "E" assert turn_clockwise("W") == "N" In [ ]: def turn_clockwise(direction): compass = {"N":"E" , "E": "S", "S":"W", "W":"N"} return compass[direction] ​ assert turn_clockwise("N") == "E" assert turn_clockwise("W") == "N" ​ turn_clockwise("N") You might ask “What if the argument to the function is some other value?” For all other cases, the function should return the value None: assert turn_clockwise(42) == None assert turn_clockwise("rubbish") == None In [ ]: ​ ​ def turn_clockwise(direction): compass = {"N":"E" , "E": "S", "S":"W", "W":"N"} try: return compass[direction] except KeyError: print("That's not a direction ya dingus!") return None try: turn_clockwise() except KeyError: print("Enter a direction") """ assert turn_clockwise() == None assert turn_clockwise(47) == None assert turn_clockwise("rubbish") == None """ """ Explanation: You might ask “What if the argument to the function is some other value?” For all other cases, the function should return the value None: assert turn_clockwise(42) == None assert turn_clockwise("rubbish") == None End of explanation """ def day_name(num): days = ("Sun", "Mon", "Tues", "Wed", "Thur") try: return days[num] except: return None assert day_name(3) == "Wed" assert day_name(0) == "Sun" assert day_name(42) == None day_name(2) """ Explanation: Write a function day_name that converts an integer number 0 to 6 into the name of a day. Assume day 0 is “Sunday”. Once again, return None if the arguments to the function are not valid. 
Here are some tests that should pass: assert day_name(3) == "Wednesday" assert day_name(6) == "Saturday" assert day_name(42) == None End of explanation """ days = ("Sun", "Mon", "Tues", "Wed", "Thur") days.index("Mon") lst = [] for day in days: lst.append((days.index(day), day) ) dct = dict(lst) dct """ Explanation: Write the inverse function day_num which is given a day name, and returns its number: assert day_num("Friday") == 5 assert day_num("Sunday") == 0 assert day_num(day_name(3)) == 3 assert day_name(day_num("Thursday")) == "Thursday" End of explanation """ def day_name(num): """Takes in day index and returns day name""" days = ("Sun", "Mon", "Tues", "Wed", "Thur", "Fri", "Sat") try: return days[num] except: return None def day_num(day): days = ("Sun", "Mon", "Tues", "Wed", "Thur", "Fri", "Sat") try: return days.index(day) except: return None def day_add(day, delta): number = (day_num(day) + delta) % 7 return day_name(number) assert day_name(3) == "Wed" assert day_name(0) == "Sun" assert day_name(42) == None assert day_num("Fri") == 5 assert day_num("Sun") == 0 assert day_num(day_name(3)) == 3 assert day_add("Tues", 0) == "Tues" assert day_add("Fri", 3) == "Mon" assert day_add("Fri", -2) == "Wed" -1%7 """ Explanation: Once again, if this function is given an invalid argument, it should return None: assert day_num("Halloween") == None Write a function that helps answer questions like ‘“Today is Wednesday. I leave on holiday in 19 days time. What day will that be?”’ So the function must take a day name and a delta argument — the number of days to add — and should return the resulting day name: assert day_add("Monday", 4) == "Friday" assert day_add("Tuesday", 0) == "Tuesday" test(day_add("Tuesday", 14) == "Tuesday" test(day_add("Sunday", 100) == "Tuesday" Hint: use the first two functions written above to help you write this one. End of explanation """ def mult(x1, x2, x3, x4): multi = x1*x2*x3*x4 add = x1 + x2 + x3 + x4 return multi*add mult(1,1,1,2) %quickref """ Explanation: Can your day_add function already work with negative deltas? For example, -1 would be yesterday, or -7 would be a week ago: assert day_add("Sunday", -1) == "Saturday" assert day_add("Sunday", -7) == "Sunday" assert day_add("Tuesday", -100) == "Sunday" If your function already works, explain why. If it does not work, make it work. Hint: Play with some cases of using the modulus function % (introduced at the beginning of the previous chapter). Specifically, explore what happens to x % 7 when x is negative. Write a function days_in_month which takes the name of a month, and returns the number of days in the month. Ignore leap years: assert days_in_month("February") == 28 assert days_in_month("December") == 31 If the function is given invalid arguments, it should return None. Write a function to_secs that converts hours, minutes and seconds to a total number of seconds. Here are some tests that should pass: assert to_secs(2, 30, 10) == 9010 assert to_secs(2, 0, 0) == 7200 assert to_secs(0, 2, 0) == 120 assert to_secs(0, 0, 42) == 42 assert to_secs(0, -10, 10) == -590 Extend to_secs so that it can cope with real values as inputs. It should always return an integer number of seconds (truncated towards zero): assert to_secs(2.5, 0, 10.71) == 9010 assert to_secs(2.433,0,0) == 8758 Write three functions that are the “inverses” of to_secs: hours_in returns the whole integer number of hours represented by a total number of seconds. 
minutes_in returns the whole integer number of left over minutes in a total number of seconds, once the hours have been taken out. seconds_in returns the left over seconds represented by a total number of seconds. You may assume that the total number of seconds passed to these functions is an integer. Here are some test cases: assert hours_in(9010) == 2 assert minutes_in(9010) == 30 assert seconds_in(9010) == 10 Fruitful functions temporary variables End of explanation """ def bad(): print("Hi") return "bye" bad() """ Explanation: dead code, or unreachable code End of explanation """ def bad_absolute_value(x): if x <= 0: return -x elif x > 0: return x bad_absolute_value(0) """ Explanation: Make sure that your code accesses the whole range of input. Ex. def bad_absolute_value(x): if x &lt; 0: return -x elif x &gt; 0: return x End of explanation """ def find_first_2_letter_word(xs): """ Returns the first two letter word in a list. If no two letter word exists, returns an empty string""" for index, wd in enumerate(xs): if len(wd) == 2: return (wd, index) return ("", index) print('res1', find_first_2_letter_word(["This", "is", "a", "dead", "parrot"])) print('res2', find_first_2_letter_word(["I", "like", "cheese", "bah"])) """ Explanation: Sometimes sticking a return in a for loop is a good idea: End of explanation """ day_name(day_num("Wed")) """ Explanation: Incremental development The key aspects of the process are: 1. Start with a working skeleton program and make small incremental changes. At any point, if there is an error, you will know exactly where it is. 2. Use temporary variables to refer to intermediate values so that you can easily inspect and check them. 3. Once the program is working, relax, sit back, and play around with your options. (There is interesting research that links “playfulness” to better understanding, better learning, more enjoyment, and a more positive mindset about what you can achieve — so spend some time fiddling around!) You might want to consolidate multiple statements into one bigger compound expression, or rename the variables you’ve used, or see if you can make the function shorter. A good guideline is to aim for making code as easy as possible for others to read. Debugging Another powerful technique for debugging (an alternative to single-stepping and inspection of program variables), is to insert extra print functions in carefully selected places in your code. Then, by inspecting the output of the program, you can check whether the algorithm is doing what you expect it to. Be clear about the following, however: You must have a clear solution to the problem, and must know what should happen before you can debug a program. Work on solving the problem on a piece of paper (perhaps using a flowchart to record the steps you take) before you concern yourself with writing code. Writing a program doesn’t solve the problem — it simply automates the manual steps you would take. So first make sure you have a pen-and-paper manual solution that works. Programming then is about making those manual steps happen automatically. Do not write chatterbox functions. A chatterbox is a fruitful function that, in addition to its primary task, also asks the user for input, or prints output, when it would be more useful if it simply shut up and did its work quietly. For example, we’ve seen built-in functions like range, max and abs. 
None of these would be useful building blocks for other programs if they prompted the user for input, or printed their results while they performed their tasks. So a good tip is to avoid calling print and input functions inside fruitful functions, unless the primary purpose of your function is to perform input and output. The one exception to this rule might be to temporarily sprinkle some calls to print into your code to help debug and understand what is happening when the code runs, but these will then be removed once you get things working. Composition End of explanation """ def tester(line): tests a bunch of stuff if all the stuff is good: return True else: return False def main_func(emails): for line in emails: if tester(line): return line """ Explanation: Boolean functions for test hiding End of explanation """ import random # Create a black box object that generates random numbers rng = random.Random() dice_throw = rng.randrange(1,7) # Return an int, one of 1,2,3,4,5,6 delay_in_seconds = rng.random() * 5.0 print('dice_throw', dice_throw) print('delay_in_seconds', delay_in_seconds) """ Explanation: Lecture 2 Modules Random numbers A few uses of random numbers: * To play a game of chance where the computer needs to throw some dice, pick a number, or flip a coin, * To shuffle a deck of playing cards randomly, * To allow/make an enemy spaceship appear at a random location and start shooting at the player, * To simulate possible rainfall when we make a computerized model for estimating the environmental impact of building a dam, * For encrypting banking sessions on the Internet. End of explanation """ for num in range(10): print(rng.randrange(1,100,2)) """ Explanation: How would we get odd numbers between 1 and 100 (exclusive)? End of explanation """ random.random() # returns a number in interval [0,1). We need to scale it! for num in range(10): print(random.random()) cards = list(range(52)) # Generate ints [0 .. 51] # representing a pack of cards. rng.shuffle(cards) # Shuffle the pack cards """ Explanation: random.Random() returns a uniform distribution. There are other distributions as well. End of explanation """ drng = random.Random(15) # Create generator with known starting state for n in range(10): print(drng.randint(1,100)) # Always 7. """ Explanation: Repeatability and Testing deterministic algorithm pseudo-random generators End of explanation """ import random def make_random_ints(num, lower_bound, upper_bound): """ Generate a list containing num random ints between lower_bound and upper_bound. upper_bound is an open bound. """ rng = random.Random() # Create a random number generator result = [] for i in range(num): result.append(rng.randrange(lower_bound, upper_bound)) return result make_random_ints(4, 3,10) make_random_ints(5, 1, 13) # Pick 5 random month numbers """ Explanation: Picking balls from bags, throwing dice, shuffling a pack of cards End of explanation """ xs = list(range(1,13)) # Make list 1..12 (there are no duplicates) rng = random.Random() # Make a random number generator rng.shuffle(xs) # Shuffle the list result = xs[:5] # Take the first five elements result """ Explanation: Getting unique values End of explanation """ import random def make_random_ints_no_dups(num, lower_bound, upper_bound): """ Generate a list containing num random ints between lower_bound and upper_bound. upper_bound is an open bound. The result list cannot contain duplicates. 
""" result = [] rng = random.Random() for i in range(num): while True: candidate = rng.randrange(lower_bound, upper_bound) if candidate not in result: break result.append(candidate) return result xs = make_random_ints_no_dups(5, 1, 10000000) print(xs) """ def make_random_ints_no_dups(num, lower_bound, upper_bound): """ Generate a list containing num random ints between lower_bound and upper_bound. upper_bound is an open bound. The result list cannot contain duplicates. """ result = [] rng = random.Random() while len(result) < num: candidate = rng.randrange(lower_bound, upper_bound) if candidate in result: continue result.append(candidate) return result make_random_ints_no_dups(5, 1, 1000) """ """ Explanation: The 'shuffle and slice' method is okay for small numbers but would not be so great if you only wanted a few elements, but from a very large domain. Suppose I wanted five numbers between one and ten million, without duplicates. Generating a list of ten million items, shuffling it, and then slicing off the first five would be a performance disaster! So let us have another try: End of explanation """ xs = make_random_ints_no_dups(10, 1, 6) # Yikes! """ Explanation: This method is okay but still has some problems. Can you see what's going to happen in the next case? End of explanation """ from timeit import default_timer as timer t1 = timer() print("hi") t2 = timer() print(t2 - t1) from timeit import default_timer as timer def do_my_sum(xs): sum = 0 for v in xs: sum += v return sum sz = 10000000 # Lets have 10 million elements in the list testdata = range(sz) t0 = timer() my_result = do_my_sum(testdata) t1 = timer() print("my_result = {0} (time taken = {1:.4f} seconds)" .format(my_result, t1-t0)) t2 = timer() their_result = sum(testdata) t3 = timer() print("their_result = {0} (time taken = {1:.4f} seconds)" .format(their_result, t3-t2)) def do_my_sum(xs): sum = 0 for v in xs: sum += v return sum sz = 10000000 # Lets have 10 million elements in the list testdata = range(sz) %%timeit my_result = do_my_sum(testdata) %%timeit their_result = sum(testdata) """ Explanation: The time module Looking at code efficiency End of explanation """ def range(n): return 123*n print(range(10)) # What will this print? n = 10 m = 3 def f(n): m = 7 return 2*n+m print(f(5), n, m) # What about this one? """ Explanation: Creating your own modules Save as a script and import! The init.py file. Namespaces Each function, script, system has its own namespace. Scope and lookup rules The scope of an identifier is the region of program code in which the identifier can be accessed, or used. There are three important scopes in Python: Local scope refers to identifiers declared within a function. These identifiers are kept in the namespace that belongs to the function, and each function has its own namespace. Global scope refers to all the identifiers declared within the current module, or file. Built-in scope refers to all the identifiers built into Python — those like range and min that can be used without having to import anything, and are (almost) always available. Python (like most other computer languages) uses precedence rules: the same name could occur in more than one of these scopes, but the innermost, or local scope, will always take precedence over the global scope, and the global scope always gets used in preference to the built-in scope. 
Let's start with a simple example:
End of explanation
"""
import math
x = math.sqrt(10)

from math import cos, sin, sqrt
x = sqrt(10)

from math import *   # Import all the identifiers from math,
                     # adding them to the current namespace.
x = sqrt(10)         # Use them without qualification.

# Here's a freebie since I like you guys
import math as m
m.pi

def area(radius):
    import math
    return math.pi * radius * radius

x = math.sqrt(10)    # This gives an error
"""
Explanation: Now we know why we use a return in our functions: to pass between namespaces!
Attributes and the dot operator
Variables defined inside a module are called attributes of the module. We've seen that objects have attributes too: for example, most objects have a __doc__ attribute, and some functions have an __annotations__ attribute. Attributes are accessed using the dot operator (.).
Three import statement variants
End of explanation
"""
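# Added illustration (not in the original notes): modules are objects, and the dot
# operator simply looks up attributes on them.
import math

print(math.pi)            # a data attribute of the math module
print(math.sqrt.__doc__)  # functions carry a __doc__ attribute
print(type(math))         # the module object itself
"""
Explanation: A short added example of the dot operator pulling attributes out of a module, following on from the notes above.
End of explanation
"""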
mdeff/ntds_2017
projects/reports/course_suggester/Weighting Metrics and Graph Diffusion.ipynb
mit
%matplotlib inline import os import pandas as pd import numpy as np import pickle from pygsp import graphs, filters, plotting from scipy.spatial import distance import matplotlib.pyplot as plt import itertools from tqdm import tqdm plt.rcParams['figure.figsize'] = (17, 5) plotting.BACKEND = 'matplotlib' do_prints = False random = True plt.rcParams['figure.figsize'] = (17, 5) plotting.BACKEND = 'matplotlib' %matplotlib inline """ Explanation: In this notebook we attempted to do a linear combination of feature graphs in order to construct the graph. The idea was to use a grid search to find a combination which would optimize the weights. The output was given by heat diffusion from a starting node (course of interest) compared to the estimated probability of the other courses being taken if the course of interest was. End of explanation """ pkl_file = open(os.path.join(os.getcwd(), 'Graphs','students_graph_STI.pkl'), 'rb') students_graph = pickle.load(pkl_file) pkl_file.close() pkl_file = open(os.path.join(os.getcwd(), 'Graphs','assistants_graph_STI.pkl'), 'rb') assistants_graph = pickle.load(pkl_file) pkl_file.close() pkl_file = open(os.path.join(os.getcwd(), 'Graphs','prof_graph_STI.pkl'), 'rb') prof_graph = pickle.load(pkl_file) pkl_file.close() pkl_file = open(os.path.join(os.getcwd(), 'Graphs','section_graph_STI.pkl'), 'rb') sections_graph = pickle.load(pkl_file) pkl_file.close() #pkl_file = open(os.path.join(os.getcwd(), 'Graphs','topics_graph.pkl'), 'rb') #topics_graph = pickle.load(pkl_file) #pkl_file.close() pkl_file = open(os.path.join(os.getcwd(), 'Graphs','req_course_same_req_graph_STI.pkl'), 'rb') course_same_req_graph = pickle.load(pkl_file) pkl_file.close() pkl_file = open(os.path.join(os.getcwd(), 'Graphs','req_course_to_req_graph_STI.pkl'), 'rb') course_to_req_graph = pickle.load(pkl_file) pkl_file.close() pkl_file = open(os.path.join(os.getcwd(), 'Graphs','req_same_course_graph_STI.pkl'), 'rb') same_course_req_graph = pickle.load(pkl_file) pkl_file.close() if do_prints: print("students ", np.shape(students_graph)) print("assistants ", np.shape(assistants_graph)) print("prof ", np.shape(prof_graph)) print("sections ", np.shape(sections_graph)) #print("topics ", np.shape(topics_graph)) print("course same req ", np.shape(course_same_req_graph)) print("course to req ", np.shape(course_to_req_graph)) print("same course req ", np.shape(same_course_req_graph)) assert np.shape(students_graph) == np.shape(assistants_graph) assert np.shape(assistants_graph) == np.shape(prof_graph) assert np.shape(prof_graph) == np.shape(sections_graph) #assert np.shape(sections_graph) == np.shape(topics_graph) #assert np.shape(topics_graph) == np.shape(course_same_req_graph) assert np.shape(sections_graph) == np.shape(course_same_req_graph) assert np.shape(course_same_req_graph) == np.shape(course_to_req_graph) assert np.shape(course_to_req_graph) == np.shape(same_course_req_graph) courses = pd.read_pickle("../data/cleaned_courses_STI.pickle") full_courses_list = courses.index.tolist() """ Explanation: 1. 
Loading the Feature Graphs End of explanation """ weight_matrices = [students_graph, assistants_graph, prof_graph, sections_graph, course_same_req_graph, course_to_req_graph, same_course_req_graph] for i in range(len(weight_matrices)): # Set the diagonal of the matrix to 0 np.fill_diagonal(weight_matrices[i], 0) max_val = np.max(np.reshape(weight_matrices[i], (-1,1))) weight_matrices[i] = weight_matrices[i]/np.max(np.reshape(weight_matrices[i], (-1,1))) def create_graph(mat): # Create the graph G = graphs.Graph(mat) G.compute_laplacian("normalized") G.compute_fourier_basis() return G """ Explanation: 2. Rescaling the Data End of explanation """ def heat_diffusion(G, courses, tau): # Create the heat diffusion filter filt = filters.Heat(G, tau) # Plot the response of the filter #y = filt.evaluate(G.e) #plt.plot(G.e, y[0]) # Create the signal for the given graph signal = np.zeros(G.N) for course in courses: NODE = np.where(np.asarray(full_courses_list) == course)[0] signal[NODE] = 1 # Apply the filter to the signal filtered_s = filt.filter(signal) return filtered_s def diffusion(weight_mat, list_loved_courses, n_result_courses, tau_filter): # Define the index of the loved courses to hgighlight them later. NODE = [] for i in range(0,len(list_loved_courses)): if (len(np.where(np.asarray(full_courses_list) == list_loved_courses[i])[0])==0): print("ERROR! Course loved is not in the list of the courses.") return NODE.append(np.where(np.asarray(full_courses_list) == list_loved_courses[i])[0][0]) # Create the graph and do the diffusion on it. G_diffusion = create_graph(weight_mat) filtered_signals = heat_diffusion(G_diffusion,list_loved_courses,tau_filter) # Plot the diffusion G_diffusion.set_coordinates("spring")#G_diffusion.U[:,1:3]) G_diffusion.plot_signal(filtered_signals, vertex_size=50, highlight = NODE, ) # Create the list of courses ordered with their values found by the diffusion. filtered_signals_int = list(filtered_signals) courses_list = [] if(n_result_courses > len(filtered_signals_int)): n_result_courses = len(filtered_signals_int) for i in range(0,n_result_courses): course_code = full_courses_list[filtered_signals_int.index(max(filtered_signals_int))] courses_list.append(courses[courses.index.str.endswith(course_code)].CourseTitleFR.tolist()[0]) filtered_signals_int[filtered_signals_int.index(max(filtered_signals_int))] = -1 return courses_list weight_different_graph = [0.2,0,0,0,0,0,1] # [0.5?,0,0,0 -same students-,0,$,1] diffusion_graph = weight_different_graph[0]*weight_matrices[0] for i in range(1, len(weight_matrices)): diffusion_graph = diffusion_graph + weight_different_graph[i]*weight_matrices[i] recommanded_courses = diffusion(diffusion_graph,["EE-535", "EE-420"],7,4) recommanded_courses """ Explanation: 4. Diffusion End of explanation """
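# Sketch of the grid search mentioned in the introduction. This is only an
# assumption about how it could be wired up, not code from the original report:
# the scoring of a candidate combination is left as a commented placeholder,
# since the ground-truth course probabilities are not shown in this excerpt.
candidate_weights = [0, 0.2, 0.5, 1]

def combine_graphs(weights):
    combined = weights[0] * weight_matrices[0]
    for i in range(1, len(weight_matrices)):
        combined = combined + weights[i] * weight_matrices[i]
    return combined

best = None
for weights in itertools.product(candidate_weights, repeat=len(weight_matrices)):
    if not any(weights):
        continue   # skip the all-zero combination
    combined = combine_graphs(weights)
    # score = evaluate_combination(combined)   # hypothetical scoring helper
    # if best is None or score > best[0]:
    #     best = (score, weights)
"""
Explanation: A rough sketch (added here, and only an assumption about the intended procedure) of how the grid search over the linear-combination weights described in the introduction could be set up. The evaluation step is left as a commented placeholder because the comparison data is not part of this notebook excerpt.
End of explanation
"""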
DS-100/sp17-materials
sp17/hw/hw2/hw2_solution.ipynb
gpl-3.0
import math import numpy as np import matplotlib %matplotlib inline import matplotlib.pyplot as plt !pip install -U okpy from client.api.notebook import Notebook ok = Notebook('hw2.ok') """ Explanation: Homework 2: Language in the 2016 Presidential Election Popular figures often have help managing their media presence. In the 2016 election, Twitter was an important communication medium for every major candidate. Many Twitter posts posted by the top two candidates were actually written by their aides. You might wonder how this affected the content or language of the tweets. In this assignment, we'll look at some of the patterns in tweets by the top two candidates, Clinton and Trump. We'll start with Clinton. Along the way, you'll get a first look at Pandas. Pandas is a Python package that provides a DataFrame data structure similar to the datascience package's Table, which you might remember from Data 8. DataFrames are a bit harder to use than Tables, but they provide more advanced functionality and are a standard tool for data analysis in Python. Some of the analysis in this assignment is based on a post by David Robinson. Feel free to read the post, but do not copy from it! David's post is written in the R programming language, which is a favorite of many data analysts, especially academic statisticians. Once you're done with your analysis, you may find it interesting to see whether R is easier to use for this task. To start the assignment, run the cell below to set up some imports and the automatic tests. End of explanation """ ds_tweets_save_path = "BerkeleyData_recent_tweets.pkl" from pathlib import Path # Guarding against attempts to download the data multiple # times: if not Path(ds_tweets_save_path).is_file(): import json # Loading your keys from keys.json (which you should have filled # in in question 1): with open("keys.json") as f: keys = json.load(f) import tweepy # Authenticating: auth = tweepy.OAuthHandler(keys["consumer_key"], keys["consumer_secret"]) auth.set_access_token(keys["access_token"], keys["access_token_secret"]) api = tweepy.API(auth) # Getting as many recent tweets by @BerkeleyData as Twitter will let us have: example_tweets = list(tweepy.Cursor(api.user_timeline, id="BerkeleyData").items()) # Saving the tweets to a file as "pickled" objects: with open(ds_tweets_save_path, "wb") as f: import pickle pickle.dump(example_tweets, f) # Re-loading the results: with open(ds_tweets_save_path, "rb") as f: import pickle example_tweets = pickle.load(f) # Looking at one tweet object, which has type Status: example_tweets[0] # You can try something like this: # import pprint; pprint.pprint(vars(example_tweets[0])) # ...to get a more easily-readable view. """ Explanation: Getting the dataset Since we'll be looking at Twitter data, we need to download the data from Twitter! Twitter provides an API for downloading tweet data in large batches. The tweepy package makes it fairly easy to use. Question 0 Install tweepy, if you don't already have it. (Be sure to activate your Conda environment for the class first. Then run pip install tweepy.) There are instructions on using tweepy here, but we will give you example code. Twitter requires you to have authentication keys to access their API. To get your keys, you'll have to sign up as a Twitter developer. Question 1 Follow these instructions to get your keys: Create a Twitter account. You can use an existing account if you have one. Under account settings, add your phone number to the account. Create a Twitter developer account. 
Attach it to your Twitter account. Once you're logged into your developer account, create an application for this assignment. You can call it whatever you want, and you can write any URL when it asks for a web site. On the page for that application, find your Consumer Key and Consumer Secret. On the same page, create an Access Token. Record the resulting Access Token and Access Token Secret. Edit the file keys.json and replace the placeholders with your keys. Don't turn in that file. I AM AN IMPORTANT NOTE. DO NOT SKIP ME. If someone has your authentication keys, they can access your Twitter account and post as you! So don't give them to anyone, and don't write them down in this notebook. The usual way to store sensitive information like this is to put it in a separate file and read it programmatically. That way, you can share the rest of your code without sharing your keys. That's why we're asking you to put your keys in keys.json for this assignment. I AM A SECOND IMPORTANT NOTE. Twitter limits developers to a certain rate of requests for data. If you make too many requests in a short period of time, you'll have to wait awhile (around 15 minutes) before you can make more. So carefully follow the code examples you see and don't rerun cells without thinking. Instead, always save the data you've collected to a file. We've provided templates to help you do that. In the example below, we have loaded some tweets by @BerkeleyData. Run it, inspect the output, and read the code. End of explanation """ def load_keys(path): """Loads your Twitter authentication keys from a file on disk. Args: path (str): The path to your key file. The file should be in JSON format and look like this (but filled in): { "consumer_key": "<your Consumer Key here>", "consumer_secret": "<your Consumer Secret here>", "access_token": "<your Access Token here>", "access_token_secret": "<your Access Token Secret here>" } Returns: dict: A dictionary mapping key names (like "consumer_key") to key values.""" import json with open(path) as f: return json.load(f) def download_recent_tweets_by_user(user_account_name, keys): """Downloads tweets by one Twitter user. Args: user_account_name (str): The name of the Twitter account whose tweets will be downloaded. keys (dict): A Python dictionary with Twitter authentication keys (strings), like this (but filled in): { "consumer_key": "<your Consumer Key here>", "consumer_secret": "<your Consumer Secret here>", "access_token": "<your Access Token here>", "access_token_secret": "<your Access Token Secret here>" } Returns: list: A list of Status objects, each representing one tweet.""" import tweepy # Authenticating: auth = tweepy.OAuthHandler(keys["consumer_key"], keys["consumer_secret"]) auth.set_access_token(keys["access_token"], keys["access_token_secret"]) api = tweepy.API(auth) return list(tweepy.Cursor(api.user_timeline, id=user_account_name).items()) def save_tweets(tweets, path): """Saves a list of tweets to a file in the local filesystem. This function makes no guarantee about the format of the saved tweets, **except** that calling load_tweets(path) after save_tweets(tweets, path) will produce the same list of tweets and that only the file at the given path is used to store the tweets. (That means you can implement this function however you want, as long as saving and loading works!) Args: tweets (list): A list of tweet objects (of type Status) to be saved. path (str): The place where the tweets will be saved. 
Returns: None""" with open(path, "wb") as f: import pickle pickle.dump(tweets, f) def load_tweets(path): """Loads tweets that have previously been saved. Calling load_tweets(path) after save_tweets(tweets, path) will produce the same list of tweets. Args: path (str): The place where the tweets were be saved. Returns: list: A list of Status objects, each representing one tweet.""" with open(path, "rb") as f: import pickle return pickle.load(f) # When you are done, run this cell to load @HillaryClinton's tweets. # Note the function get_tweets_with_cache. You may find it useful # later. def get_tweets_with_cache(user_account_name, keys_path): """Get recent tweets from one user, loading from a disk cache if available. The first time you call this function, it will download tweets by a user. Subsequent calls will not re-download the tweets; instead they'll load the tweets from a save file in your local filesystem. All this is done using the functions you defined in the previous cell. This has benefits and drawbacks that often appear when you cache data: +: Using this function will prevent extraneous usage of the Twitter API. +: You will get your data much faster after the first time it's called. -: If you really want to re-download the tweets (say, to get newer ones, or because you screwed up something in the previous cell and your tweets aren't what you wanted), you'll have to find the save file (which will look like <something>_recent_tweets.pkl) and delete it. Args: user_account_name (str): The Twitter handle of a user, without the @. keys_path (str): The path to a JSON keys file in your filesystem. """ save_path = "{}_recent_tweets.pkl".format(user_account_name) from pathlib import Path if not Path(save_path).is_file(): keys = load_keys(keys_path) tweets = download_recent_tweets_by_user(user_account_name, keys) save_tweets(tweets, save_path) return load_tweets(save_path) clinton_tweets = get_tweets_with_cache("HillaryClinton", "keys.json") # If everything is working properly, this should print out # a Status object (a single tweet). clinton_tweets should # contain around 3000 tweets. clinton_tweets[0] _ = ok.grade('q02') _ = ok.backup() """ Explanation: Question 2 Write code to download all the recent tweets by Hillary Clinton (@HillaryClinton). Follow our example code if you wish. Write your code in the form of four functions matching the documentation provided. (You may define additional functions as helpers.) Once you've written your functions, you can run the subsequent cell to download the tweets. End of explanation """ def extract_text(tweet): return tweet.text #SOLUTION def extract_time(tweet): return tweet.created_at #SOLUTION def extract_source(tweet): return tweet.source #SOLUTION _ = ok.grade('q03') _ = ok.backup() """ Explanation: Exploring the dataset Twitter gives us a lot of information about each tweet, not just its text. You can read the full documentation here. Look at one tweet to get a sense of the information we have available. Question 3 Which fields contain: 1. the actual text of a tweet, 2. the time when the tweet was posted, and 3. the source (device and app) from which the tweet was posted? To answer the question, write functions that extract each field from a tweet. (Each one should take a single Status object as its argument.) End of explanation """ import pandas as pd df = pd.DataFrame() """ Explanation: Question 4 Are there any other fields you think might be useful in identifying the true author of an @HillaryClinton tweet? 
(If you're reading the documentation, consider whether fields are actually present often enough in the data to be useful.) SOLUTION: Some possible answers: retweet_count or favorite_count might be useful if we think tweets by the candidate herself are retweeted or favorited more often. coordinates might be useful if we can identify some pattern in the aides' or candidate's locations (for example, if the aides always tweet from the same campaign office building, which Hillary rarely visits). quoted_status might be useful if aides are more likely to quote other tweets than the candidate herself. Building a Pandas table JSON (and the Status object, which is just Tweepy's translation of the JSON produced by the Twitter API to a Python object) is nice for transmitting data, but it's not ideal for analysis. The data will be easier to work with if we put them in a table. To create an empty table in Pandas, write: End of explanation """ def make_dataframe(tweets): """Make a DataFrame from a list of tweets, with a few relevant fields. Args: tweets (list): A list of tweets, each one a Status object. Returns: DataFrame: A Pandas DataFrame containing one row for each element of tweets and one column for each relevant field.""" df = pd.DataFrame() #SOLUTION df['text'] = [extract_text(t) for t in tweets] #SOLUTION df['created_at'] = [extract_time(t) for t in tweets] #SOLUTION df['source'] = [extract_source(t) for t in tweets] #SOLUTION return df """ Explanation: (pd is the standard abbrevation for Pandas.) Now let's make a table with useful information in it. To add a column to a DataFrame called df, write: df['column_name'] = some_list_or_array (This page is a useful reference for many of the basic operations in Pandas. You don't need to read it now, but it might be helpful if you get stuck.) Question 5 Write a function called make_dataframe. It should take as its argument a list of tweets like clinton_tweets and return a Pandas DataFrame. The DataFrame should contain columns for all the fields in question 3 and any fields you listed in question 4. Use the field names as the names of the corresponding columns. End of explanation """ clinton_df = make_dataframe(clinton_tweets) # The next line causes Pandas to display all the characters # from each tweet when the table is printed, for more # convenient reading. Comment it out if you don't want it. pd.set_option('display.max_colwidth', 150) clinton_df.head() _ = ok.grade('q05') _ = ok.backup() """ Explanation: Now you can run the next line to make your DataFrame. End of explanation """ clinton_df['source'].value_counts().sort_index().plot.barh(); #SOLUTION """ Explanation: Tweetsourcing Now that the preliminaries are done, we can do what we set out to do: Try to distinguish between Clinton's own tweets and her aides'. Question 6 Create a plot showing how many tweets came from each kind of source. For a real challenge, try using the Pandas documentation and Google to figure out how to do this. Otherwise, hints are provided. Hint: Start by grouping the data by source. df['source'].value_counts() will create an object called a Series (which is like a table that contains exactly 2 columns, where one column is called the index). You can create a version of that Series that's sorted by source (in this case, in alphabetical order) by calling sort_index() on it. Hint 2: To generate a bar plot from a Series s, call s.plot.barh(). You can also use matplotlib's plt.barh, but it's a little bit complicated to use. 
End of explanation """ # Do your analysis, then write your conclusions in a brief comment. tweetdeck = clinton_df[clinton_df['source'] == 'TweetDeck'] twc = clinton_df[clinton_df['source'] == 'Twitter Web Client'] import numpy as np def rounded_linspace(start, stop, count): import numpy as np return np.linspace(start, stop, count, endpoint=False).astype(int) print(tweetdeck.iloc[rounded_linspace(0, tweetdeck.shape[0], 10)]['text']) print(twc.iloc[rounded_linspace(0, twc.shape[0], 10)]['text']) # It does look like Twitter Web Client is used more for retweeting, # but it's not obvious which tweets are by Hillary. """ Explanation: You should find that most tweets come from TweetDeck. Question 7 Filter clinton_df to examine some tweets from TweetDeck and a few from the next-most-used platform. From examining only a few tweets (say 10 from each category), can you tell whether Clinton's personal tweets are limited to one platform? Hint: If df is a DataFrame and filter_array is an array of booleans of the same length, then df[filter_array] is a new DataFrame containing only the rows in df corresponding to True values in filter_array. End of explanation """ def is_clinton(tweet): """Distinguishes between tweets by Clinton and tweets by her aides. Args: tweet (Status): One tweet. Returns: bool: True if the tweet is written by Clinton herself.""" return extract_text(tweet).endswith("-H") #SOLUTION clinton_df['is_personal'] = [is_clinton(t) for t in clinton_tweets] #SOLUTION """ Explanation: When in doubt, read... Check Hillary Clinton's Twitter page. It mentions an easy way to identify tweets by the candidate herself. All other tweets are by her aides. Question 8 Write a function called is_clinton that takes a tweet (in JSON) as its argument and returns True for personal tweets by Clinton and False for tweets by her aides. Use your function to create a column called is_personal in clinton_df. Hint: You might find the string method endswith helpful. End of explanation """ # This cell is filled in for you; just run it and examine the output. def pivot_count(df, vertical_column, horizontal_column): """Cross-classifies df on two columns.""" pivoted = pd.pivot_table(df[[vertical_column, horizontal_column]], index=[vertical_column], columns=[horizontal_column], aggfunc=len, fill_value=0) return pivoted.rename(columns={False: "False", True: "True"}) clinton_pivoted = pivot_count(clinton_df, 'source', 'is_personal') clinton_pivoted """ Explanation: Now we have identified Clinton's personal tweets. Let us return to our analysis of sources and see if there was any pattern we could have found. You may recall that Tables from Data 8 have a method called pivot, which is useful for cross-classifying a dataset on two categorical attrbiutes. DataFrames support a more complicated version of pivoting. The cell below pivots clinton_df for you. End of explanation """ clinton_pivoted["aides proportion"] = clinton_pivoted['False'] / sum(clinton_pivoted['False']) clinton_pivoted["clinton proportion"] = clinton_pivoted['True'] / sum(clinton_pivoted['True']) clinton_pivoted[["aides proportion", "clinton proportion"]].plot.barh(); """ Explanation: Do Clinton and her aides have different "signatures" of tweet sources? That is, for each tweet they send, does Clinton send tweets from each source with roughly the same frequency as her aides? It's a little hard to tell from the pivoted table alone. Question 9 Create a visualization to facilitate that comparison. Hint: df.plot.barh works for DataFrames, too. 
But think about what data you want to plot. End of explanation """ # Use this cell to perform your hypothesis test. def expand_counts(source_counts): """Blow up a list/array of counts of categories into an array of individuals matching the counts. For example, we can generate a list of 2 individuals of type 0, 4 of type 1, and 1 of type 3 as follows: >>> expand_counts([2, 4, 0, 1]) array([0, 0, 1, 1, 1, 1, 3])""" return np.repeat(np.arange(len(source_counts)), source_counts) def tvd(a, b): return .5*sum(np.abs(a/sum(a) - b/sum(b))) def test_difference_in_distributions(sample0, sample1, num_trials): num_sources = len(sample0) individuals0 = expand_counts(sample0) individuals1 = expand_counts(sample1) count0 = len(individuals0) count1 = len(individuals1) all_individuals = np.append(individuals0, individuals1) def simulate_under_null(): permuted_pool = np.random.permutation(all_individuals) simulated_sample0 = np.bincount(permuted_pool[:count0], minlength=num_sources) simulated_sample1 = np.bincount(permuted_pool[count0:], minlength=num_sources) return tvd(simulated_sample0, simulated_sample1) actual_tvd = tvd(sample0, sample1) simulated_tvds = np.array([simulate_under_null() for _ in range(num_trials)]) return np.count_nonzero(simulated_tvds > actual_tvd) / num_trials p_value = test_difference_in_distributions(clinton_pivoted['True'], clinton_pivoted['False'], 100000) print("P-value: {:.6f}".format(p_value)) """ Explanation: You should see that there are some differences, but they aren't large. Do we need to worry that the differences (or lack thereof) are just "due to chance"? Statistician Ani argues as follows: "The tweets we see are not a random sample from anything. We have simply gathered every tweet by @HillaryClinton from the last several months. It is therefore meaningless to compute, for example, a confidence interval for the rate at which Clinton used TweetDeck. We have calculated exactly that rate from the data we have." Statistician Belinda responds: "We are interested in whether Clinton and her aides behave differently in general with respect to Twitter client usage in a way that we could use to identify their tweets. It's plausible to imagine that the tweets we see are a random sample from a huge unobserved population of all the tweets Clinton and her aides might send. We must worry about error due to random chance when we draw conclusions about this population using only the data we have available." Question 10 What position would you take on this question? Choose a side and give one (brief) argument for it, or argue for some third position. SOLUTION: Here is an argument for Belinda's position. Imagine that Clinton had tweeted only 5 times. Then we would probably not think we could come to a valid conclusion about her behavior patterns. So there is a distinction between the data and an underlying parameter that we're trying to learn about. However, this does not mean it's reasonable to use methods (like the simple bootstrap) that assume the data are a simple random sample from the population we're interested in. Question 11 Assume you are convinced by Belinda's argument. Perform a statistical test of the null hypothesis that the Clinton and aide tweets' sources are all independent samples from the same distribution (that is, that the differences we observe are "due to chance"). Briefly describe the test methodology and report your results. Hint: If you need a refresher, this section of the Data 8 textbook from Fall 2016 covered this kind of hypothesis test. 
Hint 2: Feel free to use datascience.Table to answer this question. However, it will be advantageous to learn how to do it with numpy alone. In our solution, we used some numpy functions you might not be aware of: np.append, np.random.permutation, np.bincount, and np.count_nonzero. We have provided the function expand_counts, which should help you solve a tricky problem that will arise. End of explanation """ probability_clinton = clinton_pivoted.loc['Twitter Web Client']['True'] / sum(clinton_pivoted.loc['Twitter Web Client']) #SOLUTION probability_clinton _ = ok.grade('q12') _ = ok.backup() """ Explanation: SOLUTION: We simulated many times under the null hypothesis by pooling the data and permuting the sources. We found a P-value around .04%, so we have very strong evidence against the null hypothesis that Clinton and her aides tweet from the same distribution of sources. It's important to note that strong evidence that the difference is not zero (which we have found) is very different from evidence that the difference is large (which we have not found). The next question demonstrates this. Question 12 Suppose you sample a random @HillaryClinton tweet and find that it is from the Twitter Web Client. Your visualization in question 9 should show you that Clinton tweets from this source about twice as frequently as her aides do, so you might imagine it's reasonable to predict that the tweet is by Clinton. But what is the probability that the tweet is by Clinton? (You should find a relatively small number. Clinton's aides tweet much more than she does. So even though there is a difference in their tweet source usage, it would be difficult to classify tweets this way.) Hint: Bayes' rule is covered in this section of the Data 8 textbook. End of explanation """ trump_tweets = get_tweets_with_cache("realDonaldTrump", "keys.json") #SOLUTION trump_df = make_dataframe(trump_tweets) #SOLUTION trump_df.head() """ Explanation: Another candidate Our results so far aren't Earth-shattering. Clinton uses different Twitter clients at slightly different rates than her aides. Now that we've categorized the tweets, we could of course investigate their contents. A manual analysis (also known as "reading") might be interesting, but it is beyond the scope of this course. And we'll have to wait a few more weeks before we can use a computer to help with such an analysis. Instead, let's repeat our analysis for Donald Trump. Question 13 Download the tweet data for Trump (@realDonaldTrump), and repeat the steps through question 6 to create a table called trump_df. End of explanation """ trump_df['source'].value_counts().sort_index().plot.barh(); #SOLUTION """ Explanation: Question 14 Make a bar chart of the sources of Trump tweets. End of explanation """ def is_trump_style_retweet(tweet_text): """Returns True if tweet_text looks like a Trump-style retweet.""" return tweet_text.startswith('"@') def is_aide_style_retweet(tweet_text): """Returns True if tweet_text looks like an aide-style retweet.""" return "RT @" in tweet_text def tweet_type(tweet_text): """Returns "Trump retweet", "Aide retweet", or "Not a retweet" as appropriate.""" if is_trump_style_retweet(tweet_text): return "Trump retweet" elif is_aide_style_retweet(tweet_text): return "Aide retweet" return "Not a retweet" trump_df['tweet_type'] = [tweet_type(t) for t in trump_df['text']] trump_df _ = ok.grade('q15') _ = ok.backup() """ Explanation: You should find two major sources of tweets. 
It is reported (for example, in this Gawker article) that Trump himself uses an Android phone (a Samsung Galaxy), while his aides use iPhones. But Trump has not confirmed this. Also, he has reportedly switched phones since his inauguration! How might we verify whether this is a way to identify his tweets? A retweet is a tweet that replies to (or simply repeats) a tweet by another user. Twitter provides several mechanisms for this, as explained in this article. However, Trump has an unusual way of retweeting: He simply adds the original sender's name to the original message, puts everything in quotes, and then adds his own comments at the end. For example, this is a tweet by user @melissa7889: @realDonaldTrump @JRACKER33 you should run for president! Here is Trump's retweet of this, from 2013: "@melissa7889: @realDonaldTrump @JRACKER33 you should run for president!" Thanks,very nice! Since 2015, the usual way of retweeting this message, and the method used by Trump's aides (but not Trump himself), would have been: Thanks,very nice! RT @melissa7889: @realDonaldTrump @JRACKER33 you should run for president! Question 15 Write a function to identify Trump-style retweets, and another function to identify the aide-style retweets. Then, use them to create a function called tweet_type that takes a tweet as its argument and returns values "Trump retweet", "Aide retweet", and "Not a retweet" as appropriate. Use your function to add a 'tweet_type' column to trump_df. Hint: Try the string method startswith and the Python keyword in. End of explanation """ trump_pivoted = pivot_count(trump_df, 'source', 'tweet_type') #SOLUTION trump_pivoted _ = ok.grade('q16') _ = ok.backup() """ Explanation: Question 16 Cross-classify @realDonaldTrump tweets by source and by tweet_type into a table called trump_pivoted. Hint: We did something very similar after question 7. You don't need to write much new code for this. End of explanation """ test_difference_in_distributions(trump_pivoted['Aide retweet'], trump_pivoted['Trump retweet'], 100000) #SOLUTION """ Explanation: Question 17 Does the cross-classified table show evidence against the hypothesis that Trump and his advisors tweet from roughly the same sources? Again assuming you agree with Statistician Belinda, run an hypothesis test in the next cell to verify that there is a difference in the relevant distributions. Then use the subsequent cell to describe your methodology and results. Are there any important caveats? End of explanation """ _ = ok.grade_all() """ Explanation: SOLUTION: We eliminated the non-retweets and performed a test for a difference in categorical distributions as we did for Clinton. As should obvious from the table, there is a difference! (We find a P-value of 0, though this is approximate, and the true P-value is merely extremely close to 0.) One small caveat is that we are looking only at retweets. It's plausible that people behave differently when retweeting - maybe they find one device or app more convenient for retweets. A bigger caveat is that we don't just care about there being any difference, but that the difference is large. This is obvious from looking at the table - Trump almost never retweets from an iPhone and his aides never retweet from an Android phone. (Since we care about magnitudes, it would be useful to create confidence intervals for the chances of Trump and his aides tweeting from various devices. With a dataset this large, they would be narrow.) 
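As a quick added illustration (a sketch, not part of the graded solution), such an interval for the proportion of Trump-style retweets sent from an Android phone could be bootstrapped directly from trump_df; the exact source string is an assumption here:
```python
# Added sketch: bootstrap a 95% confidence interval for the proportion of
# Trump-style retweets whose source is "Twitter for Android".
trump_retweets = trump_df[trump_df['tweet_type'] == 'Trump retweet']
is_android = (trump_retweets['source'] == 'Twitter for Android').values

boot_props = np.array([
    np.mean(np.random.choice(is_android, size=len(is_android), replace=True))
    for _ in range(10000)
])
print(np.percentile(boot_props, [2.5, 97.5]))
```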
We are really interested in knowing whether we can classify @realDonaldTrump tweets on the basis of the source. Just knowing that there is a difference in source distributions isn't nearly enough. Instead, we would like to claim something like this: "@realDonaldTrump tweets from Twitter for Android are generally authored by Trump himself. Other tweets are generally authored by his aides." Question 18 If you use bootstrap methods to compute a confidence interval for the proportion of Trump aide retweets from Android phones in "the population of all @realDonaldTrump retweets," you will find that the interval is [0, 0]. That's because there are no retweets from Android phones by Trump aides in our dataset. Is it reasonable to conclude from this that Trump aides definitely never tweet from Android phones? SOLUTION: No, the bootstrap is misleading in this case. If we'd seen 1 million retweets by Trump aides, it might be okay to make this conclusion. But we have seen only 177, so the conclusion seems a bit premature. Submitting your assignment First, run the next cell to run all the tests at once. End of explanation """ # Now, we'll submit to okpy _ = ok.submit() """ Explanation: Now, run this code in your terminal to make a git commit that saves a snapshot of your changes in git. The last line of the cell runs git push, which will send your work to your personal Github repo. Note: Don't add and commit your keys.json file! git add -A will do that, but the code we've written below won't. # Tell git to commit your changes to this notebook git add sp17/hw/hw2/hw2.ipynb # Tell git to make the commit git commit -m "hw2 finished" # Send your updates to your personal private repo git push origin master Finally, we'll submit the assignment to OkPy so that the staff will know to grade it. You can submit as many times as you want, and you can choose which submission you want us to grade by going to https://okpy.org/cal/data100/sp17/. End of explanation """
TurkuNLP/BINF_Programming
lectures/week-5-sequence-alignment.ipynb
gpl-2.0
from Bio import pairwise2 ## load the module ## globalxx ## use global alignment function which only score 1 ## for each match (0 for both penalty and mismatch) alignments = pairwise2.align.globalxx("ACCGT", "ACG") ## perform global alignments (xx) between two sequences. for alignment in alignments: ## Each alignment is a tuple consisting of the two aligned sequences, ## the score, the start and the end positions of the alignment ## (in global alignments the start is always 0 and the end the length of the alignment). print(alignment) ## print the alignment in a nicer format from Bio.pairwise2 import format_alignment print(format_alignment(*alignment)) print(repr(alignment)) """ Explanation: Sequence Alignment Pairwise alignment Module pairwise alignment in Biopython uses dynamic programming algorithm. a global alignment finds the best alignment of all characters between 2 sequences a local alignment finds the subsequences that align best between 2 sequences Match scores and gap penalties should be specified for any alignment. Compatible elements (not neccessarily the same character) should be given higher score. Gap or incompatibles should be given lower or negative scores, signifying the mismatch, though in some case 0 is used. Bio.pairwise2 contains essentially the same algorithms as water for local alignment and needle for global alignment used in EMBOSS. End of explanation """ from Bio import pairwise2 from Bio.pairwise2 import format_alignment ## m match score = 2, mismatch = 0 ## x no gap penalty for alignment in pairwise2.align.globalmx("ACCGT", "ACG", 2, -1): print(format_alignment(*alignment)) ## score = 6, since only matching scores ## match score = 2, mismatch = -1 ## gap opening = 0.5, gap extension = 0.1 for a in pairwise2.align.globalms("ACCGT", "ACG", 2, -1, -.5, -.1): print(format_alignment(*a)) ## score = 5, 2*3 (matchings) + 2*-0.5 (gap opening) """ Explanation: CODE DESCRIPTION You need to specify the match parameters and gap penalty parameters to control the scoring output. globalxx basically sets only match score = 1 and gap penalty score = 0. Setting scoring parameters is easy using the list below. match parameters x No parameters. Identical characters have score of 1, otherwise 0. m A match score is the score of identical chars, otherwise mismatch score. d A dictionary returns the score of any pair of characters. c A callback function returns scores. gap penalty parameters x No gap penalties. s Same open and extend gap penalties for both sequences. d The sequences have different open and extend gap penalties. c A callback function returns the gap penalties. End of explanation """ from Bio.SubsMat import MatrixInfo as matlist print(matlist.available_matrices) ## print list of available matrices matrix = matlist.blosum62 ## set the substitution matrix to be used ## BLOSUM62 is more stringent than BLOSUM45, ## thus the alignment score is lower for a in pairwise2.align.globaldx("KEVLA", "EVL", matrix): print(format_alignment(*a)) ## change to use BLOSUM45 for distantly related sequences ## this allows less identical sequence to score above threshold for a in pairwise2.align.globaldx("KEVLA", "EVL", matlist.blosum45): print(format_alignment(*a)) from Bio import pairwise2 from Bio import SeqIO ## use SeqIO to read in input files seq1 = SeqIO.read("alpha.faa", "fasta") seq2 = SeqIO.read("beta.faa", "fasta") alignments = pairwise2.align.globalds(seq1.seq, seq2.seq) ## global alignment ## d : A dictionary of the score of any pair of characters. 
## s : Same open and extend gap penalties for both sequences.
## should return error since not enough parameters specified

## have to specify substitution matrix dictionary (1 parameter)
## also 2 parameters for penalties of gap opening and extension
from Bio.SubsMat.MatrixInfo import blosum62
alignments = pairwise2.align.globalds(seq1.seq, seq2.seq, blosum62, -10, -1)
## enough parameters supplied, so the code runs
print(format_alignment(*alignments[0]))
## have a look at the first alignment
## compare with the globalxx below

alignments = pairwise2.align.localxx(seq1.seq, seq2.seq)
print(format_alignment(*alignments[1]))
## globalxx gives different scores from globalds
## there can be many alignments, just randomly pick the second one here

alignments = pairwise2.align.localxx(seq1.seq, seq2.seq, one_alignment_only=True)
## will return only the best alignment
## this takes shorter time
print(format_alignment(*alignments[0]))

alignments = pairwise2.align.localxx(seq1.seq, seq2.seq, score_only=True)
## speed gain
## only print the score
print('score of alignment', alignments)
"""
Explanation: Substitution matrices
For proteins, scoring only identical amino acids on both sequences as a match is biologically incorrect. To score alignments correctly, you need to know which amino acids are compatible. This can be done by using a substitution matrix. The table is stored as a dictionary and can be given directly to parameter d in the match parameters. The alignment scores are directly influenced by the selected matrix. More details on types, names and scores are described on the following page.
https://biopython.org/DIST/docs/api/Bio.SubsMat.MatrixInfo-module.html
End of explanation
"""
import Bio.Align.Applications
dir(Bio.Align.Applications)   ## available tools with command line wrappers
"""
Explanation: Alignment tools for multiple sequence alignment
The implementation or calculation for both pairwise alignments and multiple sequence alignments can be slow. It is thus recommended to use better optimized alignment programs. Unfortunately, the algorithms are not implemented in Biopython directly. So accessing these tools is done by running programs inside python via the command-line wrapper provided by Biopython. With Biopython, this only takes 4 steps:
Install the tools you want to use, e.g. MUSCLE, EMBOSS or CLUSTALW.
Prepare an input file of your unaligned sequences in FASTA format.
Call the corresponding command line wrapper, different command for each tool, to process this input file.
Read or process the output from the tool, i.e. your aligned sequences.
End of explanation
"""
import os
from Bio.Align.Applications import ClustalwCommandline
## help(ClustalwCommandline)   ## print help
clustalw_exe = '/usr/bin/clustalw'   ## for Windows user, change this to your installed
                                     ## e.g.
r"C:\Program Files\new clustal\clustalw2.exe" clustalw_cline = ClustalwCommandline(clustalw_exe, infile="new_opuntia.fasta") ## clustalw_cline = ClustalwCommandline("clustalw2", infile="opuntia.fasta") assert os.path.isfile(clustalw_exe), "Clustal W executable missing" stdout, stderr = clustalw_cline() print(stdout) print(stderr) # if there is no error, it should be empty string from Bio import AlignIO align = AlignIO.read("new_opuntia.aln", "clustal") ## specify the format of alignment file print(align) from Bio import Phylo tree = Phylo.read("new_opuntia.dnd", "newick") Phylo.draw_ascii(tree) ## the result from clustal allows the tree view """ Explanation: ClustalW The wrapper uses subprocess module to run another program inside python. The program prints text on the screen which is piped via standard output and standard error. The input file is standard input. End of explanation """ from Bio.Align.Applications import MuscleCommandline help(MuscleCommandline) from Bio.Align.Applications import MuscleCommandline cline = MuscleCommandline(input="opuntia.fasta", out="opuntia.clw", clw=True) stdout, stderr = cline() print(stderr) ## alignment has only one sequence in fasta format ## error if try to open in clustal format AlignIO.read(open("opuntia.clw"), "clustal") from Bio.Align.Applications import MuscleCommandline cline = MuscleCommandline(input="opuntia.fasta", out="opuntia.txt") ## default format for MUSCLE is fasta stdout, stderr = cline() ## alignment has only one sequence, ## no error if read in fasta format print(AlignIO.read(open("opuntia.txt"), "fasta")) from Bio.Align.Applications import MuscleCommandline try: from StringIO import StringIO except ImportError: from io import StringIO muscle_cline = MuscleCommandline(input="new_opuntia.fasta") ## run command line without writing the output file ## default format of MUSCLE is fasta stdout, stderr = muscle_cline() ## the result is in stdout try: align = AlignIO.read(StringIO(stderr), "fasta") ## check if alignment is fine except: align = AlignIO.read(StringIO(stdout), "fasta") print(align) """ Explanation: MUSCLE Input file format is fasta and it saves output file in either fasta or clustal format which are compatible with Biopython, using AlignIO for reading or parsing. It can also output in GCG MSF or HTML format (not covered) as it is not supported by Biopython in parsing it. 
End of explanation """ from Bio.Emboss.Applications import NeedleCommandline ## for ubuntu system, giving command ## without specifying the installed path of EMBOSS seems to work ## both gapopen and gapextend need to be set needle_cline = NeedleCommandline(asequence="alpha.faa", bsequence="beta.faa", gapopen=10, gapextend=0.5, outfile="needle.txt") print(needle_cline) # how the command line looks like stdout, stderr = needle_cline() # run the program and save result in stdout & stderr print(stdout) print(stderr) from Bio.Emboss.Applications import NeedleCommandline # specify the location where EMBOSS program is installed needle_cline = NeedleCommandline(r"C:\EMBOSS\needle.exe", asequence="alpha.faa", bsequence="beta.faa", gapopen=10, gapextend=0.5, outfile="needle.txt") needle_cline() # result in error if the path is incorrect from Bio.Emboss.Applications import WaterCommandline needle_cline = WaterCommandline() ## provide the file name for each sequence needle_cline.asequence="alpha.faa" needle_cline.bsequence="beta.faa" ## specify the gap open and gap extend cost needle_cline.gapopen=10 needle_cline.gapextend=0.5 ## save output file needle_cline.outfile="needle.txt" ## how the command line look like print(needle_cline) ## run the program and combine output with error stdout, stderr = needle_cline(stdout=True, stderr=True) print(stdout + stderr) from Bio import AlignIO ## use alignio to parse the result written in EMBOSS format align = AlignIO.read("needle.txt", "emboss") ## get alignment length print(align.get_alignment_length()) """ Explanation: EMBOSS program The program includes algorithms for both local (Smith-Waterman) and global (Needleman-Wunch) alignments. End of explanation """ from Bio import AlignIO ## use parse here even though there is only one alignment alignment = AlignIO.parse(open("PF18225_seed.sth"), "stockholm") ## AlignIO allows you to access information of each sequence ## similar to SeqIO for i, align in enumerate(alignment): print('alignment length', align.get_alignment_length()) print('') for seqi in align: print(seqi.seq) print(seqi.name) print(seqi.dbxrefs) print(seqi.annotations) print(seqi.description) """ Explanation: Parse the result of alignments AlignIO module is used for read and write sequence alignment. The functionality of the module is quite similar to SeqIO. End of explanation """ from Bio import AlignIO ## open input file in read mode input_handle = open("PF18225_seed.sth", "r") ## open output file in write mode output_handle = open("PF18225_seed.phy", "w") ## uses parse here if there is more than one alignment ## parse will also work if there is only one alignment alignments = AlignIO.parse(input_handle, "stockholm") ## write out the alignment in phylip format AlignIO.write(alignments, output_handle, "phylip") ## close both file handles output_handle.close() input_handle.close() """ Explanation: Change file format This is very simple using AlignIO. End of explanation """ from Bio import AlignIO ## original phylip format limits sequence id to be only 10 characters AlignIO.convert("PF18225_seed.sth", "stockholm", "PF18225_seed_strict.phy", "phylip") ## relaxed phylip allows longer names to be written AlignIO.convert("PF18225_seed.sth", "stockholm", "PF18225_seed_relaxed.phy", "phylip-relaxed") ## it returns the number of the alignments """ Explanation: Change file format (alternative) Instead of opening input file in read-mode and output file in write-mode, convert function in AlignIO can be called directly to change the alignment file type. 
Arguments: in_file - an input handle or filename in_format - input file format, lower case string output - an output handle or filename out_file - output file format, lower case string alphabet - optional alphabet to assume, default=None The formats allowed for conversion are listed in the link. https://biopython.org/DIST/docs/api/Bio.AlignIO-module.html#convert End of explanation """ from Bio import AlignIO align = AlignIO.read("PF18225_full.fasta", "fasta") ## specify the format of alignment file print(align) from Bio.Align import AlignInfo from Bio.Align.AlignInfo import SummaryInfo ## load AlignInfo modules and SummaryInfo submodules summary = SummaryInfo(align) ## create summary object print('') dumb_7 = summary.dumb_consensus() ## create consensus sequence with the default threshold of 0.7 dumb_4 = summary.dumb_consensus(threshold=0.4) ## create consensus sequence with lower (specified) threshold of 0.4 ## compare the results from difference threshold print('default threshold = 0.7', str(dumb_7)) print('consensus threshold = 0.4', str(dumb_4)) """ Explanation: SummaryInfo dumb_consensus It goes through the sequence residue by residue and count the number of each type of residue (ie. A, G, T and C for DNA) in all sequences in the alignment. If the percentage of the most common residue type is greater then the specified threshold, that residue will be added to the consensus sequence, otherwise an ambiguous character will be added. Arguments: (taken from https://biopython.org/DIST/docs/api/Bio.Align.AlignInfo.SummaryInfo-class.html#dumb_consensus) threshold - The threshold value that is required to decide whether to add a particular atom. ambiguous - The ambiguous character to be added when the threshold is not reached. consensus_alpha - The alphabet to return for the consensus sequence. If this is None, then we will try to guess the alphabet. require_multiple - If set as 1, this will require that more than 1 sequence be part of an alignment to put it in the consensus (ie. not just 1 sequence and gaps). End of explanation """ from Bio import AlignIO align = AlignIO.read("PF18225_full.fasta", "fasta") ## specify the format of alignment file print(align) from Bio.Align import AlignInfo from Bio.Align.AlignInfo import SummaryInfo ## load AlignInfo modules and SummaryInfo submodules summary = SummaryInfo(align) ## create summary object ## create sumb consensus sequence dumb_cons = summary.dumb_consensus() print('dumb oncensus', str(dumb_cons)) print('') ## create gap consensus sequence gap_cons = summary.gap_consensus() print('gap oncensus', str(gap_cons)) ## compare the difference between dumb and gap consensus sequences ## when using the default threshold (0.7) summary.gap_consensus() == summary.dumb_consensus() """ Explanation: gap_consensus The method is similar to dumb_consensus, but allows gaps on the output. 
End of explanation """ from Bio import AlignIO align = AlignIO.read("new_opuntia.aln", "clustal") ## load our MSA alignment file print(align) print('') from Bio.Align import AlignInfo from Bio.Align.AlignInfo import SummaryInfo summary = SummaryInfo(align) ## create summary of the alignments ## check frequency of nucleotide at position 7 print(summary._get_letter_freqs(residue_num=7, all_records=align, letters=['A', 'C', 'G', 'T'], to_ignore=['N', '-'])) print('') ## check frequency throughout the sequence length ## by going through each position at a time for i in range(align.get_alignment_length()): freq_dict = summary._get_letter_freqs(residue_num=i, all_records=align, letters=['A', 'C', 'G', 'T'], to_ignore=['N', '-']) print(i, freq_dict) """ Explanation: Letter frequency Another way to look at the alignment and consensus sequence is to look at the letter frequency, the count of each letter at certain position. End of explanation """ ## consensus sequence consensus = summary.dumb_consensus() ## using consensus sequence print(summary.pos_specific_score_matrix(axis_seq=consensus, chars_to_ignore=['N', '-'])) ## first sequence in the alignment print(summary.pos_specific_score_matrix(axis_seq=align[0]))#, chars_to_ignore=['N', '-'])) """ Explanation: Position specific score matrices (PSSMs) PSSM is a count matrix. For each column in the alignment, it displays the sum of each character. The input sequence can be either the consensus sequence or any sequence in the alignment. End of explanation """
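As a quick illustration of what those PSSM counts represent, the sketch below tallies the residues in each alignment column by hand with a plain Counter; each per-column dictionary should line up with one row of the PSSM. This assumes the align object loaded a few cells above is still in scope.

from collections import Counter

def column_counts(alignment):
    ## tally residues column by column -- essentially what the PSSM stores
    counts = []
    for i in range(alignment.get_alignment_length()):
        column = [str(record.seq[i]) for record in alignment]
        counts.append(Counter(column))
    return counts

counts = column_counts(align)
print(counts[0])  ## residue counts for the first alignment column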
ledeprogram/algorithms
class6/donow/ronga_paul_DoNow_6.ipynb
gpl-3.0
import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import statsmodels.formula.api as smf """ Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model End of explanation """ df = pd.read_csv('../data/hanford.csv') df.head() """ Explanation: 2. Read in the hanford.csv file End of explanation """ print('Mortality interquantile: ', df['Mortality'].quantile(0.75) - df['Mortality'].quantile(0.25)) print('Exposure interquantile: ', df['Exposure'].quantile(0.75) - df['Exposure'].quantile(0.25)) print('Mode:', df.mode) df.describe() """ Explanation: <img src="images/hanford_variables.png"> County = Name of county Exposure = Index of exposure Mortality = Cancer mortality per 100,000 man-years 3. Calculate the basic descriptive statistics on the data End of explanation """ print("The coefficient is {}. It seems worthy of investigation.".format(df.corr()['Exposure']['Mortality'])) """ Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation? End of explanation """ lm = smf.ols(formula='Mortality~Exposure', data=df).fit() lm.params """ Explanation: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure End of explanation """ fig, ax = plt.subplots() ax.plot(df['Exposure'], df['Mortality'], 'o', label="Data") ax.plot(df['Exposure'], lm.fittedvalues, '-', color='red', label="Prediction") """ Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination) End of explanation """ intercept, slope = lm.params result = slope*10 + intercept print("The result is {}.".format(result)) """ Explanation: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10 End of explanation """
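As a cross-check on the slope-and-intercept arithmetic above, the fitted statsmodels results object can make the same prediction directly. This is a minimal sketch that assumes lm from the regression cell above is still in memory.

new_data = pd.DataFrame({'Exposure': [10]})  # same column name as in the formula
predicted = lm.predict(new_data)
print("Predicted mortality at exposure = 10:", predicted[0])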
ForestClaw/forestclaw
applications/clawpack/transport/2d/sonic/swirl.ipynb
bsd-2-clause
!swirlcons --user:example=2 --user:rp-solver=4 """ Explanation: Advection (conservative form) Scalar advection problem in conservative form with variable velocity field. There are four Riemann solvers that can be tried out here, all described in LeVeque (Cambridge Press, 2002) rp-solver=1 : Q-star approach in which a $q^*$ value is defined to enforce flux continuity across the stationery wave. rp-solver=2 : Wave-decomposition approach based on solving the Riemann problem for system of two equations. rp-solver=3 : Edge centered velocities are used to construct classic update based on flux formulation rp=sovler=4 : F-wave approach. Two examples are avaible. In Example 1, the velocity field $u(x)$ is positive. In Example 2, the velocity field changes sign. Both velocity fields have non-zero divergence. Run code in serial mode (will work, even if code is compiled with MPI) End of explanation """ #!mpirun -n 4 swirlcons """ Explanation: Or, run code in parallel mode (command may need to be customized, depending your on MPI installation.) End of explanation """ %run make_plots.py """ Explanation: Create PNG files for web-browser viewing, or animation. End of explanation """ %pylab inline import glob from matplotlib import image from clawpack.visclaw.JSAnimation import IPython_display from matplotlib import animation figno = 0 fname = '_plots/*fig' + str(figno) + '.png' filenames=sorted(glob.glob(fname)) fig = plt.figure() im = plt.imshow(image.imread(filenames[0])) def init(): im.set_data(image.imread(filenames[0])) return im, def animate(i): image_i=image.imread(filenames[i]) im.set_data(image_i) return im, animation.FuncAnimation(fig, animate, init_func=init, frames=len(filenames), interval=500, blit=True) """ Explanation: View PNG files in browser, using URL above, or create an animation of all PNG files, using code below. End of explanation """
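If you want to keep the animation rather than only view it inline, the FuncAnimation object can also be written to disk. This is a sketch, not part of the original workflow: it reuses fig, animate, init and filenames from above, the output file name is just an example, and it assumes ffmpeg is installed and on your PATH.

anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=len(filenames), interval=500, blit=True)
anim.save('swirl_fig0.mp4', writer='ffmpeg', dpi=150)   # requires ffmpeg
# anim.save('swirl_fig0.gif', writer='imagemagick')     # alternative writer, if ImageMagick is installed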
dietmarw/EK5312_ElectricalMachines
Chapman/Ch8-Problem_8-10to11.ipynb
unlicense
%pylab notebook %precision %.4g """ Explanation: Excercises Electric Machinery Fundamentals Chapter 8 Problem 8-10 to Problem 8-11 End of explanation """ P_rated = 30 # [hp] Il_rated = 110 # [A] Vt = 240 # [V] Nf = 2700 n_0 = 1800 # [r/min] Nse = 14 Ra = 0.19 # [Ohm] Rf = 75 # [Ohm] Rs = 0.02 # [Ohm] Radj_max = 400 # [Ohm] Radj_min = 100 # [Ohm] """ Explanation: Description | | | |-------------------------------------|--------------------------------------------| | $P_\text{rated} = 30\,hp$ | $I_\text{L,rated} = 110\,A$ | | $V_T = 240\,V$ | $n_\text{rated} = 1800\,r/min$ | | $R_A = 0.19\,\Omega$ | $R_S = 0.02\,\Omega$ | | $N_F = 2700 \text{ turns per pole}$ | $N_{SE} = 14 \text{ turns per pole}$ | | $R_F = 75\,\Omega$ | $R_\text{adj} = 100\text{ to }400\,\Omega$ | Rotational losses = 3550 W at full load. Magnetization curve as shown in Figure P8-1. <img src="figs/FigC_P8-1.jpg" width="70%"> <hr> Note: An electronic version of this magnetization curve can be found in file p81_mag.dat, which can be used with Python programs. Column 1 contains field current in amps, and column 2 contains the internal generated voltage $E_A$ in volts. <hr> For Problems 8-10 to 8-11, the motor is connected cumulatively compounded as shown in Figure P8-4. <img src="figs/FigC_P8-4.jpg" width="70%"> End of explanation """ Radj_10 = 175.0 # [Ohm] """ Explanation: Problem 8-10 Description If the motor is connected cumulatively compounded with $R_\text{adj} = 175\,\Omega$: (a) What is the no-load speed of the motor? (b) What is the full-load speed of the motor? (c) What is its speed regulation? (d) Calculate and plot the torque-speed characteristic for this motor. (Neglect armature effects in this problem.) End of explanation """ If_10 = Vt / (Radj_10+Rf) If_10 """ Explanation: SOLUTION At no-load conditions, $E_A = V_T = 240 V$ . The field current is given by: $$I_F = \frac{V_T}{R_\text{adj}+R_F}$$ End of explanation """ n_0 """ Explanation: From Figure P8-1, this field current would produce an internal generated voltage $E_{A_0}$ of 241 V at a speed End of explanation """ Ea0_10_nl = 241.0 # [V] Ea_10_nl = 240.0 # [V] n_10_nl = Ea_10_nl / Ea0_10_nl * n_0 print(''' n_10_nl = {:.1f} r/min ======================'''.format(n_10_nl)) """ Explanation: r/min. 
Therefore, the speed n with a voltage $E_A$ of 240 V would be: $$\frac{E_A}{E_{A_0}} = \frac{n}{n_0}$$ End of explanation """ Ia_10 = Il_rated - Vt/(Radj_10 + Rf) Ia_10 """ Explanation: At full load, the armature current is: $$I_A = I_L - I_F = I_L - \frac{V_T}{R_\text{adj}+R_F}$$ End of explanation """ Ea_10_fl = Vt - Ia_10*(Ra+Rs) Ea_10_fl """ Explanation: The internal generated voltage $E_A$ is: $$E_A = V_T - I_A (R_A+R_S)$$ End of explanation """ If_10_ = If_10 + Nse/Nf * Ia_10 If_10_ """ Explanation: The equivalent field current is: $$I_F^* = I_F + \frac{N_{SE}}{N_F}I_A$$ End of explanation """ n_0 Ea0_10_fl = 279.0 # [V] n_10_fl = Ea_10_fl / Ea0_10_fl * n_0 print(''' n_10_fl = {:.1f} r/min ======================'''.format(n_10_fl)) """ Explanation: From Figure P8-1, this field current would produce an internal generated voltage $E_{A_0}$ of 279 V at a speed End of explanation """ SR = (n_10_nl - n_10_fl) / n_10_fl print(''' SR = {:.1f} % ==========='''.format(SR*100)) """ Explanation: The speed regulation is: $$SR = \frac{n_\text{nl}-n_\text{fl}}{n_\text{fl}}$$ End of explanation """ #Load the magnetization curve data import pandas as pd # The data file is stored in the repository fileUrl = 'data/p81_mag.dat' data = pd.read_csv(fileUrl, # the address where to download the datafile from sep=' ', # our data source uses a blank space as separation comment='%', # ignore lines starting with a "%" skipinitialspace = True, # ignore intital spaces header=None, # we don't have a header line defined... names=['If_values', 'Ea_values'] # ...instead we define the names here ) """ Explanation: The torque-speed characteristic can best be plotted with a Python program. An appropriate program is shown below. Get the magnetization curve. Note that this curve is defined for a speed of 1200 r/min. End of explanation """ Radj_10 = 175.0 # [Ohm] Il_10 = linspace(0, 110, 111) """ Explanation: First, initialize the values needed in this program. End of explanation """ Ia_10 = Il_10 - Vt / (Rf + Radj_10) """ Explanation: Calculate the armature current for each load End of explanation """ Ea_10 = Vt - Ia_10*(Ra+Rs) """ Explanation: Now calculate the internal generated voltage for each armature current. End of explanation """ If_10 = Vt / (Rf + Radj_10) + Nse/Nf * Ia_10 """ Explanation: Calculate the effective field current with and without armature reaction. End of explanation """ Eao_10 = interp(If_10,data['If_values'],data['Ea_values']) """ Explanation: Calculate the resulting internal generated voltage at 1800 r/min by interpolating the motor's magnetization curve. End of explanation """ n_10 = ( Ea_10 / Eao_10 ) * n_0 """ Explanation: Calculate the resulting speed from Equation (8-13) End of explanation """ tau_ind_10 = Ea_10 * Ia_10 / (n_10 * 2 * pi / 60) """ Explanation: Calculate the induced torque corresponding to each speed from Equation (8-10). End of explanation """ title(r'Shunt DC Motor Torque-Speed Characteristic') xlabel(r'$\tau_{ind}$ [Nm]') ylabel(r'$n_m$ [r/min]') axis([ 0, 170 ,1390,1810]) #set the axis range plot(tau_ind_10,n_10) grid() """ Explanation: Plot the torque-speed curves End of explanation """ Radj_11 = 250.0 # [Ohm] """ Explanation: Problem 8-11 Description The motor is connected cumulatively compounded and is operating at full load. What will the new speed of the motor be if $R_\text{adj}$ is increased to $250\,\Omega$ ? How does the new speed compared to the full-load speed calculated in Problem 8-10? 
End of explanation """ If_11 = Vt / (Radj_11+Rf) If_11 """ Explanation: SOLUTION If $R_\text{adj}$ is increased to $250\,\Omega$ , the field current is given by: End of explanation """ Ia_11 = Il_rated - Vt/(Radj_11 + Rf) Ia_11 """ Explanation: At full load conditions, the armature current is: End of explanation """ Ea_11 = Vt - Ia_11*(Ra+Rs) Ea_11 """ Explanation: The internal generated voltage $E_A$ is: End of explanation """ If_11_ = If_11 + Nse/Nf * Ia_11 If_11_ """ Explanation: The equivalent field current is: End of explanation """ n_0 """ Explanation: From Figure P8-1, this field current would produce an internal generated voltage $E_{A_0}$ of 268 V at a speed End of explanation """ Ea0_11 = 268.0 # [r/min] Ea_11 = Ea_10_fl n_11 = Ea_11 / Ea0_11 * n_0 print(''' n_11 = {:.1f} r/min ==================='''.format(n_11)) """ Explanation: r/min. Therefore, the speed n with a voltage $E_A$ of 240 V would be: End of explanation """
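To make the comparison asked for in Problem 8-11 explicit, the short sketch below prints the change relative to the Problem 8-10 full-load speed. It assumes n_10_fl and n_11 from the cells above are still in memory.

delta_n = n_11 - n_10_fl
print('Full-load speed with Radj = 175 Ohm: {:.1f} r/min'.format(n_10_fl))
print('Full-load speed with Radj = 250 Ohm: {:.1f} r/min'.format(n_11))
print('Change in speed: {:+.1f} r/min ({:+.2f} %)'.format(delta_n, delta_n/n_10_fl*100))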
statsmodels/statsmodels.github.io
v0.12.1/examples/notebooks/generated/tsa_arma_1.ipynb
bsd-3-clause
%matplotlib inline import numpy as np import pandas as pd from statsmodels.graphics.tsaplots import plot_predict from statsmodels.tsa.arima_process import arma_generate_sample from statsmodels.tsa.arima.model import ARIMA np.random.seed(12345) """ Explanation: Autoregressive Moving Average (ARMA): Artificial data End of explanation """ arparams = np.array([.75, -.25]) maparams = np.array([.65, .35]) """ Explanation: Generate some data from an ARMA process: End of explanation """ arparams = np.r_[1, -arparams] maparams = np.r_[1, maparams] nobs = 250 y = arma_generate_sample(arparams, maparams, nobs) """ Explanation: The conventions of the arma_generate function require that we specify a 1 for the zero-lag of the AR and MA parameters and that the AR parameters be negated. End of explanation """ dates = pd.date_range('1980-1-1', freq="M", periods=nobs) y = pd.Series(y, index=dates) arma_mod = ARIMA(y, order=(2, 0, 2), trend='n') arma_res = arma_mod.fit() print(arma_res.summary()) y.tail() import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(10,8)) fig = plot_predict(arma_res, start='1999-06-30', end='2001-05-31', ax=ax) legend = ax.legend(loc='upper left') """ Explanation: Now, optionally, we can add some dates information. For this example, we'll use a pandas time series. End of explanation """
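Beyond the in-sample prediction plot above, the fitted results object can also produce out-of-sample forecasts. A minimal sketch, assuming arma_res from the fit above:

# Point forecasts for the 12 months after the end of the sample
print(arma_res.forecast(steps=12))

# Forecasts with confidence intervals
pred = arma_res.get_forecast(steps=12)
print(pred.conf_int().head())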
jmhsi/justin_tinker
data_science/lendingclub_bak/dataprep_and_modeling/0.2.0_RF_regressor_no_weighting.ipynb
apache-2.0
platform = 'lendingclub' store = pd.HDFStore( '/Users/justinhsi/justin_tinkering/data_science/lendingclub/{0}_store.h5'. format(platform), append=True) loan_info = store['train_filtered_columns'] columns = loan_info.columns.values # checking dtypes to see which columns need one hotting, and which need null or not to_one_hot = [] to_null_or_not = [] do_nothing = [] for col in columns: if loan_info[col].dtypes == np.dtype('O'): print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict()) to_one_hot.append(col) elif len(loan_info[col].isnull().value_counts(dropna=False)) > 1: print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict()) to_null_or_not.append(col) else: print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict()) do_nothing.append(col) """ Explanation: DO NOT FORGET TO DROP ISSUE_D AFTER PREPPING End of explanation """ standardized, eval_cols, mean_series, std_dev_series = data_prep.process_data_train( loan_info) """ Explanation: Until I figure out a good imputation method (e.g. bayes PCA), just drop columns with null still End of explanation """ regr = RandomForestRegressor( n_estimators=20, random_state=0, max_features=10, min_samples_split=20, min_samples_leaf=10, n_jobs=-1, ) regr.fit(standardized, eval_cols) # dump the model joblib.dump(regr, 'model_dump/model_0.2.0.pkl') # joblib.dump((mean_series, std_dev_series), 'model_dump/mean_stddev.pkl') regr.score(standardized, eval_cols) now = time.strftime("%Y_%m_%d_%Hh_%Mm_%Ss") # info to stick in detailed dataframe describing each model model_info = {'model_version': '0.2.0', 'target': 'npv_roi_10', 'weights': 'None', 'algo_model': 'RF_regr', 'hyperparams': "n_estimators=20,random_state=0,max_features=10,min_samples_split=20,min_samples_leaf=10,n_jobs=-1", 'cost_func': 'sklearn default, which I think is mse', 'useful_notes': 'R2 score of .199350 (regr.score())', 'date': now} model_info_df = pd.DataFrame(model_info, index = ['0.2.0']) store.open() store.append( 'model_info', model_info_df, data_columns=True, index=True, append=True, ) store.close() """ Explanation: straight up out of box elastic net with slightly tweaked alpha End of explanation """ store.open() test = store['test_filtered_columns'] train = store['train_filtered_columns'] loan_npv_rois = store['loan_npv_rois'] default_series = test['target_strict'] results = store['results'] store.close() train_X, train_y = data_prep.process_data_test(train) train_y = train_y['npv_roi_10'].values test_X, test_y = data_prep.process_data_test(test) test_y = test_y['npv_roi_10'].values regr = joblib.load('model_dump/model_0.2.0.pkl') regr_version = '0.2.0' test_yhat = regr.predict(test_X) train_yhat = regr.predict(train_X) test_mse = np.sum((test_yhat - test_y)**2)/len(test_y) train_mse = np.sum((train_yhat - train_y)**2)/len(train_y) def eval_models(trials, port_size, available_loans, regr, regr_version, test, loan_npv_rois, default_series): results = {} pct_default = {} test_copy = test.copy() for trial in tqdm_notebook(np.arange(trials)): loan_ids = np.random.choice( test_copy.index.values, available_loans, replace=False) loans_to_pick_from = test_copy.loc[loan_ids, :] scores = regr.predict(loans_to_pick_from) scores_series = pd.Series(dict(zip(loan_ids, scores))) scores_series.sort_values(ascending=False, inplace=True) picks = scores_series[:900].index.values results[trial] = loan_npv_rois.loc[picks, :].mean().to_dict() pct_default[trial] = (default_series.loc[picks].sum()) / port_size pct_default_series = pd.Series(pct_default) results_df = 
pd.DataFrame(results).T results_df['pct_def'] = pct_default_series return results_df # as per done with baseline models, say 3000 loans available # , pick 900 of them trials = 20000 port_size = 900 available_loans = 3000 model_results = eval_models(trials, port_size, available_loans, regr, regr_version, test_X, loan_npv_rois, default_series) multi_index = [] for col in model_results.columns.values: multi_index.append((col,regr_version)) append_results = model_results append_results.columns = pd.MultiIndex.from_tuples(multi_index, names = ['discount_rate', 'model']) try: results = results.join(append_results) except ValueError: results.loc[:, (slice(None), slice('0.2.0','0.2.0'))] = append_results results.sort_index(axis=1, inplace = True) store.open() store['results'] = results model_info = store['model_info'] store.close() results.describe() model_info """ Explanation: Examine performance on test set End of explanation """
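As a follow-up sanity check on the fitted model itself, it is worth seeing which inputs the forest actually leans on. This is a sketch under an assumption: that standardized is a pandas DataFrame whose columns are the features regr was trained on (if it is a bare numpy array, substitute your own list of feature names).

importances = pd.Series(regr.feature_importances_, index=standardized.columns)
print(importances.sort_values(ascending=False).head(15))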
SlipknotTN/udacity-deeplearning-nanodegree
reinforcement/Q-learning-cart.ipynb
mit
import gym import tensorflow as tf import numpy as np """ Explanation: Deep Q-learning In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible. We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game. End of explanation """ # Create the Cart-Pole game environment env = gym.make('CartPole-v0') """ Explanation: Note: Make sure you have OpenAI Gym cloned into the same directory with this notebook. I've included gym as a submodule, so you can run git submodule --init --recursive to pull the contents into the gym repo. End of explanation """ env.reset() rewards = [] for _ in range(100): env.render() state, reward, done, info = env.step(env.action_space.sample()) # take a random action rewards.append(reward) if done: rewards = [] env.reset() """ Explanation: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1. Run the code below to watch the simulation run. End of explanation """ print(rewards[-20:]) """ Explanation: To shut the window showing the simulation, use env.close(). If you ran the simulation above, we can look at the rewards: End of explanation """ class QNetwork: def __init__(self, learning_rate=0.01, state_size=4, action_size=2, hidden_size=10, name='QNetwork'): # state inputs to the Q-network with tf.variable_scope(name): self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs') # One hot encode the actions to later choose the Q-value for the action self.actions_ = tf.placeholder(tf.int32, [None], name='actions') one_hot_actions = tf.one_hot(self.actions_, action_size) # Target Q values for training self.targetQs_ = tf.placeholder(tf.float32, [None], name='target') # ReLU hidden layers self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size) self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size) # Linear output layer self.output = tf.contrib.layers.fully_connected(self.fc2, action_size, activation_fn=None) ### Train with loss (targetQ - Q)^2 # output has length 2, for two actions. This next line chooses # one value from output (per row) according to the one-hot encoded actions. self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1) self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q)) self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss) """ Explanation: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right. 
Q-Network We train our Q-learning agent using the Bellman Equation: $$ Q(s, a) = r + \gamma \max{Q(s', a')} $$ where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$. Before we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the position and velocity of the pole. These are all real-valued numbers, so ignoring floating point precisions, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function. <img src="assets/deep-q-learning.png" width=450px> Now, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers. <img src="assets/q-network.png" width=550px> As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$. For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights. Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out. End of explanation """ from collections import deque class Memory(): def __init__(self, max_size = 1000): self.buffer = deque(maxlen=max_size) def add(self, experience): self.buffer.append(experience) def sample(self, batch_size): idx = np.random.choice(np.arange(len(self.buffer)), size=batch_size, replace=False) return [self.buffer[ii] for ii in idx] """ Explanation: Experience replay Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on. Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maxmium capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those. Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer. 
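To make that eviction behaviour concrete, here is a tiny self-contained demo of a bounded deque (illustration only, not part of the agent):

from collections import deque

demo = deque(maxlen=3)
for experience in range(5):
    demo.append(experience)
    print(list(demo))   # once full, the oldest entry is pushed out the other end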
End of explanation """ train_episodes = 1000 # max number of episodes to learn from max_steps = 200 # max steps in an episode gamma = 0.99 # future reward discount # Exploration parameters explore_start = 1.0 # exploration probability at start explore_stop = 0.01 # minimum exploration probability decay_rate = 0.0001 # exponential decay rate for exploration prob # Network parameters hidden_size = 64 # number of units in each Q-network hidden layer learning_rate = 0.0001 # Q-network learning rate # Memory parameters memory_size = 10000 # memory capacity batch_size = 20 # experience mini-batch size pretrain_length = batch_size # number experiences to pretrain the memory tf.reset_default_graph() mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate) """ Explanation: Exploration - Exploitation To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy. At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training. Q-Learning training algorithm Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent: Initialize the memory $D$ Initialize the action-value network $Q$ with random weights For episode = 1, $M$ do For $t$, $T$ do With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$ Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$ Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$ Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$ Set $\hat{Q}j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max{a'}{Q(s'_j, a')}$ Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$ endfor endfor Hyperparameters One of the more difficult aspects of reinforcememt learning are the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation. 
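Before training, it can also help to see how quickly this $\epsilon$-greedy schedule moves from exploring to exploiting. The sketch below simply plots the same decay formula the training loop uses, with the exploration hyperparameters defined above:

%matplotlib inline
import matplotlib.pyplot as plt

steps = np.arange(0, 50000)
eps_schedule = explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * steps)
plt.plot(steps, eps_schedule)
plt.xlabel('Training step')
plt.ylabel('Exploration probability')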
End of explanation """ # Initialize the simulation env.reset() # Take one random step to get the pole and cart moving state, reward, done, _ = env.step(env.action_space.sample()) memory = Memory(max_size=memory_size) # Make a bunch of random actions and store the experiences for ii in range(pretrain_length): # Uncomment the line below to watch the simulation env.render() # Make a random action action = env.action_space.sample() next_state, reward, done, _ = env.step(action) if done: # The simulation fails so no next state next_state = np.zeros(state.shape) # Add experience to memory memory.add((state, action, reward, next_state)) # Start new episode env.reset() # Take one random step to get the pole and cart moving state, reward, done, _ = env.step(env.action_space.sample()) else: # Add experience to memory memory.add((state, action, reward, next_state)) state = next_state """ Explanation: Populate the experience memory Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game. End of explanation """ # Now train with experiences saver = tf.train.Saver() rewards_list = [] with tf.Session() as sess: # Initialize variables sess.run(tf.global_variables_initializer()) step = 0 for ep in range(1, train_episodes): total_reward = 0 t = 0 while t < max_steps: step += 1 # Uncomment this next line to watch the training env.render() # Explore or Exploit explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step) if explore_p > np.random.rand(): # Make a random action action = env.action_space.sample() else: # Get action from Q-network feed = {mainQN.inputs_: state.reshape((1, *state.shape))} Qs = sess.run(mainQN.output, feed_dict=feed) action = np.argmax(Qs) # Take action, get new state and reward next_state, reward, done, _ = env.step(action) total_reward += reward if done: # the episode ends so no next state next_state = np.zeros(state.shape) t = max_steps print('Episode: {}'.format(ep), 'Total reward: {}'.format(total_reward), 'Training loss: {:.4f}'.format(loss), 'Explore P: {:.4f}'.format(explore_p)) rewards_list.append((ep, total_reward)) # Add experience to memory memory.add((state, action, reward, next_state)) # Start new episode env.reset() # Take one random step to get the pole and cart moving state, reward, done, _ = env.step(env.action_space.sample()) else: # Add experience to memory memory.add((state, action, reward, next_state)) state = next_state t += 1 # Sample mini-batch from memory batch = memory.sample(batch_size) states = np.array([each[0] for each in batch]) actions = np.array([each[1] for each in batch]) rewards = np.array([each[2] for each in batch]) next_states = np.array([each[3] for each in batch]) # Train network target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states}) # Set target_Qs to 0 for states where episode ends episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1) target_Qs[episode_ends] = (0, 0) targets = rewards + gamma * np.max(target_Qs, axis=1) loss, _ = sess.run([mainQN.loss, mainQN.opt], feed_dict={mainQN.inputs_: states, mainQN.targetQs_: targets, mainQN.actions_: actions}) saver.save(sess, "checkpoints/cartpole.ckpt") """ Explanation: Training Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. 
But, it's cool to watch the agent get better at the game. End of explanation """ %matplotlib inline import matplotlib.pyplot as plt def running_mean(x, N): cumsum = np.cumsum(np.insert(x, 0, 0)) return (cumsum[N:] - cumsum[:-N]) / N eps, rews = np.array(rewards_list).T smoothed_rews = running_mean(rews, 10) plt.plot(eps[-len(smoothed_rews):], smoothed_rews) plt.plot(eps, rews, color='grey', alpha=0.3) plt.xlabel('Episode') plt.ylabel('Total Reward') """ Explanation: Visualizing training Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue. End of explanation """ test_episodes = 10 test_max_steps = 400 env.reset() with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) for ep in range(1, test_episodes): t = 0 while t < test_max_steps: env.render() # Get action from Q-network feed = {mainQN.inputs_: state.reshape((1, *state.shape))} Qs = sess.run(mainQN.output, feed_dict=feed) action = np.argmax(Qs) # Take action, get new state and reward next_state, reward, done, _ = env.step(action) if done: t = test_max_steps env.reset() # Take one random step to get the pole and cart moving state, reward, done, _ = env.step(env.action_space.sample()) else: state = next_state t += 1 env.close() """ Explanation: Testing Let's checkout how our trained agent plays the game. End of explanation """
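Watching it play is fun, but a quick quantitative check is also useful. The sketch below rebuilds the environment, reuses the saved checkpoint and the mainQN network from above, and records how many steps the greedy policy survives in each of ten test episodes:

env = gym.make('CartPole-v0')
episode_lengths = []
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    for ep in range(10):
        state = env.reset()
        t, done = 0, False
        while not done and t < 400:
            # Always take the greedy action from the Q-network
            feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
            action = np.argmax(sess.run(mainQN.output, feed_dict=feed))
            state, reward, done, _ = env.step(action)
            t += 1
        episode_lengths.append(t)
env.close()
print('Mean steps survived over 10 greedy episodes:', np.mean(episode_lengths))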
hadrianpaulo/project_deathstar
analytics/Classification_cycle_1.ipynb
mit
df_train = pd.DataFrame() # MNCHN df_train['body'] = df_mnchn['body'].append(df_mnchn['Final Keywords']) df_train['label'] = 1 # Adolescent df_train = df_train.append(pd.DataFrame({ 'body': df_adolescent['body'].append(df_adolescent['Final Keywords']), 'label': 2 })) # Geriatrics df_train = df_train.append(pd.DataFrame({ 'body': df_geriatric['body'].append(df_geriatric['Final Keywords']), 'label': 3 })) # Special Populations df_train = df_train.append(pd.DataFrame({ 'body': df_specpop['body'].append(df_specpop['Final Keywords']), 'label': 4 })) df_train.reset_index(drop=True, inplace=True) # Other Disregard atm # df_train = df_train.append(pd.DataFrame({ # 'body': df_specpop['body'].append(df_specpop['Final Keywords']), # 'label': 4 # set.difference(set(df.title),set(df_mnchn.AO).union( # set(df_adolescent.AO)).union( # set(df_geriatric.AO)).union( # set(df_specpop.AO))) # })) """ Explanation: Labels: MNCHN = 1 Adolescent = 2 Geriatrics = 3 Special Populations = 4 Other = 5 but disregard for 1st cycle End of explanation """ from nltk.corpus import stopwords from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline tfid_params = { 'stop_words':stopwords.words(), 'ngram_range': (1,4), 'strip_accents':'ascii', } text_clf = Pipeline([('vect_tfid', TfidfVectorizer(**tfid_params)), ('clf', MultinomialNB()), ]) model_cycle_1 = text_clf.fit(df_train.body, df_train.label) results = pd.DataFrame(model_cycle_1.predict_proba(df.body), columns=themes) results['AO'] = df.title for theme in themes: results.sort_values(by=theme, ascending=False)[:40][['AO', theme]].to_csv(theme+'_cycle1_results.csv', index=False) df_mnchn.drop('body', axis=1).to_csv('mnchn_cycle1_keywords.csv', index=False) df_geriatric.drop('body', axis=1).to_csv('geriatric_cycle1_keywords.csv', index=False) df_adolescent.drop('body', axis=1).to_csv('adolescent_cycle1_keywords.csv', index=False) df_specpop.drop('body', axis=1).to_csv('specpop_cycle1_keywords.csv', index=False) """ Explanation: Classification Pipeline End of explanation """
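For a rough sanity check on what the classifier latched onto, you can peek at the highest-weighted terms per theme. A hedged sketch, assuming the fitted model_cycle_1 pipeline from above; note that on newer scikit-learn releases get_feature_names() has been renamed get_feature_names_out().

import numpy as np

vect = model_cycle_1.named_steps['vect_tfid']
clf = model_cycle_1.named_steps['clf']
feature_names = np.array(vect.get_feature_names())  # get_feature_names_out() on newer sklearn

for label, log_probs in zip(clf.classes_, clf.feature_log_prob_):
    top_terms = feature_names[np.argsort(log_probs)[-10:]]
    print(label, list(top_terms))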
projectmesa/mesa-examples
examples/PD_Grid/Demographic Prisoner's Dilemma Activation Schedule.ipynb
apache-2.0
from pd_grid import PD_Model import random import numpy as np import matplotlib.pyplot as plt import matplotlib.gridspec %matplotlib inline """ Explanation: Demographic Prisoner's Dilemma The Demographic Prisoner's Dilemma is a family of variants on the classic two-player Prisoner's Dilemma, first developed by Joshua Epstein. The model consists of agents, each with a strategy of either Cooperate or Defect. Each agent's payoff is based on its strategy and the strategies of its spatial neighbors. After each step of the model, the agents adopt the strategy of their neighbor with the highest total score. The specific variant presented here is adapted from the Evolutionary Prisoner's Dilemma model included with NetLogo. Its payoff table is a slight variant of the traditional PD payoff table: <table> <tr><td></td><td>**Cooperate**</td><td>**Defect**</td></tr> <tr><td>**Cooperate**</td><td>1, 1</td><td>0, *D*</td></tr> <tr><td>**Defect**</td><td>*D*, 0</td><td>0, 0</td></tr> </table> Where D is the defection bonus, generally set higher than 1. In these runs, the defection bonus is set to $D=1.6$. The Demographic Prisoner's Dilemma demonstrates how simple rules can lead to the emergence of widespread cooperation, despite the Defection strategy dominiating each individual interaction game. However, it is also interesting for another reason: it is known to be sensitive to the activation regime employed in it. Below, we demonstrate this by instantiating the same model (with the same random seed) three times, with three different activation regimes: Sequential activation, where agents are activated in the order they were added to the model; Random activation, where they are activated in random order every step; Simultaneous activation, simulating them all being activated simultaneously. End of explanation """ bwr = plt.get_cmap("bwr") def draw_grid(model, ax=None): ''' Draw the current state of the grid, with Defecting agents in red and Cooperating agents in blue. ''' if not ax: fig, ax = plt.subplots(figsize=(6,6)) grid = np.zeros((model.grid.width, model.grid.height)) for agent, x, y in model.grid.coord_iter(): if agent.move == "D": grid[y][x] = 1 else: grid[y][x] = 0 ax.pcolormesh(grid, cmap=bwr, vmin=0, vmax=1) ax.axis('off') ax.set_title("Steps: {}".format(model.schedule.steps)) def run_model(model): ''' Run an experiment with a given model, and plot the results. ''' fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(231) ax2 = fig.add_subplot(232) ax3 = fig.add_subplot(233) ax4 = fig.add_subplot(212) draw_grid(model, ax1) model.run(10) draw_grid(model, ax2) model.run(10) draw_grid(model, ax3) model.datacollector.get_model_vars_dataframe().plot(ax=ax4) # Set the random seed seed = 21 """ Explanation: Helper functions End of explanation """ random.seed(seed) m = PD_Model(50, 50, "Sequential") run_model(m) """ Explanation: Sequential Activation End of explanation """ random.seed(seed) m = PD_Model(50, 50, "Random") run_model(m) """ Explanation: Random Activation End of explanation """ random.seed(seed) m = PD_Model(50, 50, "Simultaneous") run_model(m) """ Explanation: Simultaneous Activation End of explanation """
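To compare the three regimes with a number as well as a picture, the small sketch below counts the fraction of agents not currently playing Defect on the grid of the most recent run (it reuses the coord_iter pattern from draw_grid above; here m is the simultaneous-activation model):

def cooperating_fraction(model):
    ''' Fraction of agents whose current move is not "D", i.e. Cooperate. '''
    moves = [agent.move for agent, x, y in model.grid.coord_iter()]
    return sum(1 for move in moves if move != "D") / float(len(moves))

print("Cooperating fraction at the end of the run:", cooperating_fraction(m))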
ES-DOC/esdoc-jupyterhub
notebooks/hammoz-consortium/cmip6/models/sandbox-2/ocnbgchem.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-2', 'ocnbgchem') """ Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era: CMIP6 Institute: HAMMOZ-CONSORTIUM Source ID: SANDBOX-2 Topic: Ocnbgchem Sub-Topics: Tracers. Properties: 65 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:03 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean biogeochemistry model code (PISCES 2.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s) """ Explanation: 1.4. Elemental Stoichiometry Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe elemental stoichiometry (fixed, variable, mix of the two) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Elemental Stoichiometry Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe which elements have fixed/variable stoichiometry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all prognostic tracer variables in the ocean biogeochemistry component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all diagnotic tracer variables in the ocean biogeochemistry component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.damping') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Damping Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any tracer damping used (such as artificial correction or relaxation to climatology,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for passive tracers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for passive tracers (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for biology sources and sinks End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for biology sources and sinks (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transport scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Use that of ocean model" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Transport scheme used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Use Different Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Decribe transport scheme if different than that of ocean model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Atmospheric Chemistry model" # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how atmospheric deposition is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Land Surface model" # TODO - please enter value(s) """ Explanation: 5.2. River Input Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river input is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Sediments From Boundary Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are speficied from boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Sediments From Explicit Model Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are speficied from explicit sediment model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry * 6.1. CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.2. CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe CO2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.3. O2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is O2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. O2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe O2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.5. DMS Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is DMS gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. DMS Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify DMS gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.7. N2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.8. N2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.9. N2O Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2O gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.10. N2O Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2O gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.11. CFC11 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC11 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.12. CFC11 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC11 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.13. CFC12 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC12 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.14. CFC12 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC12 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.15. SF6 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is SF6 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.16. 
SF6 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify SF6 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.17. 13CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 13CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.18. 13CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 13CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.19. 14CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 14CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.20. 14CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 14CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.21. Other Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any other gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how carbon chemistry is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.2. PH Scale Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, describe pH scale. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Constants If Not OMIP Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, list carbon chemistry constants. 
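For illustration only — the constant set named here is a hypothetical placeholder, not a recommendation — a free-text property like this one is completed with a single string, in the same way as the other properties:

```python
# hypothetical example of completing the cell above
DOC.set_value("K1, K2 from Lueker et al. (2000); KB from Dickson (1990)")
```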
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Tracers Ocean biogeochemistry tracers 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of tracers in ocean biogeochemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.2. Sulfur Cycle Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sulfur cycle modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrogen (N)" # "Phosphorous (P)" # "Silicium (S)" # "Iron (Fe)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Nutrients Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List nutrient species present in ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrates (NO3)" # "Amonium (NH4)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Nitrous Species If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous species. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dentrification" # "N fixation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.5. Nitrous Processes If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous processes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Definition of upper trophic level (e.g. based on size) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Upper Trophic Levels Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Define how upper trophic level are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "PFT including size based (specify both below)" # "Size based only (specify below)" # "PFT only (specify below)" # TODO - please enter value(s) """ Explanation: 10. 
Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of phytoplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Diatoms" # "Nfixers" # "Calcifiers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. Pft Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton functional types (PFT) (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microphytoplankton" # "Nanophytoplankton" # "Picophytoplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "Size based (specify below)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of zooplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microzooplankton" # "Mesozooplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Zooplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there bacteria representation ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Labile" # "Semi-labile" # "Refractory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Lability Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe treatment of lability in dissolved organic matter End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diagnostic" # "Diagnostic (Martin profile)" # "Diagnostic (Balast)" # "Prognostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is particulate carbon represented in ocean biogeochemistry? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "POC" # "PIC (calcite)" # "PIC (aragonite" # "BSi" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, type(s) of particulate matter taken into account End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "No size spectrum used" # "Full size spectrum" # "Discrete size classes (specify which below)" # TODO - please enter value(s) """ Explanation: 13.3. Size If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.4. Size If Discrete Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic and discrete size, describe which size classes are used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Function of particule size" # "Function of particule type (balast)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Sinking Speed If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, method for calculation of sinking speed of particules End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "C13" # "C14)" # TODO - please enter value(s) """ Explanation: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. Carbon Isotopes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which carbon isotopes are modelled (C13, C14)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.2. Abiotic Carbon Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is abiotic carbon modelled ? 
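Boolean properties such as this one take a bare Python value rather than a quoted string, for example (illustrative only):

```python
# hypothetical example of completing the cell above
DOC.set_value(True)
```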
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Prognostic" # "Diagnostic)" # TODO - please enter value(s) """ Explanation: 14.3. Alkalinity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is alkalinity modelled ? End of explanation """
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session11/Day1/InvestigatingDetectors.ipynb
mit
from astropy.io import fits import numpy as np import matplotlib import matplotlib.pyplot as plt matplotlib.rcParams['figure.dpi'] = 120 """ Explanation: Investigating Detectors Version 0.1 Understanding the behavior of the CCDs in a camera requires digging deep into calibration exposures. That is where you can uncover effects that might not be noticeable in on-sky exposures, but may subtly contaminate the data if left uncorrected. It is also how camera engineering teams optimize and debug the performance of the camera when it's still in the lab. We're going to look at two test exposures taken with one of the Rubin Observatory CCDs. They're both biases; each image has a zero-second exposure time and the detector was not illuminated. Please download a tarball of the images for this notebook: investigating_detectors.tar.gz. As a reminder, you can unpack these files via tar -zxvf investigating_detectors.tar.gz By C Slater (University of Washington) End of explanation """ def simulated_image(signal_level, read_noise, gain): """ Return a 1-D simulated "image" with the noise properties of a CCD sensor. The image is always 1000 pixels long. signal_level is the mean number of electrons in each pixel. read_noise is the noise of the readout amplifier, in electrons. gain is the number of electrons per ADU. """ return (1/gain) * (read_noise*np.random.randn(1000) + np.random.poisson(signal_level, size=1000)) """ Explanation: Photon Transfer Curve 1) Simulated Images The "Photon Transfer Curve" is the name given to the relationship between the signal level and the noise level in a sensor. We're going to do a few experiments to show how it works in principle, and then we'll look at some real images and make some diagnostic measurements. First we need a model of the noise in a CCD image. I'm going to give this to you so we all start out on the same page. End of explanation """ # Question noise_level_list = # complete measured_signal_level_list = # complete input_signal_levels = # complete for # complete # complete # complete noise_levels = # complete measured_signal_levels = # complete """ Explanation: Before diving into programming, take a careful look at the components in the simulated image. What are the two noise sources, and why do they have that functional form? We're going to be looking a lot at the image "gain"; does it make sense how that is applied? Let's make some simulations. What we want to do is loop over a set of input light levels, from zero to "full well" capacity (on the order of 10,000 electrons). For each simulated image, we want to measure the mean signal level (because that's what we see as users of a CCD) and the standard deviation of that image. Save those in two lists, but at the end convert those back to numpy arrays to make downstream usage easier. For right now, set the read noise to 5, and the gain to 0.8. End of explanation """ # Question # Complete plt.ylabel("RMS") plt.xlabel("Measured Signal Level") """ Explanation: Plot the noise vs. the measured signal level, on a log-log plot. End of explanation """ # Question plt.plot( # Complete plt.ylabel("Variance") plt.xlabel("Measured Signal Level") """ Explanation: What is the behavior you see? What are the two different noise regimes? Fit a straight line to the "bright" portion of the data (high signal levels) and print the resulting coefficients. Remember that you're looking at a log-log plot, and so you want to fit the logs of the variables. You can add this to the plot in the cell above.
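If it helps to see the shape of a solution, here is one possible sketch (not the only way to do it; the signal-level grid and the cutoff used to select the bright end are arbitrary choices):

```python
# Sketch: loop over input levels, recording the mean and RMS of each simulated image
input_signal_levels = np.linspace(100, 10000, 50)
noise_level_list = []
measured_signal_level_list = []
for level in input_signal_levels:
    img = simulated_image(level, read_noise=5, gain=0.8)
    measured_signal_level_list.append(np.mean(img))
    noise_level_list.append(np.std(img))
noise_levels = np.array(noise_level_list)
measured_signal_levels = np.array(measured_signal_level_list)

# Fit the bright end in log-log space; the slope you get is the subject of the next question
bright = measured_signal_levels > 1000
coeffs = np.polyfit(np.log10(measured_signal_levels[bright]),
                    np.log10(noise_levels[bright]), 1)
print(coeffs)
```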
Why does the line have that value of the slope? Now we're going to plot something slightly different. Plot the variance this time, and on a linear plot instead of log-log (again vs. measured signal level). Fit a straight line to the data (in linear space) and print the coefficients. Also print the reciprocal of the slope. Where did this slope come from? End of explanation """ bias1_file = fits.open("00258334360-S10-det003.fits") bias1_data = bias1_file[1].data # Question plt.imshow( # complete """ Explanation: The slope here is related to the gain (either proportionally or inversely, depending on how one chooses to define gain). This can be summarized as $$ \frac{1}{\textrm{gain}} = \langle \frac{\textrm{Variance}}{\textrm{Mean Signal Level}} \rangle $$ It's a clever and useful trick, or at least it seems like a trick, because the standard deviation plot wasn't affected by the gain at all. Go back and try varying the gain and re-run the plots, and you'll see what does and doesn't change. One way to think of it is that the measured signal level is affected linearly by the gain, but the variance is affected by the square of the gain. Dividing these two gives you a linear relation back, but when dividing the square root of the variance, the gain cancels out. 2) Looking at real bias frames Remember that a bias frame is an image that is exposed for zero seconds; it's just immediately read-out without being exposed to light. You might think that is a pretty boring image, particularly if you're at the telescope and getting ready for a night of observing. But to a camera engineer, bias frames hold lots of information about how the camera is operating. We're going to look at images from one example LSST sensor; this was taken on a test stand and not the actual camera, so don't take it as representative of real camera performance. Our first step is, as usual, to look at the image and make sure it seems reasonable. End of explanation """ # complete """ Explanation: Notice that when we plotted bias1_file[1].data, the image we get is 2048 by 576 pixels. Because LSST sensors have 16 separate amplifiers, the data from each one of them is put in a different "header data unit" (HDU) in the FITS file. You can get to them by substituting n in bias1_file[n], where n is the amplifier number. 3) Looking for structure The bias looks mostly like Gaussian noise, but if you look carefully some parts of the image look like they have some "structure". Let's make a few plots: try plotting the mean of the data along columns in one plot, and along rows in another. Start with just a single amplifier, but if you like you can learn more by plotting each amplifier as a different line. Hint: the amplifiers each have different mean levels that you probably want to subtract off. End of explanation """ bias2_file = fits.open("00258334672-S10-det003.fits") # Question measured_stddevs = {} for hdu in # complete hdu_difference = # complete stddev = # complete measured_stddevs[ # comple measured_stddevs """ Explanation: These "simple" bias frames turn out to have a lot of structure in them, particularly at the start of columns. This isn't something we can dive much further into, because it's really an electronics problem (that was known about at the time). It's also worth noting that it's fractionally a small effect. We will have to make sure our subsequent analyses are not affected by the issue though. 
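As a starting point for the structure plots discussed above, one minimal sketch (the choice of HDU and the offset subtraction are just one reasonable option) is:

```python
# Sketch: per-column mean bias level for a single amplifier
amp = bias1_file[1].data.astype(float)
amp -= np.median(amp)            # remove this amplifier's overall offset level
plt.plot(np.mean(amp, axis=0))   # rows work the same way with axis=1
plt.xlabel("column")
plt.ylabel("mean bias level (ADU)")
```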
4) Measuring the noise Bias images usually have some repeatable structure to them, so a useful trick is to use the difference of two bias frames taken close in time. Let's measure the standard deviation for the differences between the biases, doing so separately for each amplifier. This isn't the final read noise value yet, because it's still in ADU and not in electrons. We will store the results in a dictionary for later use. We load the second image: End of explanation """ flat1_file = fits.open("00258342968-S10-det003.fits") flat2_file = fits.open("00258343136-S10-det003.fits") """ Explanation: 5) Measuring the gain We have just one more step before we can report the read noise. We need to measure the gains so we can convert the noise in ADU into electrons. To do that, we're going to use the trick we saw at the start of this notebook. We need to add two things though: we want to use pairs of images, to cancel out any fixed spatial patterns, and we need images with significant counts in them so that we're not just measuring read noise. The formula we want to implement is thus: $$ \frac{1}{\textrm{gain}} = \langle \frac{(I_1 - I_2)^2}{I_1 + I_2} \rangle $$ where $I_1$ and $I_2$ are the pixel values from each image, and the $\langle$ $\rangle$ brackets denote taking the mean of this ratio over all pixels. We have some flat field images from those same sensors that we can use: End of explanation """ # Question for hdu in range(1,16): flat1_data = flat1_file[ # complete flat2_data = flat2_file[ # complete debiased_flat1 = # complete debiased_flat2 = # complete squared_noise = # complete summed_intensity = # complete # Some pixels with low counts are likely artifacts and can skew the measurement. # It helps to only keep pixels that have significant flux; you can experiment with this cutoff ok_values = (summed_intensity > # complete # Remember that as we defined gain above, the formula returns 1/gain. reciprocal_gain = # complete print(hdu, reciprocal_gain, 1/reciprocal_gain) """ Explanation: Since each amplifier can have a slightly different gain, we want to run this per-HDU and output a table of values. Since we're looping over the HDUs, we can also print the finished read noise values at the same time. Note that those have a factor of $\sqrt{2}$ because we took the difference of two bias frames, so the noise is greater than a single image. End of explanation """
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session11/Day3/GalaxyPhotometryAndShapes.ipynb
mit
# Load the packages we will use import numpy as np import astropy.io.fits as pf import astropy.coordinates as co from matplotlib import pyplot as pl import scipy.fft as fft %matplotlib inline """ Explanation: Practice with galaxy photometry and shape measurement To accompany galaxy-measurement lecture from the LSSTC Data Science Fellowship Program, July 2020. All questions and corrections can be directed to me at [email protected] Enjoy! Gary Bernstein, 16 July 2020 End of explanation """ def addBackground(image, variance): # Add Gaussian noise with given variance to each pixel of the image image += np.random.normal(scale=np.sqrt(variance),size=image.shape) return n_pix = 64 xy=np.indices( (n_pix,n_pix),dtype=float) x = xy[1].copy()- n_pix/2 y = xy[0].copy()- n_pix/2 pl.imshow(x,origin='lower',interpolation='nearest') pl.title("This is a plot of x coordinate") pl.colorbar() # Here is our elliptical exponential galaxy drawing function # It is always centered on the pixel just above right of the image center. def drawDisk(r0=4.,flux=1.,e=0.,sigma_psf=3.,n_pix=n_pix): # n_pix must be even. # Build arrays holding the (ky,kx) values # irfft2 wants array of this shape: tmp = np.ones((n_pix,n_pix//2+1),dtype=float) freqs = np.arange(-n_pix//2,n_pix//2) freqs = (2 * np.pi / n_pix)*np.roll(freqs,n_pix//2) kx = tmp * freqs[:n_pix//2+1] ky = tmp * freqs[:,np.newaxis] # Calculate the FT of the PSF ft = np.exp( (kx*kx+ky*ky)*(-sigma_psf*sigma_psf/2.)) # Produce the FT of the exponential - for the circular version, # it's (1+k^2 r_0^2)**(-3/2) # factors to "ellipticize" and scale the k's: a = np.power((1+e)/(1-e),0.25) ksqp1 = np.square(r0*kx*a) + np.square(r0*ky/a) + 1 ft *= flux / (ksqp1*np.sqrt(ksqp1)) # Now FFT back to real space img = fft.irfft2(ft) # And roll the origin to the center return np.roll(img, (n_pix//2,n_pix//2),axis=(0,1)) # As a test, let's draw an image with a small PSF size and # see if it really is exponential. # With e>0, it should be extended along x axis r0=4. img = drawDisk(e=0.2,flux=1e5,sigma_psf=3.,r0=r0) pl.imshow(img,origin='lower',interpolation='nearest') pl.title("Is it stretched along x?") # And also a plot of log(flux) vs x or y should look linear pl.figure() pl.plot(np.arange(-32,32)/r0,np.log(img[:,32]),label='Y') pl.plot(np.arange(-32,32)/r0,np.log(img[32,:]),label='X') pl.legend() pl.title("Are the lines straight and near unity slope?") pl.xlabel("(x or y)/r0") pl.ylabel("log(I)") pl.grid() """ Explanation: Useful tools For our galaxy measurement practice, we'll be testing out some of our techniques on exponential profile galaxies, which are define by $$ I(x,y) \propto e^{-r/r_0},$$ where $r_0$ is the "scale length," and we'll allow our galaxy to potentially be elliptical shaped by setting $$ r^2 = (1-e^2) \left[ \frac{(x-x_0)^2}{1-e} + \frac{(y-y_0)^2}{1+e}\right].$$ To reduce the complexity of our problem, I'm only letting the galaxy have the $e_+$ form of ellipticity, where $e>0$ ($e<0$) means the galaxy is stretched along the $x$ ($y$) axis. We're also going to assume that our galaxy is viewed through a circular Gaussian PSF: $$ T(x,y) \propto e^{-(x^2+y^2)/2\sigma_{\rm PSF}^2}.$$ The function drawDisk below is provided to draw an image of an elliptical exponential galaxy as convolved with a Gaussian PSF. You don't have to understand how it works to do these exercises. 
But you might be interested (since this is how the GalSim galaxy simulation package works): the galaxy and the PSF are first "drawn" in Fourier space, and then multiplied, since a convolution in real space is multiplication in Fourier space (which is much faster). Then we use a Fast Fourier Transform (FFT) to get our image back in real space. I also include in this notebook two helpful things from the astrometry notebook: * The function addBackground which will add background noise of a chosen level (denoted as $n$ in the lecture notes) to any image. * The x and y arrays that give the location values of each pixel. In this set of exercises, we'll work exclusively with 64x64 images. Also I am going to redefine the coordinate system so that $(x,y)=(0,0)$ is actually at element [32,32] of the array. End of explanation """ r0 = 4. e = 0. flux = 1e4 sigma_psf = 2. # your work here... """ Explanation: Exercise 1: Aperture photometry Here we'll try out a few forms of aperture photometry and see how they compare in terms of the S/N ratios they provide on the galaxy flux. (a) Write a function tophat_flux(img,R) which implements a simple tophat aperture sum of flux in all pixels within radius R of the center of the galaxy. We will keep the center of our galaxy fixed at pixel [32,32] so you don't have to worry about iterating to find the centroid. Draw a noiseless version of a circular galaxy with the characteristics in the cell below. Then use your tophat_flux function to plot the "curve of growth" for this image, with R on the x axis going from 5 to 30 pixels, and the y axis showing the fraction of the total flux that falls in your aperture. How many scale radii do we need the aperture to be to miss <1% of the flux? End of explanation """ # your work here... """ Explanation: (b) Next let's add some background noise to our image, say n_bg=100. First, make one such noisy version of your galaxy and imshow it. Then, using analytic methods, estimate what the variance of your aperture flux measurements will be when R=10. * Finally, make 1000 different realizations of your noisy galaxy and measure their tophat_flux to see whether the real variance of the flux measurements matches your prediction. End of explanation """ # your work here... """ Explanation: (c) Now create a plot of the S/N level of the flux measurement vs the radius R of the aperture. Here the signal is the mean, and the noise the std deviation, of the tophat_flux of many noisy measurements of this galaxy. You can use either an analytic or numeric estimate of these quantities. Report what the optimal tophat S/N is, and what R achieves it. End of explanation """ # your work here... """ Explanation: (d) Repeat part (c), but this time use a Gaussian aperture whose width $\sigma_w$ you vary to optimize the S/N ratio of the aperture flux, i.e. a function gaussian_flux(img,sigma_w) is needed. Which performs better, the optimized tophat or the optimized Gaussian? End of explanation """ # your work here... """ Explanation: Exercise 2: Spurious color This time let's consider that we want to measure an accurate $g-r$ color for our galaxy, but the seeing is $\sigma_{\rm PSF}=2$ pixels in the $r$ image but $\sigma_{\rm PSF}=2.5$ pixels in the $g$ image. Let's see how the size of our aperture biases our color measurement. (a) Draw a noiseless $g$-band and a noiseless $r$-band image of our galaxy. Let's assume that the true color $g-r \equiv 2.5\log_10(f_r/f_g) = 0,$ i.e. that the $g$ and $r$ fluxes of the galaxy are both equal to our nominal flux. 
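One possible way to produce the two noiseless images, simply restating the assumptions above in code (the standard galaxy parameters, with the two seeing values):

```python
img_r = drawDisk(r0=4., flux=1e4, e=0., sigma_psf=2.0)   # r band: sigma_psf = 2 pixels
img_g = drawDisk(r0=4., flux=1e4, e=0., sigma_psf=2.5)   # g band: sigma_psf = 2.5 pixels
```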
Plot the difference between the two images: are they the same? End of explanation """ # your work here... """ Explanation: (b) Using either your Gaussian or your tophat aperture code, plot the measured $g-r$ color of the galaxy as a function of the size of the aperture. Since the true color is zero, this measurement is the size of the systematic error that is being made in color because of mismatched pre-seeing apertures. End of explanation """ # your work here... """ Explanation: We can see here that a naive use of "matched" apertures can cause significant spurious color, even when the aperture has a sigma that is many times that of the galaxy and PSF. But the tophat does better. So without any kind of PSF matching, we have to use algorithms with non-optimal S/N in order to approach true colors. Exercise 3: Degradation of ellipticity measurements by seeing It's hard to measure the shape of a galaxy that is not resolved by the PSF. That means that poorly-resolved galaxies are less useful for detecting weak-lensing (WL) shear. Let's see if we can quantify this by using the Fisher matrix to determine the best possible measurement accuracy on the parameter $e$ of our model (we'll make things easy by holding all other parameters of the galaxy model as fixed). Remember how the Fisher matrix works: for an image signal $I_{xy}$ and noise $\sigma_{xy}$ in each pixel, the Fisher information for a parameter $\theta$ is $$ F_{\theta\theta} = \sum_{xy} \frac{1}{\sigma^2_{xy}} \left(\frac{\partial I_{xy}}{\partial\theta}\right)^2.$$ Here we're interested in $\theta=e$. (a) Draw two versions of our standard galaxy, with $e = \pm0.01.$ Use these to calculate and plot the quantity we need, $\frac{\partial I_{xy}}{\partial e}.$ Comment on how this picture relates to the fact that we like to measure WL shear using the moment of $x^2-y^2$. End of explanation """ # your work here... """ Explanation: (b) Use this to calculate the best achievable measurement accuracy on $e$ for our standard image. End of explanation """ # your work here... """ Explanation: (c) Make a graph showing how the optimal $\sigma_e$ varies as the size $\sigma_{\rm PSF}$ of the Gaussian PSF varies from being $0.2\times r_0$ to being $3\times r_0.$. What's the lesson here? End of explanation """
liganega/Gongsu-DataSci
ref_materials/exams/2015/midterm.ipynb
gpl-3.0
def interval_point(a, b, x): if a < b: return (b-a)*x + a else: return a - (a-b)*x """ Explanation: 2015년 2학기 공업수학 중간고사 시험지 이름: 학번: 시험지 작성 요령 예제코드를 보면서 문제의 내용을 이해하도록 노력한다. 문제별로 '해야 할 일' 에서 요구하는 방향으로 변경된 코드의 빈자리를 채우거나 답을 한다. 문제 1 세 개의 숫자를 입력받는 함수 interval_point는 아래 기능을 구현한다. 숫자 a와 b는 구간의 처음과 끝을 나타낸다. 숫자 x는 0과 1 사이의 값이다. 그러면 interval_point(a, b, x)는 a에서 출발하여 x비율만큼 b 방향으로 이동할 때 갈 수 있는 위치를 되돌려준다. interval_point 함수를 다음과 같이 정의할 수 있다. End of explanation """ interval_point(0, 1, 0.5) interval_point(3, 2, 0.2) """ Explanation: 활용 예제 End of explanation """ while True: try: x = float(raw_input("Please type a new number: ")) inverse = 1.0 / x print("The inverse of {} is {}.".format(x, inverse)) break except ValueError: print("You should have given either an int or a float") except ZeroDivisionError: print("The input number is {} which cannot be inversed.".format(int(x))) """ Explanation: 해야 할 일 1 (5점) 위 코드를 if 조건문을 사용하지 않도록 수정하고자 한다. 아래 코드의 빈칸 (A)를 채워라. def interval_point_no_if(a, b, x): return (A) 문제 2 아래 코드는 오류가 발생할 경우를 대비하여 예외처리를 사용한 코드이다. End of explanation """ f = open("test.txt", 'w') f.write("1,3,5,8\n0,4,7\n1,18") f.close() """ Explanation: 해야 할 일 2 (10점) 아래 코드가 하는 일을 설명하고 발생할 수 있는 예외들을 나열하며, 예외처리를 어떻게 하는지 설명하라. 문제 3 콤마(',')로 구분된 문자들이 저장되어 있는 파일을 csv(comma separated value) 파일이라 부른다. 숫자들로만 구성된 csv 파일을 인자로 받아서 각 줄별로 포함된 숫자들과 숫자들의 합을 계산하여 보여주는 함수 print_line_sum_of_file과 관련된 문제이다. 예를 들어 test.txt 파일에 아래 내용이 들어 있다고 가정하면 아래의 결과가 나와야 한다. 1,3,5,8 0,4,7 1,18 In [1]: print_line_sum_of_file("test.txt") out[1]: 1 + 3 + 5 + 8 = 17 0 + 4 + 7 = 11 1 + 18 = 19 text.txt 파일을 생성하는 방법은 다음과 같다. End of explanation """ def print_line_sum_of_file(filename): g = open("test.txt", 'r') h = g.readlines() g.close() for line in h: sum = 0 k = line.strip().split(',') for i in range(len(k)): if i < len(k) -1: print(k[i] + " +"), else: print(k[i] + " ="), sum = sum + int(k[i]) print(sum) """ Explanation: 또한 print_line_sum_of_file을 예를 들어 다음과 같이 작성할 수 있다. End of explanation """ print_line_sum_of_file("test.txt") """ Explanation: 위 함수를 이전에 작성한 예제 파일에 적용하면 예상된 결과과 나온다. End of explanation """ def linear_1(a, b): return a + b def linear_2(a, b): return a * 2 + b """ Explanation: 해야 할 일 3 (5점) 그런데 위와 같이 정의하면 숫자가 아닌 문자열이 포함되어 있을 경우 ValueError가 발생한다. ValuError가 어디에서 발생하는지 답하라. 해야 할 일 4 (10점) 이제 데이터 파일에 숫자가 아닌 문자열이 포함되어 있을 경우도 다룰 수 있도록 print_line_sum_of_file를 수정해야 한다. 예를 들어 숫자가 아닌 문자가 포함되어 있는 단어가 있을 경우 단어의 길이를 덧셈에 추가하도록 해보자. 예제: test.txt 파일에 아래 내용이 들어 있다고 가정하면 아래 결과가 나와야 한다. 1,3,5,8 1,cat4,7 co2ffee In [1]: print_line_sum_of_file("test.txt") out[1]: 1 + 3 + 5 + 8 = 17 1 + cat4 + 7 = 12 co2ffee = 7 예를 들어 다음과 같이 수정할 수 있다. 빈 칸 (A)와 (B)를 채워라. f = open("test.txt", 'w') f.write("1,3,5,8\n1,cat4,7\nco2ffee") f.close() def print_line_sum_of_file(filename): g = open("test.txt", 'r') h = g.readlines() g.close() for line in h: sum = 0 k = line.strip().split(',') for i in range(len(k)): if i &lt; len(k) - 1: print(k[i] + " +"), else: print(k[i] + " ="), try: (A) except ValueError: (B) print(sum) 문제 4 함수를 리턴값으로 갖는 고계함수(higer-order function)를 다루는 문제이다. 먼저 다음의 함수들을 살펴보자. End of explanation """ def linear_gen(n, a, b): return a * n + b """ Explanation: 동일한 방식을 반복하면 임의의 자연수 n에 대해 linear_n 함수를 정의할 수 있다. 즉, linear_n(a, b) = a * n + b 이 만족되는 함수를 무한히 많이 만들 수 있다. 그런데 그런 함수들을 위와같은 방식으로 정의하는 것은 매우 비효율적이다. 한 가지 대안은 변수를 하나 더 추가하는 것이다. End of explanation """ def linear_10(a, b): return linear_gen(10, a, b) """ Explanation: 위와 같이 linear_gen 함수를 정의한 다음에 특정 n에 대해 linear_n 이 필요하다면 아래와 같이 간단하게 정의해서 사용할 수 있다. 예를 들어 n = 10인 경우이다. 
End of explanation """ names = ["Koh", "Kang", "Park", "Kwon", "Lee", "Yi", "Kim", "Jin"] """ Explanation: 해야 할 일 5 (10점) 그런데 이 방식은 특정 linear_n을 사용하고자 할 때마다 def 키워드를 이용하여 함수를 정의해야 하는 단점이 있다. 그런데 고계함수를 활용하면 def 키워드를 한 번만 사용해도 모든 수 n에 대해 linear_n 함수를 필요할 때마다 사용할 수 있다. 예를 들어 아래 등식이 만족시키는 고계함수 linear_high를 정의할 수 있다. linear_10(3, 5) = linear_high(10)(3, 5) linear_15(2, 7) = linear_high(15)(2, 7) 아래 코드가 위 등식을 만족시키도록 빈자리 (A)와 (B)를 채워라. def linear_high(n): def linear_n(a, b): (A) return (B) 문제 5 이름이 없는 무명함수를 다루는 문제이다. 무명함수는 간단하게 정의할 수 있는 함수를 한 번만 사용하고자 할 경우에 굳이 함수 이름이 필요없다고 판단되면 사용할 수 있다. 예를 들어 앞서 linear_high 함수를 정의할 때 사용된 linear_n 함수의 경우가 그렇다. linear_n 함수는 linear_high 함수가 호출될 때만 의미를 갖는 함수이며 그 이외에는 존재하지 않는 함수가 된다. 따라서 그냥 linear_n 함수를 물어보면 파이썬 해석기가 전혀 알지 못하며 NameError가 발생한다. 해야 할 일 6 (5점) linear_n 함수의 정의가 매우 단순하다. 따라서 굳이 이름을 줄 필요가 없이 람다(lambda) 기호를 이용하여 함수를 정의하면 편리하다. __문제 4__에서 linear_high 함수와 동일한 기능을 수행하는 함수 linear_high_lambda 함수를 아래와 같이 정의하고자 한다. 빈 칸 (A)를 채워라. def linear_high_lambda(n): return (A) 문제 6 문자열로 구성된 리스트 names가 있다. End of explanation """ def StartsWithK(s): return s[0] == 'K' K_names = filter(StartsWithK, names) K_names """ Explanation: K로 시작하는 이름으로만 구성된 리스트는 파이썬 내장함수 filter를 이용하여 만들 수 있다. End of explanation """ map(lambda x : x ** 2, range(5)) """ Explanation: 해야 할 일 7 (15점) filter 함수를 사용하지 않으면서 동일한 기능을 수행하는 코드를 작성하고자 하면 다음과 같이 할 수 있다. 빈칸 (A), (B), (C)를 채워라. K_names = [] for name in names: if (A) : (B) else: (C) 해야 할 일 8 (5점) K로 시작하는 이름만으로 구성된 리스트를 글자 순서의 __역순__으로 정렬하고자 한다. 아래 코드가 리스트 관련 특정 메소드를 사용하도록 빈 자리 (D)를 채워라. K_names.(D) 문제 7 파이썬 내장함수 map의 기능은 아래 예제에서 확인할 수 있다. End of explanation """ def list_square(num): L = [] for i in range(num): L.append(i ** 2) return L list_square(5) """ Explanation: map 함수를 사용하지 않는 방식은 다음과 같다. End of explanation """ cities = ['A', 'B', 'C', 'D', 'E'] populations = [20, 30, 140, 80, 25] """ Explanation: 해야 할 일 9 (10점) list_square 함수와 동일한 기능을 수행하는 함수 list_square_comp 함수를 리스트 조건제시법을 활용하여 아래처럼 구현하고자 한다. 빈자리 (A)를 채워라. def list_square_comp(num): return (A) 문제 8 다섯 개의 도시명과 각 도시의 인구수로 이루어진 두 개의 리스트가 아래처럼 있다. End of explanation """ city_pop = [] for i in range(len(cities)): city_pop.append((cities[i], populations[i])) city_pop """ Explanation: 도시이름과 인구수를 쌍으로 갖는 리스트를 구현하는 방법은 아래와 같다. End of explanation """ city_pop[2][1] """ Explanation: city_pop를 이용하여 예를 들어 C 도시의 인구수를 확인하는 방법은 다음과 같다. End of explanation """
Rantanen/igraph
examples/ipython.ipynb
mit
import igraph igraph.draw([(1, 2), (2, 3), (3, 4), (4, 1), (4, 5), (5, 2)]) """ Explanation: igraph in the IPython notebook I wrote igraph to visualize graphs in 3D purely out of curiosity. I couldn't find any 3D force-directed graph libraries when I wrote it, so this happened. It can be used with the notebook to interactively view graph data. Some graphs just look nicer in 3D. To use, send it Python objects. A simple data structure is the adjacency list. End of explanation """ graph = { "nodes": { "ross": {"color": 0xffaaaa, "size": 2.0}, "joey": {"size": 0.5}, "chandler": {"color": 0x2222ff, "size": 1.25}, "phoebe": {"color": 0x22ff22}, "rachel": {}, "monica": {}, "jack": {}, "judy": {}, }, "edges": [ {"source": "chandler", "target": "ross"}, {"source": "monica", "target": "ross"}, {"source": "ross", "target": "rachel"}, {"source": "ross", "target": "joey"}, {"source": "ross", "target": "phoebe"}, {"source": "ross", "target": "judy"}, {"source": "monica", "target": "rachel"}, {"source": "rachel", "target": "jack"}, {"source": "chandler", "target": "phoebe"} ] } igraph.draw(graph) """ Explanation: That works, but the graph is boring. More complex graph visualizations use node size and color to show multiple dimensions. To do this with igraph, you can use a more expressive Python data structure. Each node takes three optional parameters: color, size, and location. End of explanation """ igraph.draw("miserables.json") """ Explanation: The draw method can take python objects, strings, or files. End of explanation """ number_of_steps = 5 b_tree = [(1, 2), (1, 3)] index = 3 for _ in range(number_of_steps): leaves = [edge[1] for edge in b_tree if all(edge[1] != other_edge[0] for other_edge in b_tree)] for leaf in leaves: for __ in range(2): index += 1 b_tree.append((leaf, index)) igraph.draw(b_tree, shader="lambert", default_node_color=0x383294, z=200, size=(800, 600)) """ Explanation: Because the input is a pure Python data structure, you can programmatically create and edit graphs without learning a new API. This lets you mess around with graph data structures and algorithms without thinking about library-specific semantics. End of explanation """ help(igraph.draw) """ Explanation: You may have noticed some extra arguments used in that last example. There are options to change default colors and sizing, renderers, and more. For a full breakdown, read the docs. End of explanation """ help(igraph.generate) """ Explanation: You may also want more control over generating the force-directed graph from an adjacency list. For that use igraph.generate. End of explanation """
ucsd-ccbb/mali-dual-crispr-pipeline
dual_crispr/distributed_files/notebooks/Dual CRISPR 5-Count Plots.ipynb
mit
g_dataset_name = "Notebook5Test" g_fastq_counts_run_prefix = "TestSet5" g_fastq_counts_dir = '~/dual_crispr/test_data/test_set_5' g_collapsed_counts_run_prefix = "" g_collapsed_counts_dir = "" g_combined_counts_run_prefix = "" g_combined_counts_dir = "" g_plots_run_prefix = "" g_plots_dir = '~/dual_crispr/test_outputs/test_set_5' """ Explanation: Dual CRISPR Screen Analysis Step 5: Count Plots Amanda Birmingham, CCBB, UCSD ([email protected]) Instructions To run this notebook reproducibly, follow these steps: 1. Click Kernel > Restart & Clear Output 2. When prompted, click the red Restart & clear all outputs button 3. Fill in the values for your analysis for each of the variables in the Input Parameters section 4. Click Cell > Run All Input Parameters End of explanation """ import inspect import ccbb_pyutils.analysis_run_prefixes as ns_runs import ccbb_pyutils.files_and_paths as ns_files import ccbb_pyutils.notebook_logging as ns_logs def describe_var_list(input_var_name_list): description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list] return "".join(description_list) ns_logs.set_stdout_info_logger() g_fastq_counts_dir = ns_files.expand_path(g_fastq_counts_dir) g_collapsed_counts_run_prefix = ns_runs.check_or_set(g_collapsed_counts_run_prefix, g_fastq_counts_run_prefix) g_collapsed_counts_dir = ns_files.expand_path(ns_runs.check_or_set(g_collapsed_counts_dir, g_fastq_counts_dir)) g_combined_counts_run_prefix = ns_runs.check_or_set(g_combined_counts_run_prefix, g_collapsed_counts_run_prefix) g_combined_counts_dir = ns_files.expand_path(ns_runs.check_or_set(g_combined_counts_dir, g_collapsed_counts_dir)) g_plots_run_prefix = ns_runs.check_or_set(g_plots_run_prefix, ns_runs.generate_run_prefix(g_dataset_name)) g_plots_dir = ns_files.expand_path(ns_runs.check_or_set(g_plots_dir, g_combined_counts_dir)) print(describe_var_list(['g_fastq_counts_dir', 'g_collapsed_counts_run_prefix','g_collapsed_counts_dir', 'g_combined_counts_run_prefix', 'g_combined_counts_dir', 'g_plots_run_prefix', 'g_plots_dir'])) ns_files.verify_or_make_dir(g_collapsed_counts_dir) ns_files.verify_or_make_dir(g_combined_counts_dir) ns_files.verify_or_make_dir(g_plots_dir) %matplotlib inline """ Explanation: Automated Set-Up End of explanation """ import dual_crispr.construct_counter as ns_counter print(inspect.getsource(ns_counter.get_counts_file_suffix)) import dual_crispr.count_combination as ns_combine print(inspect.getsource(ns_combine.get_collapsed_counts_file_suffix)) print(inspect.getsource(ns_combine.get_combined_counts_file_suffix)) """ Explanation: Count File Suffixes End of explanation """ import dual_crispr.count_plots as ns_plot print(inspect.getsource(ns_plot)) """ Explanation: Count Plots Functions End of explanation """ print(ns_files.check_file_presence(g_fastq_counts_dir, g_fastq_counts_run_prefix, ns_counter.get_counts_file_suffix(), check_failure_msg="Count plots could not detect any individual fastq count files.")) ns_plot.plot_raw_counts(g_fastq_counts_dir, g_fastq_counts_run_prefix, ns_counter.get_counts_file_suffix(), g_plots_dir, g_plots_run_prefix, ns_plot.get_boxplot_suffix()) """ Explanation: Individual fastq Plots End of explanation """ print(ns_files.check_file_presence(g_collapsed_counts_dir, g_collapsed_counts_run_prefix, ns_combine.get_collapsed_counts_file_suffix(), check_failure_msg="Count plots could not detect any individual sample count files.") ) ns_plot.plot_raw_counts(g_collapsed_counts_dir, g_collapsed_counts_run_prefix, 
ns_combine.get_collapsed_counts_file_suffix(), g_plots_dir, g_plots_run_prefix, ns_plot.get_boxplot_suffix()) """ Explanation: Individual Sample Plots End of explanation """ print(ns_files.check_file_presence(g_combined_counts_dir, g_combined_counts_run_prefix, ns_combine.get_combined_counts_file_suffix(), check_failure_msg="Count plots could not detect a combined count file.")) ns_plot.plot_combined_raw_counts(g_combined_counts_dir, g_combined_counts_run_prefix, ns_combine.get_combined_counts_file_suffix(), g_plots_dir, g_plots_run_prefix, ns_plot.get_boxplot_suffix()) """ Explanation: Combined Samples Plot End of explanation """
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/production_ml/labs/samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
apache-2.0
import kfp import kfp.gcp as gcp import kfp.dsl as dsl import kfp.compiler as compiler import kfp.components as comp import datetime import kubernetes as k8s # Required Parameters PROJECT_ID='<ADD GCP PROJECT HERE>' GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>' """ Explanation: Composing a pipeline from reusable, pre-built, and lightweight components This tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following provides a summary of the steps involved in creating and using a reusable component: Write the program that contains your component’s logic. The program must use files and command-line arguments to pass data to and from the component. Containerize the program. Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system. Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline and run that pipeline. Then, we will compose a pipeline from a reusable component, a pre-built component, and a lightweight component. The pipeline will perform the following steps: - Train an MNIST model and export it to Google Cloud Storage. - Deploy the exported TensorFlow model on AI Platform Prediction service. - Test the deployment by calling the endpoint with test data. Note: Ensure that you have Docker installed, if you want to build the image locally, by running the following command: which docker The result should be something like: /usr/bin/docker End of explanation """ # Optional Parameters, but required for running outside Kubeflow cluster # The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com' # The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline' # Examples are: # https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com # https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>' # For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following # will be needed to access the endpoint. CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>' OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>' OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>' # This is to ensure the proper access token is present to reach the end point for 'AI Platform Pipelines' # If you are not working with 'AI Platform Pipelines', this step is not necessary ! gcloud auth print-access-token # Create kfp client in_cluster = True try: k8s.config.load_incluster_config() except: in_cluster = False pass if in_cluster: client = kfp.Client() else: if HOST.endswith('googleusercontent.com'): CLIENT_ID = None OTHER_CLIENT_ID = None OTHER_CLIENT_SECRET = None client = kfp.Client(host=HOST, client_id=CLIENT_ID, other_client_id=OTHER_CLIENT_ID, other_client_secret=OTHER_CLIENT_SECRET) """ Explanation: Create client If you run this notebook outside of a Kubeflow cluster, run the following command: - host: The URL of your Kubeflow Pipelines instance, for example "https://&lt;your-deployment&gt;.endpoints.&lt;your-project&gt;.cloud.goog/pipeline" - client_id: The client ID used by Identity-Aware Proxy - other_client_id: The client ID used to obtain the auth codes and refresh tokens. - other_client_secret: The client secret used to obtain the auth codes and refresh tokens. 
python client = kfp.Client(host, client_id, other_client_id, other_client_secret) If you run this notebook within a Kubeflow cluster, run the following command: python client = kfp.Client() You'll need to create OAuth client ID credentials of type Other to get other_client_id and other_client_secret. Learn more about creating OAuth credentials End of explanation """ %%bash # Create folders if they don't exist. mkdir -p tmp/reuse_components_pipeline/mnist_training # Create the Python file that trains an MNIST model and exports it to GCS. cat > ./tmp/reuse_components_pipeline/mnist_training/app.py <<HERE import argparse from datetime import datetime import tensorflow as tf parser = argparse.ArgumentParser() parser.add_argument( '--model_path', type=str, required=True, help='Name of the model file.') parser.add_argument( '--bucket', type=str, required=True, help='GCS bucket name.') args = parser.parse_args() bucket=args.bucket model_path=args.model_path model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) print(model.summary()) mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 callbacks = [ tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()), # Interrupt training if val_loss stops improving for over 2 epochs tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'), ] model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks, validation_data=(x_test, y_test)) from tensorflow import gfile gcs_path = bucket + "/" + model_path # The export requires that the target folder does not already exist if gfile.Exists(gcs_path): gfile.DeleteRecursively(gcs_path) tf.keras.experimental.export_saved_model(model, gcs_path) with open('/output.txt', 'w') as f: f.write(gcs_path) HERE """ Explanation: Build reusable components Writing the program code The following cell creates a file app.py that contains a Python script. The script downloads the MNIST dataset, trains a neural-network classification model, writes the training log and exports the trained model to Google Cloud Storage. Your component can create outputs that the downstream components can use as inputs. Each output must be a string and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as /output.txt. End of explanation """ %%bash # Create Dockerfile. # AI Platform only supports TensorFlow 1.14 cat > ./tmp/reuse_components_pipeline/mnist_training/Dockerfile <<EOF FROM tensorflow/tensorflow:1.14.0-py3 WORKDIR /app COPY . /app EOF """ Explanation: Create a Docker container Create your own container image that includes your program. Creating a Dockerfile Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The FROM statement specifies the Base Image from which you are building. WORKDIR sets the working directory. When you assemble the Docker image, COPY copies the required files and directories (for example, app.py) to the file system of the container. RUN executes a command (for example, installing dependencies) and commits the results.
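Note that the Dockerfile above happens not to need a RUN step, because the tensorflow/tensorflow:1.14.0-py3 base image already provides everything that app.py imports. If your training script required extra packages, you would add a line such as `RUN pip install <your-package>` (hypothetical example) before building the image.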
End of explanation """ IMAGE_NAME="mnist_training_kf_pipeline" TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)" GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format( PROJECT_ID=PROJECT_ID, IMAGE_NAME=IMAGE_NAME, TAG=TAG ) APP_FOLDER='./tmp/reuse_components_pipeline/mnist_training/' # In the following, for the purpose of demonstration # Cloud Build is choosen for 'AI Platform Pipelines' # kaniko is choosen for 'full Kubeflow deployment' if HOST.endswith('googleusercontent.com'): # kaniko is not pre-installed with 'AI Platform Pipelines' import subprocess # ! gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER} cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER] build_log = (subprocess.run(cmd, stdout=subprocess.PIPE).stdout[:-1].decode('utf-8')) print(build_log) else: if kfp.__version__ <= '0.1.36': # kfp with version 0.1.36+ introduce broken change that will make the following code not working import subprocess builder = kfp.containers._container_builder.ContainerBuilder( gcs_staging=GCS_BUCKET + "/kfp_container_build_staging" ) kfp.containers.build_image_from_working_dir( image_name=GCR_IMAGE, working_dir=APP_FOLDER, builder=builder ) else: raise("Please build the docker image use either [Docker] or [Cloud Build]") """ Explanation: Build docker image Now that we have created our Dockerfile for creating our Docker image. Then we need to build the image and push to a registry to host the image. There are three possible options: - Use the kfp.containers.build_image_from_working_dir to build the image and push to the Container Registry (GCR). This requires kaniko, which will be auto-installed with 'full Kubeflow deployment' but not 'AI Platform Pipelines'. - Use Cloud Build, which would require the setup of GCP project and enablement of corresponding API. If you are working with GCP 'AI Platform Pipelines' with GCP project running, it is recommended to use Cloud Build. - Use Docker installed locally and push to e.g. GCR. Note: If you run this notebook within Kubeflow cluster, with Kubeflow version >= 0.7 and exploring kaniko option, you need to ensure that valid credentials are created within your notebook's namespace. - With Kubeflow version >= 0.7, the credential is supposed to be copied automatically while creating notebook through Configurations, which doesn't work properly at the time of creating this notebook. - You can also add credentials to the new namespace by either copying credentials from an existing Kubeflow namespace, or by creating a new service account. - The following cell demonstrates how to copy the default secret to your own namespace. ```bash %%bash NAMESPACE=<your notebook name space> SOURCE=kubeflow NAME=user-gcp-sa SECRET=$(kubectl get secrets \${NAME} -n \${SOURCE} -o jsonpath="{.data.\${NAME}.json}" | base64 -D) kubectl create -n \${NAMESPACE} secret generic \${NAME} --from-literal="\${NAME}.json=\${SECRET}" ``` End of explanation """ image_name = GCR_IMAGE """ Explanation: If you want to use docker to build the image Run the following in a cell ```bash %%bash -s "{PROJECT_ID}" IMAGE_NAME="mnist_training_kf_pipeline" TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)" Create script to build docker image and push it. cat > ./tmp/components/mnist_training/build_image.sh <<HERE PROJECT_ID="${1}" IMAGE_NAME="${IMAGE_NAME}" TAG="${TAG}" GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}" docker build -t \${IMAGE_NAME} . 
docker tag \${IMAGE_NAME} \${GCR_IMAGE} docker push \${GCR_IMAGE} docker image rm \${IMAGE_NAME} docker image rm \${GCR_IMAGE} HERE cd tmp/components/mnist_training bash build_image.sh ``` End of explanation """ %%bash -s "{image_name}" GCR_IMAGE="${1}" echo ${GCR_IMAGE} # Create Yaml # the image uri should be changed according to the above docker image push output cat > mnist_pipeline_component.yaml <<HERE name: Mnist training description: Train a mnist model and save to GCS inputs: - name: model_path description: 'Path of the tf model.' type: String - name: bucket description: 'GCS bucket name.' type: String outputs: - name: gcs_model_path description: 'Trained model path.' type: GCSPath implementation: container: image: ${GCR_IMAGE} command: [ python, /app/app.py, --model_path, {inputValue: model_path}, --bucket, {inputValue: bucket}, ] fileOutputs: gcs_model_path: /output.txt HERE import os mnist_train_op = kfp.components.load_component_from_file(os.path.join('./', 'mnist_pipeline_component.yaml')) mnist_train_op.component_spec """ Explanation: Writing your component definition file To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system. For the complete definition of a Kubeflow Pipelines component, see the component specification. However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial. Start writing the component definition (component.yaml) by specifying your container image in the component’s implementation section: End of explanation """ mlengine_deploy_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/ml_engine/deploy/component.yaml') def deploy( project_id, model_uri, model_id, runtime_version, python_version): return mlengine_deploy_op( model_uri=model_uri, project_id=project_id, model_id=model_id, runtime_version=runtime_version, python_version=python_version, replace_existing_version=True, set_default=True) """ Explanation: Define deployment operation on AI Platform End of explanation """ def deployment_test(project_id: str, model_name: str, version: str) -> str: model_name = model_name.split("/")[-1] version = version.split("/")[-1] import googleapiclient.discovery def predict(project, model, data, version=None): """Run predictions on a list of instances. Args: project: (str), project where the Cloud ML Engine Model is deployed. model: (str), model name. data: ([[any]]), list of input instances, where each input instance is a list of attributes. version: str, version of the model to target. Returns: Mapping[str: any]: dictionary of prediction results defined by the model. 
""" service = googleapiclient.discovery.build('ml', 'v1') name = 'projects/{}/models/{}'.format(project, model) if version is not None: name += '/versions/{}'.format(version) response = service.projects().predict( name=name, body={ 'instances': data }).execute() if 'error' in response: raise RuntimeError(response['error']) return response['predictions'] import tensorflow as tf import json mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 result = predict( project=project_id, model=model_name, data=x_test[0:2].tolist(), version=version) print(result) return json.dumps(result) # # Test the function with already deployed version # deployment_test( # project_id=PROJECT_ID, # model_name="mnist", # version='ver_bb1ebd2a06ab7f321ad3db6b3b3d83e6' # previous deployed version for testing # ) deployment_test_op = comp.func_to_container_op( func=deployment_test, base_image="tensorflow/tensorflow:1.15.0-py3", packages_to_install=["google-api-python-client==1.7.8"]) """ Explanation: Kubeflow serving deployment component as an option. Note that, the deployed Endppoint URI is not availabe as output of this component. ```python kubeflow_deploy_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/ml_engine/deploy/component.yaml') def deploy_kubeflow( model_dir, tf_server_name): return kubeflow_deploy_op( model_dir=model_dir, server_name=tf_server_name, cluster_name='kubeflow', namespace='kubeflow', pvc_name='', service_type='ClusterIP') ``` Create a lightweight component for testing the deployment End of explanation """ # Define the pipeline @dsl.pipeline( name='Mnist pipeline', description='A toy pipeline that performs mnist model training.' ) def mnist_reuse_component_deploy_pipeline( project_id: str = PROJECT_ID, model_path: str = 'mnist_model', bucket: str = GCS_BUCKET ): train_task = mnist_train_op( model_path=model_path, bucket=bucket ).apply(gcp.use_gcp_secret('user-gcp-sa')) deploy_task = deploy( project_id=project_id, model_uri=train_task.outputs['gcs_model_path'], model_id="mnist", runtime_version="1.14", python_version="3.5" ).apply(gcp.use_gcp_secret('user-gcp-sa')) deploy_test_task = deployment_test_op( project_id=project_id, model_name=deploy_task.outputs["model_name"], version=deploy_task.outputs["version_name"], ).apply(gcp.use_gcp_secret('user-gcp-sa')) return True """ Explanation: Create your workflow as a Python function Define your pipeline as a Python function. @kfp.dsl.pipeline is a required decoration, and must include name and description properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created. End of explanation """ pipeline_func = mnist_reuse_component_deploy_pipeline experiment_name = 'minist_kubeflow' arguments = {"model_path":"mnist_model", "bucket":GCS_BUCKET} run_name = pipeline_func.__name__ + ' run' # Submit pipeline directly from pipeline function run_result = client.create_run_from_pipeline_func(pipeline_func, experiment_name=experiment_name, run_name=run_name, arguments=arguments) """ Explanation: Submit a pipeline run End of explanation """
spacy-io/thinc
examples/02_transformers_tagger_bert.ipynb
mit
!pip install "thinc>=8.0.0a0" transformers torch "ml_datasets>=0.2.0a0" "tqdm>=4.41" """ Explanation: Training a part-of-speech tagger with transformers (BERT) This example shows how to use Thinc and Hugging Face's transformers library to implement and train a part-of-speech tagger on the Universal Dependencies AnCora corpus. This notebook assumes familiarity with machine learning concepts, transformer models and Thinc's config system and Model API (see the "Thinc for beginners" notebook and the documentation for more info). End of explanation """ from thinc.api import prefer_gpu, use_pytorch_for_gpu_memory is_gpu = prefer_gpu() print("GPU:", is_gpu) if is_gpu: use_pytorch_for_gpu_memory() """ Explanation: First, let's use Thinc's prefer_gpu helper to make sure we're performing operations on GPU if available. The function should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated. If we're on GPU, we can also call use_pytorch_for_gpu_memory to route cupy's memory allocation via PyTorch, so both can play together nicely. End of explanation """ CONFIG = """ [model] @layers = "TransformersTagger.v1" starter = "bert-base-multilingual-cased" [optimizer] @optimizers = "Adam.v1" [optimizer.learn_rate] @schedules = "warmup_linear.v1" initial_rate = 0.01 warmup_steps = 3000 total_steps = 6000 [loss] @losses = "SequenceCategoricalCrossentropy.v1" [training] batch_size = 128 words_per_subbatch = 2000 n_epoch = 10 """ """ Explanation: Overview: the final config Here's the final config for the model we're building in this notebook. It references a custom TransformersTagger that takes the name of a starter (the pretrained model to use), an optimizer, a learning rate schedule with warm-up and the general training settings. You can keep the config string within your file or notebook, or save it to a conig.cfg file and load it in via Config.from_disk. End of explanation """ from typing import Optional, List import numpy from thinc.types import Ints1d, Floats2d from dataclasses import dataclass import torch from transformers import BatchEncoding, TokenSpan @dataclass class TokensPlus: batch_size: int tok2wp: List[Ints1d] input_ids: torch.Tensor token_type_ids: torch.Tensor attention_mask: torch.Tensor def __init__(self, inputs: List[List[str]], wordpieces: BatchEncoding): self.input_ids = wordpieces["input_ids"] self.attention_mask = wordpieces["attention_mask"] self.token_type_ids = wordpieces["token_type_ids"] self.batch_size = self.input_ids.shape[0] self.tok2wp = [] for i in range(self.batch_size): spans = [wordpieces.word_to_tokens(i, j) for j in range(len(inputs[i]))] self.tok2wp.append(self.get_wp_starts(spans)) def get_wp_starts(self, spans: List[Optional[TokenSpan]]) -> Ints1d: """Calculate an alignment mapping each token index to its first wordpiece.""" alignment = numpy.zeros((len(spans)), dtype="i") for i, span in enumerate(spans): if span is None: raise ValueError( "Token did not align to any wordpieces. Was the tokenizer " "run with is_split_into_words=True?" 
) else: alignment[i] = span.start return alignment def test_tokens_plus(name: str="bert-base-multilingual-cased"): from transformers import AutoTokenizer inputs = [ ["Our", "band", "is", "called", "worlthatmustbedivided", "!"], ["We", "rock", "!"] ] tokenizer = AutoTokenizer.from_pretrained(name) wordpieces = tokenizer( inputs, is_split_into_words=True, add_special_tokens=True, return_token_type_ids=True, return_attention_mask=True, return_length=True, return_tensors="pt", padding="longest" ) tplus = TokensPlus(inputs, wordpieces) assert len(tplus.tok2wp) == len(inputs) == len(tplus.input_ids) for i, align in enumerate(tplus.tok2wp): assert len(align) == len(inputs[i]) for j in align: assert j >= 0 and j < tplus.input_ids.shape[1] test_tokens_plus() """ Explanation: Defining the model The Thinc model we want to define should consist of 3 components: the transformers tokenizer, the actual transformer implemented in PyTorch and a softmax-activated output layer. 1. Wrapping the tokenizer To make it easier to keep track of the data that's passed around (and get type errors if something goes wrong), we first create a TokensPlus dataclass that holds the information we need from the transformers tokenizer. The most important work we'll do in this class is to build an alignment map. The transformer models are trained on input sequences that over-segment the sentence, so that they can work on smaller vocabularies. These over-segmentations are generally called "word pieces". The transformer will return a tensor with one vector per wordpiece. We need to map that to a tensor with one vector per POS-tagged token. We'll pass those token representations into a feed-forward network to predict the tag probabilities. During the backward pass, we'll then need to invert this mapping, so that we can calculate the gradients with respect to the wordpieces given the gradients with respect to the tokens. To keep things relatively simple, we'll store the alignment as a list of arrays, with each array mapping one token to one wordpiece vector (its first one). To make this work, we'll need to run the tokenizer with is_split_into_words=True, which should ensure that we get at least one wordpiece per token. End of explanation """ import thinc from thinc.api import Model from transformers import AutoTokenizer @thinc.registry.layers("transformers_tokenizer.v1") def TransformersTokenizer(name: str) -> Model[List[List[str]], TokensPlus]: def forward(model, inputs: List[List[str]], is_train: bool): tokenizer = model.attrs["tokenizer"] wordpieces = tokenizer( inputs, is_split_into_words=True, add_special_tokens=True, return_token_type_ids=True, return_attention_mask=True, return_length=True, return_tensors="pt", padding="longest" ) return TokensPlus(inputs, wordpieces), lambda d_tokens: [] return Model("tokenizer", forward, attrs={"tokenizer": AutoTokenizer.from_pretrained(name)}) """ Explanation: The wrapped tokenizer will take a list-of-lists as input (the texts) and will output a TokensPlus object containing the fully padded batch of tokens. The wrapped transformer will take a list of TokensPlus objects and will output a list of 2-dimensional arrays. TransformersTokenizer: List[List[str]] → TokensPlus Transformer: TokensPlus → List[Array2d] 💡 Since we're adding type hints everywhere (and Thinc is fully typed, too), you can run your code through mypy to find type errors and inconsistencies. 
If you're using an editor like Visual Studio Code, you can enable mypy linting and type errors will be highlighted in real time as you write code. To use the tokenizer as a layer in our network, we register a new function that returns a Thinc Model. The function takes the name of the pretrained weights (e.g. "bert-base-multilingual-cased") as an argument that can later be provided via the config. After loading the AutoTokenizer, we can stash it in the attributes. This lets us access it at any point later on via model.attrs["tokenizer"]. End of explanation """ from typing import List, Tuple, Callable from thinc.api import ArgsKwargs, torch2xp, xp2torch from thinc.types import Floats2d def convert_transformer_inputs(model, tokens: TokensPlus, is_train): kwargs = { "input_ids": tokens.input_ids, "attention_mask": tokens.attention_mask, "token_type_ids": tokens.token_type_ids, } return ArgsKwargs(args=(), kwargs=kwargs), lambda dX: [] def convert_transformer_outputs( model: Model, inputs_outputs: Tuple[TokensPlus, Tuple[torch.Tensor]], is_train: bool ) -> Tuple[List[Floats2d], Callable]: tplus, trf_outputs = inputs_outputs wp_vectors = torch2xp(trf_outputs[0]) tokvecs = [wp_vectors[i, idx] for i, idx in enumerate(tplus.tok2wp)] def backprop(d_tokvecs: List[Floats2d]) -> ArgsKwargs: # Restore entries for BOS and EOS markers d_wp_vectors = model.ops.alloc3f(*trf_outputs[0].shape, dtype="f") for i, idx in enumerate(tplus.tok2wp): d_wp_vectors[i, idx] += d_tokvecs[i] return ArgsKwargs( args=(trf_outputs[0],), kwargs={"grad_tensors": xp2torch(d_wp_vectors)}, ) return tokvecs, backprop """ Explanation: The forward pass takes the model and a list-of-lists of strings and outputs the TokensPlus dataclass. It also outputs a dummy callback function, to meet the API contract for Thinc models. Even though there's no way we can meaningfully "backpropagate" this layer, we need to make sure the function has the right signature, so that it can be used interchangeably with other layers. 2. Wrapping the transformer To load and wrap the transformer, we can use transformers.AutoModel and Thinc's PyTorchWrapper. The forward method of the wrapped model can take arbitrary positional arguments and keyword arguments. Here's what the wrapped model is going to look like: python @thinc.registry.layers("transformers_model.v1") def Transformer(name) -&gt; Model[TokensPlus, List[Floats2d]]: return PyTorchWrapper( AutoModel.from_pretrained(name), convert_inputs=convert_transformer_inputs, convert_outputs=convert_transformer_outputs, ) The Transformer layer takes our TokensPlus dataclass as input and outputs a list of 2-dimensional arrays. The convert functions are used to map inputs and outputs to and from the PyTorch model. Each function should return the converted output, and a callback to use during the backward pass. To make the arbitrary positional and keyword arguments easier to manage, Thinc uses an ArgsKwargs dataclass, essentially a named tuple with args and kwargs that can be spread into a function as *ArgsKwargs.args and **ArgsKwargs.kwargs. The ArgsKwargs objects will be passed straight into the model in the forward pass, and straight into torch.autograd.backward during the backward pass. 
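As a tiny, standalone illustration of the ArgsKwargs convention (f here is just a stand-in callable, not part of the pipeline above):

```python
from thinc.api import ArgsKwargs

def f(a, b, scale=1.0):
    return (a + b) * scale

# Package positional and keyword arguments together...
ak = ArgsKwargs(args=(1, 2), kwargs={"scale": 0.5})
# ...and spread them into the call, just as the wrapper spreads them into the PyTorch model.
assert f(*ak.args, **ak.kwargs) == 1.5
```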
End of explanation """ import thinc from thinc.api import PyTorchWrapper from transformers import AutoModel @thinc.registry.layers("transformers_model.v1") def Transformer(name: str) -> Model[TokensPlus, List[Floats2d]]: return PyTorchWrapper( AutoModel.from_pretrained(name), convert_inputs=convert_transformer_inputs, convert_outputs=convert_transformer_outputs, ) """ Explanation: The input and output transformation functions give you full control of how data is passed into and out of the underlying PyTorch model, so you can work with PyTorch layers that expect and return arbitrary objects. Putting it all together, we now have a nice layer that is configured with the name of a transformer model, that acts as a function mapping tokenized input into feature vectors. End of explanation """ from thinc.api import chain, with_array, Softmax @thinc.registry.layers("TransformersTagger.v1") def TransformersTagger(starter: str, n_tags: int = 17) -> Model[List[List[str]], List[Floats2d]]: return chain( TransformersTokenizer(starter), Transformer(starter), with_array(Softmax(n_tags)), ) """ Explanation: We can now combine the TransformersTokenizer and Transformer into a feed-forward network using the chain combinator. The with_array layer transforms a sequence of data into a contiguous 2d array on the way into and out of a model. End of explanation """ from thinc.api import Config, registry C = registry.resolve(Config().from_str(CONFIG)) model = C["model"] optimizer = C["optimizer"] calculate_loss = C["loss"] cfg = C["training"] """ Explanation: Training the model Setting up model and data Since we've registered all layers via @thinc.registry.layers, we can construct the model, its settings and other functions we need from a config (see CONFIG above). The result is a config object with a model, an optimizer, a function to calculate the loss and the training settings. End of explanation """ import ml_datasets (train_X, train_Y), (dev_X, dev_Y) = ml_datasets.ud_ancora_pos_tags() train_Y = list(map(model.ops.asarray, train_Y)) # convert to cupy if needed dev_Y = list(map(model.ops.asarray, dev_Y)) # convert to cupy if needed model.initialize(X=train_X[:5], Y=train_Y[:5]) """ Explanation: We’ve prepared a separate package ml-datasets with loaders for some common datasets, including the AnCora data. If we're using a GPU, calling ops.asarray on the outputs ensures that they're converted to cupy arrays (instead of numpy arrays). Calling Model.initialize with a batch of inputs and outputs allows Thinc to infer the missing dimensions. End of explanation """ def minibatch_by_words(pairs, max_words): pairs = list(zip(*pairs)) pairs.sort(key=lambda xy: len(xy[0]), reverse=True) batch = [] for X, Y in pairs: batch.append((X, Y)) n_words = max(len(xy[0]) for xy in batch) * len(batch) if n_words >= max_words: yield batch[:-1] batch = [(X, Y)] if batch: yield batch def evaluate_sequences(model, Xs: List[Floats2d], Ys: List[Floats2d], batch_size: int) -> float: correct = 0.0 total = 0.0 for X, Y in model.ops.multibatch(batch_size, Xs, Ys): Yh = model.predict(X) for yh, y in zip(Yh, Y): correct += (y.argmax(axis=1) == yh.argmax(axis=1)).sum() total += y.shape[0] return float(correct / total) """ Explanation: Helper functions for training and evaluation Before we can train the model, we also need to set up the following helper functions for batching and evaluation: minibatch_by_words: Group pairs of sequences into minibatches under max_words in size, considering padding. 
The size of a padded batch is the length of its longest sequence multiplied by the number of elements in the batch. evaluate_sequences: Evaluate the model sequences of two-dimensional arrays and return the score. End of explanation """ from tqdm.notebook import tqdm from thinc.api import fix_random_seed fix_random_seed(0) for epoch in range(cfg["n_epoch"]): batches = model.ops.multibatch(cfg["batch_size"], train_X, train_Y, shuffle=True) for outer_batch in tqdm(batches, leave=False): for batch in minibatch_by_words(outer_batch, cfg["words_per_subbatch"]): inputs, truths = zip(*batch) inputs = list(inputs) guesses, backprop = model(inputs, is_train=True) backprop(calculate_loss.get_grad(guesses, truths)) model.finish_update(optimizer) optimizer.step_schedules() score = evaluate_sequences(model, dev_X, dev_Y, cfg["batch_size"]) print(epoch, f"{score:.3f}") """ Explanation: The training loop Transformers often learn best with large batch sizes – larger than fits in GPU memory. But you don't have to backprop the whole batch at once. Here we consider the "logical" batch size (number of examples per update) separately from the physical batch size. For the physical batch size, what we care about is the number of words (considering padding too). We also want to sort by length, for efficiency. At the end of the batch, we call the optimizer with the accumulated gradients, and advance the learning rate schedules. You might want to evaluate more often than once per epoch – that's up to you. End of explanation """
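Once training has finished, the same model object can be used for inference on new, pre-tokenized text. A short sketch (the sentence below is made up; model.predict is the same method the evaluation helper relies on):

```python
# Tag a new, already tokenized sentence with the trained model.
new_sentences = [["El", "gato", "duerme", "."]]
tag_probs = model.predict(new_sentences)   # one (n_tokens, n_tags) array per sentence
print(tag_probs[0].argmax(axis=1))         # most likely tag id for each token
```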
pradau/udacity
Data_Analyst_ND_Project0.ipynb
bsd-2-clause
import pandas as pd # pandas is a software library for data manipulation and analysis # We commonly use shorter nicknames for certain packages. Pandas is often abbreviated to pd. # hit shift + enter to run this cell or block of code path = r'/Users/pradau/Dropbox/temp/Downloads/chopstick-effectiveness.csv' # Change the path to the location where the chopstick-effectiveness.csv file is located on your computer. # If you get an error when running this block of code, be sure the chopstick-effectiveness.csv is located at the path on your computer. dataFrame = pd.read_csv(path) dataFrame """ Explanation: Chopsticks! A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC). An investigation for determining the optimum length of chopsticks. Link to Abstract and Paper the abstract below was adapted from the link Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost. For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students. Download the data set for the adults, then answer the following questions based on the abstract and the data set. If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text. 1. What is the independent variable in the experiment? Chopstick length 2. What is the dependent variable in the experiment? The Food.Pinching.Efficiency (title in csv file) or PPPC (according to introduction) which is the measure of food-pinching performance. 3. How is the dependent variable operationally defined? The number of peanuts picked and placed in a cup. Presumably this is a rate per unit time since the values given in the csv file are not integers. 4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled. Think about the participants who generated the data and what they have in common. You don't need to guess any variables or read the full paper to determine these variables. 
(For example, it seems plausible that the material of the chopsticks was held constant, but this is not stated in the abstract or data description.) Each group has the same gender and similar age (e.g. 31 male junior college students is one group). This means the age and gender were matched. Matching age could eliminate the effects of reduced flexibility, agility or mental focus that might be present in older subjects. Matching gender may be important because smaller hands (among women) may be better suited to smaller chopsticks. One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics. End of explanation """ dataFrame['Food.Pinching.Efficiency'].mean() """ Explanation: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths. End of explanation """ meansByChopstickLength = dataFrame.groupby('Chopstick.Length')['Food.Pinching.Efficiency'].mean().reset_index() meansByChopstickLength # reset_index() changes Chopstick.Length from an index to column. Instead of the index being the length of the chopsticks, the index is the row numbers 0, 1, 2, 3, 4, 5. """ Explanation: This number is helpful, but the number doesn't let us know which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Effeciency" for each chopstick length. Run the block of code below. End of explanation """ # Causes plots to display within the notebook rather than in a new window %pylab inline import matplotlib.pyplot as plt plt.scatter(x=meansByChopstickLength['Chopstick.Length'], y=meansByChopstickLength['Food.Pinching.Efficiency']) # title="") plt.xlabel("Length in mm") plt.ylabel("Efficiency in PPPC") plt.title("Average Food Pinching Efficiency by Chopstick Length") plt.show() """ Explanation: 5. Which chopstick length performed the best for the group of thirty-one male junior college students? For the 31 male college students the best length was 240mm. End of explanation """
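The same conclusion can be read directly off the aggregated table instead of the scatter plot. A quick check using the meansByChopstickLength frame built above:

```python
# Row with the highest average Food Pinching Efficiency -- should be the 240 mm chopsticks.
best = meansByChopstickLength.loc[meansByChopstickLength['Food.Pinching.Efficiency'].idxmax()]
print(best)
```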
data-cube/agdc-v2-examples
notebooks/02_loading_data.ipynb
apache-2.0
import datacube
dc = datacube.Datacube(app='load-data-example')
"""
Explanation: Loading data from the datacube
This notebook will briefly discuss how to load data from the datacube.
Importing the datacube
To start with, we'll import the datacube module and load an instance of the datacube, naming our application load-data-example.
End of explanation
"""

data = dc.load(product='ls5_nbar_albers', x=(149.25, 149.5), y=(-36.25, -36.5), time=('2008-01-01', '2009-01-01'))
data
"""
Explanation: Loading data
Loading data from the datacube uses the load function. The function takes several arguments:
* product; A specific product to load
* x; Defines the spatial region in the x dimension
* y; Defines the spatial region in the y dimension
* time; Defines the temporal extent.
We'll load the Landsat 5-TM, Nadir Bi-directional Reflectance Distribution Function Adjusted Reflectance (NBAR), for the spatial region covering:
149.25 -> 149.5 degrees longitude
-36.25 -> -36.5 degrees latitude
and a temporal extent covering:
2008-01-01 -> 2009-01-01
End of explanation
"""

data = dc.load(product='ls5_nbar_albers', x=(1543137.5, 1569137.5), y=(-4065537.5, -4096037.5), time=('2008-01-01', '2009-01-01'), crs='EPSG:3577')
data
"""
Explanation: Load data via a product's native co-ordinate system
By default, the x and y arguments accept queries in a geographical co-ordinate system identified by the EPSG code 4326, which is the same as within Google Earth.
The user can also query via the native co-ordinate system that the product is stored in, by supplying the crs argument.
End of explanation
"""

data = dc.load(product='ls5_nbar_albers', x=(149.25, 149.5), y=(-36.25, -36.5), time=('2008-01-01', '2009-01-01'), measurements=['red', 'nir'])
data
"""
Explanation: Load specific measurements of a given product
Some products have several measurements, such as Landsat 5-TM, which for the ls5_nbar_albers product contains the following spectral measurements:
blue
green
red
nir
swir1
swir2
In this next example we'll only load the red and nir measurements.
End of explanation
"""

help(dc.load)
"""
Explanation: Additional help can be found by calling help(dc.load)
End of explanation
"""
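Since the final load example requested only the red and nir measurements, a natural follow-up is a band-ratio index such as NDVI. The sketch below assumes the returned values can be used directly as reflectances; in practice you may want to mask nodata values first:

```python
# Normalised Difference Vegetation Index from the red/nir measurements loaded above
ndvi = (data.nir - data.red) / (data.nir + data.red)
ndvi.isel(time=0).plot()  # plot the first time slice
```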
ireapps/pycar
completed/read_csv_notebook_complete.ipynb
mit
from urllib.request import urlretrieve
import csv
"""
Explanation: Read a CSV
We're going to use built-in Python modules - programs really - to download a csv file from the Internet and save it locally. CSV stands for comma-separated values. It's a common file format that resembles a spreadsheet or database table in a text file.
So first, let's import two built-in Python modules: urllib and csv.
urllib is a module that allows Python to make http requests to URLs on the web to fetch HTML. It contains a submodule called request, and inside there we want a specific method called urlretrieve.
csv is a module that helps Python work with tabular data extracted from spreadsheets and databases.
End of explanation
"""

downloaded_file = "banklist.csv"
"""
Explanation: We're going to download a csv file. What should we name it?
End of explanation
"""

urlretrieve("https://s3.amazonaws.com/datanicar/banklist.csv", downloaded_file)
"""
Explanation: Now we need a URL to a CSV file out on the Internet. For this project we're going to download a CSV file that the FDIC compiles of all the banks that have failed since October 1, 2000. The file we want is at https://s3.amazonaws.com/datanicar/banklist.csv.
If the internet is uncooperative, we can also use the local version of the file in the project1/data/ directory, and structure our code a little differently.
To do this, we use urlretrieve from the urllib module to download the file and save it to our project folder. For our purposes starting out, think of it as a way to download a file from the Internet.
urlretrieve takes two arguments to download a file. First we specify our target URL, and then we give it a name for the file we want to create.
End of explanation
"""

# open the downloaded file
with open(downloaded_file, 'r') as file:

    # use python's csv reader to access the contents
    # and create an object that represents the data
    csv_data = csv.reader(file)

    # loop through each row of the csv
    for row in csv_data:

        # and print the row to the terminal
        print(row)
        # print the data type to the terminal
        print(type(row))
        # print the length of the row to the terminal
        print(len(row))
        # flag any row that does not have the expected 7 fields
        print(len(row) != 7)
"""
Explanation: The output shows we successfully downloaded the file and saved it. Now we want to go ahead and use python's csv reader to open the file and see what is inside. We specify the name of the file we just created, and we add a setting so we can open and read almost any CSV file.
End of explanation
"""
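As a small extension of the same reader pattern, we can count how many failed banks the file lists (we assume the first row is the header, which is how this CSV is laid out):

```python
# Count the data rows -- every row after the header is one failed bank
with open(downloaded_file, 'r') as file:
    csv_data = csv.reader(file)
    header = next(csv_data)                  # grab the header row first
    bank_count = sum(1 for row in csv_data)  # then count the remaining rows

print(header)
print(bank_count, "failed banks in the file")
```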
Yangqing/caffe2
caffe2/python/tutorials/Control_Ops.ipynb
apache-2.0
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import workspace from caffe2.python.core import Plan, to_execution_step, Net from caffe2.python.net_builder import ops, NetBuilder """ Explanation: Control Ops Tutorial In this tutorial we show how to use control flow operators in Caffe2 and give some details about their underlying implementations. Conditional Execution Using NetBuilder Let's start with conditional operator. We will demonstrate how to use it in two Caffe2 APIs used for building nets: NetBuilder and brew. End of explanation """ with NetBuilder() as nb: # Define our constants ops.Const(0.0, blob_out="zero") ops.Const(1.0, blob_out="one") ops.Const(0.5, blob_out="x") ops.Const(0.0, blob_out="y") # Define our conditional sequence with ops.IfNet(ops.GT(["x", "zero"])): ops.Copy("one", "y") with ops.Else(): ops.Copy("zero", "y") """ Explanation: In the first example, we define several blobs and then use the 'If' operator to set the value of one of them conditionally depending on values of other blobs. The pseudocode for the conditional examples we will implement is as follows: if (x &gt; 0): y = 1 else: y = 0 End of explanation """ # Initialize a Plan plan = Plan('if_net_test') # Add the NetBuilder definition above to the Plan plan.AddStep(to_execution_step(nb)) # Initialize workspace for blobs ws = workspace.C.Workspace() # Run the Plan ws.run(plan) # Fetch some blobs and print print('x = ', ws.blobs["x"].fetch()) print('y = ', ws.blobs["y"].fetch()) """ Explanation: Note the usage of NetBuilder's ops.IfNet and ops.Else calls: ops.IfNet accepts a blob reference or blob name as an input, it expects an input blob to have a scalar value convertible to bool. Note that the optional ops.Else is at the same level as ops.IfNet and immediately follows the corresponding ops.IfNet. Let's execute the resulting net (execution step) and check the values of the blobs. Note that since x = 0.5, which is indeed greater than 0, we should expect y = 1 after execution. End of explanation """ with NetBuilder() as nb: # Define our constants ops.Const(0.0, blob_out="zero") ops.Const(1.0, blob_out="one") ops.Const(2.0, blob_out="two") ops.Const(1.5, blob_out="x") ops.Const(0.0, blob_out="y") # Define our conditional sequence with ops.IfNet(ops.GT(["x", "zero"])): ops.Copy("x", "local_blob") # create local_blob using Copy -- this is not visible outside of this block with ops.IfNet(ops.LE(["local_blob", "one"])): ops.Copy("one", "y") with ops.Else(): ops.Copy("two", "y") with ops.Else(): ops.Copy("zero", "y") # Note that using local_blob would fail here because it is outside of the block in # which it was created """ Explanation: Before going further, it's important to understand the semantics of execution blocks ('then' and 'else' branches in the example above), i.e. handling of reads and writes into global (defined outside of the block) and local (defined inside the block) blobs. NetBuilder uses the following set of rules: In NetBuilder's syntax, a blob's declaration and definition occur at the same time - we define an operator which writes its output into a blob with a given name. NetBuilder keeps track of all operators seen before the current execution point in the same block and up the stack in parent blocks. If an operator writes into a previously unseen blob, it creates a local blob that is visible only within the current block and the subsequent children blocks. 
Local blobs created in a given block are effectively deleted when we exit the block. Any write into previously defined (in the same block or in the parent blocks) blob updates an originally created blob and does not result in the redefinition of a blob. An operator's input blobs have to be defined earlier in the same block or in the stack of parent blocks. As a result, in order to see the values computed by a block after its execution, the blobs of interest have to be defined outside of the block. This rule effectively forces visible blobs to always be correctly initialized. To illustrate concepts of block semantics and provide a more sophisticated example, let's consider the following net: End of explanation """ # Initialize a Plan plan = Plan('if_net_test_2') # Add the NetBuilder definition above to the Plan plan.AddStep(to_execution_step(nb)) # Initialize workspace for blobs ws = workspace.C.Workspace() # Run the Plan ws.run(plan) # Fetch some blobs and print print('x = ', ws.blobs["x"].fetch()) print('y = ', ws.blobs["y"].fetch()) # Assert that the local_blob does not exist in the workspace # It should have been destroyed because of its locality assert "local_blob" not in ws.blobs """ Explanation: When we execute this, we expect that y == 2.0, and that local_blob will not exist in the workspace. End of explanation """ from caffe2.python import brew from caffe2.python.workspace import FeedBlob, RunNetOnce, FetchBlob from caffe2.python.model_helper import ModelHelper """ Explanation: Conditional Execution Using Brew Module Brew is another Caffe2 interface used to construct nets. Unlike NetBuilder, brew does not track the hierarchy of blocks and, as a result, we need to specify which blobs are considered local and which blobs are considered global when passing 'then' and 'else' models to an API call. Let's start by importing the necessary items for the brew API. End of explanation """ # Initialize model, which will represent our main conditional model for this test model = ModelHelper(name="test_if_model") # Add variables and constants to our conditional model; notice how we add them to the param_init_net model.param_init_net.ConstantFill([], ["zero"], shape=[1], value=0.0) model.param_init_net.ConstantFill([], ["one"], shape=[1], value=1.0) model.param_init_net.ConstantFill([], ["x"], shape=[1], value=0.5) model.param_init_net.ConstantFill([], ["y"], shape=[1], value=0.0) # Add Greater Than (GT) conditional operator to our model # which checks if "x" > "zero", and outputs the result in the "cond" blob model.param_init_net.GT(["x", "zero"], "cond") # Initialize a then_model, and add an operator which we will set to be # executed if the conditional model returns True then_model = ModelHelper(name="then_test_model") then_model.net.Copy("one", "y") # Initialize an else_model, and add an operator which we will set to be # executed if the conditional model returns False else_model = ModelHelper(name="else_test_model") else_model.net.Copy("zero", "y") # Use the brew module's handy cond operator to facilitate the construction of the operator graph brew.cond( model=model, # main conditional model cond_blob="cond", # blob with condition value external_blobs=["x", "y", "zero", "one"], # data blobs used in execution of conditional then_model=then_model, # pass then_model else_model=else_model) # pass else_model """ Explanation: We will use the Caffe2's ModelHelper class to define and represent our models, as well as contain the parameter information about the models. 
Note that a ModelHelper object has two underlying nets: (1) param_init_net: Responsible for parameter initialization (2) net: Contains the main network definition, i.e. the graph of operators that the data flows through Note that ModelHelper is similar to NetBuilder in that we define the operator graph first, and actually run later. With that said, let's define some models to act as conditional elements, and use the brew module to form the conditional statement that we want to run. We will construct the same statement used in the first example above. End of explanation """ from caffe2.python import net_drawer from IPython import display graph = net_drawer.GetPydotGraph(model.net, rankdir="LR") display.Image(graph.create_png(), width=800) """ Explanation: Before we run the model, let's use Caffe2's graph visualization tool net_drawer to check if the operator graph makes sense. End of explanation """ # Run param_init_net once RunNetOnce(model.param_init_net) # Run main net (once in this case) RunNetOnce(model.net) # Fetch and examine some blobs print("x = ", FetchBlob("x")) print("y = ", FetchBlob("y")) """ Explanation: Now let's run the net! When using ModelHelper, we must first run the param_init_net to initialize paramaters, then we execute the main net. End of explanation """ with NetBuilder() as nb: # Define our variables ops.Const(0, blob_out="i") ops.Const(0, blob_out="y") # Define loop code and conditions with ops.WhileNet(): with ops.Condition(): ops.Add(["i", ops.Const(1)], ["i"]) ops.LE(["i", ops.Const(7)]) ops.Add(["i", "y"], ["y"]) """ Explanation: Loops Using NetBuilder Another important control flow operator is 'While', which allows repeated execution of a fragment of net. Let's consider NetBuilder's version first. The pseudocode for this example is: i = 0 y = 0 while (i &lt;= 7): y = i + y i += 1 End of explanation """ # Initialize a Plan plan = Plan('while_net_test') # Add the NetBuilder definition above to the Plan plan.AddStep(to_execution_step(nb)) # Initialize workspace for blobs ws = workspace.C.Workspace() # Run the Plan ws.run(plan) # Fetch blobs and print print("i = ", ws.blobs["i"].fetch()) print("y = ", ws.blobs["y"].fetch()) """ Explanation: As with the 'If' operator, standard block semantic rules apply. Note the usage of ops.Condition clause that should immediately follow ops.WhileNet and contains code that is executed before each iteration. The last operator in the condition clause is expected to have a single boolean output that determines whether the other iteration is executed. 
In the example above we increment the counter ("i") before each iteration and accumulate its values in "y" blob, the loop's body is executed 7 times, the resulting blob values: End of explanation """ # Initialize model, which will represent our main conditional model for this test model = ModelHelper(name="test_while_model") # Add variables and constants to our model model.param_init_net.ConstantFill([], ["i"], shape=[1], value=0) model.param_init_net.ConstantFill([], ["one"], shape=[1], value=1) model.param_init_net.ConstantFill([], ["seven"], shape=[1], value=7) model.param_init_net.ConstantFill([], ["y"], shape=[1], value=0) # Initialize a loop_model that represents the code to run inside of loop loop_model = ModelHelper(name="loop_test_model") loop_model.net.Add(["i", "y"], ["y"]) # Initialize cond_model that represents the conditional test that the loop # abides by, as well as the incrementation step cond_model = ModelHelper(name="cond_test_model") cond_model.net.Add(["i", "one"], "i") cond_model.net.LE(["i", "seven"], "cond") # Use brew's loop operator to facilitate the creation of the loop's operator graph brew.loop( model=model, # main model that contains data cond_blob="cond", # explicitly specifying condition blob external_blobs=["cond", "i", "one", "seven", "y"], # data blobs used in execution of the loop loop_model=loop_model, # pass loop_model cond_model=cond_model # pass condition model (optional) ) """ Explanation: Loops Using Brew Module Now let's take a look at how to replicate the loop above using the ModelHelper+brew interface. End of explanation """ graph = net_drawer.GetPydotGraph(model.net, rankdir="LR") display.Image(graph.create_png(), width=800) """ Explanation: Once again, let's visualize the net using the net_drawer. End of explanation """ RunNetOnce(model.param_init_net) RunNetOnce(model.net) print("i = ", FetchBlob("i")) print("y = ", FetchBlob("y")) """ Explanation: Finally, we'll run the param_init_net and net and print our final blob values. End of explanation """ import numpy as np # Feed blob called x, which is simply a 1-D numpy array [0.5] FeedBlob("x", np.array(0.5, dtype='float32')) # _use_control_ops=True forces NetBuilder to output single net as a result # x is external for NetBuilder, so we let nb know about it through initial_scope param with NetBuilder(_use_control_ops=True, initial_scope=["x"]) as nb: ops.Const(0.0, blob_out="zero") ops.Const(1.0, blob_out="one") ops.Const(4.0, blob_out="y") ops.Const(0.0, blob_out="z") with ops.IfNet(ops.GT(["x", "zero"])): ops.Pow("y", "z", exponent=2.0) with ops.Else(): ops.Pow("y", "z", exponent=3.0) # we should get a single net as output assert len(nb.get()) == 1, "Expected a single net produced" net = nb.get()[0] # add gradient operators for 'z' blob grad_map = net.AddGradientOperators(["z"]) """ Explanation: Backpropagation Both 'If' and 'While' operators support backpropagation. To illustrate how backpropagation with control ops work, let's consider the following examples in which we construct the operator graph using NetBuilder and obtain calculate gradients using the AddGradientOperators function. 
The first example shows the following conditional statement: x = 1-D numpy float array y = 4 z = 0 if (x &gt; 0): z = y^2 else: z = y^3 End of explanation """ # Run the net RunNetOnce(net) # Fetch blobs and print print("x = ", FetchBlob("x")) print("y = ", FetchBlob("y")) print("z = ", FetchBlob("z")) print("y_grad = ", FetchBlob("y_grad")) """ Explanation: In this case $$x = 0.5$$ $$z = y^2 = 4^2 = 16$$ We will fetch the blob y_grad, which was generated by the AddGradientOperators call above. This blob contains the gradient of blob z with respect to y. According to basic calculus: $$y_grad = \frac{\partial{z}}{\partial{y}}y^2 = 2y = 2(4) = 8$$ End of explanation """ # To re-run net with different input, simply feed new blob FeedBlob("x", np.array(-0.5, dtype='float32')) RunNetOnce(net) print("x = ", FetchBlob("x")) print("y = ", FetchBlob("y")) print("z = ", FetchBlob("z")) print("y_grad = ", FetchBlob("y_grad")) """ Explanation: Now, let's change value of blob "x" to -0.5 and rerun net: End of explanation """ with NetBuilder(_use_control_ops=True) as nb: # Define variables and constants ops.Copy(ops.Const(0), "i") ops.Copy(ops.Const(1), "one") ops.Copy(ops.Const(2), "two") ops.Copy(ops.Const(2.0), "x") ops.Copy(ops.Const(3.0), "y") ops.Copy(ops.Const(2.0), "z") # Define loop statement # Computes x^4, y^2, z^3 with ops.WhileNet(): with ops.Condition(): ops.Add(["i", "one"], "i") ops.LE(["i", "two"]) ops.Pow("x", "x", exponent=2.0) with ops.IfNet(ops.LT(["i", "two"])): ops.Pow("y", "y", exponent=2.0) with ops.Else(): ops.Pow("z", "z", exponent=3.0) # Sum s = x + y + z ops.Add(["x", "y"], "x_plus_y") ops.Add(["x_plus_y", "z"], "s") assert len(nb.get()) == 1, "Expected a single net produced" net = nb.get()[0] # Add gradient operators to output blob 's' grad_map = net.AddGradientOperators(["s"]) workspace.RunNetOnce(net) print("x = ", FetchBlob("x")) print("x_grad = ", FetchBlob("x_grad")) # derivative: 4x^3 print("y = ", FetchBlob("y")) print("y_grad = ", FetchBlob("y_grad")) # derivative: 2y print("z = ", FetchBlob("z")) print("z_grad = ", FetchBlob("z_grad")) # derivative: 3z^2 """ Explanation: The next and final example illustrates backpropagation on the following loop: x = 2 y = 3 z = 2 i = 0 while (i &lt;= 2): x = x^2 if (i &lt; 2): y = y^2 else: z = z^3 i += 1 s = x + y + z Note that this code essentially computes the sum of x^4 (by squaring x twice), y^2, and z^3. End of explanation """
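If you want to convince yourself that the reported gradients are correct, you can compare them with a plain-Python finite-difference estimate of the same computation. This is only a verification sketch and is independent of Caffe2:

```python
# s(x, y, z) re-implemented in plain Python, mirroring the while/if structure above
def s(x, y, z):
    for i in range(1, 3):        # i = 1, 2  (the loop body runs while i <= 2)
        x = x ** 2
        if i < 2:
            y = y ** 2
        else:
            z = z ** 3
    return x + y + z

eps = 1e-4
# ds/dx at the initial values (x=2, y=3, z=2); analytically 4 * 2**3 = 32
print((s(2.0 + eps, 3.0, 2.0) - s(2.0 - eps, 3.0, 2.0)) / (2 * eps))
```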
BrandonSmithJ/tensorflow-double-DQN
Double-DQN/tensorflow-deepq/notebooks/.ipynb_checkpoints/karpathy_game-checkpoint.ipynb
mit
g.plot_reward(smoothing=100) """ Explanation: Average Reward over time End of explanation """ g.__class__ = KarpathyGame np.set_printoptions(formatter={'float': (lambda x: '%.2f' % (x,))}) x = g.observe() new_shape = (x[:-2].shape[0]//g.eye_observation_size, g.eye_observation_size) print(x[:-2].reshape(new_shape)) print(x[-2:]) g.to_html() """ Explanation: Visualizing what the agent is seeing Starting with the ray pointing all the way right, we have one row per ray in clockwise order. The numbers for each ray are the following: - first three numbers are normalized distances to the closest visible (intersecting with the ray) object. If no object is visible then all of them are $1$. If there's many objects in sight, then only the closest one is visible. The numbers represent distance to friend, enemy and wall in order. - the last two numbers represent the speed of moving object (x and y components). Speed of wall is ... zero. Finally the last two numbers in the representation correspond to speed of the hero. End of explanation """
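The reshaping above can also be written so that the number of rays is derived from the observation itself, which makes the decoding step self-describing. This is just a restatement of the cell above rather than new functionality:

```python
# Decode an observation into one row per ray plus the hero's own speed
obs = g.observe()
num_rays = (obs.shape[0] - 2) // g.eye_observation_size
rays = obs[:-2].reshape((num_rays, g.eye_observation_size))
hero_speed = obs[-2:]
print(rays.shape, hero_speed)
```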
treasure-data/pandas-td
doc/magic.ipynb
apache-2.0
%load_ext pandas_td.ipython """ Explanation: Magic functions You can enable magic functions by loading pandas_td.ipython: End of explanation """ c = get_config() c.InteractiveShellApp.extensions = [ 'pandas_td.ipython', ] """ Explanation: It can be loaded automatically by the following configuration in "~/.ipython/profile_default/ipython_config.py": End of explanation """ %td_databases """ Explanation: After loading the extension, type "%td" and press TAB to list magic functions: List functions %td_databases returns the list of databases: End of explanation """ %td_tables sample """ Explanation: %td_tables returns the list of tables: End of explanation """ %td_jobs """ Explanation: %td_jobs returns the list of recently executed jobs: End of explanation """ %td_use sample_datasets """ Explanation: Use database %td_use is a special function that has side effects. First, it pushes table names into the current namespace: End of explanation """ nasdaq """ Explanation: By printing a table name, you can describe column names: End of explanation """ %%td_presto select count(1) cnt from nasdaq """ Explanation: Tab completion is also supported: As the second effect of %td_use, it implicitly changes "default database", which is used when you write queries without database names. Query functions %%td_hive, %%td_pig, and %%td_presto are cell magic functions that run queries: End of explanation """ %%td_presto -o df select count(1) cnt from nasdaq df """ Explanation: The result of the query can be stored in a variable by -o: End of explanation """ %%td_presto -O './output.csv' select count(1) cnt from nasdaq """ Explanation: Or you can save the result into a file by -O: End of explanation """ start = '2010-01-01' end = '2011-01-01' %%td_presto select count(1) cnt from nasdaq where td_time_range(time, '{start}', '{end}') """ Explanation: Python-style variable substition is supported: End of explanation """ %%td_presto -n select count(1) cnt from nasdaq where td_time_range(time, '{start}', '{end}') """ Explanation: You can preview the actual query by --dry-run (or -n): End of explanation """ %%td_presto select -- Time-series index (yearly) td_date_trunc('year', time) time, -- Same as above -- td_time_format(time, 'yyyy-01-01') time, count(1) cnt from nasdaq group by 1 limit 3 """ Explanation: Time-series index With magic functions, "time" column is converted into time-series index automatically. You can use td_date_trunc() or td_time_format() in combination with GROUP BY for aggregation: End of explanation """ %matplotlib inline %%td_presto --plot select -- x-axis td_date_trunc('year', time) time, -- y-axis min(low) low, max(high) high from nasdaq where symbol = 'AAPL' group by 1 """ Explanation: Plotting --plot is a convenient option for plotting. The first column represents x-axis. 
Other columns represent y-axis: End of explanation """ %%td_presto -o df select -- daily summary td_date_trunc('day', time) time, min(low) low, max(high) high, sum(volume) volume from nasdaq where symbol = 'AAPL' group by 1 # Use resample for local calculation df['high'].resample('1m', how='max').plot() """ Explanation: In practice, however, it is more efficient to execute rough calculation on the server side and store the result into a variable for further analysis: End of explanation """ %%td_presto --plot select -- x-axis td_date_trunc('month', time) time, -- columns symbol, -- y-axis avg(close) close from nasdaq where symbol in ('AAPL', 'MSFT') group by 1, 2 """ Explanation: --plot provides a shortcut way of plotting "pivot charts", as a combination of pivot() and plot(). If the query result contains non-numeric columns, or column names ending with "_id", they are used as columns parameter: End of explanation """ %%td_presto --pivot select td_date_trunc('year', time) time, symbol, avg(close) close from nasdaq where td_time_range(time, '2010', '2015') and symbol like 'AA%' group by 1, 2 """ Explanation: Pivot tables --pivot creates a pivot table from the result of query. Like --plot, the first column represents index and other non-numeric columns represents new columns: End of explanation """ %%td_presto -v --plot select td_date_trunc('year', time) time, sum(volume) volume from nasdaq group by 1 """ Explanation: Verbose output By passing -v (--verbose) option, you can print pseudo Python code that was executed by the magic function. End of explanation """
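Everything the magics do can also be done programmatically, which is handy outside of IPython. A rough non-magic equivalent of the simple count query is sketched below; the function names follow the pandas-td package documentation, so double-check them against your installed version:

```python
# Programmatic (non-magic) usage of pandas-td
import pandas_td as td

con = td.connect()                                            # reads the API key from the environment
engine = td.create_engine('presto:sample_datasets', con=con)  # same engine/database as %td_use above
df = td.read_td("select count(1) cnt from nasdaq", engine)
print(df)
```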
ComputationalModeling/spring-2017-danielak
past-semesters/spring_2016/day-by-day/day02-modeling-cold-spread/Day 2 pre-class assignment.ipynb
agpl-3.0
# The command below this comment imports the functionality that we need to display # YouTube videos in a Jupyter Notebook. You need to run this cell before you # run ANY of the YouTube videos. from IPython.display import YouTubeVideo """ Explanation: Day 2 pre-class assignment Goals for today's pre-class assignment Make sure that you can get a Jupyter notebook up and running! Learn about algorithms, computer programs, and their relationship To devise and think about the components of an algorithm for a simple task Learn about Python, IPython, and IPython notebooks and understand why we're using it in class. Assignment instructions Pre-class assignments will be composed of a combination of videos, text to read, and small assignments. The goal of these assignments is to prepare you for class the following day. You should watch the videos and read the text, and then do the assigned work. You will be graded on making a good-faith effort, not on correctness! To make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' or 'return' key (you'll have to do this to get movies to run). To edit a cell (to add answers, for example) you double-click, add your text, and then enter it by holding down 'shift' and pressing 'enter'. This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 2. Submission instructions can be found at the end of the notebook. End of explanation """ # the command below this comment actually displays a specific YouTube video, # with a given width and height. You can watch the video in full-screen (much higher # resolution) mode by clicking the little box in the bottom-right corner of the video. YouTubeVideo("jT0KZ849fak",width=640,height=360) """ Explanation: Algorithms End of explanation """ YouTubeVideo("L03BzGmLUUE",width=640,height=360) """ Explanation: Further reading on algorithms and computer programs note: This isn't mandatory, but might be helpful! Wikipedia page on algorithms Wikipedia page on computer programs Assignment: Algorithms and computer programs Question 1: Come up with an algorithm for a simple task that you do every day (i.e., putting on your shoes). What are the steps of this algorithm? Put your answer to Question 1 here! (double-click on this text to edit this cell, and hit shift+enter to save the text) Question 2: Think about the algorithm you devised in the previous question and the video you just watched. Identify the various parts of your algorithm, as defined by the video. Put your answer to question 2 here! Python, IPython, and IPython notebooks End of explanation """
bradhowes/keystrokecountdown
src/articles/poisson/index.ipynb
mit
N = 10000.0 T = 2.0 lmbda = N / T / 60 / 60 lmbda """ Explanation: Introduction We wish to simulate a stochastic process where there are N users of our application that we contend will use our app within a 2-hour time period. To perform the simulation, we would like to have our users attempt to use the application at random times such that the distribution of the intervals between events accurately reflects what one might see in the real world. End of explanation """
count = int(1E6) x = np.arange(count) y = -np.log(1.0 - np.random.random_sample(len(x))) / lmbda np.average(y) y[:10] """ Explanation: For the arrival rate, let's set $N = 10,000$ users, and our time interval $T = 2.0$ hours. From that, we can calculate an arrival rate of $\lambda = N / T = 5,000$ per hour or $\lambda = 1.388$ users / second. Now for the times. Starting at $T_0$ we have no arrivals, but as time passes the probability of an event increases, until it reaches a near-certainty. If we randomly choose a value $U$ between 0 and 1, then we can calculate a random time interval as $$ I_n = \frac{-\ln U}{\lambda} $$ Let's validate this by generating a large sample of intervals and taking their average. End of explanation """
plt.hist(y, 10) plt.show() """ Explanation: So with a rate of $\lambda = 1.388$, new events would arrive on average $I = 0.72$ seconds apart (or $1 / \lambda$). We can plot the distribution of these random times, where we should see an exponential distribution. End of explanation """
from random import expovariate sum([expovariate(lmbda) for i in range(count)])/count """ Explanation: Random Generation Python contains the random.expovariate method which should give us similar intervals. Let's check by averaging a large number of them. End of explanation """
y = np.random.exponential(1.0/lmbda, count) np.cumsum(y)[:10] np.average(y) """ Explanation: For completeness, we can also use NumPy's random.exponential method if we pass in $1 / \lambda$. End of explanation """
x = range(count) y = [expovariate(lmbda) for i in x] plt.hist(y, 10) plt.show() """ Explanation: Again, this is in agreement with our expected average interval. Note the numbers (and histogram plots) won't match exactly as we are dealing with random time intervals. End of explanation """
intervals = [expovariate(lmbda) for i in range(1000)] timestamps = [0.0] timestamp = 0.0 for t in intervals: timestamp += t timestamps.append(timestamp) timestamps[:10] deltas = [y - x for x, y in zip(timestamps, timestamps[1:])] deltas[:10] sum(deltas) / len(deltas) deltas = [y - x for x, y in zip(timestamps, timestamps[1:])] plt.figure(figsize=(16, 4)) plt.plot(deltas, 'r+') plt.show() """ Explanation: Event Times For a timeline of events, we can simply generate a sequence of independent intervals, and then generate a running sum of them for absolute timestamps. End of explanation """
limit = T * 60 * 60 counts = [] for iter in range(100): count = 0 timestamp = 0.0 while timestamp < limit: timestamp += expovariate(lmbda) count += 1 counts.append(count) sum(counts) / len(counts) """ Explanation: Here we can readily see how the time between events is distributed, with most of the deltas below 1.0 and some fairly large outliers. This is to be expected as $T_n$ will always be greater than $T_{n-1}$ but perhaps not by much. Finally, let's generate $T = 2.0$ hours worth of timestamps and see if we have close to our desired $N$ value. We will do this 100 times and then average the counts. We should have a value that is very close to $N = 10,000$. End of explanation """
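As a cross-check on the timeline above, the same inverse-transform idea can be wrapped in a small helper that builds one full 2-hour timeline and counts the events. This is only a sketch: the function name is new here, it reuses lmbda and T from the cells above, it assumes NumPy's default_rng generator (NumPy 1.17 or later), and the count of roughly N = 10,000 and mean gap of roughly 0.72 s are statistical averages, not exact results.

import numpy as np

def simulate_timeline(rate_per_sec, duration_sec, rng=None):
    # Generate event times on [0, duration_sec) using I_n = -ln(U) / lambda
    rng = np.random.default_rng() if rng is None else rng
    times = []
    t = 0.0
    while True:
        t += -np.log(1.0 - rng.random()) / rate_per_sec
        if t >= duration_sec:
            break
        times.append(t)
    return np.array(times)

events = simulate_timeline(lmbda, T * 60 * 60)
print(len(events))             # should hover around N = 10,000
print(np.diff(events).mean())  # should hover around 1/lambda, about 0.72 s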
TheOregonian/articles
air_quality/air_quality.ipynb
mit
df_list = pd.read_html( 'https://en.wikipedia.org/wiki/Air_quality_index', header=0) aqi_df = df_list[14].drop(0) aqi_df[['min','max']] = aqi_df['AQI'].str.split('-', 1, expand=True) aqi_df.columns aqi_df.rename(columns={'O3 (ppb).1': 'O3 (ppb) 1 hour', 'AQI.1': 'Category'}, inplace=True) # The final value for "Category" should also be "Hazardous" aqi_df.fillna(method='ffill', inplace=True) aqi_df """ Explanation: Computing AQI End of explanation """ files = glob.glob(os.path.join('input/', '*.csv')) states = ['Oregon', 'Washington', 'California'] def state_select(file, list_of_states): df = pd.read_csv( file, dtype={'State Code': str, 'County Code': str}, parse_dates=['Date']) df_subset = df.loc[df['State Name'].isin(list_of_states)] return df_subset df = pd.concat(state_select(file, states) for file in files) # Check date of most recent measurement df.groupby(by=['State Name'])['Date'].max() """ Explanation: Oregon’s index is based on three pollutants regulated by the federal Clean Air Act: ground-level ozone, particle pollution and nitrogen dioxide.<sup>2</sup> Ozone: O<sub>3</sub> (ppb) Particle pollution: PM<sub>2.5</sub> (µg/m<sup>3</sup>) Nitrogen dioxide: NO<sub>2</sub> (ppb) Note: AQI data typically contains both the concentration and the AQI for a particular pollutant or the AQI for a 24-hour period and the "Defining parameter," i.e. the pollutant whose value drove the AQI for that day. To do: Use the above data to define functions to compute AQI and category whenever that data is absent. County data Someday, when I've spent more time reading up on requests and SSL, I'll pull this data from the EPA website directly. For now, I'm going to demonstrate what to do when you've downloaded and unarchived the files locally (assuming you did so in the same directory)! In this scenario, I pulled "AQI by County" at https://aqs.epa.gov/aqsweb/airdata/download_files.html#Annual, but the following (with some modifications for what you're filtering on) will work for any of the files at this URL. End of explanation """ df.head() df.to_csv('output/aqi_by_county_west.csv') """ Explanation: This is how I discovered that 2018 data is not yet available for Oregon! The file I included is cobbled together from many datasets and not something I generated with this code. However, this code will work for anyone! For the sake of space, I only included six years of CSVs in this repo. But you can add more, if you like. End of explanation """
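For the "To do" above (computing an AQI and category when only a concentration is available), EPA's index uses a piecewise-linear interpolation within the breakpoint row that contains the concentration: AQI = (I_hi - I_lo) / (C_hi - C_lo) * (C - C_lo) + I_lo. One possible shape for that helper is sketched below; the two PM2.5 rows are illustrative only, and in practice the breakpoints would be taken from the scraped aqi_df table.

def concentration_to_aqi(conc, breakpoints):
    # breakpoints: list of (C_lo, C_hi, I_lo, I_hi, category) rows
    for c_lo, c_hi, i_lo, i_hi, category in breakpoints:
        if c_lo <= conc <= c_hi:
            aqi = (i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo
            return round(aqi), category
    return None, None  # concentration falls outside the table

# Illustrative PM2.5 rows only -- the real rows should come from aqi_df above.
pm25_breakpoints = [
    (0.0, 12.0, 0, 50, 'Good'),
    (12.1, 35.4, 51, 100, 'Moderate'),
]
print(concentration_to_aqi(9.5, pm25_breakpoints))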
UCSBarchlab/PyRTL
ipynb-examples/example6-memory.ipynb
bsd-3-clause
import random import pyrtl from pyrtl import * pyrtl.reset_working_block() """ Explanation: Example 6: Memories in PyRTL One important part of many circuits is the ability to have data in locations that are persistent over clock cycles. In previous examples, we have shown the register wirevector, which is great for storing a small amount of data for a single clock cycle. However, PyRTL also has other ways to store data, namely memories and ROMs. End of explanation """ mem1 = MemBlock(bitwidth=32, addrwidth=3, name='mem') mem2 = MemBlock(32, 3, 'mem') """ Explanation: Part 1: Memories Memories is a way to store multiple sets of data for extended periods of time. Below we will make two instances of the same memory to test using that the same thing happens to two different memories using the same inputs End of explanation """ waddr = Input(3, 'waddr') count = Register(3, 'count') """ Explanation: One memory will receive the write address from an input, the other, a register End of explanation """ wdata = Input(32, 'wdata') we = Input(1, 'we') raddr = Input(3, 'raddr') """ Explanation: In order to make sure that the two memories take the same inputs, we will use same write data, write enable, and read addr values End of explanation """ rdata1 = Output(32, 'rdata1') rdata2 = Output(32, 'rdata2') """ Explanation: We will be grabbing data from each of the two memory blocks so we need two different output wires to see the results End of explanation """ rdata1 <<= mem1[raddr] rdata2 <<= mem2[raddr] """ Explanation: Ports The way of sending data to and from a memory block is through the use of a port. There are two types of ports, read ports and write ports. Each memory can have multiple read and write ports, but it doesn't make sense for one to have either 0 read ports or 0 write ports. Below, we will make one read port for each of the two memories End of explanation """ WE = MemBlock.EnabledWrite mem1[waddr] <<= WE(wdata, we) # Uses input wire mem2[count] <<= WE(wdata, we) # Uses count register """ Explanation: Write Enable Bit For the write ports, we will do something different. Sometimes you don't want the memories to always accept the data and address on the write port. The write enable bit allows us to disable the write port as long as the value is zero, giving us complete control over whether to accept the data. End of explanation """ count.next <<= select(we, falsecase=count, truecase=count + 1) """ Explanation: Now we will finish up the circuit. We will increment count register on each write. End of explanation """ validate = Output(1, 'validate') validate <<= waddr == count """ Explanation: We will also verify that the two write addresses are always the same End of explanation """ simvals = { 'we': "00111111110000000000000000", 'waddr': "00012345670000000000000000", 'wdata': "00123456789990000000000000", 'raddr': "00000000000000000123456777" } """ Explanation: Now it is time to simulate the circuit. First we will set up the values for all of the inputs. 
Write 1 through 8 into the eight registers, then read back out End of explanation """ mem1_init = {addr: 9 for addr in range(8)} mem2_init = {addr: 9 for addr in range(8)} """ Explanation: For simulation purposes, we can give the spots in memory an initial value note that in the actual circuit, the values are initially undefined below, we are building the data with which to initialize memory End of explanation """ memvals = {mem1: mem1_init, mem2: mem2_init} """ Explanation: The simulation only recognizes initial values of memories when they are in a dictionary composing of memory : mem_values pairs. End of explanation """ sim_trace = pyrtl.SimulationTrace() sim = pyrtl.Simulation(tracer=sim_trace, memory_value_map=memvals) for cycle in range(len(simvals['we'])): sim.step({k: int(v[cycle]) for k, v in simvals.items()}) sim_trace.render_trace() # cleanup in preparation for the rom example pyrtl.reset_working_block() """ Explanation: Now run the simulation like before. Note the adding of the memory value map. End of explanation """ def rom_data_func(address): return 31 - 2 * address rom_data_array = [rom_data_func(a) for a in range(16)] """ Explanation: Part 2: ROMs ROMs are another type of memory. Unlike normal memories, ROMs are read only and therefore only have read ports. They are used to store predefined data. There are two different ways to define the data stored in the ROMs either through passing a function or though a list or tuple. End of explanation """ # FIXME: rework how memassigns work to account for more read ports rom1 = RomBlock(bitwidth=5, addrwidth=4, romdata=rom_data_func, max_read_ports=10) rom2 = RomBlock(5, 4, rom_data_array, max_read_ports=10) rom_add_1, rom_add_2 = Input(4, "rom_in"), Input(4, "rom_in_2") rom_out_1, rom_out_2 = Output(5, "rom_out_1"), Output(5, "rom_out_2") rom_out_3, cmp_out = Output(5, "rom_out_3"), Output(1, "cmp_out") """ Explanation: Now we will make the ROM blocks. ROM blocks are similar to memory blocks but because they are read only, they also need to be passed in a set of data to be initialized as. End of explanation """ temp1 = rom1[rom_add_1] temp2 = rom2[rom_add_1] rom_out_3 <<= rom2[rom_add_2] # now we will connect the rest of the outputs together rom_out_1 <<= temp1 rom_out_2 <<= temp2 cmp_out <<= temp1 == temp2 """ Explanation: Because output wirevectors cannot be used as the source for other nets, in order to use the rom outputs in two different places, we must instead assign them to a temporary variable. End of explanation """ random.seed(4839483) """ Explanation: One of the things that is useful to have is repeatability, However, we also don't want the hassle of typing out a set of values to test. One solution in this case is to seed random and then pulling out 'random' numbers from it. End of explanation """ simvals = { 'rom_in': [1, 11, 4, 2, 7, 8, 2, 4, 5, 13, 15, 3, 4, 4, 4, 8, 12, 13, 2, 1], 'rom_in_2': [random.randrange(0, 16) for i in range(20)] } """ Explanation: Now we will create a new set of simulation values. In this case, since we want to use simulation values that are larger than 9 we cannot use the trick used in previous examples to parse values. The two ways we are doing it below are both valid ways of making larger values End of explanation """ sim_trace = pyrtl.SimulationTrace() sim = pyrtl.Simulation(tracer=sim_trace) for cycle in range(len(simvals['rom_in'])): sim.step({k: v[cycle] for k, v in simvals.items()}) sim_trace.render_trace() """ Explanation: Now run the simulation like before. 
Note that for ROMs we do not supply a memory value map, because a ROM's contents are fixed when the block is defined. End of explanation """
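A small cross-check that stays in plain Python can be added here: compute what the ROM reads should be for the stimulus above and compare those numbers against the rendered trace. This reuses rom_data_func, rom_data_array, and simvals from the cells above and is just one way to sanity-check the wiring.

# Expected rom_out_1 / rom_out_2 values for the rom_in stimulus, computed directly
expected_rom_out = [rom_data_func(addr) for addr in simvals['rom_in']]
print(expected_rom_out)

# The two ROM definitions hold identical data, so cmp_out should stay 1 in the trace
assert [rom_data_func(a) for a in range(16)] == rom_data_array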
tbarrongh/cosc-learning-labs
src/notebook/03_interface_startup.ipynb
apache-2.0
help('learning_lab.03_interface_startup') """ Explanation: COSC Learning Lab 03_interface_startup.py Related Scripts: * 03_interface_shutdown.py * 03_interface_configuration.py Table of Contents Table of Contents Documentation Implementation Execution HTTP Documentation End of explanation """ from importlib import import_module script = import_module('learning_lab.03_interface_startup') from inspect import getsource print(getsource(script.main)) print(getsource(script.demonstrate)) """ Explanation: Implementation End of explanation """ run ../learning_lab/03_interface_startup.py """ Explanation: Execution End of explanation """ from basics.odl_http import http_history from basics.http import http_history_to_html from IPython.core.display import HTML HTML(http_history_to_html(http_history())) """ Explanation: HTTP End of explanation """
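The import_module/getsource pattern used above is generic, so the same two calls can be wrapped in a small helper and pointed at the related scripts listed at the top. The attribute name is an assumption (the startup script above exposes main and demonstrate), so treat this as a sketch.

from importlib import import_module
from inspect import getsource

def show_source(module_name, attr='main'):
    # Print the source of one function from any importable learning_lab script
    module = import_module(module_name)
    print(getsource(getattr(module, attr)))

# For example, the related shutdown script could be inspected the same way:
# show_source('learning_lab.03_interface_shutdown')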
jepegit/cellpy
examples/jupyter notebooks/cellpy_batch_processing.ipynb
mit
import numpy as np import matplotlib.pyplot as plt import cellpy from cellpy import prms from cellpy import prmreader from cellpy.utils import batch %matplotlib inline ## Uncomment this and run for checking your cellpy parameters. # prmreader.info() """ Explanation: Notebook for cellpy batch processing You can fill inn the MarkDown cells (the cells without "numbering") by double-clicking them. Also remember, press shift + enter to execute a cell. A couple of useful links: - How to write MarkDown - Jupyter notebooks - cellpy This notebook uses the following packages python >= 3.6 cellpy >= 0.3.3 pandas numpy matplotlib bokeh pyviz (holoviews) 0. Setting up things properly 0.1 Installing cellpy For this to work, you will have to have a version of cellpy satisfying the criteria in the paragraph above. You might have to do a pre-release install to get it or clone the github repository and install in developer-mode. Installing a pre-release with pip. bash pip install --upgrade --pre cellpy (if this is the first time you install cellpy, you can skip the --upgrade option) Installing after cloning using pip in developer-mode. Note that you have to be in the directory where you have put the cellpy package (where the setup.py file is), if not, using . as argument will not work and you will have to provide the full path to the setup.py file): bash pip install -e . 0.2 Make sure you have a properly working config file For cellpy to find stuff, it needs to know where to look. A config file exists for this purpose. It is typically located in your home directory (for mac and linux) or in your documents directory (for Windows) and has a name on this form (replacing "username" with your real username): .cellpy_prms_username.conf The file format is YAML (be aware that it cares about white spaces). The most important settings for this notebook are probably the Paths. Make sure they make sense (and that both the paths and the db_path filename exist) and edit it if necessary. Here is how a typical file (at least the top of it) looks like: ```yaml Paths: cellpydatadir: cellpy_data/cellpyfiles db_filename: cellpy_db.xlsx db_path: cellpy_data/db filelogdir: cellpy_data/logs outdatadir: cellpy_data/out rawdatadir: cellpy_data/raw examplesdir: cellpy_data/examples notebookdir: cellpy_data/notebooks batchfiledir: cellpy_data/batchfiles FileNames: file_name_format: YYYYMMDD_[NAME]EEE_CC_TT_RR Db: db_type: simple_excel_reader db_table_name: db_table db_header_row: 0 db_unit_row: 1 db_data_start_row: 2 ... ``` 0.3 Database file This notebook uses the cellpy batch utility. For it to work properly (or at all) you will have to provide it with a database. You can choose to implement a database and a loader your self. Currently, cellpy ships with a very simple database solution that hardly justifies its name as a database. It reads an excel-file where the first row acts as column headers, the second provides the type (e.g. string, bool, etc), and the rest provides the necessary information for each of the cells (one row pr. cell). A sample excel file ("db-file") is provided with this example. You will need fill inn values manually, one row for each cell you want to load. Then you will have to put it in the database folder (as defined in your config file where it says db_file: in the Paths-section). The name of the file must also be the same as defined in the config-file (db_filename:, i.e cellpy_db.xlsx in the example config file snippet above). 
When cellpy reads the file, it uses the batch column (see below) to select which rows (i.e. cells) to load. For example, if the "b01" batch column is the one you tell cellpy to use and you provide it with the name "casandras_experiment", it will only select the rows that has "casandras_experiment" in the "b01" column. You provide cellpy with the "lookup" name when you issue the batch.init command, for example: python b = batch.init("casandras_experiment", "cool_project", batch_col="b01") You must always have the columns colored green filled out. And make sure that the id column (the first one in the example xlsx file) has a unique integer for each row (it is used as a "key" when looking up stuff from the file). 0.4 Files to read Make sure that the names of your experiment-files (for example your .res files) are on the form date_something_that_describes_the_cell.res because this is the name-format supported at the moment (this is not strictly true, but just to be on the safe side...). OK, thats all for now. Have a look at the source code in the github repository or 1. Key information about the current experiment Experimental-id: xxx Short-name: xxx Project: project name By: your name Date: xx.xx.xxxx 2. Short summary of the experiment before processing It is often helpful to formulate what you wanted to achieve with your experiment before actually going into depth of the data. I believe that it does not make you "biased" when processing your data, but instead sharpens your mind and motivates you to look more closely on your results. I might be wrong, off course. Then just skip filling in this part. Main purpose (State the main hypothesis for the current set of experiment) Expected outcome (What do you expect to find out? What kind of tests did you perform?) Special considerations (State if there are any special considerations for this experiment) 3. Processing data Setting up everything End of explanation """ # Please fill in here project = "xxx" name = "xxx" batch_col = "b01" """ Explanation: Creating pages and initialise the cellpy batch object If you need to create Journal Pages, please provide appropriate names for the project and the experiment to allow cellpy to build the pages. End of explanation """ print(" INITIALISATION OF BATCH ".center(80, "=")) b = batch.init(name, project, batch_col=batch_col) """ Explanation: Initialisation End of explanation """ # setting some prms b.experiment.export_raw = False # b.experiment.export_cycles = True # b.experiment.export_ica = True """ Explanation: Set parameters End of explanation """ # load info from your db and write the journal pages b.create_journal() # create the apropriate folders b.paginate() # load the data (and save .csv-files if you have set export_(raw/cycles/ica) = True) # (this might take some time) b.update() # collect summary-data (e.g. charge capacity vs cycle number) from each cell and export to .csv-file(s). b.combine_summaries() """ Explanation: Run End of explanation """ # Plot the charge capacity and the C.E. (and resistance) vs. cycle number (standard plot) b.plot_summaries() # Show the journal pages b.pages """ Explanation: 4. 
Looking at the data End of explanation """ import hvplot.pandas import holoviews as hv # hvplot does not like infinities s = b.summaries.replace([np.inf, -np.inf], np.nan) layout = ( s["coulombic_efficiency"].hvplot() + s["discharge_capacity"].hvplot() * s["charge_capacity"].hvplot() ) layout.cols(1) s["cumulated_coulombic_efficiency"].hvplot() """ Explanation: Using hvplot for plotting summaries End of explanation """ discharge_capacity = b.summaries.discharge_capacity charge_capacity = b.summaries.charge_capacity coulombic_efficiency = b.summaries.coulombic_efficiency ir_charge = b.summaries.ir_charge fig, (ax1, ax2) = plt.subplots(2, 1) ax1.plot(discharge_capacity) ax1.set_ylabel("capacity ") ax2.plot(ir_charge) ax2.set_xlabel("cycle") ax2.set_ylabel("resistance") """ Explanation: Looking closer at some summary-plots End of explanation """ # Lets check what cells we have cell_labels = b.experiment.cell_names cell_labels # OK, then I choose one of them label = cell_labels[0] data = b.experiment.data[label] """ Explanation: 5. Checking for more details per cycle A. pick the CellpyData object for one of the cells End of explanation """ cap = data.get_cap(categorical_column=True) cap.head() fig, ax = plt.subplots() ax.plot(cap.capacity, cap.voltage) ax.set_xlabel("capacity") ax.set_ylabel("voltage") c4, v4 = data.get_cap(cycle=4, method="forth-and-forth") c10, v10 = data.get_cap(cycle=10, method="forth-and-forth") fig, ax = plt.subplots() ax.set_xlabel("capacity") ax.set_ylabel("voltage") ax.plot(c4, v4, "ro", label="cycle 4") ax.plot(c10, v10, "bs", label="cycle 22") ax.legend(); """ Explanation: B. Get some voltage curves for some cycles and plot them The method get_cap can be used to extract voltage curves. End of explanation """ from cellpy.utils import ica v4, dqdv4 = ica.dqdv_cycle( data.get_cap(4, categorical_column=True, method="forth-and-forth") ) v10, dqdv10 = ica.dqdv_cycle( data.get_cap(10, categorical_column=True, method="forth-and-forth") ) plt.plot(v4, dqdv4, label="cycle 4") plt.plot(v10, dqdv10, label="cycle 10") plt.legend(); """ Explanation: Looking at some dqdv data Get capacity cycles and make dqdv using the ica module End of explanation """ fig, ax = plt.subplots() for cycle in data.get_cycle_numbers(): d = data.get_cap(cycle, categorical_column=True, method="forth-and-forth") if not d.empty: v, dqdv = ica.dqdv_cycle(d) ax.plot(v, dqdv) else: print(f"cycle {cycle} seems to be missing or corrupted") """ Explanation: Put it in a for-loop for plotting many ica plots End of explanation """ hv.extension("bokeh") tidy_ica = ica.dqdv_frames(data) cycles = list(range(1, 3)) + [10, 15] tidy_ica = tidy_ica.loc[tidy_ica.cycle.isin(cycles), :] %%opts Curve [xlim=(0,1)] NdOverlay [legend_position='right'] #legend_cols=True] curve4 = (hv.Curve(tidy_ica, kdims=['voltage'], vdims=['dq', 'cycle'], label="Incremental capacity plot") .groupby("cycle") .opts( style={"Curve": dict(color=hv.Palette("Viridis"))}, ) .overlay() .opts( width=800, height=500, ) ) curve4 """ Explanation: Get all the dqdv data in one go End of explanation """
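A common follow-up to the summary plots above is a quick capacity-retention figure per cell. The sketch below stays in plain pandas and only assumes that b.summaries exposes a discharge_capacity column (or one column per cell) indexed by cycle number, as used earlier; the 80 % end-of-life threshold is an assumption, not a cellpy default.

# Capacity retention relative to the first recorded cycle
discharge = b.summaries.discharge_capacity
retention = discharge.div(discharge.iloc[0]) * 100.0  # percent of first-cycle capacity

# How many cycles each cell stays above an (assumed) 80 % threshold
cycles_above_80 = (retention > 80.0).sum()
print(retention.tail())
print(cycles_above_80)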
albahnsen/PracticalMachineLearningClass
notebooks/02-IntroPython_Numpy_Scypy_Pandas.ipynb
mit
import sys print('Python version:', sys.version) import IPython print('IPython:', IPython.__version__) import numpy print('numpy:', numpy.__version__) import scipy print('scipy:', scipy.__version__) import matplotlib print('matplotlib:', matplotlib.__version__) import pandas print('pandas:', pandas.__version__) import sklearn print('scikit-learn:', sklearn.__version__) """ Explanation: 02 - Introduction to Python for Data Analysis by Alejandro Correa Bahnsen and Jesus Solano version 1.4, January 2019 Part of the class Practical Machine Learning This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Rick Muller, Sandia National Laboratories Why Python? Python is the programming language of choice for many scientists to a large degree because it offers a great deal of power to analyze and model scientific data with relatively little overhead in terms of learning, installation or development time. It is a language you can pick up in a weekend, and use for the rest of one's life. The Python Tutorial is a great place to start getting a feel for the language. To complement this material, I taught a Python Short Course years ago to a group of computational chemists during a time that I was worried the field was moving too much in the direction of using canned software rather than developing one's own methods. I wanted to focus on what working scientists needed to be more productive: parsing output of other programs, building simple models, experimenting with object oriented programming, extending the language with C, and simple GUIs. I'm trying to do something very similar here, to cut to the chase and focus on what scientists need. In the last year or so, the Jupyter Project has put together a notebook interface that I have found incredibly valuable. A large number of people have released very good IPython Notebooks that I have taken a huge amount of pleasure reading through. Some ones that I particularly like include: Rick Muller A Crash Course in Python for Scientists Rob Johansson's excellent notebooks, including Scientific Computing with Python and Computational Quantum Physics with QuTiP lectures; XKCD style graphs in matplotlib; A collection of Notebooks for using IPython effectively A gallery of interesting IPython Notebooks I find Jupyter notebooks an easy way both to get important work done in my everyday job, as well as to communicate what I've done, how I've done it, and why it matters to my coworkers. In the interest of putting more notebooks out into the wild for other people to use and enjoy, I thought I would try to recreate some of what I was trying to get across in the original Python Short Course, updated by 15 years of Python, Numpy, Scipy, Pandas, Matplotlib, and IPython development, as well as my own experience in using Python almost every day of this time. Why Python for Data Analysis? Python is great for scripting and applications. The pandas library offers improved library support. Scraping, web APIs Strong High Performance Computation support Load balanceing tasks MPI, GPU MapReduce Strong support for abstraction Intel MKL HDF5 Environment But we already know R ...Which is better? 
Hard to answer http://www.kdnuggets.com/2015/05/r-vs-python-data-science.html http://www.kdnuggets.com/2015/03/the-grammar-data-science-python-vs-r.html https://www.datacamp.com/community/tutorials/r-or-python-for-data-analysis https://www.dataquest.io/blog/python-vs-r/ http://www.dataschool.io/python-or-r-for-data-science/ What You Need to Install There are two branches of current releases in Python: the older-syntax Python 2, and the newer-syntax Python 3. This schizophrenia is largely intentional: when it became clear that some non-backwards-compatible changes to the language were necessary, the Python dev-team decided to go through a five-year (or so) transition, during which the new language features would be introduced and the old language was still actively maintained, to make such a transition as easy as possible. Nonetheless, I'm going to write these notes with Python 3 in mind, since this is the version of the language that I use in my day-to-day job, and am most comfortable with. With this in mind, these notes assume you have a Python distribution that includes: Python version 3.5; Numpy, the core numerical extensions for linear algebra and multidimensional arrays; Scipy, additional libraries for scientific programming; Matplotlib, excellent plotting and graphing libraries; IPython, with the additional libraries required for the notebook interface. Pandas, Python version of R dataframe scikit-learn, Machine learning library! A good, easy to install option that supports Mac, Windows, and Linux, and that has all of these packages (and much more) is the Anaconda. Checking your installation You can run the following code to check the versions of the packages on your system: (in IPython notebook, press shift and return together to execute the contents of a cell) End of explanation """ 2+2 (50-5*6)/4 """ Explanation: I. Python Overview This is a quick introduction to Python. There are lots of other places to learn the language more thoroughly. I have collected a list of useful links, including ones to other learning resources, at the end of this notebook. If you want a little more depth, Python Tutorial is a great place to start, as is Zed Shaw's Learn Python the Hard Way. The lessons that follow make use of the IPython notebooks. There's a good introduction to notebooks in the IPython notebook documentation that even has a nice video on how to use the notebooks. You should probably also flip through the IPython tutorial in your copious free time. Briefly, notebooks have code cells (that are generally followed by result cells) and text cells. The text cells are the stuff that you're reading now. The code cells start with "In []:" with some number generally in the brackets. If you put your cursor in the code cell and hit Shift-Enter, the code will run in the Python interpreter and the result will print out in the output cell. You can then change things around and see whether you understand what's going on. If you need to know more, see the IPython notebook documentation or the IPython tutorial. Using Python as a Calculator Many of the things I used to use a calculator for, I now use Python for: End of explanation """ sqrt(81) from math import sqrt sqrt(81) """ Explanation: (If you're typing this into an IPython notebook, or otherwise using notebook file, you hit shift-Enter to evaluate a cell.) In the last few lines, we have sped by a lot of things that we should stop for a moment and explore a little more fully. 
We've seen, however briefly, two different data types: integers, also known as whole numbers to the non-programming world, and floating point numbers, also known (incorrectly) as decimal numbers to the rest of the world. We've also seen the first instance of an import statement. Python has a huge number of libraries included with the distribution. To keep things simple, most of these variables and functions are not accessible from a normal Python interactive session. Instead, you have to import the name. For example, there is a math module containing many useful functions. To access, say, the square root function, you can either first from math import sqrt and then End of explanation """ import math math.sqrt(81) """ Explanation: or you can simply import the math library itself End of explanation """ radius = 20 pi = math.pi area = pi * radius ** 2 area """ Explanation: You can define variables using the equals (=) sign: End of explanation """ return = 0 """ Explanation: You can name a variable almost anything you want. It needs to start with an alphabetical character or "_", can contain alphanumeric characters plus underscores ("_"). Certain words, however, are reserved for the language: and, as, assert, break, class, continue, def, del, elif, else, except, exec, finally, for, from, global, if, import, in, is, lambda, not, or, pass, print, raise, return, try, while, with, yield Trying to define a variable using one of these will result in a syntax error: End of explanation """ 'Hello, World!' """ Explanation: The Python Tutorial has more on using Python as an interactive shell. The IPython tutorial makes a nice complement to this, since IPython has a much more sophisticated iteractive shell. Strings Strings are lists of printable characters, and can be defined using either single quotes End of explanation """ "Hello, World!" """ Explanation: or double quotes End of explanation """ greeting = "Hello, World!" """ Explanation: Just like the other two data objects we're familiar with (ints and floats), you can assign a string to a variable End of explanation """ print(greeting) """ Explanation: The print statement is often used for printing character strings: End of explanation """ print("The area is " + area) print("The area is " + str(area)) """ Explanation: But it can also print data types other than strings: End of explanation """ statement = "Hello, " + "World!" print(statement) """ Explanation: In the above snipped, the number 600 (stored in the variable "area") is converted into a string before being printed out. You can use the + operator to concatenate strings together: Don't forget the space between the strings, if you want one there. End of explanation """ days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"] """ Explanation: If you have a lot of words to concatenate together, there are other, more efficient ways to do this. But this is fine for linking a few strings together. Lists Very often in a programming language, one wants to keep a group of similar items together. Python does this using a data type called lists. End of explanation """ days_of_the_week[2] """ Explanation: You can access members of the list using the index of that item: End of explanation """ days_of_the_week[-1] """ Explanation: Python lists, like C, but unlike Fortran, use 0 as the index of the first element of a list. Thus, in this example, the 0 element is "Sunday", 1 is "Monday", and so on. 
If you need to access the nth element from the end of the list, you can use a negative index. For example, the -1 element of a list is the last element: End of explanation """ languages = ["Fortran","C","C++"] languages.append("Python") print(languages) """ Explanation: You can add additional items to the list using the .append() command: End of explanation """ list(range(10)) """ Explanation: The range() command is a convenient way to make sequential lists of numbers: End of explanation """ list(range(2,8)) """ Explanation: Note that range(n) starts at 0 and gives the sequential list of integers less than n. If you want to start at a different number, use range(start,stop) End of explanation """ evens = list(range(0,20,2)) evens evens[3] """ Explanation: The lists created above with range have a step of 1 between elements. You can also give a fixed step size via a third command: End of explanation """ ["Today",7,99.3,""] """ Explanation: Lists do not have to hold the same data type. For example, End of explanation """ help(len) len(evens) """ Explanation: However, it's good (but not essential) to use lists for similar objects that are somehow logically connected. If you want to group different data types together into a composite data object, it's best to use tuples, which we will learn about below. You can find out how long a list is using the len() command: End of explanation """ for day in days_of_the_week: print(day) """ Explanation: Iteration, Indentation, and Blocks One of the most useful things you can do with lists is to iterate through them, i.e. to go through each element one at a time. To do this in Python, we use the for statement: End of explanation """ for day in days_of_the_week: statement = "Today is " + day print(statement) """ Explanation: This code snippet goes through each element of the list called days_of_the_week and assigns it to the variable day. It then executes everything in the indented block (in this case only one line of code, the print statement) using those variable assignments. When the program has gone through every element of the list, it exists the block. (Almost) every programming language defines blocks of code in some way. In Fortran, one uses END statements (ENDDO, ENDIF, etc.) to define code blocks. In C, C++, and Perl, one uses curly braces {} to define these blocks. Python uses a colon (":"), followed by indentation level to define code blocks. Everything at a higher level of indentation is taken to be in the same block. In the above example the block was only a single line, but we could have had longer blocks as well: End of explanation """ for i in range(20): print("The square of ",i," is ",i*i) """ Explanation: The range() command is particularly useful with the for statement to execute loops of a specified length: End of explanation """ for letter in "Sunday": print(letter) """ Explanation: Slicing Lists and strings have something in common that you might not suspect: they can both be treated as sequences. You already know that you can iterate through the elements of a list. You can also iterate through the letters in a string: End of explanation """ days_of_the_week[0] """ Explanation: This is only occasionally useful. Slightly more useful is the slicing operation, which you can also use on any sequence. 
We already know that we can use indexing to get the first element of a list: End of explanation """ days_of_the_week[0:2] """ Explanation: If we want the list containing the first two elements of a list, we can do this via End of explanation """ days_of_the_week[:2] """ Explanation: or simply End of explanation """ days_of_the_week[-2:] """ Explanation: If we want the last items of the list, we can do this with negative slicing: End of explanation """ workdays = days_of_the_week[1:6] print(workdays) """ Explanation: which is somewhat logically consistent with negative indices accessing the last elements of the list. You can do: End of explanation """ day = "Sunday" abbreviation = day[:3] print(abbreviation) """ Explanation: Since strings are sequences, you can also do this to them: End of explanation """ numbers = list(range(0,40)) evens = numbers[2::2] evens """ Explanation: If we really want to get fancy, we can pass a third element into the slice, which specifies a step length (just like a third argument to the range() function specifies the step): End of explanation """ if day == "Sunday": print("Sleep in") else: print("Go to work") """ Explanation: Note that in this example I was even able to omit the second argument, so that the slice started at 2, went to the end of the list, and took every second element, to generate the list of even numbers less that 40. Booleans and Truth Testing We have now learned a few data types. We have integers and floating point numbers, strings, and lists to contain them. We have also learned about lists, a container that can hold any data type. We have learned to print things out, and to iterate over items in lists. We will now learn about boolean variables that can be either True or False. We invariably need some concept of conditions in programming to control branching behavior, to allow a program to react differently to different situations. If it's Monday, I'll go to work, but if it's Sunday, I'll sleep in. To do this in Python, we use a combination of boolean variables, which evaluate to either True or False, and if statements, that control branching based on boolean values. For example: End of explanation """ day == "Sunday" """ Explanation: (Quick quiz: why did the snippet print "Go to work" here? What is the variable "day" set to?) Let's take the snippet apart to see what happened. First, note the statement End of explanation """ 1 == 2 50 == 2*25 3 < 3.14159 1 == 1.0 1 != 0 1 <= 2 1 >= 1 """ Explanation: If we evaluate it by itself, as we just did, we see that it returns a boolean value, False. The "==" operator performs equality testing. If the two items are equal, it returns True, otherwise it returns False. In this case, it is comparing two variables, the string "Sunday", and whatever is stored in the variable "day", which, in this case, is the other string "Saturday". Since the two strings are not equal to each other, the truth test has the false value. The if statement that contains the truth test is followed by a code block (a colon followed by an indented block of code). If the boolean is true, it executes the code in that block. Since it is false in the above example, we don't see that code executed. The first block of code is followed by an else statement, which is executed if nothing else in the above if statement is true. Since the value was false, this code is executed, which is why we see "Go to work". 
You can compare any data types in Python: End of explanation """ 1 is 1.0 """ Explanation: We see a few other boolean operators here, all of which which should be self-explanatory. Less than, equality, non-equality, and so on. Particularly interesting is the 1 == 1.0 test, which is true, since even though the two objects are different data types (integer and floating point number), they have the same value. There is another boolean operator is, that tests whether two objects are the same object: End of explanation """ [1,2,3] == [1,2,4] [1,2,3] < [1,2,4] """ Explanation: We can do boolean tests on lists as well: End of explanation """ hours = 5 0 < hours < 24 """ Explanation: Finally, note that you can also string multiple comparisons together, which can result in very intuitive tests: End of explanation """ if day == "Sunday": print("Sleep in") elif day == "Saturday": print("Do chores") else: print("Go to work") """ Explanation: If statements can have elif parts ("else if"), in addition to if/else parts. For example: End of explanation """ for day in days_of_the_week: statement = "Today is " + day print(statement) if day == "Sunday": print(" Sleep in") elif day == "Saturday": print(" Do chores") else: print(" Go to work") """ Explanation: Of course we can combine if statements with for loops, to make a snippet that is almost interesting: End of explanation """ bool(1) bool(0) bool(["This "," is "," a "," list"]) """ Explanation: This is something of an advanced topic, but ordinary data types have boolean values associated with them, and, indeed, in early versions of Python there was not a separate boolean object. Essentially, anything that was a 0 value (the integer or floating point 0, an empty string "", or an empty list []) was False, and everything else was true. You can see the boolean value of any data object using the bool() function. End of explanation """ n = 10 sequence = [0,1] for i in range(2,n): # This is going to be a problem if we ever set n <= 2! sequence.append(sequence[i-1]+sequence[i-2]) print(sequence) """ Explanation: Code Example: The Fibonacci Sequence The Fibonacci sequence is a sequence in math that starts with 0 and 1, and then each successive entry is the sum of the previous two. Thus, the sequence goes 0,1,1,2,3,5,8,13,21,34,55,89,... A very common exercise in programming books is to compute the Fibonacci sequence up to some number n. First I'll show the code, then I'll discuss what it is doing. End of explanation """ def fibonacci(sequence_length): "Return the Fibonacci sequence of length *sequence_length*" sequence = [0,1] if sequence_length < 1: print("Fibonacci sequence only defined for length 1 or greater") return if 0 < sequence_length < 3: return sequence[:sequence_length] for i in range(2,sequence_length): sequence.append(sequence[i-1]+sequence[i-2]) return sequence """ Explanation: Let's go through this line by line. First, we define the variable n, and set it to the integer 20. n is the length of the sequence we're going to form, and should probably have a better variable name. We then create a variable called sequence, and initialize it to the list with the integers 0 and 1 in it, the first two elements of the Fibonacci sequence. We have to create these elements "by hand", since the iterative part of the sequence requires two previous elements. We then have a for loop over the list of integers from 2 (the next element of the list) to n (the length of the sequence). 
After the colon, we see a hash tag "#", and then a comment that if we had set n to some number less than 2 we would have a problem. Comments in Python start with #, and are good ways to make notes to yourself or to a user of your code explaining why you did what you did. Better than the comment here would be to test to make sure the value of n is valid, and to complain if it isn't; we'll try this later. In the body of the loop, we append to the list an integer equal to the sum of the two previous elements of the list. After exiting the loop (ending the indentation) we then print out the whole list. That's it! Functions We might want to use the Fibonacci snippet with different sequence lengths. We could cut an paste the code into another cell, changing the value of n, but it's easier and more useful to make a function out of the code. We do this with the def statement in Python: End of explanation """ fibonacci(2) fibonacci(12) """ Explanation: We can now call fibonacci() for different sequence_lengths: End of explanation """ help(fibonacci) """ Explanation: We've introduced a several new features here. First, note that the function itself is defined as a code block (a colon followed by an indented block). This is the standard way that Python delimits things. Next, note that the first line of the function is a single string. This is called a docstring, and is a special kind of comment that is often available to people using the function through the python command line: End of explanation """ t = (1,2,'hi',9.0) t """ Explanation: If you define a docstring for all of your functions, it makes it easier for other people to use them, since they can get help on the arguments and return values of the function. Next, note that rather than putting a comment in about what input values lead to errors, we have some testing of these values, followed by a warning if the value is invalid, and some conditional code to handle special cases. Two More Data Structures: Tuples and Dictionaries Before we end the Python overview, I wanted to touch on two more data structures that are very useful (and thus very common) in Python programs. A tuple is a sequence object like a list or a string. It's constructed by grouping a sequence of objects together with commas, either without brackets, or with parentheses: End of explanation """ t[1] """ Explanation: Tuples are like lists, in that you can access the elements using indices: End of explanation """ t.append(7) t[1]=77 """ Explanation: However, tuples are immutable, you can't append to them or change the elements of them: End of explanation """ ('Bob',0.0,21.0) """ Explanation: Tuples are useful anytime you want to group different pieces of data together in an object, but don't want to create a full-fledged class (see below) for them. For example, let's say you want the Cartesian coordinates of some objects in your program. 
Tuples are a good way to do this: End of explanation """ positions = [ ('Bob',0.0,21.0), ('Cat',2.5,13.1), ('Dog',33.0,1.2) ] """ Explanation: Again, it's not a necessary distinction, but one way to distinguish tuples and lists is that tuples are a collection of different things, here a name, and x and y coordinates, whereas a list is a collection of similar things, like if we wanted a list of those coordinates: End of explanation """ def minmax(objects): minx = 1e20 # These are set to really big numbers miny = 1e20 for obj in objects: name,x,y = obj if x < minx: minx = x if y < miny: miny = y return minx,miny x,y = minmax(positions) print(x,y) """ Explanation: Tuples can be used when functions return more than one value. Say we wanted to compute the smallest x- and y-coordinates of the above list of objects. We could write: End of explanation """ mylist = [1,2,9,21] """ Explanation: Dictionaries are an object called "mappings" or "associative arrays" in other languages. Whereas a list associates an integer index with a set of objects: End of explanation """ ages = {"Rick": 46, "Bob": 86, "Fred": 21} print("Rick's age is ",ages["Rick"]) """ Explanation: The index in a dictionary is called the key, and the corresponding dictionary entry is the value. A dictionary can use (almost) anything as the key. Whereas lists are formed with square brackets [], dictionaries use curly brackets {}: End of explanation """ dict(Rick=46,Bob=86,Fred=20) """ Explanation: There's also a convenient way to create dictionaries without having to quote the keys. End of explanation """ len(t) len(ages) """ Explanation: The len() command works on both tuples and dictionaries: End of explanation """ import this """ Explanation: Conclusion of the Python Overview There is, of course, much more to the language than I've covered here. I've tried to keep this brief enough so that you can jump in and start using Python to simplify your life and work. My own experience in learning new things is that the information doesn't "stick" unless you try and use it for something in real life. You will no doubt need to learn more as you go. I've listed several other good references, including the Python Tutorial and Learn Python the Hard Way. Additionally, now is a good time to start familiarizing yourself with the Python Documentation, and, in particular, the Python Language Reference. Tim Peters, one of the earliest and most prolific Python contributors, wrote the "Zen of Python", which can be accessed via the "import this" command: End of explanation """ import numpy as np import scipy as sp array = np.array([1,2,3,4,5,6]) array """ Explanation: No matter how experienced a programmer you are, these are words to meditate on. II. Numpy and Scipy Numpy contains core routines for doing fast vector, matrix, and linear algebra-type operations in Python. Scipy contains additional routines for optimization, special functions, and so on. Both contain modules written in C and Fortran so that they're as fast as possible. Together, they give Python roughly the same capability that the Matlab program offers. (In fact, if you're an experienced Matlab user, there a guide to Numpy for Matlab users just for you.) Making vectors and matrices Fundamental to both Numpy and Scipy is the ability to work with vectors and matrices. 
You can create vectors from lists using the array command: End of explanation """ array.shape """ Explanation: size of the array End of explanation """ mat = np.array([[0,1],[1,0]]) mat """ Explanation: To build matrices, you can either use the array command with lists of lists: End of explanation """ mat2 = np.c_[mat, np.ones(2)] mat2 """ Explanation: Add a column of ones to mat End of explanation """ mat2.shape """ Explanation: size of a matrix End of explanation """ np.zeros((3,3)) """ Explanation: You can also form empty (zero) matrices of arbitrary shape (including vectors, which Numpy treats as vectors with one row), using the zeros command: End of explanation """ np.identity(4) """ Explanation: There's also an identity command that behaves as you'd expect: End of explanation """ np.linspace(0,1) """ Explanation: as well as a ones command. Linspace, matrix functions, and plotting The linspace command makes a linear array of points from a starting to an ending value. End of explanation """ np.linspace(0,1,11) """ Explanation: If you provide a third argument, it takes that as the number of points in the space. If you don't provide the argument, it gives a length 50 linear space. End of explanation """ x = np.linspace(0,2*np.pi) np.sin(x) """ Explanation: linspace is an easy way to make coordinates for plotting. Functions in the numpy library (all of which are imported into IPython notebook) can act on an entire vector (or even a matrix) of points at once. Thus, End of explanation """ %matplotlib inline import matplotlib.pyplot as plt plt.style.use('ggplot') plt.plot(x,np.sin(x)) """ Explanation: In conjunction with matplotlib, this is a nice way to plot things: End of explanation """ 0.125*np.identity(3) """ Explanation: Matrix operations Matrix objects act sensibly when multiplied by scalars: End of explanation """ np.identity(2) + np.array([[1,1],[1,2]]) """ Explanation: as well as when you add two matrices together. (However, the matrices have to be the same shape.) End of explanation """ np.identity(2)*np.ones((2,2)) """ Explanation: Something that confuses Matlab users is that the times (*) operator give element-wise multiplication rather than matrix multiplication: End of explanation """ np.dot(np.identity(2),np.ones((2,2))) """ Explanation: To get matrix multiplication, you need the dot command: End of explanation """ v = np.array([3,4]) np.sqrt(np.dot(v,v)) """ Explanation: dot can also do dot products (duh!): End of explanation """ m = np.array([[1,2],[3,4]]) m.T np.linalg.inv(m) """ Explanation: as well as matrix-vector products. There are determinant, inverse, and transpose functions that act as you would suppose. Transpose can be abbreviated with ".T" at the end of a matrix object: End of explanation """ np.diag([1,2,3,4,5]) """ Explanation: There's also a diag() function that takes a list or a vector and puts it along the diagonal of a square matrix. End of explanation """ raw_data = """\ 3.1905781584582433,0.028208609537968457 4.346895074946466,0.007160804747670053 5.374732334047101,0.0046962988461934805 8.201284796573875,0.0004614473299618756 10.899357601713055,0.00005038370219939726 16.295503211991434,4.377451812785309e-7 21.82012847965739,3.0799922117601088e-9 32.48394004282656,1.524776208284536e-13 43.53319057815846,5.5012073588707224e-18""" """ Explanation: We'll find this useful later on. Least squares fitting Very often we deal with some data that we want to fit to some sort of expected behavior. 
Say we have the following: End of explanation """ data = [] for line in raw_data.splitlines(): words = line.split(',') data.append(words) data = np.array(data, dtype=np.float) data data[:, 0] plt.title("Raw Data") plt.xlabel("Distance") plt.plot(data[:,0],data[:,1],'bo') """ Explanation: There's a section below on parsing CSV data. We'll steal the parser from that. For an explanation, skip ahead to that section. Otherwise, just assume that this is a way to parse that text into a numpy array that we can plot and do other analyses with. End of explanation """ plt.title("Raw Data") plt.xlabel("Distance") plt.semilogy(data[:,0],data[:,1],'bo') """ Explanation: Since we expect the data to have an exponential decay, we can plot it using a semi-log plot. End of explanation """ params = sp.polyfit(data[:,0],np.log(data[:,1]),1) a = params[0] A = np.exp(params[1]) """ Explanation: For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function $$ y = Ae^{-ax} $$ $$ \log(y) = \log(A) - ax$$ Thus, if we fit the log of the data versus x, we should get a straight line with slope $a$, and an intercept that gives the constant $A$. There's a numpy function called polyfit that will fit data to a polynomial form. We'll use this to fit to a straight line (a polynomial of order 1) End of explanation """ x = np.linspace(1,45) plt.title("Raw Data") plt.xlabel("Distance") plt.semilogy(data[:,0],data[:,1],'bo') plt.semilogy(x,A*np.exp(a*x),'b-') """ Explanation: Let's see whether this curve fits the data. End of explanation """ gauss_data = """\ -0.9902286902286903,1.4065274110372852e-19 -0.7566104566104566,2.2504438576596563e-18 -0.5117810117810118,1.9459459459459454 -0.31887271887271884,10.621621621621626 -0.250997150997151,15.891891891891893 -0.1463309463309464,23.756756756756754 -0.07267267267267263,28.135135135135133 -0.04426734426734419,29.02702702702703 -0.0015939015939017698,29.675675675675677 0.04689304689304685,29.10810810810811 0.0840994840994842,27.324324324324326 0.1700546700546699,22.216216216216214 0.370878570878571,7.540540540540545 0.5338338338338338,1.621621621621618 0.722014322014322,0.08108108108108068 0.9926849926849926,-0.08108108108108646""" data = [] for line in gauss_data.splitlines(): words = line.split(',') data.append(words) data = np.array(data, dtype=np.float) plt.plot(data[:,0],data[:,1],'bo') """ Explanation: If we have more complicated functions, we may not be able to get away with fitting to a simple polynomial. Consider the following data: End of explanation """ def gauss(x,A,a): return A*np.exp(a*x**2) """ Explanation: This data looks more Gaussian than exponential. If we wanted to, we could use polyfit for this as well, but let's use the curve_fit function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit). First define a general Gaussian function to fit to. End of explanation """ from scipy.optimize import curve_fit params,conv = curve_fit(gauss,data[:,0],data[:,1]) x = np.linspace(-1,1) plt.plot(data[:,0],data[:,1],'bo') A,a = params plt.plot(x,gauss(x,A,a),'g-') """ Explanation: Now fit to it using curve_fit: End of explanation """ from random import random rands = [] for i in range(100): rands.append(random()) plt.plot(rands) """ Explanation: The curve_fit routine we just used is built on top of a very good general minimization capability in Scipy. You can learn more at the scipy documentation pages. 
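One step the fits above skip is asking how good they are. A short residual check covers this; the sketch below reuses data, gauss(), and the A, a, and conv values returned by curve_fit in the cells above, with the usual R-squared definition 1 - SS_res/SS_tot.

residuals = data[:,1] - gauss(data[:,0], A, a)
ss_res = np.sum(residuals**2)
ss_tot = np.sum((data[:,1] - np.mean(data[:,1]))**2)
r_squared = 1.0 - ss_res/ss_tot
print("R^2 of the Gaussian fit:", r_squared)

# The covariance matrix from curve_fit also gives rough one-sigma uncertainties
A_err, a_err = np.sqrt(np.diag(conv))
print("A = %.2f +/- %.2f, a = %.3f +/- %.3f" % (A, A_err, a, a_err))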
Monte Carlo and random numbers Many methods in scientific computing rely on Monte Carlo integration, where a sequence of (pseudo) random numbers are used to approximate the integral of a function. Python has good random number generators in the standard library. The random() function gives pseudorandom numbers uniformly distributed between 0 and 1: End of explanation """ from random import gauss grands = [] for i in range(100): grands.append(gauss(0,1)) plt.plot(grands) """ Explanation: random() uses the Mersenne Twister algorithm, which is a highly regarded pseudorandom number generator. There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution: End of explanation """ plt.plot(np.random.rand(100)) """ Explanation: It is generally more efficient to generate a list of random numbers all at once, particularly if you're drawing from a non-uniform distribution. Numpy has functions to generate vectors and matrices of particular types of random distributions. End of explanation """ import pandas as pd import numpy as np """ Explanation: III. Introduction to Pandas End of explanation """ ser_1 = pd.Series([1, 1, 2, -3, -5, 8, 13]) ser_1 """ Explanation: Series A Series is a one-dimensional array-like object containing an array of data and an associated array of data labels. The data can be any NumPy data type and the labels are the Series' index. Create a Series: End of explanation """ ser_1.values """ Explanation: Get the array representation of a Series: End of explanation """ ser_1.index """ Explanation: Index objects are immutable and hold the axis labels and metadata such as names and axis names. Get the index of the Series: End of explanation """ ser_2 = pd.Series([1, 1, 2, -3, -5], index=['a', 'b', 'c', 'd', 'e']) ser_2 """ Explanation: Create a Series with a custom index: End of explanation """ ser_2[4] == ser_2['e'] """ Explanation: Get a value from a Series: End of explanation """ ser_2[['c', 'a', 'b']] """ Explanation: Get a set of values from a Series by passing in a list: End of explanation """ ser_2[ser_2 > 0] """ Explanation: Get values great than 0: End of explanation """ ser_2 * 2 """ Explanation: Scalar multiply: End of explanation """ np.exp(ser_2) """ Explanation: Apply a numpy math function: End of explanation """ dict_1 = {'foo' : 100, 'bar' : 200, 'baz' : 300} ser_3 = pd.Series(dict_1) ser_3 """ Explanation: A Series is like a fixed-length, ordered dict. 
Create a series by passing in a dict: End of explanation """ index = ['foo', 'bar', 'baz', 'qux'] ser_4 = pd.Series(dict_1, index=index) ser_4 """ Explanation: Re-order a Series by passing in an index (indices not found are NaN): End of explanation """ pd.isnull(ser_4) """ Explanation: Check for NaN with the pandas method: End of explanation """ ser_4.isnull() """ Explanation: Check for NaN with the Series method: End of explanation """ ser_3 + ser_4 """ Explanation: Series automatically aligns differently indexed data in arithmetic operations: End of explanation """ ser_4.name = 'foobarbazqux' """ Explanation: Name a Series: End of explanation """ ser_4.index.name = 'label' ser_4 """ Explanation: Name a Series index: End of explanation """ ser_4.index = ['fo', 'br', 'bz', 'qx'] ser_4 """ Explanation: Rename a Series' index in place: End of explanation """ data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'], 'year' : [2012, 2013, 2014, 2014, 2015], 'pop' : [5.0, 5.1, 5.2, 4.0, 4.1]} df_1 = pd.DataFrame(data_1) df_1 df_2 = pd.DataFrame(data_1, columns=['year', 'state', 'pop']) df_2 """ Explanation: DataFrame A DataFrame is a tabular data structure containing an ordered collection of columns. Each column can have a different type. DataFrames have both row and column indices and is analogous to a dict of Series. Row and column operations are treated roughly symmetrically. Columns returned when indexing a DataFrame are views of the underlying data, not a copy. To obtain a copy, use the Series' copy method. Create a DataFrame: End of explanation """ df_3 = pd.DataFrame(data_1, columns=['year', 'state', 'pop', 'unempl']) df_3 """ Explanation: Like Series, columns that are not present in the data are NaN: End of explanation """ df_3['state'] """ Explanation: Retrieve a column by key, returning a Series: End of explanation """ df_3.year """ Explanation: Retrive a column by attribute, returning a Series: End of explanation """ df_3.iloc[0] """ Explanation: Retrieve a row by position: End of explanation """ df_3['unempl'] = np.arange(5) df_3 """ Explanation: Update a column by assignment: End of explanation """ unempl = pd.Series([6.0, 6.0, 6.1], index=[2, 3, 4]) df_3['unempl'] = unempl df_3 """ Explanation: Assign a Series to a column (note if assigning a list or array, the length must match the DataFrame, unlike a Series): End of explanation """ df_3['state_dup'] = df_3['state'] df_3 """ Explanation: Assign a new column that doesn't exist to create a new column: End of explanation """ del df_3['state_dup'] df_3 """ Explanation: Delete a column: End of explanation """ df_3.T """ Explanation: Transpose the DataFrame: End of explanation """ pop = {'VA' : {2013 : 5.1, 2014 : 5.2}, 'MD' : {2014 : 4.0, 2015 : 4.1}} df_4 = pd.DataFrame(pop) df_4 """ Explanation: Create a DataFrame from a nested dict of dicts (the keys in the inner dicts are unioned and sorted to form the index in the result, unless an explicit index is specified): End of explanation """ data_2 = {'VA' : df_4['VA'][1:], 'MD' : df_4['MD'][2:]} df_5 = pd.DataFrame(data_2) df_5 """ Explanation: Create a DataFrame from a dict of Series: End of explanation """ df_5.index.name = 'year' df_5 """ Explanation: Set the DataFrame index name: End of explanation """ df_5.columns.name = 'state' df_5 """ Explanation: Set the DataFrame columns name: End of explanation """ df_5.values """ Explanation: Return the data contained in a DataFrame as a 2D ndarray: End of explanation """ df_3.values """ Explanation: If the columns are different dtypes, the 2D 
ndarray's dtype will accomodate all of the columns: End of explanation """ df_3 """ Explanation: Reindexing Create a new object with the data conformed to a new index. Any missing values are set to NaN. End of explanation """ df_3.reindex(list(reversed(range(0, 6)))) """ Explanation: Reindexing rows returns a new frame with the specified index: End of explanation """ df_3.reindex(columns=['state', 'pop', 'unempl', 'year']) """ Explanation: Reindex columns: End of explanation """ df_7 = df_3.drop([0, 1]) df_7 df_7 = df_7.drop('unempl', axis=1) df_7 """ Explanation: Dropping Entries Drop rows from a Series or DataFrame: End of explanation """ df_3 """ Explanation: Indexing, Selecting, Filtering Pandas supports indexing into a DataFrame. End of explanation """ df_3[['pop', 'unempl']] """ Explanation: Select specified columns from a DataFrame: End of explanation """ df_3[:2] df_3.iloc[1:3] """ Explanation: Select a slice from a DataFrame: End of explanation """ df_3[df_3['pop'] > 5] """ Explanation: Select from a DataFrame based on a filter: End of explanation """ df_3.loc[0:2, 'pop'] df_3 """ Explanation: Select a slice of rows from a specific column of a DataFrame: End of explanation """ np.random.seed(0) df_8 = pd.DataFrame(np.random.rand(9).reshape((3, 3)), columns=['a', 'b', 'c']) df_8 np.random.seed(1) df_9 = pd.DataFrame(np.random.rand(9).reshape((3, 3)), columns=['b', 'c', 'd']) df_9 df_8 + df_9 """ Explanation: Arithmetic and Data Alignment Adding DataFrame objects results in the union of index pairs for rows and columns if the pairs are not the same, resulting in NaN for indices that do not overlap: End of explanation """ df_10 = df_8.add(df_9, fill_value=0) df_10 """ Explanation: Set a fill value instead of NaN for indices that do not overlap: End of explanation """ ser_8 = df_10.iloc[0] df_11 = df_10 - ser_8 df_11 """ Explanation: Like NumPy, pandas supports arithmetic operations between DataFrames and Series. 
Match the index of the Series on the DataFrame's columns, broadcasting down the rows: End of explanation """ ser_9 = pd.Series(range(3), index=['a', 'd', 'e']) ser_9 df_11 - ser_9 """ Explanation: Match the index of the Series on the DataFrame's columns, broadcasting down the rows and union the indices that do not match: End of explanation """ df_11 = np.abs(df_11) df_11 """ Explanation: Function Application and Mapping NumPy ufuncs (element-wise array methods) operate on pandas objects: End of explanation """ df_11.apply(sum) """ Explanation: Apply a function on 1D arrays to each column: End of explanation """ df_11.apply(sum, axis=1) """ Explanation: Apply a function on 1D arrays to each row: End of explanation """ def func_3(x): return '%.2f' %x df_11.applymap(func_3) """ Explanation: Apply an element-wise Python function to a DataFrame: End of explanation """ df_12 = pd.DataFrame(np.arange(12).reshape((3, 4)), index=['three', 'one', 'two'], columns=['c', 'a', 'b', 'd']) df_12 """ Explanation: Sorting End of explanation """ df_12.sort_index() """ Explanation: Sort a DataFrame by its index: End of explanation """ df_12.sort_index(axis=1, ascending=False) """ Explanation: Sort a DataFrame by columns in descending order: End of explanation """ df_12.sort_values(by=['d', 'c']) """ Explanation: Sort a DataFrame's values by column: End of explanation """ df_15 = pd.DataFrame(np.random.randn(10, 3), columns=['a', 'b', 'c']) df_15['cat1'] = (np.random.rand(10) * 3).round(0) df_15['cat2'] = (np.random.rand(10)).round(0) df_15 """ Explanation: Summarizing and Computing Descriptive Statistics Unlike NumPy arrays, Pandas descriptive statistics automatically exclude missing data. NaN values are excluded unless the entire row or column is NA. End of explanation """ df_15.sum() df_15.sum(axis=1) df_15.mean(axis=0) """ Explanation: Sum and Mean End of explanation """ df_15['a'].describe() df_15['cat1'].value_counts() """ Explanation: Descriptive analysis End of explanation """ pd.pivot_table(df_15, index='cat1', aggfunc=np.mean) """ Explanation: Pivot tables group by cat1 and calculate mean End of explanation """
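A short follow-up sketch (not part of the original notebook): the same grouped mean can be produced with groupby, and passing a columns argument to pivot_table spreads a second category across the columns. The demo frame below is a hypothetical stand-in for df_15, built the same way, so the random values will differ.
import numpy as np
import pandas as pd

# Hypothetical stand-in for df_15: three numeric columns plus two categorical labels
demo = pd.DataFrame(np.random.randn(10, 3), columns=['a', 'b', 'c'])
demo['cat1'] = (np.random.rand(10) * 3).round(0)
demo['cat2'] = (np.random.rand(10)).round(0)

# pivot_table with only an index argument is equivalent to a groupby mean
print(demo.groupby('cat1').mean())
print(pd.pivot_table(demo, index='cat1', aggfunc=np.mean))

# A columns argument spreads cat2 across the columns, one cell per (cat1, cat2) pair
print(pd.pivot_table(demo, values='a', index='cat1', columns='cat2', aggfunc=np.mean))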
hanleilei/note
training/submit/PythonExercises1stAnd2nd.ipynb
cc0-1.0
planet = "Earth" diameter = 12742 """ Explanation: Introduction to Python: exercises for weeks one and two. Exercises Answer the questions described in bold below, using any approach you find suitable. The goal is to master the skills and finish the program you want, so do not worry too much about the exact implementation. What is 7 to the power of 4? Split the following string s = "Hi there Sam!" into a list. Given the following two variables planet = "Earth" diameter = 12742 use the format() function to print the string: The diameter of Earth is 12742 kilometers. End of explanation """ lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7] """ Explanation: Given the following nested list, use indexing to retrieve the word 'hello'. End of explanation """ d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]} """ Explanation: Given the following nested dictionary, extract the word "hello" from it. End of explanation """ # Just answer with text, no code necessary """ Explanation: What is the difference between a dictionary and a list? End of explanation """ def fib_dyn(n): a,b = 1,1 for i in range(n-1): a,b = b,a+b return a fib_dyn(10) """ Explanation: Write a function that extracts the domain part of an email address such as [email protected]; for this example, passing in "[email protected]" should return: domain.com Create a function that counts how many times 'dog' appears in the input string (ignore corner cases). Create a function that checks whether 'dog' is contained in the input string (again, ignore corner cases). If you drive too fast, a traffic officer will pull you over. Write a function that returns one of three possible results: "No Ticket", "Small Ticket", or "Big Ticket". If the speed is 60 or less, the result is "No Ticket". If the speed is between 61 and 80, the result is "Small Ticket". If the speed is 81 or above, the result is "Big Ticket". Unless it is your birthday (passed in as a boolean value): on your birthday you may exceed these limits by 5 km/h (again, ignore corner cases). Compute the Fibonacci sequence, implemented with a generator. End of explanation """
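One possible set of sketch solutions for a few of the exercises above; the function names here are my own choices rather than anything prescribed by the notebook.
def get_domain(email):
    """Return the domain part of an email address, e.g. '[email protected]' -> 'domain.com'."""
    return email.split('@')[-1]

def count_dog(text):
    """Count occurrences of 'dog' in the input string (ignoring corner cases)."""
    return text.lower().count('dog')

def caught_speeding(speed, is_birthday):
    """Return 'No Ticket', 'Small Ticket' or 'Big Ticket' depending on the speed."""
    allowance = 5 if is_birthday else 0
    if speed <= 60 + allowance:
        return "No Ticket"
    elif speed <= 80 + allowance:
        return "Small Ticket"
    return "Big Ticket"

def fib_gen(n):
    """Yield the first n Fibonacci numbers using a generator."""
    a, b = 1, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

print(get_domain("[email protected]"))       # domain.com
print(count_dog("The dog chased the DOG"))  # 2
print(caught_speeding(65, False))           # Small Ticket
print(list(fib_gen(10)))                    # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]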
statsmodels/statsmodels.github.io
v0.13.0/examples/notebooks/generated/gee_nested_simulation.ipynb
bsd-3-clause
import numpy as np import pandas as pd import statsmodels.api as sm """ Explanation: GEE nested covariance structure simulation study This notebook is a simulation study that illustrates and evaluates the performance of the GEE nested covariance structure. A nested covariance structure is based on a nested sequence of groups, or "levels". The top level in the hierarchy is defined by the groups argument to GEE. Subsequent levels are defined by the dep_data argument to GEE. End of explanation """ p = 5 """ Explanation: Set the number of covariates. End of explanation """ groups_var = 1 level1_var = 2 level2_var = 3 resid_var = 4 """ Explanation: These parameters define the population variance for each level of grouping. End of explanation """ n_groups = 100 """ Explanation: Set the number of groups End of explanation """ group_size = 20 level1_size = 10 level2_size = 5 """ Explanation: Set the number of observations at each level of grouping. Here, everything is balanced, i.e. within a level every group has the same size. End of explanation """ n = n_groups * group_size * level1_size * level2_size """ Explanation: Calculate the total sample size. End of explanation """ xmat = np.random.normal(size=(n, p)) """ Explanation: Construct the design matrix. End of explanation """ groups_ix = np.kron(np.arange(n // group_size), np.ones(group_size)).astype(int) level1_ix = np.kron(np.arange(n // level1_size), np.ones(level1_size)).astype(int) level2_ix = np.kron(np.arange(n // level2_size), np.ones(level2_size)).astype(int) """ Explanation: Construct labels showing which group each observation belongs to at each level. End of explanation """ groups_re = np.sqrt(groups_var) * np.random.normal(size=n // group_size) level1_re = np.sqrt(level1_var) * np.random.normal(size=n // level1_size) level2_re = np.sqrt(level2_var) * np.random.normal(size=n // level2_size) """ Explanation: Simulate the random effects. End of explanation """ y = groups_re[groups_ix] + level1_re[level1_ix] + level2_re[level2_ix] y += np.sqrt(resid_var) * np.random.normal(size=n) """ Explanation: Simulate the response variable. End of explanation """ df = pd.DataFrame(xmat, columns=["x%d" % j for j in range(p)]) df["y"] = y + xmat[:, 0] - xmat[:, 3] df["groups_ix"] = groups_ix df["level1_ix"] = level1_ix df["level2_ix"] = level2_ix """ Explanation: Put everything into a dataframe. End of explanation """ cs = sm.cov_struct.Nested() dep_fml = "0 + level1_ix + level2_ix" m = sm.GEE.from_formula( "y ~ x0 + x1 + x2 + x3 + x4", cov_struct=cs, dep_data=dep_fml, groups="groups_ix", data=df, ) r = m.fit() """ Explanation: Fit the model. End of explanation """ r.cov_struct.summary() """ Explanation: The estimated covariance parameters should be similar to groups_var, level1_var, etc. as defined above. End of explanation """
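A rough sanity check, added here as a sketch and assuming the fitted result r from the cell above: the response was built as y + xmat[:, 0] - xmat[:, 3], so the coefficients on x0 and x3 should come out near +1 and -1, and the dependence estimates should sit near the population variances set earlier.
# Fitted regression coefficients: x0 should be close to +1 and x3 close to -1
print(r.params)

# Estimated dependence parameters, to compare with the population values
# (level 1 variance 2, level 2 variance 3, residual variance 4) defined above
print(r.cov_struct.summary())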
QuantCrimAtLeeds/PredictCode
examples/Case Study Chicago South Side/SEPP2.ipynb
artistic-2.0
%matplotlib inline from common import * datadir = os.path.join("//media", "disk", "Data") #datadir = os.path.join("..", "..", "..", "..", "..", "Data") import open_cp.logger open_cp.logger.log_to_true_stdout() south_side, points = load_data(datadir) points.time_range masked_grid = grid_for_south_side() masked_grid2 = grid_for_south_side(xsize=100, ysize=100) """ Explanation: SEPP2 Recall that the "SEPP2" model is a grid-based, Hawkes-type self-excited process model. It is a parameterised model, with the model parameters fitted using an EM algorithm. End of explanation """ import open_cp.seppexp as sepp trainer = sepp.SEPPTrainer(masked_grid.region(), grid_size=masked_grid.xsize) trainer.data = points predictor = trainer.train(iterations=100, use_corrected=True) trainer2 = sepp.SEPPTrainer(masked_grid2.region(), grid_size=masked_grid2.xsize) trainer2.data = points predictor2 = trainer2.train(iterations=100, use_corrected=False) background = predictor.background_prediction() background.mask_with(masked_grid) background2 = predictor2.background_prediction() background2.mask_with(masked_grid2) fig, ax = plt.subplots(ncols=2, figsize=(16,8)) for a in ax: a.set_aspect(1) a.add_patch(descartes.PolygonPatch(south_side, fc="none", ec="Black")) mappable = ax[0].pcolormesh(*background.mesh_data(), background.intensity_matrix * 10000, cmap=yellow_to_red) ax[0].set_title("Grid size of 250m") cbar = fig.colorbar(mappable, ax=ax[0]) cbar.set_label("Rate $10^{-4}$") mappable = ax[1].pcolormesh(*background2.mesh_data(), background2.intensity_matrix * 10000, cmap=yellow_to_red) ax[1].set_title("Grid size of 100m") cbar = fig.colorbar(mappable, ax=ax[1]) cbar.set_label("Rate $10^{-4}$") print("Predicted omega={}, omega^-1={}, theta={}x10^-4".format(predictor.omega, 1/predictor.omega, predictor.theta*10000)) print("Predicted omega={}, omega^-1={}, theta={}x10^-4".format(predictor2.omega, 1/predictor2.omega, predictor2.theta*10000)) """ Explanation: Fit the model We fit the model with the default grid size (250m) and a smaller grid (100m). The smaller grid often fails to converge, especially if edge correction is used. A very ad hoc justification of why this might be is that as the grid size gets smaller, because there is no interactions between different grid cells built into the model, it becomes increasingly different to tell the difference between a Poisson process and a self-exciting process, looking at just one grid cell. End of explanation """ def points_in(region): mask = (points.xcoords >= region.xmin) & (points.xcoords < region.xmax) mask &= (points.ycoords >= region.ymin) & (points.ycoords < region.ymax) return points[mask] by_grid = {} for x in range(masked_grid.xextent): for y in range(masked_grid.yextent): if masked_grid.is_valid(x, y): by_grid[(x,y)] = points_in(masked_grid.bounding_box_of_cell(x, y)) size_lookup = { key : tp.number_data_points for key, tp in by_grid.items() } size_lookup = list(size_lookup.items()) size_lookup.sort(key = lambda p : p[1]) size_lookup[-5:] distances = {} for key, tp in by_grid.items(): cell = masked_grid.bounding_box_of_cell(*key) midx, midy = (cell.xmin + cell.xmax)/2, (cell.ymin + cell.ymax)/2 distances[key] = np.sqrt((tp.xcoords - midx)**2 + (tp.ycoords - midy)**2) """ Explanation: We recall that the "aftershock kernel" has the form $$ \theta \omega e^{-\omega \Delta t} $$ where $\Delta t$ is the time gap. We work in units of minutes. 
This kernel is formed by $\theta$, the overall rate, and an exponential distribution with rate $\omega$ The mean of this exponential random variable is $1/\omega$. So the rate is high, about 190 times the maximum background rate. But the mean is 2 hours. E.g. if $\Delta t$ is a day, then the rate, scaled by $10^4$, is $< 10^{-5}$, i.e. effectively zero. In summary, the model predicts rather large aftershocks which are rather tightly localised in time. This seems unrealistic. For the grid size of 100m, the situation if anything gets worse, as the decay of exponential increases. Visualise some time series End of explanation """ start = points.time_range[0] end = points.time_range[1] length = (end - start) / np.timedelta64(1,"m") fig, axes = plt.subplots(nrows=5, figsize=(18,8)) for i, ax in enumerate(axes): key = size_lookup[-1-i][0] ts = (by_grid[key].timestamps - start) / np.timedelta64(1,"m") ax.scatter(ts, distances[key]) ax.set(xlabel="minutes", ylabel="distance") ax.set(title="For grid cell {}".format(key)) ax.set(xlim=[0,length]) fig.tight_layout() """ Explanation: In the following plot, for the 5 grid cells with the highest crime count, we plot the occurance of events - X axis is the number of minutes since the start of the study period - Y axis is the distance of the event from the centroid of the grid cell (as expected, there is no particular pattern in this). End of explanation """ key = size_lookup[-1][0] ts = by_grid[key].timestamps ts = (np.asarray(ts) - start) / np.timedelta64(1,"m") rate = len(ts) / length def comp(t): return rate * t def qq_data(a, b, ax): a, b = np.array(a), np.array(b) a.sort() b.sort() ax.scatter(a, b) def qq_exponential(a, ax, **kwargs): a = np.array(a) a.sort() b = [] for i, x in enumerate(a): p = i / len(a) b.append( -np.log(1-p) ) ax.scatter(a, b, **kwargs) ax.set(xlabel="data", ylabel="theory") def correlate(a, ax, **kwargs): # To uniform dist u = 1 - np.exp(-a) ax.scatter(u[:-1], u[1:], **kwargs) diffs = comp(ts) diffs = diffs[1:] - diffs[:-1] def do_test_plots(diffs, alpha=1): expected = np.random.exponential(size=len(diffs)) fig, axes = plt.subplots(ncols=2, nrows=2, figsize=(12, 12)) ax = axes[0][0] qq_exponential(diffs, ax, alpha=alpha) ax.plot([0,5], [0,5], color="red") ax.set(title="Q-Q plot against exponential") ax = axes[0][1] correlate(diffs, ax, alpha=alpha) ax.set(title="Autocorrelation") ax = axes[1][0] qq_exponential(expected, ax, color="green", alpha=alpha) ax.plot([0,5], [0,5], color="red") ax.set(title="Sample of exponential data") ax = axes[1][1] correlate(expected, ax, color="green", alpha=alpha) do_test_plots(diffs) """ Explanation: Can we tell the difference from a Poisson process? For the cell with the most data, we apply the following procedure: Transform the timeseries into "minutes since start of study" Pretend that the data is from a homogeneous Poisson process, and estimate the rate Use the "compensator" to transform the timeseries into what should be a unit-rate Poisson process Compute the waiting times, which should give IID samples from a unit rate exponential distribution Produce a Q-Q plot against the theoretical quartiles of a unit rate exponential. Transform to a $U[0,1]$ uniform random variable and plot $u_{i+1}$ against $u_i$ as a visual inspect of autocorrelation. Do the same for an actual sample from a unit rate exponential distribution Conclusion: Little visually to say that we don't just have a Poisson process! 
End of explanation """ diffs = [] for key, tp in by_grid.items(): ts = tp.timestamps if len(ts) < 10: continue ts = (np.asarray(ts) - start) / np.timedelta64(1,"m") rate = len(ts) / length ts *= rate d = ts[1:] - ts[:-1] diffs.extend(d) diffs = np.asarray(diffs) do_test_plots(diffs, alpha=0.2) diffs """ Explanation: For each cell, do the same, combine, and plot. End of explanation """
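A numerical companion to the Q-Q plots, added as a sketch and assuming the pooled diffs array from the cell above: a Kolmogorov-Smirnov test against the unit-rate exponential. Note that it ignores the dependence introduced by estimating a separate rate in each cell, so the p-value is only indicative.
import numpy as np
import scipy.stats as stats

# Pooled, rescaled waiting times; drop any non-finite values first
pooled = np.asarray(diffs)
pooled = pooled[np.isfinite(pooled)]

# Under the Poisson null hypothesis these should look like unit-rate exponential draws
statistic, p_value = stats.kstest(pooled, "expon")
print("KS statistic = {:.4f}, p-value = {:.4f}".format(statistic, p_value))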
UWSEDS/LectureNotes
Fall2018/02_Procedural_Python/Lecture-Python-And-Data.ipynb
bsd-2-clause
# Integer arithematic 1 + 1 # Integer division version floating point division print (6 // 4, 6/ 4) """ Explanation: Software Engineering for Data Scientists Manipulating Data with Python DATA 515 A Today's Objectives 0. Cloning LectureNotes 1. Opening & Navigating the Jupyter Notebook 2. Data type basics 3. Loading data with pandas 4. Cleaning and Manipulating data with pandas 5. Visualizing data with pandas & matplotlib 0. Cloning Lecture Notes The course materials are maintained on github. The next lecture will discuss github in detail. Today, you'll get minimal instructions to get access to today's lecture materials. Open a terminal session Type 'git clone https://github.com/UWSEDS/LectureNotes.git' Wait until the download is complete cd LectureNotes cd 02_Procedural_Python 1. Opening and Navigating the IPython Notebook We will start today with the interactive environment that we will be using often through the course: the Jupyter Notebook. We will walk through the following steps together: Download miniconda (be sure to get Version 3.6) and install it on your system (hopefully you have done this before coming to class) Use the conda command-line tool to update your package listing and install the IPython notebook: Update conda's listing of packages for your system: $ conda update conda Install IPython notebook and all its requirements $ conda install jupyter notebook Navigate to the directory containing the course material. For example: $ cd LectureNotes/02_Procedural_Python You should see a number of files in the directory, including these: ``` $ ls ``` Type jupyter notebook in the terminal to start the notebook $ jupyter notebook If everything has worked correctly, it should automatically launch your default browser Click on Lecture-Python-And-Data.ipynb to open the notebook containing the content for this lecture. With that, you're set up to use the Jupyter notebook! 2. Data Types Basics 2.1 Data type theory Components with the same capabilities are of the same type. For example, the numbers 2 and 200 are both integers. A type is defined recursively. Some examples. A list is a collection of objects that can be indexed by position. A list of integers contains an integer at each position. A type has a set of supported operations. For example: Integers can be added Strings can be concatented A table can find the name of its columns What type is returned from the operation? In python, members (components and operations) are indicated by a '.' If a is a list, the a.append(1) adds 1 to the list. 2.2 Primitive types The primitive types are integers, floats, strings, booleans. 2.2.1 Integers End of explanation """ # Have the full set of "calculator functions" but need the numpy package import numpy as np print (6.0 * 3, np.sin(2*np.pi)) # Floats can have a null value called nan, not a number a = np.nan 3*a """ Explanation: 2.2.2 Floats End of explanation """ # Can concatenate, substring, find, count, ... a = "The lazy" b = "brown fox" print ("Concatenation: ", a + b) print ("First three letters: " + a[0:3]) print ("Index of 'z': " + str(a.find('z'))) """ Explanation: 2.2.3 Strings End of explanation """ a_tuple = (1, 'ab', (1,2)) a_tuple a_tuple[2] """ Explanation: 2.3 Tuples A tuple is an ordered sequence of objects. Tuples cannot be changed; they are immuteable. End of explanation """ a_list = [1, 'a', [1,2]] a_list[0] a_list.append(2) a_list a_list dir(a_list) help (a_list) a_list.count(1) """ Explanation: 2.4 Lists A list is an ordered sequence of objects that can be changed. 
End of explanation """ dessert_dict = {} # Empty dictionary dessert_dict['Dave'] = "Cake" dessert_dict["Joe"] = ["Cake", "Pie"] print (dessert_dict) dessert_dict["Dave"] # This produces an error dessert_dict["Bernease"] = {} dessert_dict dessert_dict["Bernease"] = {"Favorite": ["sorbet", "cobbler"], "Dislike": "Brownies"} """ Explanation: 2.5 Dictionaries A dictionary is a kind of associates a key with a value. A value can be any object, even another dictionary. End of explanation """ # A first name shell game first_int = 1 second_int = first_int second_int += 1 second_int # What is first_int? first_int # A second name shell game a_list = ['a', 'aa', 'aaa'] b_list = a_list b_list.append('bb') b_list # What is a_list? a_list # Create a deep copy import copy # A second name shell game a_list = ['a', 'aa', 'aaa'] b_list = copy.deepcopy(a_list) b_list.append('bb') print("b_list = %s" % str(b_list)) print("a_list = %s" % str(a_list)) """ Explanation: 2.7 A Shakespearean Detour: "What's in a Name?" Deep vs. Shallow Copies A deep copy can be manipulated separately. A shallow copy is a pointer to the same data as the original. End of explanation """ # Example 1 of name resolution in python var = 10 def func(val): var = val + 1 return val # What is returned? print("func(2) = %d" % func(2)) # What is var? print("var = %d" % var) # Example 2 of name resolution in python var = 10 def func(val): return val + var # What is returned? print("func(2) = %d" % func(2)) # What is var? print("var = %d" % var) """ Explanation: Key insight: Deep vs. Shallow Copies * A deep copy can be manipulated separately from the original. * A shallow copy cannot. * Assigning a python immutable creates a deep copy. Non-immutables are shallow copies. Name Resolution The most common errors that you'll see in your python codes are: * NameError * AttributeError A common error when using the bash shell is command not found. Name resolution: Associating a name with code or data. Resolving a name in the bash shell is done by searching the directories in the PATH environment variable. The first executable with the name is run. End of explanation """ # A list and a dict are objects. # dict has been implemented so that you see its values when you type # the instance name. # This is done with many python objects, like list. a_dict = {'a': [1, 2], 'b': [3, 4, 5]} a_dict # You access the data and methods (codes) associated with an object by # using the "." operator. These are referred to collectively # as attributes. Methods are followed by parentheses; # values (properties) are not. a_dict.keys() # You can discover the attributes of an object using "dir" dir(a_dict) """ Explanation: Insights on python name resolution * Names are assigned within a context. * Context changes with the function and module. * Assigning a name in a function creates a new name. * Referencing an unassigned name in function uses an existing name. 2.7 Object Essentials Objects are a "packaging" of data and code. Almost all python entities are objects. End of explanation """ # Pandas DataFrames as table elements import pandas as pd """ Explanation: 2.8 Summary <hr> | type | description | |------|------------| | primitive | int, float, string, bool | | tuple | An immutable collection of ordered objects | | list | A mutable collection of ordered objects | | dictionary | A mutable collection of named objects | | object | A packaging of codes and data | 3. 
Python's Data Science Ecosystem With this simple Python computation experience under our belt, we can now move to doing some more interesting analysis. Python's Data Science Ecosystem In addition to Python's built-in modules like the math module we explored above, there are also many often-used third-party modules that are core tools for doing data science with Python. Some of the most important ones are: numpy: Numerical Python Numpy is short for "Numerical Python", and contains tools for efficient manipulation of arrays of data. If you have used other computational tools like IDL or MatLab, Numpy should feel very familiar. scipy: Scientific Python Scipy is short for "Scientific Python", and contains a wide range of functionality for accomplishing common scientific tasks, such as optimization/minimization, numerical integration, interpolation, and much more. We will not look closely at Scipy today, but we will use its functionality later in the course. pandas: Labeled Data Manipulation in Python Pandas is short for "Panel Data", and contains tools for doing more advanced manipulation of labeled data in Python, in particular with a columnar data structure called a Data Frame. If you've used the R statistical language (and in particular the so-called "Hadley Stack"), much of the functionality in Pandas should feel very familiar. matplotlib: Visualization in Python Matplotlib started out as a Matlab plotting clone in Python, and has grown from there in the 15 years since its creation. It is the most popular data visualization tool currently in the Python data world (though other recent packages are starting to encroach on its monopoly). Installing Pandas & friends Because the above packages are not included in Python itself, you need to install them separately. While it is possible to install these from source (compiling the C and/or Fortran code that does the heavy lifting under the hood) it is much easier to use a package manager like conda. All it takes is to run $ conda install numpy scipy pandas matplotlib and (so long as your conda setup is working) the packages will be downloaded and installed on your system. 4. Introduction to DataFrames What are the elements of a table? End of explanation """ df = pd.DataFrame({'A': [1,2,3], 'B': [2, 4, 6], 'ccc': [1.0, 33, 4]}) df sub_df = df[['A', 'ccc']] sub_df df['A'] + 2*df['B'] # Operations on a Pandas DataFrame """ Explanation: What operations do we perform on tables? End of explanation """ !ls """ Explanation: 5. Manipulating Data with DataFrames Downloading the data shell commands can be run from the notebook by preceding them with an exclamation point: End of explanation """ #!curl -o pronto.csv https://data.seattle.gov/api/views/tw7j-dfaw/rows.csv?accessType=DOWNLOAD """ Explanation: uncomment this to download the data: End of explanation """ import pandas as pd df = pd.read_csv('pronto.csv') type(df) len(df) """ Explanation: Loading Data into a DataFrame Because we'll use it so much, we often import under a shortened name using the import ... as ... 
pattern: End of explanation """ df.head() df.columns """ Explanation: Now we can use the read_csv command to read the comma-separated-value data: Note: strings in Python can be defined either with double quotes or single quotes Viewing Pandas Dataframes The head() and tail() methods show us the first and last rows of the data End of explanation """ df.shape """ Explanation: The shape attribute shows us the number of elements: End of explanation """ df.dtypes """ Explanation: The columns attribute gives us the column names The index attribute gives us the index names The dtypes attribute gives the data types of each column: End of explanation """ df_small = df[ 'stoptime'] type(df_small) df_small.tolist() """ Explanation: Sophisticated Data Manipulation Here we'll cover some key features of manipulating data with pandas Access columns by name using square-bracket indexing: End of explanation """ trip_duration_hours = df['tripduration']/3600 trip_duration_hours[:3] df['trip_duration_hours'] = df['tripduration']/3600 del df['trip_duration_hours'] df.head() df.loc[[0,1],:] df_long_trips = df[df['tripduration'] >10000] sel = df['tripduration'] >10000 df_long_trips = df[sel] len(df) # Make a copy of a slice df_subset = df[['starttime', 'stoptime']].copy() df_subset['trip_hours'] = df['tripduration']/3600 """ Explanation: Mathematical operations on columns happen element-wise: End of explanation """ # """ Explanation: Columns can be created (or overwritten) with the assignment operator. Let's create a tripminutes column with the number of minutes for each trip More complicated mathematical operations can be done with tools in the numpy package: Working with Times One trick to know when working with columns of times is that Pandas DateTimeIndex provides a nice interface for working with columns of times. For a dataset of this size, using pd.to_datetime and specifying the date format can make things much faster (from the strftime reference, we see that the pronto data has format "%m/%d/%Y %I:%M:%S %p" (Note: you can also use infer_datetime_format=True in most cases to automatically infer the correct format, though due to a bug it doesn't work when AM/PM are present) With it, we can extract, the hour of the day, the day of the week, the month, and a wide range of other views of the time: Simple Grouping of Data The real power of Pandas comes in its tools for grouping and aggregating data. Here we'll look at value counts and the basics of group-by operations. Value Counts Pandas includes an array of useful functionality for manipulating and analyzing tabular data. We'll take a look at two of these here. The pandas.value_counts returns statistics on the unique values within each column. We can use it, for example, to break down rides by gender: End of explanation """ # """ Explanation: Or to break down rides by age: End of explanation """ # """ Explanation: By default, the values rather than the index are sorted. Use sort=False to turn this behavior off: End of explanation """ # """ Explanation: We can explore other things as well: day of week, hour of day, etc. 
End of explanation """ df.head() df_count = df.groupby(['from_station_id']).count() df_count.head() ser_count = df_count['trip_id'] type(ser_count) ser_count.sort_values() df_count1 = df_count['trip_id'] df_count2 = df_count1.rename(columns={'trip_id': 'count'}) df_count2['new'] = 1 df_count2.head() df_mean = df.groupby(['from_station_id']).mean() df_mean.head() dfgroup = df.groupby(['from_station_id']) dfgroup.groups """ Explanation: Group-by Operation One of the killer features of the Pandas dataframe is the ability to do group-by operations. You can visualize the group-by like this (image borrowed from the Python Data Science Handbook) End of explanation """ %matplotlib inline """ Explanation: The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.) &lt;data object&gt;.groupby(&lt;grouping values&gt;).&lt;aggregate&gt;() for example, we can group by gender and find the average of all numerical columns: It's also possible to indes the grouped object like it is a dataframe: You can even group by multiple values: for example we can look at the trip duration by time of day and by gender: The unstack() operation can help make sense of this type of multiply-grouped data. What this technically does is split a multiple-valued index into an index plus columns: Visualizing data with pandas Of course, looking at tables of data is not very intuitive. Fortunately Pandas has many useful plotting functions built-in, all of which make use of the matplotlib library to generate plots. Whenever you do plotting in the IPython notebook, you will want to first run this magic command which configures the notebook to work well with plots: End of explanation """ import matplotlib.pyplot as plt df['tripduration'].hist() """ Explanation: Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data: End of explanation """ # A script for creating a dataframe with counts of the occurrence of a columns' values df_count = df.groupby('from_station_id').count() df_count1 = df_count[['trip_id']] df_count2 = df_count1.rename(columns={'trip_id': 'count'}) df_count2.head() def make_table_count(df_arg, groupby_column): df_count = df_arg.groupby(groupby_column).count() column_name = df.columns[0] df_count1 = df_count[[column_name]] df_count2 = df_count1.rename(columns={column_name: 'count'}) return df_count2 dff = make_table_count(df, 'from_station_id') dff.head() """ Explanation: Adjusting the Plot Style Matplotlib has a number of plot styles you can use. For example, if you like R you might use the ggplot style: Other plot types Pandas supports a range of other plotting types; you can find these by using the <TAB> autocomplete on the plot method: For example, we can create a histogram of trip durations: If you'd like to adjust the x and y limits of the plot, you can use the set_xlim() and set_ylim() method of the resulting object: Breakout: Exploring the Data Make a plot of the total number of rides as a function of month of the year (You'll need to extract the month, use a groupby, and find the appropriate aggregation to count the number in each group). Split this plot by gender. Do you see any seasonal ridership patterns by gender? Split this plot by user type. Do you see any seasonal ridership patterns by usertype? Repeat the above three steps, counting the number of rides by time of day rather thatn by month. 
Are there any other interesting insights you can discover in the data using these tools? Using Files Writing and running python modules Using python modules in your Jupyter Notebook End of explanation """
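A sketch of the first breakout task, run after the notebook's %matplotlib inline and CSV download cells; the column names 'starttime', 'trip_id' and 'usertype' are assumed to match the Pronto CSV used in this lecture, and the timestamp format is the one quoted above.
import pandas as pd

df = pd.read_csv('pronto.csv')
times = pd.to_datetime(df['starttime'], format="%m/%d/%Y %I:%M:%S %p")

# Total number of rides in each month of the year
rides_by_month = df.groupby(times.dt.month)['trip_id'].count()
rides_by_month.plot(kind='bar')

# The same count split by user type: months down the rows, one column per usertype
by_month_and_type = df.groupby([times.dt.month, df['usertype']]).size().unstack()
by_month_and_type.plot()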
PublicHealthEngland/pygom
notebooks/Stochasticity.ipynb
gpl-2.0
import pygom import pkg_resources print('PyGOM version %s' %pkg_resources.get_distribution('pygom').version) """ Explanation: Stochastic simulation Examples taken from https://arxiv.org/pdf/1803.06934.pdf (see page 11 for stochastic simulations). Examples are performed on an SIR model. $\frac{dS}{dt} = -\beta S I $ $\frac{dI}{dt} = \beta S I - \gamma I$ $\frac{dR}{dt} = \gamma I$ End of explanation """ from pygom import Transition, TransitionType, SimulateOde import numpy as np # construct model states = ['S', 'I', 'R'] params = ['beta', 'gamma', 'N'] transitions = [Transition(origin='S', destination='I', equation='beta*S*I/N', transition_type=TransitionType.T), Transition(origin='I', destination='R', equation='gamma*I', transition_type=TransitionType.T)] model_p = SimulateOde(states, params, transition=transitions) # initial conditions N = 7781984.0 in_inf = round(0.0000001*N) init_state = [N - in_inf, in_inf, 0.0] # time t = np.linspace (0 , 50 , 101) # deterministic parameter values param_evals = [('beta', 3.6), ('gamma', 0.2), ('N', N)] # define parameter distributions from pygom.utilR import rgamma d = dict() d['beta'] = (rgamma, {'shape':3600.0, 'rate':1000.0}) d['gamma'] = (rgamma, {'shape':1000.0, 'rate':500.0}) d['N'] = N model_p.parameters = d model_p.initial_values = (init_state, t[0]) # solve for 10 parameter sets Ymean, Yall = model_p.simulate_param(t[1::], iteration=10, full_output=True) # plot solutions import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [12, 6] fig, (ax1, ax2, ax3) = plt.subplots(1,3) for i in range(np.shape(Yall)[0]): ax1.plot(t, Yall[i][:,0]) ax2.plot(t, Yall[i][:,1]) ax3.plot(t, Yall[i][:,2]) ax1.plot(t, Ymean[:,0], linewidth=3,color='k') ax2.plot(t, Ymean[:,1], linewidth=3,color='k') ax3.plot(t, Ymean[:,2], linewidth=3,color='k') ax1.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) ax2.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) ax3.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) ax1.set_title('Susceptible') ax2.set_title('Infected') ax3.set_title('Removed') #!TODO Add legend with beta and gamma parameter """ Explanation: Parameter stochasticity Parameter values are sampled from a distribution. Deterministic solutions are found for a given set of parameter values. In this example, $\beta$ and $\gamma$ are sampled from a gamma distribution. End of explanation """ # construct model model_j = SimulateOde(states, params, transition=transitions) model_j.parameters = param_evals model_j.initial_values = (init_state, t[0]) # run 10 simulations simX, simT = model_j.simulate_jump(t[1::], iteration=10, full_output=True) """ Explanation: Jump process Movements between states are discrete, where the probability of transition is given by $Pr(process\ j\ jump\ within\ time\ \tau) = \lambda_j \exp^{-\lambda_j \tau}$. End of explanation """
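A plotting sketch for the jump-process runs, mirroring the three panels drawn for the parameter-stochasticity case earlier. This is an addition, and it assumes each entry of simX is an array with one column per state (S, I, R); the x-axis is rebuilt defensively in case the returned trajectories do not line up exactly with t[1:].
import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(12, 6))
titles = ['Susceptible', 'Infected', 'Removed']

for sim in simX:
    sim = np.asarray(sim)
    # Use t[1:] when the lengths match, otherwise spread the samples over the same interval
    x = t[1:] if len(sim) == len(t[1:]) else np.linspace(t[1], t[-1], len(sim))
    for j, ax in enumerate(axes):
        ax.plot(x, sim[:, j], alpha=0.6)

for j, ax in enumerate(axes):
    ax.set_title(titles[j])
    ax.ticklabel_format(style='sci', axis='y', scilimits=(0, 0))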
ES-DOC/esdoc-jupyterhub
notebooks/miroc/cmip6/models/miroc-es2h/atmos.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2h', 'atmos') """ Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: MIROC Source ID: MIROC-ES2H Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:40 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. 
Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) """ Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) """ Explanation: 6.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. 
ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
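# Illustrative note (hypothetical values, for guidance only): ENUM entries must
# match one of the listed "Valid Choices" exactly. For a multi-valued property
# (Cardinality 1.N) such as the physical representation below, every applicable
# choice is recorded, e.g. "cloud droplet number concentration" and
# "effective cloud droplet radii"; consult the ES-DOC/pyesdoc guidance for how
# repeated values are entered.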
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.3. 
Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. 
Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.2. Physical Reprenstation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.3. 
Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. 
Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) """ Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) """ Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
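# Illustrative example (hypothetical value, for guidance only): BOOLEAN
# properties take the Python literals True or False, e.g.
#     DOC.set_value(True)
# for a notional model that treats convective, stratiform and boundary layer
# clouds with separate schemes.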
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 38. 
Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 38.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
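# Illustrative example (hypothetical text, for guidance only): overview
# properties expect a short free-text description, e.g.
#     DOC.set_value("The COSP simulator is run inline to provide ISCCP, "
#                   "CALIPSO and CloudSat diagnostics.")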
DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-columns used to simulate sub-grid variability End of explanation """ # PROPERTY ID - DO NOT EDIT !
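# Illustrative example (hypothetical value, for guidance only): the radar
# frequency below is a FLOAT in Hz, so a CloudSat-like 94 GHz radar simulator
# would be entered as
#     DOC.set_value(94.0e9)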
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.4. Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.2. 
Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propagation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT !
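# Illustrative example (hypothetical value, for guidance only): single-valued
# ENUMs (Cardinality 1.1) take exactly one of the listed choices, e.g.
#     DOC.set_value("wave saturation vs Richardson number")
# Schemes not covered by the list are documented via "Other: [Please specify]".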
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) """ Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propogation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 50.2. Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 51.4. 
Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation """
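To make the template cells above concrete, a completed property just pairs DOC.set_id with DOC.set_value: ENUM properties take one of the strings listed under "Valid Choices", while FLOAT and BOOLEAN properties take a bare value. The two values below are purely illustrative placeholders, not taken from any documented model configuration.

# Illustrative only -- example of filling in two of the properties documented above.
# ENUM (cardinality 1.1): pass one of the listed choice strings.
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
DOC.set_value("stratospheric aerosols optical thickness")

# FLOAT (cardinality 0.1): pass a plain number, e.g. a hypothetical fixed solar constant in W m-2.
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
DOC.set_value(1361.0)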
hail-is/hail
datasets/notebooks/1kg_NYGC_30x_datasets.ipynb
mit
ht_samples = hl.import_table( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_samples_ped_population.txt.bgz", delimiter="\s+", impute=True ) ht_samples = ht_samples.annotate( FatherID = hl.if_else(ht_samples.FatherID == "0", hl.missing(hl.tstr), ht_samples.FatherID), MotherID = hl.if_else(ht_samples.MotherID == "0", hl.missing(hl.tstr), ht_samples.MotherID), Sex = hl.if_else(ht_samples.Sex == 1, "male", "female") ) ht_samples = ht_samples.key_by("SampleID") n_rows = ht_samples.count() n_partitions = ht_samples.n_partitions() ht_samples = ht_samples.annotate_globals( metadata=hl.struct( name="1000_Genomes_HighCov_samples", n_rows=n_rows, n_partitions=n_partitions) ) ht_samples.write("gs://hail-datasets-us/1000_Genomes_NYGC_30x_HighCov_samples.ht", overwrite=False) ht_samples = hl.read_table("gs://hail-datasets-us/1000_Genomes_NYGC_30x_HighCov_samples.ht") ht_samples.describe() """ Explanation: NYGC 30x HighCov samples Hail Table: End of explanation """ mt = hl.import_vcf( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_phased_chr{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22}_GRCh38.vcf.bgz", reference_genome="GRCh38" ) n_rows, n_cols = mt.count() n_partitions = mt.n_partitions() mt = mt.annotate_globals( metadata=hl.struct( name="1000_Genomes_HighCov_autosomes", reference_genome="GRCh38", n_rows=n_rows, n_cols=n_cols, n_partitions=n_partitions ) ) # Get list of INFO fields that are arrays known_keys = [x[0] for x in list(mt.row.info.items()) if "array" in str(x[1])] # Extract value from INFO array fields (all arrays are length 1) mt = mt.annotate_rows( info = mt.info.annotate( **{k: hl.or_missing(hl.is_defined(mt.info[k]), mt.info[k][0]) for k in known_keys} ) ) mt = mt.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_phased_GRCh38.mt", overwrite=False, _read_if_exists=True ) mt = mt.annotate_cols(**ht_samples[mt.s]) mt = hl.sample_qc(mt) mt = hl.variant_qc(mt) mt.write("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/autosomes_phased.mt", overwrite=False) mt = hl.read_matrix_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/autosomes_phased.mt") mt.describe() """ Explanation: Phased genotypes Creating MTs for the phased data is straightforward, as multiallelic variants were split during phasing. 
Autosomes (phased): End of explanation """ mt = hl.import_vcf( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_phased_chrX_GRCh38.vcf.bgz", reference_genome="GRCh38" ) n_rows, n_cols = mt.count() n_partitions = mt.n_partitions() mt = mt.annotate_globals( metadata=hl.struct( name="1000_Genomes_HighCov_chrX", reference_genome="GRCh38", n_rows=n_rows, n_cols=n_cols, n_partitions=n_partitions ) ) # Get list of INFO fields that are arrays known_keys = [x[0] for x in list(mt.row.info.items()) if "array" in str(x[1])] # Extract appropriate value from INFO array fields (all arrays are length 1) mt = mt.annotate_rows( info = mt.info.annotate( **{k: hl.or_missing(hl.is_defined(mt.info[k]), mt.info[k][0]) for k in known_keys} ) ) mt = mt.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_phased_GRCh38.mt", overwrite=False, _read_if_exists=True ) mt = mt.annotate_cols(**ht_samples[mt.s]) mt = hl.sample_qc(mt) mt = hl.variant_qc(mt) mt.write("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrX_phased.mt", overwrite=False) mt = hl.read_matrix_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrX_phased.mt") mt.describe() """ Explanation: ChrX (phased): End of explanation """ mt = hl.import_vcf( ("gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_" "chr{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22}_" "GRCh38.vcf.bgz"), reference_genome="GRCh38", array_elements_required=False ) mt = mt.annotate_entries( PL = hl.if_else(mt.PL.contains(hl.missing(hl.tint32)), hl.missing(mt.PL.dtype), mt.PL) ) mt = mt.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_imported_vcf.mt", overwrite=False, _read_if_exists=True ) """ Explanation: Unphased genotypes Autosomes (unphased): Import chr1-chr22 VCF to MatrixTable and checkpoint: End of explanation """ mt = hl.read_matrix_table( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_imported_vcf.mt" ) bi = mt.filter_rows(hl.len(mt.alleles) == 2) bi = bi.annotate_rows(a_index=1, was_split=False) bi = bi.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_biallelic.mt", overwrite=False, _read_if_exists=True ) multi = mt.filter_rows(hl.len(mt.alleles) > 2) multi = multi.annotate_entries(PL = hl.missing(multi.PL.dtype)) multi = multi.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_multiallelic.mt", overwrite=False, _read_if_exists=True ) split = hl.split_multi_hts(multi, keep_star=True, permit_shuffle=True) split = split.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_multiallelic_split.mt", overwrite=False, _read_if_exists=True ) unioned = split.union_rows(bi) unioned = unioned.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_unioned.mt", overwrite=False, _read_if_exists=True ) unioned = unioned.repartition(12000, shuffle=True) unioned = unioned.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_unioned_repart.mt", overwrite=False, _read_if_exists=True ) """ Explanation: Separate biallelic and multiallelic variants, split multiallelic variants with split_multi_hts, and then union_rows the split multiallelic MT back to the biallelic MT. For multiallelic variants we will just set PL to be missing, to avoid running into index out of bounds errors in split_multi_hts. 
End of explanation """ unioned = hl.read_matrix_table( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_unioned_repart.mt" ) # Get list of INFO fields that are arrays known_keys = [x[0] for x in list(unioned.row.info.items()) if "array" in str(x[1])] # Extract appropriate values from INFO array fields after splitting mt = unioned.annotate_rows( info = unioned.info.annotate( **{k: hl.or_missing(hl.is_defined(unioned.info[k]), unioned.info[k][unioned.a_index - 1]) for k in known_keys} ) ) n_rows, n_cols = mt.count() n_partitions = mt.n_partitions() mt = mt.annotate_globals( metadata=hl.struct( name="1000_Genomes_HighCov_autosomes", reference_genome="GRCh38", n_rows=n_rows, n_cols=n_cols, n_partitions=n_partitions ) ) ht_samples = hl.read_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/samples.ht") mt = mt.annotate_cols(**ht_samples[mt.s]) mt = hl.sample_qc(mt) mt = hl.variant_qc(mt) mt.write("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/autosomes_unphased.mt", overwrite=False) mt = hl.read_matrix_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/autosomes_unphased.mt") mt.describe() """ Explanation: After splitting multiallelic variants, we need to extract the appropriate values from the INFO array fields with a_index. Then annotate globals with metadata, annotate columns with sample relationships, perform sample_qc and variant_qc, and write final MT to hail-datasets-us. End of explanation """ mt = hl.import_vcf( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_chrX_GRCh38.vcf.bgz", reference_genome="GRCh38", array_elements_required=False ) mt = mt.annotate_entries( PL = hl.if_else(mt.PL.contains(hl.missing(hl.tint32)), hl.missing(mt.PL.dtype), mt.PL) ) mt = mt.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_imported_vcf.mt", overwrite=False, _read_if_exists=True ) """ Explanation: ChrX (unphased): Import chrX VCF to MatrixTable and checkpoint: End of explanation """ mt = hl.read_matrix_table( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_imported_vcf.mt" ) bi = mt.filter_rows(hl.len(mt.alleles) == 2) bi = bi.annotate_rows(a_index=1, was_split=False) bi = bi.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_biallelic.mt", overwrite=False, _read_if_exists=True ) multi = mt.filter_rows(hl.len(mt.alleles) > 2) multi = multi.annotate_entries(PL = hl.missing(multi.PL.dtype)) multi = multi.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_multiallelic.mt", overwrite=False, _read_if_exists=True ) split = hl.split_multi_hts(multi, keep_star=True, permit_shuffle=True) split = split.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_multiallelic_split.mt", overwrite=False, _read_if_exists=True ) unioned = split.union_rows(bi) unioned = unioned.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_unioned.mt", overwrite=False, _read_if_exists=True ) unioned = unioned.repartition(512, shuffle=True) unioned = unioned.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_unioned_repart.mt", overwrite=False, _read_if_exists=True ) """ Explanation: Separate biallelic and multiallelic variants, split multiallelic variants with split_multi_hts, and then union_rows the split multiallelic MT back to the biallelic MT. 
For multiallelic variants we will just set PL to be missing, to avoid running into index out of bounds errors in split_multi_hts. End of explanation """ unioned = hl.read_matrix_table( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_unioned_repart.mt" ) # Get list of INFO fields that are arrays known_keys = [x[0] for x in list(unioned.row.info.items()) if "array" in str(x[1])] # Extract appropriate values from INFO array fields after splitting mt = unioned.annotate_rows( info = unioned.info.annotate( **{k: hl.or_missing(hl.is_defined(unioned.info[k]), unioned.info[k][unioned.a_index - 1]) for k in known_keys} ) ) n_rows, n_cols = mt.count() n_partitions = mt.n_partitions() mt = mt.annotate_globals( metadata=hl.struct( name="1000_Genomes_HighCov_chrX", reference_genome="GRCh38", n_rows=n_rows, n_cols=n_cols, n_partitions=n_partitions ) ) ht_samples = hl.read_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/samples.ht") mt = mt.annotate_cols(**ht_samples[mt.s]) mt = hl.sample_qc(mt) mt = hl.variant_qc(mt) mt.write("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrX_unphased.mt", overwrite=False) mt = hl.read_matrix_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrX_unphased.mt") mt.describe() """ Explanation: After splitting multiallelic variants, we need to extract the appropriate values from the INFO array fields with a_index. Then annotate globals with metadata, annotate columns with sample relationships, perform sample_qc and variant_qc, and write final MT to hail-datasets-us. End of explanation """ mt = hl.import_vcf( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_chrY_GRCh38.vcf.bgz", reference_genome="GRCh38", array_elements_required=False ) mt = mt.annotate_entries( PL = hl.if_else(mt.PL.contains(hl.missing(hl.tint32)), hl.missing(mt.PL.dtype), mt.PL) ) mt = mt.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_imported_vcf.mt", overwrite=False, _read_if_exists=True ) """ Explanation: ChrY (unphased): Import chrY VCF to MatrixTable and checkpoint: End of explanation """ mt = hl.read_matrix_table( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_imported_vcf.mt" ) bi = mt.filter_rows(hl.len(mt.alleles) == 2) bi = bi.annotate_rows(a_index=1, was_split=False) bi = bi.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_biallelic.mt", overwrite=False, _read_if_exists=True ) multi = mt.filter_rows(hl.len(mt.alleles) > 2) multi = multi.annotate_entries(PL = hl.missing(multi.PL.dtype)) multi = multi.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_multiallelic.mt", overwrite=False, _read_if_exists=True ) split = hl.split_multi_hts(multi, keep_star=True, permit_shuffle=True) split = split.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_multiallelic_split.mt", overwrite=False, _read_if_exists=True ) unioned = split.union_rows(bi) unioned = unioned.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_unioned.mt", overwrite=False, _read_if_exists=True ) unioned = unioned.repartition(8, shuffle=True) unioned = unioned.checkpoint( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_unioned_repart.mt", overwrite=False, _read_if_exists=True ) """ Explanation: Separate biallelic and multiallelic variants, split multiallelic variants with split_multi_hts, and then union_rows 
the split multiallelic MT back to the biallelic MT. For multiallelic variants we will just set PL to be missing, to avoid running into index out of bounds errors in split_multi_hts. End of explanation """ unioned = hl.read_matrix_table( "gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_unioned_repart.mt" ) # Get list of INFO fields that are arrays known_keys = [x[0] for x in list(unioned.row.info.items()) if "array" in str(x[1])] # Extract appropriate values from INFO array fields after splitting mt = unioned.annotate_rows( info = unioned.info.annotate( **{k: hl.or_missing(hl.is_defined(unioned.info[k]), unioned.info[k][unioned.a_index - 1]) for k in known_keys} ) ) n_rows, n_cols = mt.count() n_partitions = mt.n_partitions() mt = mt.annotate_globals( metadata=hl.struct( name="1000_Genomes_HighCov_chrY", reference_genome="GRCh38", n_rows=n_rows, n_cols=n_cols, n_partitions=n_partitions ) ) ht_samples = hl.read_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/samples.ht") mt = mt.annotate_cols(**ht_samples[mt.s]) mt = hl.sample_qc(mt) mt = hl.variant_qc(mt) mt.write("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrY_unphased.mt", overwrite=False) mt = hl.read_matrix_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrY_unphased.mt") mt.describe() """ Explanation: After splitting multiallelic variants, we need to extract the appropriate values from the INFO array fields with a_index. Then annotate globals with metadata, annotate columns with sample relationships, perform sample_qc and variant_qc, and write final MT to hail-datasets-us. End of explanation """ import json import os import textwrap output_dir = os.path.abspath("../../hail/python/hail/docs/datasets/schemas") datasets_path = os.path.abspath("../../hail/python/hail/experimental/datasets.json") with open(datasets_path, "r") as f: datasets = json.load(f) names = datasets.keys() for name in [name for name in names if "1000_Genomes_HighCov" in name]: versions = sorted(set(dataset["version"] for dataset in datasets[name]["versions"])) if not versions: versions = [None] reference_genomes = sorted(set(dataset["reference_genome"] for dataset in datasets[name]["versions"])) if not reference_genomes: reference_genomes = [None] print(name) # Create schemas for unphased versions, since phased entries only have GT if name == "1000_Genomes_HighCov_chrY": v = versions[0] else: v = versions[1] print(v) print(reference_genomes[0] + "\n") path = [dataset["url"]["gcp"]["us"] for dataset in datasets[name]["versions"] if all([dataset["version"] == v, dataset["reference_genome"] == reference_genomes[0]])] assert len(path) == 1 path = path[0] if path.endswith(".ht"): table = hl.methods.read_table(path) table_class = "hail.Table" else: table = hl.methods.read_matrix_table(path) table_class = "hail.MatrixTable" description = table.describe(handler=lambda x: str(x)).split("\n") description = "\n".join([line.rstrip() for line in description]) template = """.. _{dataset}: {dataset} {underline1} * **Versions:** {versions} * **Reference genome builds:** {ref_genomes} * **Type:** :class:`{class}` Schema ({version0}, {ref_genome0}) {underline2} .. 
code-block:: text {schema} """ context = { "dataset": name, "underline1": len(name) * "=", "version0": v, "ref_genome0": reference_genomes[0], "versions": ", ".join([str(version) for version in versions]), "ref_genomes": ", ".join([str(reference_genome) for reference_genome in reference_genomes]), "underline2": len("".join(["Schema (", str(v), ", ", str(reference_genomes[0]), ")"])) * "~", "schema": textwrap.indent(description, " "), "class": table_class } with open(output_dir + f"/{name}.rst", "w") as f: f.write(template.format(**context).strip()) """ Explanation: Create/update schemas End of explanation """
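The unphased autosome, chrX, and chrY sections above repeat the same split / union / re-index steps. As a summary, a hedged sketch of that shared logic pulled into a helper function might look like the following; the function name is my own, and the checkpoints and repartitioning used above are omitted for brevity.

def split_and_reindex(mt):
    # Separate biallelic and multiallelic variants, as in the sections above.
    bi = mt.filter_rows(hl.len(mt.alleles) == 2)
    bi = bi.annotate_rows(a_index=1, was_split=False)
    multi = mt.filter_rows(hl.len(mt.alleles) > 2)
    # Set PL to missing for multiallelics to avoid index errors in split_multi_hts.
    multi = multi.annotate_entries(PL=hl.missing(multi.PL.dtype))
    split = hl.split_multi_hts(multi, keep_star=True, permit_shuffle=True)
    unioned = split.union_rows(bi)
    # Reduce per-alt INFO array fields to scalars using a_index.
    array_keys = [k for k, t in unioned.row.info.items() if "array" in str(t)]
    return unioned.annotate_rows(
        info=unioned.info.annotate(
            **{k: hl.or_missing(hl.is_defined(unioned.info[k]),
                                unioned.info[k][unioned.a_index - 1])
               for k in array_keys}
        )
    )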
Diyago/Machine-Learning-scripts
time series regression/DL aproach for timeseries/pytorch_timeseries_RNN.ipynb
apache-2.0
import torch from torch import nn import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(8,5)) # how many time steps/data pts are in one batch of data seq_length = 20 # generate evenly spaced data pts time_steps = np.linspace(0, np.pi, seq_length + 1) data = np.sin(time_steps) data.resize((seq_length + 1, 1)) # size becomes (seq_length+1, 1), adds an input_size dimension x = data[:-1] # all but the last piece of data y = data[1:] # all but the first # display the data plt.plot(time_steps[1:], x, 'r.', label='input, x') # x plt.plot(time_steps[1:], y, 'b.', label='target, y') # y plt.legend(loc='best') plt.show() """ Explanation: Simple RNN In ths notebook, we're going to train a simple RNN to do time-series prediction. Given some set of input data, it should be able to generate a prediction for the next time step! <img src='assets/time_prediction.png' width=40% /> First, we'll create our data Then, define an RNN in PyTorch Finally, we'll train our network and see how it performs Import resources and create data End of explanation """ class RNN(nn.Module): def __init__(self, input_size, output_size, hidden_dim, n_layers): super(RNN, self).__init__() self.hidden_dim=hidden_dim # define an RNN with specified parameters # batch_first means that the first dim of the input and output will be the batch_size self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True) # last, fully-connected layer self.fc = nn.Linear(hidden_dim, output_size) def forward(self, x, hidden): # x (batch_size, seq_length, input_size) # hidden (n_layers, batch_size, hidden_dim) # r_out (batch_size, time_step, hidden_size) batch_size = x.size(0) # get RNN outputs r_out, hidden = self.rnn(x, hidden) # shape output to be (batch_size*seq_length, hidden_dim) r_out = r_out.view(-1, self.hidden_dim) # get final output output = self.fc(r_out) return output, hidden """ Explanation: Define the RNN Next, we define an RNN in PyTorch. We'll use nn.RNN to create an RNN layer, then we'll add a last, fully-connected layer to get the output size that we want. An RNN takes in a number of parameters: * input_size - the size of the input * hidden_dim - the number of features in the RNN output and in the hidden state * n_layers - the number of layers that make up the RNN, typically 1-3; greater than 1 means that you'll create a stacked RNN * batch_first - whether or not the input/output of the RNN will have the batch_size as the first dimension (batch_size, seq_length, hidden_dim) Take a look at the RNN documentation to read more about recurrent layers. End of explanation """ # test that dimensions are as expected test_rnn = RNN(input_size=1, output_size=1, hidden_dim=10, n_layers=2) # generate evenly spaced, test data pts time_steps = np.linspace(0, np.pi, seq_length) data = np.sin(time_steps) data.resize((seq_length, 1)) test_input = torch.Tensor(data).unsqueeze(0) # give it a batch_size of 1 as first dimension print('Input size: ', test_input.size()) # test out rnn sizes test_out, test_h = test_rnn(test_input, None) print('Output size: ', test_out.size()) print('Hidden state size: ', test_h.size()) """ Explanation: Check the input and output dimensions As a check that your model is working as expected, test out how it responds to input data. 
End of explanation """ # decide on hyperparameters input_size=1 output_size=1 hidden_dim=32 n_layers=1 # instantiate an RNN rnn = RNN(input_size, output_size, hidden_dim, n_layers) print(rnn) """ Explanation: Training the RNN Next, we'll instantiate an RNN with some specified hyperparameters. Then train it over a series of steps, and see how it performs. End of explanation """ # MSE loss and Adam optimizer with a learning rate of 0.01 criterion = nn.MSELoss() optimizer = torch.optim.Adam(rnn.parameters(), lr=0.01) """ Explanation: Loss and Optimization This is a regression problem: can we train an RNN to accurately predict the next data point, given a current data point? The data points are coordinate values, so to compare a predicted and ground_truth point, we'll use a regression loss: the mean squared error. It's typical to use an Adam optimizer for recurrent models. End of explanation """ # train the RNN def train(rnn, n_steps, print_every): # initialize the hidden state hidden = None for batch_i, step in enumerate(range(n_steps)): # defining the training data time_steps = np.linspace(step * np.pi, (step+1)*np.pi, seq_length + 1) data = np.sin(time_steps) data.resize((seq_length + 1, 1)) # input_size=1 x = data[:-1] y = data[1:] # convert data into Tensors x_tensor = torch.Tensor(x).unsqueeze(0) # unsqueeze gives a 1, batch_size dimension y_tensor = torch.Tensor(y) # outputs from the rnn prediction, hidden = rnn(x_tensor, hidden) ## Representing Memory ## # make a new variable for hidden and detach the hidden state from its history # this way, we don't backpropagate through the entire history hidden = hidden.data # calculate the loss loss = criterion(prediction, y_tensor) # zero gradients optimizer.zero_grad() # perform backprop and update weights loss.backward() optimizer.step() # display loss and predictions if batch_i%print_every == 0: print('Loss: ', loss.item()) plt.plot(time_steps[1:], x, 'r.') # input plt.plot(time_steps[1:], prediction.data.numpy().flatten(), 'b.') # predictions plt.show() return rnn # train the rnn and monitor results n_steps = 75 print_every = 15 trained_rnn = train(rnn, n_steps, print_every) """ Explanation: Defining the training function This function takes in an rnn, a number of steps to train for, and returns a trained rnn. This function is also responsible for displaying the loss and the predictions, every so often. Hidden State Pay close attention to the hidden state, here: * Before looping over a batch of training data, the hidden state is initialized * After a new hidden state is generated by the rnn, we get the latest hidden state, and use that as input to the rnn for the following steps End of explanation """
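The training loop above only ever asks the network for one step ahead. As a follow-up (my own sketch, not part of the original notebook), the trained RNN can be rolled out autoregressively by feeding each prediction back in as the next input and carrying the hidden state forward, just as the loop carries it between batches.

def rollout(rnn, seed_seq, n_future):
    """Warm up on seed_seq, then predict n_future steps by feeding outputs back in."""
    rnn.eval()
    preds = []
    with torch.no_grad():
        inp = torch.Tensor(seed_seq).unsqueeze(0)   # (1, seq_length, 1)
        out, hidden = rnn(inp, None)                # warm up; out is (seq_length, 1)
        last = out[-1].view(1, 1, 1)                # last prediction becomes next input
        for _ in range(n_future):
            out, hidden = rnn(last, hidden)
            preds.append(out.item())
            last = out.view(1, 1, 1)
    return preds

seed = np.sin(np.linspace(0, np.pi, seq_length)).reshape(-1, 1)
future = rollout(trained_rnn, seed, n_future=40)
plt.plot(future, 'g.')
plt.show()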
roatienza/Deep-Learning-Experiments
versions/2022/tools/python/np_demo.ipynb
mit
import numpy as np import matplotlib.pyplot as plt """ Explanation: Demonstration of numpy for data synthesis and manipulation numpy is a numerical computing library in Python. It supports linear algebra operations that are useful in deep learning. In particular, numpy is useful for data loading, preparation, synthesis and manipulation. Below are some examples where numpy is used in vision and speech. Note: Jupyter notebook is also supported in Visual Studio Code. In the Command Palette, type "Create New Jupyter Notenook". End of explanation """ img = np.random.randint(0, 255, size=(96,96), dtype=np.uint8) plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() """ Explanation: Generate a 96x96 pixel grayscale image. Each pixel has a random value. End of explanation """ img = np.ones((96,96), dtype=np.uint8)*255 for i in range(4): img[i*24:(i+1)*24, i*24:(i+1)*24] = 0 for i in range(2,4): img[i*24:(i+1)*24, (i-2)*24:(i-1)*24] = 0 for i in range(0,2): img[i*24:(i+1)*24, (i+2)*24:(i+3)*24] = 0 plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() """ Explanation: Let's create a 4x4 chessboard pattern. Image size is stil 96x96 pixel grayscale. First example is using loops. Not exactly efficient and scalable. End of explanation """ def chessboard(shape): return np.indices(shape).sum(axis=0) % 2 img = chessboard((4,4))*255 img = np.repeat(img, (24), axis=0) img = np.repeat(img, (24), axis=1) plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() """ Explanation: Second example is more efficient as the operations are parallelizable. End of explanation """ imgs = np.split(img, 4, axis=0) img = np.hstack(imgs) plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() """ Explanation: Another example is reshaping the pattern. For example, we might want to flatten the chessboard pattern. End of explanation """ from matplotlib import image img = image.imread("aki_dog.jpg") plt.imshow(img) plt.show() """ Explanation: With matplotlib, we can load an image from the filesystem into a numpy array. In this example, the image file is in the same directory as this jupyter notebook. End of explanation """ img = np.mean(img, axis=-1) print(img.shape) plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() """ Explanation: numpy can perform rgb to grayscale conversion. For example, by taking the mean of the rgb components. End of explanation """ import numpy as np from IPython.display import Audio import matplotlib.pyplot as plt samples_per_sec = 22050 freq = 500 n_points = samples_per_sec*5 t = np.linspace(0,5,n_points) data = np.sin(2*np.pi*freq*t) Audio(data,rate=samples_per_sec) """ Explanation: A limitation of numpy is it can not easily do image transformations such as shearing, rotating, etc. For that, other libraries are used such PIL or torchvision. numpy can also be used to synthesize audio waveforms. For example, let us synthesize a 500Hz sine wave. End of explanation """
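Building on the 500Hz sine wave above (this block is my own extension, not part of the original demo), numpy alone is enough to mix two tones and shape them with an amplitude envelope before handing the samples to Audio.

samples_per_sec = 22050
duration = 2
t = np.linspace(0, duration, samples_per_sec * duration)
# mix a 440 Hz and a 660 Hz tone, then fade out linearly over the clip
tone = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
envelope = np.linspace(1.0, 0.0, t.size)
data = tone * envelope
Audio(data, rate=samples_per_sec)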
pyemma/deeplearning
assignment2/FullyConnectedNets.ipynb
gpl-3.0
# As usual, a bit of setup import time import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.fc_net import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.iteritems(): print '%s: ' % k, v.shape """ Explanation: Fully-Connected Neural Nets In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures. In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this: ```python def layer_forward(x, w): """ Receive inputs x and weights w """ # Do some computations ... z = # ... some intermediate value # Do some more computations ... out = # the output cache = (x, w, z, out) # Values we need to compute gradients return out, cache ``` The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this: ```python def layer_backward(dout, cache): """ Receive derivative of loss with respect to outputs and cache, and compute derivative with respect to inputs. """ # Unpack cache values x, w, z, out = cache # Use values in cache to compute derivatives dx = # Derivative of loss with respect to x dw = # Derivative of loss with respect to w return dx, dw ``` After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures. In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks. End of explanation """ # Test the affine_forward function num_inputs = 2 input_shape = (4, 5, 6) output_dim = 3 input_size = num_inputs * np.prod(input_shape) weight_size = output_dim * np.prod(input_shape) x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape) w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim) b = np.linspace(-0.3, 0.1, num=output_dim) out, _ = affine_forward(x, w, b) correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297], [ 3.25553199, 3.5141327, 3.77273342]]) # Compare your output with ours. 
The error should be around 1e-9. print 'Testing affine_forward function:' print 'difference: ', rel_error(out, correct_out) """ Explanation: Affine layer: foward Open the file cs231n/layers.py and implement the affine_forward function. Once you are done you can test your implementaion by running the following: End of explanation """ # Test the affine_backward function x = np.random.randn(10, 2, 3) w = np.random.randn(6, 5) b = np.random.randn(5) dout = np.random.randn(10, 5) dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout) _, cache = affine_forward(x, w, b) dx, dw, db = affine_backward(dout, cache) # The error should be around 1e-10 print 'Testing affine_backward function:' print 'dx error: ', rel_error(dx_num, dx) print 'dw error: ', rel_error(dw_num, dw) print 'db error: ', rel_error(db_num, db) """ Explanation: Affine layer: backward Now implement the affine_backward function and test your implementation using numeric gradient checking. End of explanation """ # Test the relu_forward function x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4) out, _ = relu_forward(x) correct_out = np.array([[ 0., 0., 0., 0., ], [ 0., 0., 0.04545455, 0.13636364,], [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]]) # Compare your output with ours. The error should be around 1e-8 print 'Testing relu_forward function:' print 'difference: ', rel_error(out, correct_out) """ Explanation: ReLU layer: forward Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following: End of explanation """ x = np.random.randn(10, 10) dout = np.random.randn(*x.shape) dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout) _, cache = relu_forward(x) dx = relu_backward(dout, cache) # The error should be around 1e-12 print 'Testing relu_backward function:' print 'dx error: ', rel_error(dx_num, dx) """ Explanation: ReLU layer: backward Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking: End of explanation """ from cs231n.layer_utils import affine_relu_forward, affine_relu_backward x = np.random.randn(2, 3, 4) w = np.random.randn(12, 10) b = np.random.randn(10) dout = np.random.randn(2, 10) out, cache = affine_relu_forward(x, w, b) dx, dw, db = affine_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout) print 'Testing affine_relu_forward:' print 'dx error: ', rel_error(dx_num, dx) print 'dw error: ', rel_error(dw_num, dw) print 'db error: ', rel_error(db_num, db) """ Explanation: "Sandwich" layers There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py. 
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass: End of explanation """ num_classes, num_inputs = 10, 50 x = 0.001 * np.random.randn(num_inputs, num_classes) y = np.random.randint(num_classes, size=num_inputs) dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False) loss, dx = svm_loss(x, y) # Test svm_loss function. Loss should be around 9 and dx error should be 1e-9 print 'Testing svm_loss:' print 'loss: ', loss print 'dx error: ', rel_error(dx_num, dx) dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False) loss, dx = softmax_loss(x, y) # Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8 print '\nTesting softmax_loss:' print 'loss: ', loss print 'dx error: ', rel_error(dx_num, dx) """ Explanation: Loss layers: Softmax and SVM You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py. You can make sure that the implementations are correct by running the following: End of explanation """ N, D, H, C = 3, 5, 50, 7 X = np.random.randn(N, D) y = np.random.randint(C, size=N) std = 1e-2 model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std) print 'Testing initialization ... ' W1_std = abs(model.params['W1'].std() - std) b1 = model.params['b1'] W2_std = abs(model.params['W2'].std() - std) b2 = model.params['b2'] assert W1_std < std / 10, 'First layer weights do not seem right' assert np.all(b1 == 0), 'First layer biases do not seem right' assert W2_std < std / 10, 'Second layer weights do not seem right' assert np.all(b2 == 0), 'Second layer biases do not seem right' print 'Testing test-time forward pass ... ' model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H) model.params['b1'] = np.linspace(-0.1, 0.9, num=H) model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C) model.params['b2'] = np.linspace(-0.9, 0.1, num=C) X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T scores = model.loss(X) correct_scores = np.asarray( [[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096], [12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143], [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]]) scores_diff = np.abs(scores - correct_scores).sum() assert scores_diff < 1e-6, 'Problem with test-time forward pass' print 'Testing training loss (no regularization)' y = np.asarray([0, 5, 1]) loss, grads = model.loss(X, y) correct_loss = 3.4702243556 assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss' model.reg = 1.0 loss, grads = model.loss(X, y) correct_loss = 26.5948426952 assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss' for reg in [0.0, 0.7]: print 'Running numeric gradient check with reg = ', reg model.reg = reg loss, grads = model.loss(X, y) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False) print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])) """ Explanation: Two-layer network In the previous assignment you implemented a two-layer neural network in a single monolithic class. 
Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations. Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation. End of explanation """ model = TwoLayerNet() solver = None ############################################################################## # TODO: Use a Solver instance to train a TwoLayerNet that achieves at least # # 50% accuracy on the validation set. # ############################################################################## solver = Solver(model, data, update_rule='sgd', optim_config={'learning_rate': 1e-3}, lr_decay=0.95, num_epochs=10, batch_size=100, print_every=100) solver.train() ############################################################################## # END OF YOUR CODE # ############################################################################## # Run this cell to visualize training loss and train / val accuracy plt.subplot(2, 1, 1) plt.title('Training loss') plt.plot(solver.loss_history, 'o') plt.xlabel('Iteration') plt.subplot(2, 1, 2) plt.title('Accuracy') plt.plot(solver.train_acc_history, '-o', label='train') plt.plot(solver.val_acc_history, '-o', label='val') plt.plot([0.5] * len(solver.val_acc_history), 'k--') plt.xlabel('Epoch') plt.legend(loc='lower right') plt.gcf().set_size_inches(15, 12) plt.show() """ Explanation: Solver In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class. Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set. End of explanation """ N, D, H1, H2, C = 2, 15, 20, 30, 10 X = np.random.randn(N, D) y = np.random.randint(C, size=(N,)) for reg in [0, 3.14]: print 'Running check with reg = ', reg model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C, reg=reg, weight_scale=5e-2, dtype=np.float64) loss, grads = model.loss(X, y) print 'Initial loss: ', loss for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5) print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])) """ Explanation: Multilayer network Next you will implement a fully-connected network with an arbitrary number of hidden layers. Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py. Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon. Initial loss and gradient check As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable? For gradient checking, you should expect to see errors around 1e-6 or less. End of explanation """ # TODO: Use a three-layer Net to overfit 50 training examples. 
num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } weight_scale = 1e-2 learning_rate = 1e-2 model = FullyConnectedNet([100, 100], weight_scale=weight_scale, dtype=np.float64) solver = Solver(model, small_data, print_every=10, num_epochs=20, batch_size=25, update_rule='sgd', optim_config={ 'learning_rate': learning_rate, } ) solver.train() plt.plot(solver.loss_history, 'o') plt.title('Training loss history') plt.xlabel('Iteration') plt.ylabel('Training loss') plt.show() """ Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs. End of explanation """ # TODO: Use a five-layer Net to overfit 50 training examples. num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } learning_rate = 1e-3 weight_scale = 1e-1 model = FullyConnectedNet([100, 100, 100, 100], weight_scale=weight_scale, dtype=np.float64) solver = Solver(model, small_data, print_every=10, num_epochs=20, batch_size=25, update_rule='sgd', optim_config={ 'learning_rate': learning_rate, } ) solver.train() plt.plot(solver.loss_history, 'o') plt.title('Training loss history') plt.xlabel('Iteration') plt.ylabel('Training loss') plt.show() """ Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs. End of explanation """ from cs231n.optim import sgd_momentum N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) config = {'learning_rate': 1e-3, 'velocity': v} next_w, _ = sgd_momentum(w, dw, config=config) expected_next_w = np.asarray([ [ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789], [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526], [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263], [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]]) expected_velocity = np.asarray([ [ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158], [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105], [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053], [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]]) print 'next_w error: ', rel_error(next_w, expected_next_w) print 'velocity error: ', rel_error(expected_velocity, config['velocity']) """ Explanation: Inline question: Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net? Answer: [FILL THIS IN] Update rules So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD. SGD+Momentum Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochstic gradient descent. 
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8. End of explanation """ num_train = 4000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } solvers = {} for update_rule in ['sgd', 'sgd_momentum']: print 'running with ', update_rule model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2) solver = Solver(model, small_data, num_epochs=5, batch_size=100, update_rule=update_rule, optim_config={ 'learning_rate': 1e-2, }, verbose=True) solvers[update_rule] = solver solver.train() print plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') for update_rule, solver in solvers.iteritems(): plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label=update_rule) plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label=update_rule) plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label=update_rule) for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() """ Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster. End of explanation """ # Test RMSProp implementation; you should see errors less than 1e-7 from cs231n.optim import rmsprop N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) config = {'learning_rate': 1e-2, 'cache': cache} next_w, _ = rmsprop(w, dw, config=config) expected_next_w = np.asarray([ [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247], [-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774], [ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447], [ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]]) expected_cache = np.asarray([ [ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321], [ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377], [ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936], [ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]]) print 'next_w error: ', rel_error(expected_next_w, next_w) print 'cache error: ', rel_error(expected_cache, config['cache']) # Test Adam implementation; you should see errors around 1e-7 or less from cs231n.optim import adam N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D) config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5} next_w, _ = adam(w, dw, config=config) expected_next_w = np.asarray([ [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977], [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929], [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969], [ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]]) expected_v = np.asarray([ [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,], [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,], [ 0.59414753, 
0.58362676, 0.57311152, 0.56260183, 0.55209767,], [ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]]) expected_m = np.asarray([ [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474], [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316], [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158], [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]]) print 'next_w error: ', rel_error(expected_next_w, next_w) print 'v error: ', rel_error(expected_v, config['v']) print 'm error: ', rel_error(expected_m, config['m']) """ Explanation: RMSProp and Adam RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients. In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below. [1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012). [2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015. End of explanation """ learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3} for update_rule in ['adam', 'rmsprop']: print 'running with ', update_rule model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2) solver = Solver(model, small_data, num_epochs=5, batch_size=100, update_rule=update_rule, optim_config={ 'learning_rate': learning_rates[update_rule] }, verbose=True) solvers[update_rule] = solver solver.train() print plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') for update_rule, solver in solvers.iteritems(): plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label=update_rule) plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label=update_rule) plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label=update_rule) for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() """ Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules: End of explanation """ best_model = None ################################################################################ # TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might # # batch normalization and dropout useful. Store your best model in the # # best_model variable. # ################################################################################ pass ################################################################################ # END OF YOUR CODE # ################################################################################ """ Explanation: Train a good model! Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net. If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. 
Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets. You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models. End of explanation """ y_test_pred = np.argmax(best_model.loss(X_test), axis=1) y_val_pred = np.argmax(best_model.loss(X_val), axis=1) print 'Validation set accuracy: ', (y_val_pred == y_val).mean() print 'Test set accuracy: ', (y_test_pred == y_test).mean() """ Explanation: Test your model Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. End of explanation """
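Returning to the "Train a good model!" TODO earlier in this notebook, here is a minimal, hedged sketch of one reasonable starting point. It is not the official solution, and the dropout and use_batchnorm keyword names are assumptions based on the BatchNormalization.ipynb and Dropout.ipynb notebooks referenced above.
# Hedged sketch for the best_model TODO (assumed keyword names noted above)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2,
                          dropout=0.5, use_batchnorm=True)  # regularisation kwargs are assumed names
solver = Solver(model, data,
                num_epochs=10, batch_size=100,
                update_rule='adam',
                optim_config={'learning_rate': 1e-3},
                verbose=True)
solver.train()
best_model = model
A small sweep over learning_rate and weight_scale, keeping the model with the best validation accuracy, is usually what pushes a setup like this past the 50% requirement.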
davisincubator/digblood
notebooks/jfa-1.0-initial_data_exploration.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np data_dir = '../data/raw/' data_filename = 'blood_train.csv' df_blood = pd.read_csv(data_dir+data_filename) df_blood.head() """ Explanation: Predicting Blood Donations: Initial Data Exploration To do: - Import data - Clean data - Visualize data Import Data Functions used: - pandas.read_csv - [pandas df].head() End of explanation """ # FILL IN TEST # FILL IN ACTION """ Explanation: Clean Data Are there any missing values? End of explanation """ df_blood.iloc[:, 1:].describe() """ Explanation: Visualize Data Table: Summary Statistics To get a feel for the data as a whole. Functions Used: - [pandas df].iloc() - [pandas df].describe() End of explanation """ plot_scatter = pd.scatter_matrix(df_blood.iloc[:, 1:], figsize=(20,20)) """ Explanation: Insights from Summary stats table: | Variable | Value | Interpretation | |----: |:----: |:---- | | Number of data points N | 576 | Not too big of a dataset | | Average number of donations in March, 2007 | 0.2396 | Whether blood was donated in March was low in general | | Max Months since 1st Donation | 98 | Earliest donation was 98 months (~8 years) ago | | Average number of donations | 5.427 | People in dataset donate an average of ~5.5 times | Plot: Scatter Matrix of all of the variables + histograms Note: - Number of donations & Total Volume Donated are perfectly correlated - thus can probably drop one of the variables - More likely to NOT have donated in March 2008 (from Made Donation histogram) End of explanation """ import seaborn as sns # sns.set_context("notebook", font_scale=1.1) # sns.set_style("ticks") sns.set_context("notebook", font_scale=1.5, rc={'figure.figsize': [11, 8]}) sns.set_style("darkgrid", {"axes.facecolor": ".9"}) g = sns.lmplot(data=df_blood, x='Number of Donations', y='Months since First Donation', hue='Made Donation in March 2007', fit_reg=False, palette='RdYlBu', aspect=3/1, scatter_kws={"marker": "D", "s": 50}) """ Explanation: Plot data as a scatter plot (w/r 'Made Donations in March 2007') In order to visually inspect whether the given data is linearly separable - want to create scatter plots of the data (like those in Abu-Mostafa, et al., 2012) 2-dim Scatterplot: Number of Donations + Months since First Donation ~ Made Donation in March 2007 With 2-dimensions/factors (Number of Donations & Months since First Donation), can we linearly separate whether a donation was made in March, 2007? End of explanation """
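The "Clean Data" cell above was left as a FILL IN placeholder; here is a minimal sketch of one way to complete it (an editorial suggestion, not the author's code):
# Test: count missing values in each column
print(df_blood.isnull().sum())

# Action: if any rows turn out to be incomplete, one simple option is to drop them
df_blood = df_blood.dropna()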
NeuroDataDesign/pan-synapse
pipeline_1/background/Sort.ipynb
apache-2.0
def newRandomCentroids(n, l, u): diff = u-l return [[random()*diff+l for _ in range(3)] for _ in range(n)] newRandomCentroids(10, 10, 100) """ Explanation: Goal The goal of this notebook is to explore better methods for the final l2 centorid match during registration in the pipeline. Generate Data End of explanation """ def l2(a, b): return np.sqrt(np.sum(np.power(np.subtract(a, b), 2))) def bruteMatch(A, B, r): pairs = [] for a in A: loss = [l2(a, b) for b in B] if np.min(loss) < r: pairs.append([a, B[np.argmin(loss)]]) else: pairs.append([a, [0, 0, 0]]) return pairs def makePairSet(n, l, u, o): A = newRandomCentroids(n, l, u) B = [[(elem[0]+random()*o)-o/2, (elem[1]+random()*o)-o/2, (elem[2]+random()*o)-o/2] for elem in A] return A, B A, B = makePairSet(10, 0, 100, 10) reshapeA = zip(*(A)) reshapeB = zip(*(B)) trace1 = go.Scatter3d( x = reshapeA[0], y = reshapeA[1], z = reshapeA[2], mode = 'markers', marker = dict( size=12, color=100, opacity=.7 ) ) trace2 = go.Scatter3d( x = reshapeB[0], y = reshapeB[1], z = reshapeB[2], mode = 'markers', marker = dict( size=12, color=0, opacity=.710 ) ) data = [trace1, trace2] layout = go.Layout(margin=dict(l=0, r=0, t=0, b=0)) fig = go.Figure(data=data, layout=layout) iplot(fig) pairs = bruteMatch(A, B, 100) data = [] for pair in pairs: i = "rgb(" + str(random()*255) + ',' + str(random()*255) + ',' + str(random()*255)+')' data.append(go.Scatter3d( x = zip(*(pair))[0], y = zip(*(pair))[1], z = zip(*(pair))[2], marker = dict(size=12, color=i, opacity=.7), line = dict(color=i, width=1) ) ) fig = go.Figure(data=data, layout=layout) iplot(fig) timeStats = [] for i in range(2000,10000,2000): A, B = makePairSet(i, 0, 1000, 100) print i s = time.time() pairs = bruteMatch(A, B, 100) e = time.time() timeStats.append([i, e-s]) plt.figure() plt.title('Run time vs Number of points') x, y = zip(*(timeStats)) plt.scatter(x, y) plt.show() """ Explanation: Benchmark Current Approach End of explanation """ def KDMatch(A, B, r): tree = KDTree(B) pairs = [] for a in A: dist, idx = tree.query(a, k=1, distance_upper_bound = r) if dist == float('Inf'): pairs.append(a, [0, 0, 0]) else: pairs.append([a, B[idx]]) return pairs A, B = makePairSet(10, 0, 100, 10) reshapeA = zip(*(A)) reshapeB = zip(*(B)) trace1 = go.Scatter3d( x = reshapeA[0], y = reshapeA[1], z = reshapeA[2], mode = 'markers', marker = dict( size=12, color=100, opacity=.7 ) ) trace2 = go.Scatter3d( x = reshapeB[0], y = reshapeB[1], z = reshapeB[2], mode = 'markers', marker = dict( size=12, color=0, opacity=.710 ) ) data = [trace1, trace2] layout = go.Layout(margin=dict(l=0, r=0, t=0, b=0)) fig = go.Figure(data=data, layout=layout) iplot(fig) pairs = KDMatch(A, B, 10) data = [] for pair in pairs: i = "rgb(" + str(random()*255) + ',' + str(random()*255) + ',' + str(random()*255)+')' data.append(go.Scatter3d( x = zip(*(pair))[0], y = zip(*(pair))[1], z = zip(*(pair))[2], marker = dict(size=12, color=i, opacity=.7), line = dict(color=i, width=1) ) ) fig = go.Figure(data=data, layout=layout) iplot(fig) timeStats = [] for i in range(2000,10000,2000): A, B = makePairSet(i, 0, 1000, 100) print i s = time.time() pairs = KDMatch(A, B, 100) e = time.time() timeStats.append([i, e-s]) plt.figure() plt.title('Run time vs Number of points') x, y = zip(*(timeStats)) plt.scatter(x, y) plt.show() kdTimeStats = [] bruteTimeStats = [] for i in range(2000,10000,2000): A, B = makePairSet(i, 0, 1000, 100) print i s = time.time() pairs = KDMatch(A, B, 100) e = time.time() kdTimeStats.append([i, e-s]) s = time.time() pairs = 
bruteMatch(A, B, 100) e = time.time() bruteTimeStats.append([i, e-s]) plt.figure() plt.title('Run time vs Number of points (KD=Red, Brute=Blue)') x, y = zip(*(kdTimeStats)) plt.scatter(x, y, c='r') x, y = zip(*(bruteTimeStats)) plt.scatter(x, y, c='b') plt.show() """ Explanation: KD Tree Matching In order to solve the runtime issue, I decided to perform only 1-nearest neighbor lookups (since thats what we really care about). The KD tree implements this in log(n) for the time of the lookup, with nlog(n) overhead for building the tree. End of explanation """
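One further speed-up worth noting (an editorial aside, not part of the original notebook): scipy's KDTree accepts an array of query points, so the per-point Python loop inside KDMatch can be replaced by a single vectorized query. Incidentally, pairs.append(a, [0, 0, 0]) in KDMatch should be pairs.append([a, [0, 0, 0]]), since list.append takes a single argument.
# Hypothetical vectorized variant of KDMatch; assumes the same KDTree import used by KDMatch above
def KDMatchVectorized(A, B, r):
    tree = KDTree(B)
    dists, idxs = tree.query(A, k=1, distance_upper_bound=r)  # one call for every point in A
    pairs = []
    for a, dist, idx in zip(A, dists, idxs):
        if np.isinf(dist):                 # no neighbour found within radius r
            pairs.append([a, [0, 0, 0]])
        else:
            pairs.append([a, B[idx]])
    return pairs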
carthach/essentia
src/examples/tutorial/example_discontinuitydetector.ipynb
agpl-3.0
import essentia.standard as es import numpy as np import matplotlib.pyplot as plt from IPython.display import Audio from essentia import array as esarr plt.rcParams["figure.figsize"] =(12,9) def compute(x, frame_size=1024, hop_size=512, **kwargs): discontinuityDetector = es.DiscontinuityDetector(frameSize=frame_size, hopSize=hop_size, **kwargs) locs = [] amps = [] for idx, frame in enumerate(es.FrameGenerator(x, frameSize=frame_size, hopSize=hop_size, startFromZero=True)): frame_locs, frame_ampls = discontinuityDetector(frame) for l in frame_locs: locs.append((l + hop_size * idx) / 44100.) for a in frame_ampls: amps.append(a) return locs, amps """ Explanation: DiscontinuityDetector use example This algorithm uses LPC and some heuristics to detect discontinuities in anaudio signal. [1]. References: [1] Mühlbauer, R. (2010). Automatic Audio Defect Detection. End of explanation """ def testRegression(self, frameSize=512, hopSize=256): fs = 44100 audio = MonoLoader(filename=join(testdata.audio_dir, 'recorded/cat_purrrr.wav'), sampleRate=fs)() originalLen = len(audio) startJump = originalLen / 4 groundTruth = [startJump / float(fs)] # make sure that the artificial jump produces a prominent discontinuity if audio[startJump] > 0: end = next(idx for idx, i in enumerate(audio[startJump:]) if i < -.3) else: end = next(idx for idx, i in enumerate(audio[startJump:]) if i > .3) endJump = startJump + end audio = esarr(np.hstack([audio[:startJump], audio[endJump:]])) frameList = [] discontinuityDetector = self.InitDiscontinuityDetector( frameSize=frameSize, hopSize=hopSize, detectionThreshold=10) for idx, frame in enumerate(FrameGenerator( audio, frameSize=frameSize, hopSize=hopSize, startFromZero=True)): locs, _ = discontinuityDetector(frame) if not len(locs) == 0: for loc in locs: frameList.append((idx * hopSize + loc) / float(fs)) self.assertAlmostEqualVector(frameList, groundTruth, 1e-7) fs = 44100. audio_dir = '../../audio/' audio = es.MonoLoader(filename='{}/{}'.format(audio_dir, 'recorded/vignesh.wav'), sampleRate=fs)() originalLen = len(audio) startJumps = np.array([originalLen / 4, originalLen / 2]) groundTruth = startJumps / float(fs) for startJump in startJumps: # make sure that the artificial jump produces a prominent discontinuity if audio[startJump] > 0: end = next(idx for idx, i in enumerate(audio[startJump:]) if i < -.3) else: end = next(idx for idx, i in enumerate(audio[startJump:]) if i > .3) endJump = startJump + end audio = esarr(np.hstack([audio[:startJump], audio[endJump:]])) for point in groundTruth: l1 = plt.axvline(point, color='g', alpha=.5) times = np.linspace(0, len(audio) / fs, len(audio)) plt.plot(times, audio) plt.title('Signal with artificial clicks of different amplitudes') l1.set_label('Click locations') plt.legend() """ Explanation: Generating some discontinuities examples Let's start by degrading some audio files with some discontinuities. Discontinuities are generally occasioned by hardware issues in the process of recording or copying. Let's simulate this by removing a random number of samples from the input audio file. 
End of explanation """ Audio(audio, rate=fs) """ Explanation: Let's listen to the clip to get an idea of how audible the discontinuities are. End of explanation """ locs, amps = compute(audio) fig, ax = plt.subplots(len(groundTruth)) plt.subplots_adjust(hspace=.4) for idx, point in enumerate(groundTruth): l1 = ax[idx].axvline(locs[idx], color='r', alpha=.5) l2 = ax[idx].axvline(point, color='g', alpha=.5) ax[idx].plot(times, audio) ax[idx].set_xlim([point-.001, point+.001]) ax[idx].set_title('Click located at {:.2f}s'.format(point)) fig.legend((l1, l2), ('Detected discontinuity', 'Ground truth'), 'upper right') """ Explanation: The algorithm This algorithm outputs the location (in seconds) and amplitude of each detected discontinuity. The following plots show how the algorithm performs on the previous examples. End of explanation """
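The detector's sensitivity can be tuned through the parameters that compute() forwards to DiscontinuityDetector; a brief hedged example follows (the values are illustrative only, and the described effect of detectionThreshold is an assumption based on the test code earlier in this notebook):
# A presumably stricter run: a larger detectionThreshold should report only the more prominent jumps
locs_strict, amps_strict = compute(audio, frame_size=1024, hop_size=512, detectionThreshold=10)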
littlewizardLI/Udacity-ML-nanodegrees
Project1-boston_housing/boston_housing.ipynb
apache-2.0
# Import libraries necessary for this project import numpy as np import pandas as pd from sklearn.cross_validation import ShuffleSplit # Import supplementary visualizations code visuals.py import visuals as vs # Pretty display for notebooks %matplotlib inline # Load the Boston housing dataset data = pd.read_csv('housing.csv') prices = data['MEDV'] features = data.drop('MEDV', axis = 1) # Success print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape) """ Explanation: Machine Learning Engineer Nanodegree Model Evaluation & Validation Project: Predicting Boston Housing Prices Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting Started In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis. The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset: - 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed. - 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed. - The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded. - The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation. Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported. 
End of explanation """ # Minimum price of the data minimum_price = np.amin(prices) # Maximum price of the data maximum_price = np.amax(prices) # Mean price of the data mean_price = np.mean(prices) # Median price of the data median_price = np.median(prices) # Standard deviation of prices of the data std_price = np.std(prices) # Show the calculated statistics print "Statistics for Boston housing dataset:\n" print "Minimum price: ${:,.2f}".format(minimum_price) print "Maximum price: ${:,.2f}".format(maximum_price) print "Mean price: ${:,.2f}".format(mean_price) print "Median price ${:,.2f}".format(median_price) print "Standard deviation of prices: ${:,.2f}".format(std_price) """ Explanation: Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively. Implementation: Calculate Statistics For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model. In the code cell below, you will need to implement the following: - Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices. - Store each calculation in their respective variable. End of explanation """ from sklearn.metrics import r2_score def performance_metric(y_true, y_predict): """ Calculates and returns the performance score between true and predicted values based on the metric chosen. """ # TODO: Calculate the performance score between 'y_true' and 'y_predict' score = r2_score(y_true, y_predict) # Return the score return score """ Explanation: Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor). - 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood. Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each. Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7? Answer: - I think 'MEDV' will increase with an increase in the value of 'RM',the reason is that larger 'RM' means more people around and more richer men. - 'LSTAT' increase , 'MEDV' decrease.The reson just like the above one.Less 'lower class' mean more 'higher class' in the neighborhood. 
- 'PTRATIO' increase, 'MEDV' decrease.when 'PTRATIO' is low mean the teachers have more attention to take good care of every students.All people want their children have good education Developing a Model In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions. Implementation: Define a Performance Metric It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions. The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable. For the performance_metric function in the code cell below, you will need to implement the following: - Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict. - Assign the performance score to the score variable. End of explanation """ # Calculate the performance of this model score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3]) print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score) """ Explanation: Question 2 - Goodness of Fit Assume that a dataset contains five data points and a model made the following predictions for the target variable: | True Value | Prediction | | :-------------: | :--------: | | 3.0 | 2.5 | | -0.5 | 0.0 | | 2.0 | 2.1 | | 7.0 | 7.8 | | 4.2 | 5.3 | Would you consider this model to have successfully captured the variation of the target variable? Why or why not? Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination. End of explanation """ from sklearn.model_selection import train_test_split # Shuffle and split the data into training and testing subsets X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=2) # Success print "Training and testing split was successful." """ Explanation: Answer: I think this model have successfully captured the cariation of the target variable. reason: Best possible score is 1.0,and the result 0.923 is high enough Implementation: Shuffle and Split Data Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. 
Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset. For the code cell below, you will need to implement the following: - Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets. - Split the data into 80% training and 20% testing. - Set the random_state for train_test_split to a value of your choice. This ensures results are consistent. - Assign the train and testing splits to X_train, X_test, y_train, and y_test. End of explanation """ # Produce learning curves for varying training set sizes and maximum depths vs.ModelLearning(features, prices) """ Explanation: Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: if we don't split training and testing data,just like we give the answers to the students who is taking an exam.Of course they can get high score.but it can't tell us whether it is a good model to predict other data. Analyzing Model Performance In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone. Learning Curves The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination. Run the code cell below and use these graphs to answer the following question. End of explanation """ vs.ModelComplexity(X_train, y_train) """ Explanation: Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to particular scores? Answer: I choose the second graph, mac_depth=3 1. What happens to the score of the training curve as more training points are added? as the the increase , the score decrease quiltly, but the rate of decrease speed reduce,and the score trend to be a constant. 2. What about the testing curve? the score increase quiltly, but the rate of decrease speed reduce,and the score trend to be a constant. 3. Would having more training points benefit the model? I don't think so. Complexity Curves The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. 
Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function. Run the code cell below and use this graph to answer the following two questions. End of explanation """ from sklearn.tree import DecisionTreeRegressor from sklearn.metrics import make_scorer from sklearn.grid_search import GridSearchCV def fit_model(X, y): """ Performs grid search over the 'max_depth' parameter for a decision tree regressor trained on the input data [X, y]. """ # Create cross-validation sets from the training data cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0) regressor = DecisionTreeRegressor() # Create a dictionary for the parameter 'max_depth' with a range from 1 to 10 params = {'max_depth': range(1, 11, 1)} print params # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer' scoring_fnc = make_scorer(performance_metric) # Create the grid search object grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets) # Fit the grid search object to the data to compute the optimal model grid = grid.fit(X, y) # Return the optimal model after fitting the data return grid.best_estimator_ """ Explanation: Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint: How do you know when a model is suffering from high bias or high variance? Answer: 1. max_depth=1 high bias 2. max_depth=10 high variance 3. train_score,when max_depth=1, train _score is small than 0.5,it must be high bias;when max_depth=10 train_score is about 1.0 at the same time test_score is very low which means over-fit。 Question 6 - Best-Guess Optimal Model Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer? Answer: max_depth=4 is the best according to the largest test_score. Evaluating Model Performance In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model. Question 7 - Grid Search What is the grid search technique and how it can be applied to optimize a learning algorithm? Answer: 1. exhaustive search over specified parameter values for an estimator. 2. it can automatically cross validation using each of those parameters keeping track of the resulting scores Question 8 - Cross-Validation What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model? Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set? Answer: the data split into smaller sets which number is k. Step1:we choose one of the k fold as test data, and other k-1 folds as train folds. Step2:get the score Step3:repeat Step1 and Step2 until every k fold is used as test data. Step4:get the mean of the scores. if we don't use k-fold,there are two main problem: The first one is the results can depend on a particular random choice for the pair of (train, validation) sets. The second one we reduce the numbers of samples that can be used in learning model. luckily, k-fold can fix these problem. 
Implementation: Fitting a Model Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms. In addition, you will find your implementation is using ShuffleSplit() for an alternative form of cross-validation (see the 'cv_sets' variable). While it is not the K-Fold cross-validation technique you describe in Question 8, this type of cross-validation technique is just as useful!. The ShuffleSplit() implementation below will create 10 ('n_iter') shuffled sets, and for each shuffle, 20% ('test_size') of the data will be used as the validation set. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique. For the fit_model function in the code cell below, you will need to implement the following: - Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object. - Assign this object to the 'regressor' variable. - Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable. - Use make_scorer from sklearn.metrics to create a scoring function object. - Pass the performance_metric function as a parameter to the object. - Assign this scoring function to the 'scoring_fnc' variable. - Use GridSearchCV from sklearn.grid_search to create a grid search object. - Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object. - Assign the GridSearchCV object to the 'grid' variable. End of explanation """ # Fit the training data to the model using grid search reg = fit_model(X_train, y_train) # Produce the value for 'max_depth' print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']) """ Explanation: Making Predictions Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model What maximum depth does the optimal model have? How does this result compare to your guess in Question 6? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model. End of explanation """ # Produce a matrix for client data client_data = [[5, 17, 15], # Client 1 [4, 32, 22], # Client 2 [8, 3, 12]] # Client 3 # Show predictions for i, price in enumerate(reg.predict(client_data)): print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price) """ Explanation: Answer: Just like my guess in Question, Max_depth=4 has the best performance Question 10 - Predicting Selling Prices Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. 
You have collected the following information from three of your clients: | Feature | Client 1 | Client 2 | Client 3 | | :---: | :---: | :---: | :---: | | Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms | | Neighborhood poverty level (as %) | 17% | 32% | 3% | | Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 | What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features? Hint: Use the statistics you calculated in the Data Exploration section to help justify your response. Run the code block below to have your optimized model make predictions for each client's home. End of explanation """ vs.PredictTrials(features, prices, fit_model, client_data) """ Explanation: Answer: - Client 1 : $415,800.00 Client 2: $236,478.26 Client 3: $888,720.00 yes,these prices seem reasonable。 Reason: In Question1 I have explain the relationship between features and prices. according to the datas. RM: Client3 > Client1 > Client2 LSTAT: Client3 < Client1 < Client2 PTRATIO: Client3 > Client1 > Client2 So Client 3's home should have the highest price,and Client 2's home have the lowest price。 Sensitivity An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on. End of explanation """
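To make the sensitivity check above concrete, here is a hedged sketch of what PredictTrials does conceptually; it is an editorial illustration, not the visuals.py implementation.
# Refit on several different train/test splits and watch how the prediction for Client 1 moves
trial_prices = []
for k in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(features, prices, test_size=0.2, random_state=k)
    reg_k = fit_model(X_tr, y_tr)
    trial_prices.append(reg_k.predict([client_data[0]])[0])
print "Range in predicted prices for Client 1: ${:,.2f}".format(max(trial_prices) - min(trial_prices))
A large range across trials is the warning sign described above: the prediction depends heavily on which subset of the data the model happened to be trained on.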
jmhsi/justin_tinker
data_science/courses/deeplearning1/fastai-course-1-pytorch/lesson5-pytorch.ipynb
apache-2.0
from keras.datasets import imdb idx = imdb.get_word_index() """ Explanation: Setup data We're going to look at the IMDB dataset, which contains movie reviews from IMDB, along with their sentiment. Keras comes with some helpers for this dataset. End of explanation """ idx_arr = sorted(idx, key=idx.get) idx_arr[:10] """ Explanation: This is the word list: End of explanation """ idx2word = {v: k for k, v in idx.items()} """ Explanation: ...and this is the mapping from id to word End of explanation """ from keras.utils.data_utils import get_file import pickle path = get_file('imdb_full.pkl', origin='https://s3.amazonaws.com/text-datasets/imdb_full.pkl', md5_hash='d091312047c43cf9e4e38fef92437263') f = open(path, 'rb') (x_train, labels_train), (x_test, labels_test) = pickle.load(f) len(x_train) """ Explanation: We download the reviews using code copied from keras.datasets: End of explanation """ ', '.join(map(str, x_train[0])) """ Explanation: Here's the 1st review. As you see, the words have been replaced by ids. The ids can be looked up in idx2word. End of explanation """ idx2word[23022] """ Explanation: The first word of the first review is 23022. Let's see what that is. End of explanation """ ' '.join([idx2word[o] for o in x_train[0]]) """ Explanation: Here's the whole review, mapped from ids to words. End of explanation """ labels_train_tensor = torch.from_numpy(np.array(labels_train)) labels_test_tensor = torch.from_numpy(np.array(labels_test)) labels_train[:10] """ Explanation: The labels are 1 for positive, 0 for negative. End of explanation """ vocab_size = 5000 trn = [np.array([i if i < vocab_size - 1 else vocab_size - 1 for i in s]) for s in x_train] test = [np.array([i if i < vocab_size - 1 else vocab_size - 1 for i in s]) for s in x_test] """ Explanation: Reduce vocab size by setting rare words to max index. End of explanation """ lens = np.array(list(map(len, trn))) (lens.max(), lens.min(), lens.mean()) """ Explanation: Look at distribution of lengths of sentences. End of explanation """ seq_len = 500 from keras.preprocessing import sequence trn = sequence.pad_sequences(trn, maxlen=seq_len, value=0) test = sequence.pad_sequences(test, maxlen=seq_len, value=0) trn_tensor = torch.from_numpy(trn).long() test_tensor = torch.from_numpy(test).long() """ Explanation: Pad (with zero) or truncate each sentence to make consistent length. End of explanation """ trn_tensor.size() """ Explanation: This results in nice rectangular matrices that can be passed to ML algorithms. Reviews shorter than 500 words are pre-padded with zeros, those greater are truncated. 
End of explanation """ import torch.nn as nn import torch.nn.functional as F class SingleHiddenLayerModule(nn.Module): def __init__(self): super().__init__() num_dimensions = 32 self.embedding = nn.Embedding(vocab_size, num_dimensions) self.fc1 = nn.Linear(seq_len * num_dimensions, 100) self.dropout = nn.Dropout(0.7) self.fc2 = nn.Linear(100, 2) self.init() def forward(self, words_ids): x = self.embedding(words_ids) # x => torch.Size([64, 500, 32]) x = x.view(x.size(0), -1) # x => torch.Size([64, 16000]) x = self.fc1(x) x = F.relu(x, True) x = self.dropout(x) x = self.fc2(x) # result = F.sigmoid(x) result = x return result def init(self): torch.nn.init.constant(self.fc1.bias, val=0.0) torch.nn.init.constant(self.fc2.bias, val=0.0) %autoreload 2 # criterion = nn.BCELoss() criterion = nn.CrossEntropyLoss() model = SingleHiddenLayerModule() if(use_cuda): model.cuda() criterion.cuda() trainer = ModuleTrainer(model) trainer.set_optimizer(optim.Adam, lr=1e-3) trainer.set_loss(criterion) trainer.set_initializers([Uniform(module_filter="embedding*", a=-0.05, b=0.05), XavierUniform(module_filter="fc*")]) trainer.set_metrics([CategoricalAccuracy()]) # trainer.summary((trn_tensor.size(0), labels_train_tensor.size(0))) model trainer.fit(trn_tensor, labels_train_tensor, validation_data=(test_tensor, labels_test_tensor), nb_epoch=2, batch_size=batch_size, shuffle=True) """ Explanation: Create simple models Single hidden layer NN The simplest model that tends to give reasonable results is a single hidden layer net. So let's try that. Note that we can't expect to get any useful results by feeding word ids directly into a neural net - so instead we use an embedding to replace them with a vector of 32 (initially random) floats for each word in the vocab. End of explanation """ import torch.nn as nn import torch.nn.functional as F class CnnMaxPoolingModule(nn.Module): def __init__(self): super().__init__() num_dimensions = 32 self.embedding = nn.Embedding(vocab_size, num_dimensions) self.drop1 = nn.Dropout(0.2) self.conv1 = nn.Conv1d(in_channels=32, out_channels=64, kernel_size=5, padding=2, groups=1) self.fc1 = nn.Linear(seq_len * num_dimensions, 100) self.dropout = nn.Dropout(0.7) self.fc2 = nn.Linear(100, 2) self.init() def forward(self, words_ids): x = self.embedding(words_ids) # x => torch.Size([B, 500, 32]) x = x.permute(0, 2, 1) # print('emb', x.size()) x = self.drop1(x) # x => torch.Size([B, 500, 32]) x = self.conv1(x) # x => torch.Size([B, 500, 64]) x = F.relu(x, True) # print('conv1', x.size()) x = self.drop1(x) # x => torch.Size([B, 500, 64]) x = F.max_pool1d(x, kernel_size=2) # print('max', x.size()) x = x.view(x.size(0), -1) # print(x.size()) x = self.fc1(x) x = F.relu(x, True) x = self.dropout(x) x = self.fc2(x) # result = F.sigmoid(x) result = x #raise 'Error' return result def init(self): torch.nn.init.constant(self.conv1.bias, val=0.0) torch.nn.init.constant(self.fc1.bias, val=0.0) torch.nn.init.constant(self.fc2.bias, val=0.0) %autoreload 2 # criterion = nn.BCELoss() criterion = nn.CrossEntropyLoss() model = CnnMaxPoolingModule() if(use_cuda): model.cuda() criterion.cuda() trainer = ModuleTrainer(model) trainer.set_optimizer(optim.Adam, lr=1e-3) trainer.set_loss(criterion) trainer.set_initializers([Uniform(module_filter="embedding*", a=-0.05, b=0.05), XavierUniform(module_filter="fc*"), XavierUniform(module_filter="conv*")]) trainer.set_metrics([CategoricalAccuracy()]) # trainer.summary((trn_tensor.size(0), labels_train_tensor.size(0))) model trainer.fit(trn_tensor, labels_train_tensor, 
validation_data=(test_tensor, labels_test_tensor), nb_epoch=2, batch_size=batch_size, shuffle=True) trainer.fit(trn_tensor, labels_train_tensor, validation_data=(test_tensor, labels_test_tensor), nb_epoch=4, batch_size=batch_size, shuffle=True) """ Explanation: The stanford paper that this dataset is from cites a state of the art accuracy (without unlabelled data) of 0.883. ~~So we're short of that, but on the right track.~~ We've already beaten the state of the art in 2011 with a simple Neural Net. Single conv layer with max pooling A CNN is likely to work better, since it's designed to take advantage of ordered data. We'll need to use a 1D CNN, since a sequence of words is 1D. End of explanation """ import torch import re from torchtext.vocab import load_word_vectors wv_dict, wv_arr, wv_size = load_word_vectors('.', 'glove.6B', 50) print('Loaded', len(wv_arr), 'words') """ Explanation: Pre-trained vectors You may want to look at wordvectors.ipynb before moving on. In this section, we replicate the previous CNN, but using pre-trained embeddings. End of explanation """ def get_word(word): return wv_arr[wv_dict[word]] def create_emb(): num_dimensions_glove = wv_arr.size()[1] embedding = nn.Embedding(vocab_size, num_dimensions_glove) # If we can't find the word in glove, randomly initialize torch.nn.init.uniform(embedding.weight, a=-0.05, b=0.05) num_found, num_not_found = 0, 0 for i in range(1,len(embedding.weight)): word = idx2word[i] if word and re.match(r"^[a-zA-Z0-9\-]*$", word): embedding.weight.data[i] = get_word(word) num_found += 1 else: num_not_found +=1 # This is our "rare word" id - we want to randomly initialize torch.nn.init.uniform(embedding.weight.data[-1], a=-0.05, b=0.05) embedding.weight.requires_grad = False # This speeds up training. Can it be replaced by BatchNorm1d? embedding.weight.data /= 3 print("Words found: {}, not found: {}".format(num_found, num_not_found)) return embedding """ Explanation: The glove word ids and imdb word ids use different indexes. So we create a simple function that creates an embedding matrix using the indexes from imdb, and the embeddings from glove (where they exist). 
End of explanation """ import torch.nn as nn import torch.nn.functional as F class CnnMaxPoolingModuleWithEmbedding(nn.Module): def __init__(self, embedding): super().__init__() num_dimensions = 32 self.embedding = embedding self.drop1 = nn.Dropout(0.25) self.batchnorm = nn.BatchNorm1d(500) self.conv1 = nn.Conv1d(in_channels=embedding.weight.size()[1], out_channels=64, kernel_size=5, padding=2, groups=1) self.fc1 = nn.Linear(seq_len * num_dimensions, 100) self.dropout = nn.Dropout(0.7) self.fc2 = nn.Linear(100, 2) self.init() def forward(self, words_ids): x = self.embedding(words_ids) # x = self.batchnorm(x) x = x.permute(0, 2, 1) x = self.drop1(x) x = self.conv1(x) x = F.relu(x, True) x = self.drop1(x) x = F.max_pool1d(x, kernel_size=2) x = x.view(x.size(0), -1) x = self.fc1(x) x = F.relu(x, True) x = self.dropout(x) x = self.fc2(x) result = x return result def init(self): torch.nn.init.constant(self.conv1.bias, val=0.0) torch.nn.init.constant(self.fc1.bias, val=0.0) torch.nn.init.constant(self.fc2.bias, val=0.0) def parameters(self): p = filter(lambda p: p.requires_grad, nn.Module.parameters(self)) return p %autoreload 2 emb = create_emb() # criterion = nn.BCELoss() criterion = nn.CrossEntropyLoss() model = CnnMaxPoolingModuleWithEmbedding(emb) if(use_cuda): model.cuda() criterion.cuda() trainer = ModuleTrainer(model) trainer.set_optimizer(optim.Adam, lr=1e-3) trainer.set_loss(criterion) trainer.set_initializers([XavierUniform(module_filter="fc*"), XavierUniform(module_filter="conv*")]) trainer.set_metrics([CategoricalAccuracy()]) # trainer.summary((trn_tensor.size(0), labels_train_tensor.size(0))) trainer.fit(trn_tensor, labels_train_tensor, validation_data=(test_tensor, labels_test_tensor), nb_epoch=10, batch_size=batch_size, shuffle=True) """ Explanation: We pass our embedding matrix to the Embedding constructor, and set it to non-trainable. End of explanation """ model.embedding.weight.requires_grad = True trainer = ModuleTrainer(model) trainer.set_optimizer(optim.Adam, lr=1e-4) trainer.set_loss(criterion) trainer.set_metrics([CategoricalAccuracy()]) trainer.fit(trn_tensor, labels_train_tensor, validation_data=(test_tensor, labels_test_tensor), nb_epoch=1, batch_size=batch_size, shuffle=True) """ Explanation: We already have beaten our previous model! But let's fine-tune the embedding weights - especially since the words we couldn't find in glove just have random embeddings. 
End of explanation """ import torch.nn as nn import torch.nn.functional as F class CnnMaxPoolingModuleMultiSizeWithEmbedding(nn.Module): def __init__(self, embedding): super().__init__() num_dimensions = 32 self.embedding = embedding self.drop1 = nn.Dropout(0.25) self.batchnorm = nn.BatchNorm1d(500) self.convs = [self.create_conv(embedding, fsz) for fsz in range (3, 6)] self.fc1 = nn.Linear(25000, 100) self.dropout = nn.Dropout(0.7) self.fc2 = nn.Linear(100, 2) self.init() def create_conv(self, embedding, fsz): return nn.Conv1d(in_channels=embedding.weight.size()[1], out_channels=64, kernel_size=5, padding=2, groups=1) def conv(self, c, x): x = c(x) x = F.relu(x, True) x = self.drop1(x) x = F.max_pool1d(x, kernel_size=2) return x def forward(self, words_ids): x = self.embedding(words_ids) x = x.permute(0, 2, 1) x = self.drop1(x) convs = [self.conv(conv, x) for conv in self.convs] torch.cat(convs, dim=1) x = x.view(x.size(0), -1) x = self.fc1(x) x = F.relu(x, True) x = self.dropout(x) x = self.fc2(x) result = x return result def init(self): torch.nn.init.constant(self.fc1.bias, val=0.0) torch.nn.init.constant(self.fc2.bias, val=0.0) for conv in self.convs: torch.nn.init.xavier_uniform(conv.weight.data, gain=1.0) torch.nn.init.constant(conv.bias, val=0.0) def parameters(self): p = filter(lambda p: p.requires_grad, nn.Module.parameters(self)) return p %autoreload 2 emb = create_emb() criterion = nn.CrossEntropyLoss() model = CnnMaxPoolingModuleMultiSizeWithEmbedding(emb) model.embedding.weight.requires_grad = True if(use_cuda): model.cuda() criterion.cuda() trainer = ModuleTrainer(model) trainer.set_optimizer(optim.Adam, lr=1e-3) trainer.set_loss(criterion) trainer.set_initializers([XavierUniform(module_filter="fc*")]) trainer.set_metrics([CategoricalAccuracy()]) trainer.fit(trn_tensor, labels_train_tensor, validation_data=(test_tensor, labels_test_tensor), nb_epoch=10, batch_size=batch_size, shuffle=True) """ Explanation: Multi-size CNN This is an implementation of a multi-size CNN as shown in Ben Bowles' excellent blog post. We create multiple conv layers of different sizes, and then concatenate them. End of explanation """ import torch.nn as nn import torch.nn.functional as F class LstmEmbeddingModule(nn.Module): def __init__(self): super().__init__() num_dimensions = 32 self.num_hidden = 100 self.embedding = nn.Embedding(vocab_size, num_dimensions) self.drop1 = nn.Dropout(0.2) self.lstm1 = nn.LSTM(input_size=32, hidden_size=self.num_hidden, num_layers=1, batch_first=True) self.fc1 = nn.Linear(50000, 2) self.hidden = self.init_hidden(batch_size) self.init() def forward(self, words_ids): # We detach the hidden state from how it was previously produced. # If we didn't, the model would try backpropagating all the way to start of the dataset. # self.hidden = self.repackage_hidden(self.hidden) x = self.embedding(words_ids) x = self.drop1(x) #print('embd', x.size()) self.hidden = self.init_hidden(x.size(0)) #lenghts = [vocab_size for _ in range(x.size(0))] #x = torch.nn.utils.rnn.pack_padded_sequence(x, lenghts, batch_first=True) #print('pack', x.data.size()) x, self.hidden = self.lstm1(x, self.hidden) #print('lstm', x.data.size()) #x, _ = torch.nn.utils.rnn.pad_packed_sequence(x, batch_first=True) #print('unpk', x.size()) # print(self.hidden) # TODO can we get rid of contiguous? 
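# Editorial note on the TODO above (an assumption, not from the original author): with
# batch_first=True the LSTM output is a transposed view of an internally seq-first tensor,
# so .view() fails on the non-contiguous result; .contiguous() (or x.reshape(...) in
# PyTorch >= 0.4) is the usual way around it, so the call below is hard to avoid here.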
x = x.contiguous().view(x.size(0), -1) #print('view', x.size()) x = self.fc1(x) x = F.relu(x, True) return x def init(self): torch.nn.init.constant(self.fc1.bias, val=0.0) def init_hidden(self, batch_size): num_layers = 1 weight = next(self.parameters()).data return (Variable(weight.new(num_layers, batch_size, self.num_hidden).zero_()), Variable(weight.new(num_layers, batch_size, self.num_hidden).zero_())) def repackage_hidden(self, h): """Wraps hidden states in new Variables, to detach them from their history.""" if type(h) == Variable: return Variable(h.data) else: return tuple(self.repackage_hidden(v) for v in h) %autoreload 2 criterion = nn.CrossEntropyLoss() model = LstmEmbeddingModule() if(use_cuda): model.cuda() criterion.cuda() trainer = ModuleTrainer(model) trainer.set_optimizer(optim.Adam, lr=1e-3) trainer.set_loss(criterion) # TODO init LSTM trainer.set_initializers([Uniform(module_filter="embedding*", a=-0.05, b=0.05), XavierUniform(module_filter="fc*"), XavierUniform(module_filter="conv*")]) trainer.set_metrics([CategoricalAccuracy()]) # TODO figure out how to do this in PyTorch trainer.fit(trn_tensor, labels_train_tensor, validation_data=(test_tensor, labels_test_tensor), nb_epoch=5, batch_size=batch_size, shuffle=True) """ Explanation: This is clearly over-fitting. But it does get the highest accuracy on validation set. LSTM We haven't covered this bit yet! End of explanation """
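As an editorial follow-up to the LSTM experiment above (a hedged sketch, not part of the original notebook): rather than flattening all 500 timesteps into a 50000-dimensional vector, a common design is to classify from the final hidden state only, which shrinks the classifier head from Linear(50000, 2) to Linear(100, 2).
# Hedged alternative: build the classifier head from the final hidden state
class LstmLastHiddenModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, 32)
        self.drop = nn.Dropout(0.2)
        self.lstm = nn.LSTM(input_size=32, hidden_size=100, num_layers=1, batch_first=True)
        self.fc = nn.Linear(100, 2)

    def forward(self, word_ids):
        x = self.drop(self.embedding(word_ids))
        # omitting the initial state lets PyTorch default it to zeros (assumed to hold for the
        # version used here; otherwise pass an explicit zero (h0, c0) as in LstmEmbeddingModule)
        output, (h_n, c_n) = self.lstm(x)
        return self.fc(h_n[-1])  # h_n[-1]: final hidden state of the last layer, shape (batch, 100)
It can be trained with the same ModuleTrainer and CrossEntropyLoss setup used for LstmEmbeddingModule above.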
kimkipyo/dss_git_kkp
Python 복습/14일차.금_pandas + SQL_2/14일차_1T_os, shutil 모듈을 이용한 파일,폴더 관리하기 (1) - 폴더 생성 및 제거.ipynb
mit
import os #os 모듈을 통해서 #운영체제 레벨(서버는 ex.우분투)에서 다루는 파일 폴더 생성하고 삭제하기가 가능 #기존에는 ("../../~~") 이런 식으로 경로를 직접 입력 했으나 os.listdir() #현재 폴더 안에 있는 파일들을 리스트로 뽑는 것 os.listdir("../") for csv_file in os.listdir("../"): pass """ Explanation: 1T_os, shutil 모듈을 이용한 파일,폴더 관리하기 (1) - 폴더 생성 및 제거 영화별 매출 - Revenue per Film 이거 어려워. 이거 뽑아 보겠음 데이터를 저장하고 관리하기 위해서 os, shutil - python 내장 라이브러리를 쓸 것임 각 국가별 이름으로 (korea.csv / japan.csv...) 저장하는 거를 할 것임 1T에는 os module, shutil module로 파일, 폴더, 압축파일(데이터) 등을 저장하고 읽고 쓰고 관리할 것임. 파이썬으로만 End of explanation """ [ file_name for file_name in os.listdir("../01일차.수_입문/") if file_name.endswith(".ipynb") # csv 파일 가져오기, 엑셀 파일 가져오기로 사용 ] """ Explanation: ipynb 라는 확장자로 끝나는 파일들만 가지고 오려면 End of explanation """ os.path.join("data", "data.csv") os.curdir os.path.join(os.curdir, "data", "data.csv") # 이렇게 하면 경로를 알려줘. 앞으로 만들 때는 무조건 이렇게 만들겠다. # os.path.join(os.curdir, "data", file_name) """ Explanation: 파일에 대한 경로를 생성할 때 현재 폴더 안에 있는, "data"라는 폴더의 "data.csv"의 경로 "data/data.csv" "./data/data.csv" ( String 으로 입력할 때 이렇게 직접 ) // 무조건 이 방법으로 "/home/python/notebooks/dobestan/dss/....../data/data.csv" - 절대 경로 // 잘 안 씀 End of explanation """ os.makedirs("data") #잠재적인 문제가 있다. os.listdir() #폴더 만들기는 쉽게 됩니다. os.rmdir("data") #잠재적인 문제가 있다. os.listdir() """ Explanation: os.curdir #current directory os.path.join(...) os.listdir(...) End of explanation """ os.makedirs("data") # DATA라는 폴더 안에 간단한 텍스트 파일 만들기 os.listdir(os.path.join(os.curdir,"data")) os.rmdir("data") # 폴더 안에 파일이 있으면 삭제가 안 된다 # os.listdir()로 찾아본 다음에 폴더면 또 들어가서 다시 재귀적으로 찾아보고, # 파일이면 삭제하고 상위폴더로 올라와서 그리고 rmdir() ... """ Explanation: 폴더를 만들 때, os.listdir()로 특정 폴더가 있는지 확인한 후에, 만약 있으면 삭제하고 새로운 폴더를 생성한다. 폴더를 지울 때, 만약에 End of explanation """ import shutil """ Explanation: 설정 파일 같은 것을 수정하거나 삭제할 때 만약에 .bash_profile => .bash_profile.tmp / ... (복사해주고 작업을 한다.) 복구는 안 된다. 위와 같은 과정의 flow는 어려워 그래서 shutil이라는 파이썬 내장 모듈을 사용할 것임 End of explanation """ os.listdir(os.path.join(os.curdir, "data")) shutil.rmtree(os.path.join(os.curdir, "data")) os.listdir(os.path.join(os.curdir, "data")) os.makedirs(os.path.join(os.curdir, "data")) shutil.rmtree(os.path.join(os.curdir, "data")) """ Explanation: os - low-level (저수준)으로 파일/폴더/운영체제를 관리했다면 shutil - high-level (고수준) 으로 파일/폴더를 관리 End of explanation """ os.makedirs(os.path.join(os.curdir, "data")) os.makedirs(os.path.join(os.curdir, "data", "world")) # 만약 "data", "world"라는 폴더가 있으면, 삭제하는 기능 ... """ Explanation: 1. 국가명.csv 파일로 만들기 => world.tar.gz (world.zip) 압축하기 2. 대륙명/국가명.csv 파일로 만들기 => 대륙명.tar.gz 압축하기 ex) Angola.csv -- 도시 정보 csv파일이 국가별로 있어야 합니다. ("data/world/____.csv" 이 200개 정도 있어야 함) End of explanation """ # 폴더의 유무를 확인하고, 있으면 삭제한다. if "data" in os.listdir(): print("./data/ 폴더를 삭제합니다.") shutil.rmtree(os.path.join(os.curdir, "data")) # "data"라는 폴더를 생성하고, 그 안에 "world"라는 폴더를 생성한다. 
print("./data/ 폴더를 생성합니다.") os.makedirs(os.path.join(os.curdir, "data")) os.makedirs(os.path.join(os.curdir, "data", "world")) import pymysql db = pymysql.connect( "db.fastcamp.us", "root", "dkstncks", "world", charset='utf8' ) country_df = pd.read_sql("SELECT * FROM Country;", db) city_df = pd.read_sql("SELECT * FROM City;", db) #Country.Code를 바탕으로, City.CountryCode와 매칭해서 찾아야 함 #Country.Name은 반드시 가지고 와야지 파일명으로 저장이 가능 city_groups = city_df.groupby("CountryCode") for index, row in country_df.iterrows(): country_code = row["Code"] country_name = row["Name"] city_df = city_groups.get_group(country_code) city_df.to_csv(os.path.join("data", "world", "{country_name},csv".format(country_name=country_name))) #"ATA"라는 애가 없다고 나오니까 테스트 SQL_QUERY = """ SELECT * FROM City WHERE CountryCode = "ATA" ; """ pd.read_sql(SQL_QUERY, db) city_groups.get_group("ATA") "ATA" in city_groups["CountryCode"].unique() #없는게 증명 됐으니 if문 첨가 for index, row in country_df.iterrows(): country_code = row["Code"] country_name = row["Name"] if country_code in city_df["CountryCode"].unique(): one_city_df = city_groups.get_group(country_code) one_city_df.to_csv(os.path.join(os.curdir, "data", "world", "{country_name}.csv".format(country_name=country_name))) """ Explanation: df.to_csv(os.path.join(, , ___.csv)) df.to_csv("./data/world/Angola.csv") End of explanation """
ContextLab/quail
docs/tutorial/egg.ipynb
mit
import quail %matplotlib inline """ Explanation: The Egg data object This tutorial will go over the basics of the Egg data object, the essential quail data structure that contains all the data you need to run analyses and plot the results. An egg is made up of two primary pieces of data: pres data - stimuli/features that were presented to a subject rec data - stimuli/features that were recalled by the subject. You cannot create an egg without both of these components. Additionally, there are a few optional fields: dist_funcs dictionary - this field allows you to control the distance functions for each of the stimulus features. For more on this, see the fingerprint tutorial. meta dictionary - this is an optional field that allows you to store custom meta data about the dataset, such as the date collected, experiment version etc. There are also a few other fields and functions to make organizing and modifying eggs easier (discussed at the bottom). Now, lets dive in and create an egg from scratch. Load in the library End of explanation """ presented_words = [['cat', 'bat', 'hat', 'goat'],['zoo', 'animal', 'zebra', 'horse']] """ Explanation: The pres data structure The first piece of an egg is the pres data, or in other words the stimuli that were presented to the subject. For a single subject's data, the form of the input will be a list of lists, where each list is comprised of the words presented to the subject during a particular study block. Let's create a fake dataset of one subject who saw two encoding lists: End of explanation """ recalled_words = [['bat', 'cat', 'goat', 'hat'],['animal', 'horse', 'zoo']] """ Explanation: The rec data structure The second fundamental component of an egg is the rec data, or the words/stimuli that were recalled by the subject. Now, let's create the recall lists: End of explanation """ egg = quail.Egg(pres=presented_words, rec=recalled_words) """ Explanation: We now have the two components necessary to build an egg, so let's do that and then take a look at the result. End of explanation """ egg.info() """ Explanation: That's it! We've created our first egg. Let's take a closer look at how the egg is setup. We can use the info method to get a quick snapshot of the egg: End of explanation """ egg.get_pres_items() """ Explanation: Now, let's take a closer look at how the egg is structured. First, we will check out the pres field: End of explanation """ egg.get_rec_items() """ Explanation: As you can see above, the pres field was turned into a multi-index Pandas DataFrame organized by subject and by list. This is how the pres data is stored within an egg, which will make more sense when we consider larger datasets with more subjects. Next, let's take a look at the rec data: End of explanation """ # presented words sub1_presented=[['cat', 'bat', 'hat', 'goat'],['zoo', 'animal', 'zebra', 'horse']] sub2_presented=[['cat', 'bat', 'hat', 'goat'],['zoo', 'animal', 'zebra', 'horse']] # recalled words sub1_recalled=[['bat', 'cat', 'goat', 'hat'],['animal', 'horse', 'zoo']] sub2_recalled=[['cat', 'goat', 'bat', 'hat'],['horse', 'zebra', 'zoo', 'animal']] # combine subject data presented_words = [sub1_presented, sub2_presented] recalled_words = [sub1_recalled, sub2_recalled] # create Egg multisubject_egg = quail.Egg(pres=presented_words, rec=recalled_words) multisubject_egg.info() """ Explanation: The rec data is also stored as a DataFrame. 
Notice that if the number of recalled words is shorter than the number of presented words, those columns are filled with a NaN value. Now, let's create an egg with two subject's data and take a look at the result. Multisubject eggs End of explanation """ multisubject_egg.get_pres_items() """ Explanation: As you can see above, in order to create an egg with more than one subject's data, all you do is create a list of subjects. Let's see how the pres data is organized in the egg with more than one subject: End of explanation """ multisubject_egg.get_rec_items() """ Explanation: Looks identical to the single subject data, but now we have two unique subject identifiers in the DataFrame. The rec data is set up in the same way: End of explanation """ cat_features = { 'item': 'cat', 'category': 'animal', 'word_length': 3, 'starting_letter': 'c', } """ Explanation: As you add more subjects, they are simply appended to the bottom of the df with a unique subject identifier. Adding features to the egg Stimuli can also be passed as a dictionary containing the stimulus and features of the stimulus. You can include any stimulus feature you want in this dictionary, such as the position of the word on the screen, the color, or perhaps the font of the word: End of explanation """ # presentation features presented_words = [ [ { 'item': 'cat', 'category': 'animal', 'word_length': 3, 'starting_letter': 'c' }, { 'item': ' bat', 'category': 'object', 'word_length': 3, 'starting_letter': 'b' }, { 'item': 'hat', 'category': 'object', 'word_length': 3, 'starting_letter': 'h' }, { 'item': 'goat', 'category': 'animal', 'word_length': 4, 'starting_letter': 'g' }, ], [ { 'item': 'zoo', 'category': 'place', 'word_length': 3, 'starting_letter': 'z' }, { 'item': 'donkey', 'category' : 'animal', 'word_length' : 6, 'starting_letter' : 'd' }, { 'item': 'zebra', 'category': 'animal', 'word_length': 5, 'starting_letter': 'z' }, { 'item': 'horse', 'category': 'animal', 'word_length': 5, 'starting_letter': 'h' }, ], ] recalled_words = [ [ { 'item': ' bat', 'category': 'object', 'word_length': 3, 'starting_letter': 'b' }, { 'item': 'cat', 'category': 'animal', 'word_length': 3, 'starting_letter': 'c' }, { 'item': 'goat', 'category': 'animal', 'word_length': 4, 'starting_letter': 'g' }, { 'item': 'hat', 'category': 'object', 'word_length': 3, 'starting_letter': 'h' }, ], [ { 'item': 'donkey', 'category' : 'animal', 'word_length' : 6, 'starting_letter' : 'd' }, { 'item': 'horse', 'category': 'animal', 'word_length': 5, 'starting_letter': 'h' }, { 'item': 'zoo', 'category': 'place', 'word_length': 3, 'starting_letter': 'z' }, ], ] # create egg object egg = quail.Egg(pres=presented_words, rec=recalled_words) """ Explanation: Let's try creating an egg with additional stimulus features: End of explanation """ egg.get_pres_items() """ Explanation: Like before, you can use the get_pres_items method to retrieve the presented items: End of explanation """ egg.get_pres_features() """ Explanation: The stimulus features can be accessed by calling the get_pres_features method: End of explanation """ dist_funcs = { 'word_length' : lambda x,y: (x-y)**2 } egg = quail.Egg(pres=presented_words, rec=recalled_words, dist_funcs=dist_funcs) """ Explanation: Defining custom distance functions for the stimulus feature dimensions As described in the fingerprint tutorial, the features data structure is used to estimate how subjects cluster their recall responses with respect to the features of the encoded stimuli. 
Briefly, these estimates are derived by computing the similarity of neighboring recall words along each feature dimension. For example, if you recall "dog", and then the next word you recall is "cat", your clustering by category score would increase because the two recalled words are in the same category. Similarly, if after you recall "cat" you recall the word "can", your clustering by starting letter score would increase, since both words share the first letter "c". This logic can be extended to any number of feature dimensions. Similarity between stimuli can be computed in a number of ways. By default, the distance function for all textual features (like category, starting letter) is binary. In other words, if the words are in the same category (cat, dog), their similarity would be 1, whereas if they are in different categories (cat, can) their similarity would be 0. For numerical features (such as word length), by default similarity between words is computed using Euclidean distance. However, the point of this digression is that you can define your own distance functions by passing a dist_funcs dictionary to the Egg class. This could be for all feature dimensions, or only a subset. Let's see an example: End of explanation """ dist_funcs = { 'word_length' : lambda x,y: (x-y)**2 } egg = quail.Egg(pres=presented_words, rec=recalled_words, dist_funcs=dist_funcs) """ Explanation: In the example code above, similarity between words for the word_length feature dimension will now be computed using this custom distance function, while all other feature dimensions will be set to the default. Adding meta data to an egg Lastly, we can add meta data to the egg. We added this field to help researchers keep their eggs organized by adding custom meta data to the egg object. The data is added to the egg by passing the meta keyword argument when creating the egg: End of explanation """ meta = { 'Researcher' : 'Andy Heusser', 'Study' : 'Egg Tutorial' } egg = quail.Egg(pres=presented_words, rec=recalled_words, meta=meta) egg.info() """ Explanation: Adding listgroup and subjgroup to an egg While the listgroup and subjgroup arguments can be used within the analyze function, they can also be attached directly to the egg, allowing you to save condition labels for easy organization and easy data sharing. 
To do this, simply pass one or both of the arguments when creating the egg: End of explanation """ # subject 1 data sub1_presented=[['cat', 'bat', 'hat', 'goat'],['zoo', 'animal', 'zebra', 'horse']] sub1_recalled=[['bat', 'cat', 'goat', 'hat'],['animal', 'horse', 'zoo']] # create subject 2 egg subject1_egg = quail.Egg(pres=sub1_presented, rec=sub1_recalled) # subject 2 data sub2_presented=[['cat', 'bat', 'hat', 'goat'],['zoo', 'animal', 'zebra', 'horse']] sub2_recalled=[['cat', 'goat', 'bat', 'hat'],['horse', 'zebra', 'zoo', 'animal']] # create subject 2 egg subject2_egg = quail.Egg(pres=sub2_presented, rec=sub2_recalled) stacked_eggs = quail.stack_eggs([subject1_egg, subject2_egg]) stacked_eggs.get_pres_items() """ Explanation: Saving an egg Once you have created your egg, you can save it for use later, or to share with colleagues. To do this, simply call the save method with a filepath: multisubject_egg.save('myegg') To load this egg later, simply call the load_egg function with the path of the egg: egg = quail.load('myegg') Stacking eggs We now have two separate eggs, each with a single subject's data. Let's combine them by passing a list of eggs to the stack_eggs function: End of explanation """ cracked_egg = quail.crack_egg(stacked_eggs, subjects=[1], lists=[0]) cracked_egg.get_pres_items() """ Explanation: Cracking eggs You can use the crack_egg function to slice out a subset of subjects or lists: End of explanation """ stacked_eggs.crack(subjects=[0,1], lists=[1]).get_pres_items() """ Explanation: Alternatively, you can use the crack method, which does the same thing: End of explanation """
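# Added usage sketch (not from the original tutorial): combine the pieces shown above --
# crack one subject's first list out of the stacked egg and inspect it. Only methods
# already demonstrated in this tutorial are used.
subset = stacked_eggs.crack(subjects=[0], lists=[0])
subset.info()
subset.get_rec_items()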
kmunve/APS
Predict_aval_problem_combined.ipynb
mit
import pandas as pd import numpy as np import json import sklearn import matplotlib import matplotlib.pyplot as plt %matplotlib inline import warnings warnings.simplefilter('ignore') print('Pandas:\t', pd.__version__) print('Numpy:\t', np.__version__) print('Scikit Learn:\t', sklearn.__version__) print('Matplotlib:\t', matplotlib.__version__) """ Explanation: <a href="https://colab.research.google.com/github/kmunve/APS/blob/master/Predict_aval_problem_combined.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Predicting tomorrow's avalanche problem We use features from the last 3 days avalanche warnings to predict the main avalanche problem for the coming day. An avalanche problem describes why an avalanche danger exists and how severe it is. An avalanche problem contains a cause, a distribution, a potential avalanche size and a sensitivity of triggering. E.g. Cause: Wind slabs Distribution: Widespread Size: Large Sensitivity: Easy to trigger This is encoded as a 4-digit number where each digit encodes one of the four parameters, e.g. 5332. We use differnet decision tree approaches to predict these four elements. Imports End of explanation """ # define the decoder function for the 4-digit avalanche problem target !curl https://raw.githubusercontent.com/kmunve/APS/master/aps/config/snoskred_keys.json > snoskred_keys.json def print_aval_problem_combined(aval_combined_int): aval_combined_str = str(aval_combined_int) with open('snoskred_keys.json') as jdata: snoskred_keys = json.load(jdata) type_ = snoskred_keys["Class_AvalancheProblemTypeName"][aval_combined_str[0]] dist_ = snoskred_keys["Class_AvalDistributionName"][aval_combined_str[1]] sens_ = snoskred_keys["Class_AvalSensitivityId"][aval_combined_str[2]] size_ = snoskred_keys["DestructiveSizeId"][aval_combined_str[3]] return f"{type_} : {dist_} : {sens_} : {size_}" print(print_aval_problem_combined(6231)) # get the data ### Dataset with previous forecasts and observations v_df = pd.read_csv('https://raw.githubusercontent.com/hvtola/HTLA/master/varsom_ml_preproc_htla2.csv', index_col=0) # --- Added even more data from RegObs # v_df = pd.read_csv('https://raw.githubusercontent.com/hvtola/HTLA/master/varsom_ml_preproc_htla.csv', index_col=0) ### Dataset with previous forecasts only # v_df = pd.read_csv('https://raw.githubusercontent.com/kmunve/APS/master/aps/notebooks/ml_varsom/varsom_ml_preproc_3y.csv', index_col=0).drop_duplicates() # for some reason we got all rows twice in that file :-( # v_df[['date', 'region_id', 'region_group_id', 'danger_level', 'avalanche_problem_1_cause_id']].head(791*4+10) # v_df['region_id'].value_counts() v_df['region_id'].value_counts() # v_df['date'].value_counts() # keep only numeric columns from pandas.api.types import is_numeric_dtype num_cols = [var for var in v_df.columns.values if is_numeric_dtype(v_df[var])] print(len(num_cols)) num_cols # drop features that are related to the forecast we want to predict and features that should have no influence drop_list = [ 'danger_level', 'aval_problem_1_combined', 'avalanche_problem_1_cause_id', 'avalanche_problem_1_destructive_size_ext_id', 'avalanche_problem_1_distribution_id', 'avalanche_problem_1_exposed_height_1', 'avalanche_problem_1_exposed_height_2', 'avalanche_problem_1_ext_id', 'avalanche_problem_1_probability_id', 'avalanche_problem_1_problem_id', 'avalanche_problem_1_problem_type_id', 'avalanche_problem_1_trigger_simple_id', 'avalanche_problem_1_type_id', 'avalanche_problem_2_cause_id', 
'avalanche_problem_2_destructive_size_ext_id', 'avalanche_problem_2_distribution_id', 'avalanche_problem_2_exposed_height_1', 'avalanche_problem_2_exposed_height_2', 'avalanche_problem_2_ext_id', 'avalanche_problem_2_probability_id', 'avalanche_problem_2_problem_id', 'avalanche_problem_2_problem_type_id', 'avalanche_problem_2_trigger_simple_id', 'avalanche_problem_2_type_id', 'avalanche_problem_3_cause_id', 'avalanche_problem_3_destructive_size_ext_id', 'avalanche_problem_3_distribution_id', 'avalanche_problem_3_exposed_height_1', 'avalanche_problem_3_exposed_height_2', 'avalanche_problem_3_ext_id', 'avalanche_problem_3_probability_id', 'avalanche_problem_3_problem_id', 'avalanche_problem_3_problem_type_id', 'avalanche_problem_3_trigger_simple_id', 'avalanche_problem_3_type_id', 'avalanche_problem_1_problem_type_id_class', 'avalanche_problem_1_sensitivity_id_class', 'avalanche_problem_1_trigger_simple_id_class', 'avalanche_problem_2_problem_type_id_class', 'avalanche_problem_2_sensitivity_id_class', 'avalanche_problem_2_trigger_simple_id_class', 'avalanche_problem_3_problem_type_id_class', 'avalanche_problem_3_sensitivity_id_class', 'avalanche_problem_3_trigger_simple_id_class', 'emergency_warning_Ikke gitt', 'emergency_warning_Naturlig utløste skred', 'author_Andreas@nve', 'author_Eldbjorg@MET', 'author_Espen Granan', 'author_EspenN', 'author_Halvor@NVE', 'author_HåvardT@met', 'author_Ida@met', 'author_Ingrid@NVE', 'author_John Smits', 'author_JonasD@ObsKorps', 'author_Julie@SVV', 'author_Jørgen@obskorps', 'author_Karsten@NVE', 'author_MSA@nortind', 'author_Matilda@MET', 'author_Odd-Arne@NVE', 'author_Ragnar@NVE', 'author_Ronny@NVE', 'author_Silje@svv', 'author_Tommy@NVE', 'author_ToreV@met', 'author_anitaaw@met', 'author_emma@nve', '[email protected]', '[email protected]', 'author_jan arild@obskorps', 'author_jegu@NVE', 'author_jostein@nve', 'author_knutinge@svv', 'author_magnush@met', 'author_martin@svv', 'author_ragnhildn@met', 'author_rue@nve', 'author_siri@met', 'author_solveig@NVE', 'author_torehum@svv', 'author_torolav@obskorps' ] v_df.describe() v_df = v_df.fillna(0) # be careful here !!! 
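# Added note (not in the original notebook): fillna(0) silently turns every missing
# value into a zero, which is only safe for count-like features; for signed or scaled
# features zero can be a meaningful value. A hedged alternative would be per-column
# median imputation, e.g. v_df.fillna(v_df.median()), or dropping very sparse columns.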
target_name = 'aval_problem_1_combined' y_df = v_df[target_name] y = y_df.values X_df = v_df.filter(num_cols).drop(drop_list, axis='columns') X = X_df.values feature_names = X_df.columns.values print(len(feature_names)) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2442) # Fikk feilmelding med stratify=y X_train.shape, y_train.shape, X_test.shape, y_test.shape """ Explanation: Data End of explanation """ from sklearn.tree import DecisionTreeClassifier clf = DecisionTreeClassifier(max_depth=10) %time clf.fit(X_train, y_train) print('Decision tree with {0} leaves has a training score of {1} and a test score of {2}'.format(clf.tree_.max_depth, clf.score(X_train, y_train), clf.score(X_test, y_test))) # just checking if the values make sense k = 21 # error when using 1230 for i in range(len(feature_names)): print(feature_names[i], ':\t', X_test[k, i]) prediction_ = clf.predict(X_test[k, :].reshape(1, -1)) print(target_name, ':\t', y_test[k], prediction_) print(print_aval_problem_combined(prediction_[0])) # add information about dangerlevel # Finding the best parameters s_test = [] s_train = [] ks = np.arange(1, 30, dtype=int) for k in ks: clf_ = DecisionTreeClassifier(max_depth = k) clf_.fit(X_train, y_train) s_train.append(clf_.score(X_train, y_train)) s_test.append(clf_.score(X_test, y_test)) #clf.score(X_train, y_train), clf.score(X_test, y_test))) s_test = np.array(s_test) print(s_test.max(), s_test.argmax()) plt.figure(figsize=(10, 8)) plt.plot(ks, s_test, color='red', label='test') plt.plot(ks, s_train, color='blue', label='train') plt.legend() """ Explanation: Decision tree End of explanation """ importance = clf.feature_importances_ feature_indexes_by_importance = importance.argsort() for i in feature_indexes_by_importance: print('{}-{:.2f}%'.format(feature_names[i], (importance[i] *100.0))) fig, ax = plt.subplots(figsize=(8,20)) y_pos = np.arange(len(feature_names)) ax.barh(y_pos, clf.feature_importances_*100, align='center') ax.set_yticks(y_pos) ax.set_yticklabels(feature_names) ax.invert_yaxis() # labels read top-to-bottom ax.set_xlabel('Feature importance') ax.set_title('How much does each feature contribute?') """ Explanation: Feature importance End of explanation """ from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier rfc = RandomForestClassifier(n_estimators=30, min_samples_split=15) rfc.fit(X_train, y_train) predic_proba_rfc = rfc.predict_proba(X_test) predictions_rfc = rfc.predict(X_test) print('Random Forest Classifier with {0} leaves has a training score of {1} and a test score of {2}'.format(rf.max_depth, rf.score(X_train, y_train), rf.score(X_test, y_test))) print(predictions_rfc) print(predic_proba_rfc) rf = RandomForestRegressor(n_estimators=30, min_samples_split=15) rf.fit(X_train, y_train) predictions_rf = rf.predict(X_test) print('Random Forest Regressor with {0} leaves has a training score of {1} and a test score of {2}'.format(rf.max_depth, rf.score(X_train, y_train), rf.score(X_test, y_test))) print(predictions_rf) importance_rf = rf.feature_importances_ feature_indexes_by_importance_rf = importance_rf.argsort() for i in feature_indexes_by_importance_rf: print('{}-{:.2f}%'.format(feature_names[i], (importance_rf[i] *100.0))) """ Explanation: Using RandomForest End of explanation """
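# Added sketch (not part of the original notebook): a slightly tidier summary of the
# random-forest classifier fitted above. Assumes `rfc`, `X_test`, `y_test` and
# `feature_names` from the previous cells.
from sklearn.metrics import accuracy_score

preds = rfc.predict(X_test)
print("RandomForestClassifier test accuracy: {:.3f}".format(accuracy_score(y_test, preds)))

# ten most important features, as a sorted pandas Series
importances = pd.Series(rfc.feature_importances_, index=feature_names).sort_values(ascending=False)
print(importances.head(10))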
bigdata-i523/hid335
experiment/Python_SKL_NeuralNetworkClassifier.ipynb
gpl-3.0
display(mglearn.plots.plot_logistic_regression_graph()) """ Explanation: Introduction to Machine Learning Andreas Mueller and Sarah Guido (2017) O'Reilly Ch. 2 Supervised Learning Neural Networks (Deep Learning) MLP feedforward neural network Generalization of linear models for classification and regression Prediction by a linear regressor is given as: y_hat = w[0]*x[0] + w[1]*x[1] + ... w[p]*x[p] Visualization of logistic regression Input features and predictions are shown as nodes Coefficients are connections between the nodes End of explanation """ display(mglearn.plots.plot_single_hidden_layer_graph()) """ Explanation: MLP feedforward neural network Process of computing weighted sumsis repeated multiple times Computing hidden units, which are combined to yield final result Non-linear function After computing a weighted sum for each hidden unit, a non-linear function is applied to the results Usually the rectifying nonlinearyity (i.e., rectified linear unit, or relu) or the 'tangens hyperbolicus (tanh) Result of this function is then used in weighted sum that computes the output, or target y_hat Either non-linear functoin allows neural network to learn more complicated functions that a linear model could End of explanation """ display(mglearn.plots.plot_two_hidden_layer_graph()) """ Explanation: Parameter: # Nodes in hidden layer Number of nodes in the hidden layer needs to be set by the user As small as 10 for simple dataset, or high as 10,000 for cmoplex dataset Can also add additional hidden layers Plot: MLP with two hidden layers Having large neural network with many layers of computation and hidden units inspired the term 'deep learning' End of explanation """ from sklearn.neural_network import MLPClassifier from sklearn.model_selection import train_test_split from sklearn.datasets import make_moons X, y = make_moons(n_samples=100, noise=0.25, random_state=3) X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42) mlp = MLPClassifier(solver='lbfgs', random_state=0, hidden_layer_sizes=[10,10]) mlp.fit(X_train, y_train) mglearn.plots.plot_2d_separator(mlp, X_train, fill=True, alpha=0.3) mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train) plt.xlabel("Feature 0") plt.ylabel("Feature 1") """ Explanation: Tuning Neural Networks By default, MLP uses 100 hidden nodes (a lot for small dataset) With only 10 hidden units, the decision boundary looks more ragged End of explanation """ mlp = MLPClassifier(solver='lbfgs', activation='tanh', random_state=0, hidden_layer_sizes=[10,10]) mlp.fit(X_train, y_train) mglearn.plots.plot_2d_separator(mlp, X_train, fill=True, alpha=0.3) mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train) plt.xlabel("Feature 0") plt.ylabel("Feature 1") """ Explanation: MLP with two layers for smoother boundary Can add more hidden units, add a second layer or use tanh nonlinearity End of explanation """ fig, axes = plt.subplots(2, 4, figsize=(20, 8)) for axx, n_hidden_nodes in zip(axes, [10, 100]): for ax, alpha in zip(axx, [0.0001, 0.01, 0.1, 1]): mlp = MLPClassifier(solver='lbfgs', random_state=0, hidden_layer_sizes=[n_hidden_nodes, n_hidden_nodes], alpha=alpha) mlp.fit(X_train, y_train) mglearn.plots.plot_2d_separator(mlp, X_train, fill=True, alpha=0.3, ax=ax) ax.set_title("n_hidden=[{}, {}]\nalpha={:.4f}".format(n_hidden_nodes, n_hidden_nodes, alpha)) """ Explanation: L2 Penalty and Neural Network Control complexity of NN using L2 penalty to shrink weights toward zero, as with Ridge regression and linear classifiers 
alpha parameter in MLPClassifier, is set to low value by default (little regularization) Plots shows effects of different alpha values with two hidden layers of 10 or 100 units: End of explanation """ from sklearn.datasets import load_breast_cancer cancer = load_breast_cancer() print("Cancer data per-feature maxima\n{}".format(cancer.data.max(axis=0))) X_train, X_test, y_train, y_test = train_test_split( cancer.data, cancer.target, stratify=cancer.target, random_state=0) mlp = MLPClassifier(random_state=42) mlp.fit(X_train, y_train) print("Accurary on Training set: {:.2f}".format(mlp.score(X_train, y_train))) print("Accuracy Test set: {:.2f}".format(mlp.score(X_test, y_test))) """ Explanation: Neural Network Weights Weights are set randomly before learning is started, random initialization affects the model that is learned Even using same parameters, can get very differen models using different SEEDS Apply MLPClassifier to Breast Cancer Dataset Start with default parameters End of explanation """ # Compute mean value per feature on Training set mean_on_train = X_train.mean(axis=0) # Compute standard deviation of each feature on Training set std_on_train = X_train.std(axis=0) # Subtract the mean, and scale by inverse standard deviation X_train_scaled = (X_train - mean_on_train) / std_on_train # Do the same for the test set, using min and range of training set X_test_scaled = (X_test - mean_on_train) / std_on_train mlp = MLPClassifier(random_state=0) mlp.fit(X_train_scaled, y_train) print("Accurary on Training set: {:.3f}".format(mlp.score(X_train_scaled, y_train))) print("Accuracy Test set: {:.3f}".format(mlp.score(X_test_scaled, y_test))) """ Explanation: Rescale the data Accuracy of MLP is good, but as with SVC model, scaling of data is problem Normalize the data (mean=0, stdev=1) for both training and test sets Rerun the MLP analysis on rescaled data End of explanation """ mlp = MLPClassifier(max_iter=1000, alpha=1, random_state=0) mlp.fit(X_train_scaled, y_train) print("Accurary on Training set: {:.3f}".format(mlp.score(X_train_scaled, y_train))) print("Accuracy Test set: {:.3f}".format(mlp.score(X_test_scaled, y_test))) """ Explanation: Warning from model Results are better after scaling, but warning tells us maximum iterations is reached adam algorithm tells us we should increase the number of iterations Increasing iterations only affect training set performance Tuning Parameters: alpha Decrease model complexity to get better generalization performance Increase alpha parameter quite aggressively (from 0.001 to 1.0) Adds stronger regularization to the coefficient weights End of explanation """ plt.figure(figsize=(20,5)) plt.imshow(mlp.coefs_[0], interpolation='none', cmap='viridis') plt.yticks(range(30), cancer.feature_names) plt.xlabel("Columns in weight matrix") plt.ylabel("Input feature") plt.colorbar() """ Explanation: Analysis of Model Analyzing neural network is tricker than analyzing linear model or tree-based model Look at the weights (coefficients) in the model Heatmap Plot Shows weights learned connecting the input to the first hidden layer Rows in the plot correspond to the 30 input features Columns in plot correspond to the 100 hidden units Light colors show large positive values, dark colors represent negative numbers End of explanation """
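# Added sketch (not part of the original notebook): the manual mean/std scaling above
# can be folded into a scikit-learn Pipeline so that the scaler is always fitted on the
# training split only. Reuses the cancer train/test split defined earlier.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, alpha=1, random_state=0))
pipe.fit(X_train, y_train)
print("Pipeline accuracy on test set: {:.3f}".format(pipe.score(X_test, y_test)))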
mrcslws/nupic.research
projects/archive/dynamic_sparse/notebooks/ExperimentAnalysis-SigOptTest.ipynb
agpl-3.0
%load_ext autoreload %autoreload 2 from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import glob import tabulate import pprint import click import numpy as np import pandas as pd from ray.tune.commands import * from nupic.research.frameworks.dynamic_sparse.common.browser import * import re import matplotlib import matplotlib.pyplot as plt from matplotlib import rcParams %config InlineBackend.figure_format = 'retina' import seaborn as sns sns.set(style="whitegrid") sns.set_palette("colorblind") """ Explanation: Experiment: Compare bayesian optimization experiments with random hyperparameter search Motivation. Evaluate bayesian optimization as a hyperparameter search tool Conclusion End of explanation """ exps = ['sigopt_baseline_comp', 'test_sigopt.py'] paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps] df = load_many(paths) test_string = '0_learning_' re.match('^\d+', test_string)[0] df.head(5) df.columns df['experiment_file_name'].unique() def fix_name(s): if s == '/Users/lsouza/nta/results/sigopt_baseline_comp/experiment_state-2020-03-16_03-33-36.json': return 'Random Search' elif s == '/Users/lsouza/nta/results/test_sigopt.py/experiment_state-2020-03-15_23-03-55.json': return "SigOpt-A" elif s == '/Users/lsouza/nta/results/test_sigopt.py/experiment_state-2020-03-16_01-05-45.json': return "SigOpt-B" df['experiment_file_name'] = df['experiment_file_name'].apply(fix_name) df['experiment_file_name'].unique() def get_index(s): return int(re.match('^\d+', s)[0]) df['index_pos'] = df['Experiment Name'].apply(get_index) df['density'] = df['on_perc'] df.iloc[17] df.groupby('experiment_file_name')['model'].count() """ Explanation: Load and check data End of explanation """ # helper functions def mean_and_std(s): return "{:.3f} ± {:.3f}".format(s.mean(), s.std()) def round_mean(s): return "{:.0f}".format(round(s.mean())) stats = ['min', 'max', 'mean', 'std'] def agg(columns, filter=None, round=3): if filter is None: return (df.groupby(columns) .agg({'val_acc_max_epoch': round_mean, 'val_acc_max': stats, 'model': ['count']})).round(round) else: return (df[filter].groupby(columns) .agg({'val_acc_max_epoch': round_mean, 'val_acc_max': stats, 'model': ['count']})).round(round) agg(['experiment_file_name']) def plot_acc_over_time(plot_title): plt.figure(figsize=(12,6)) df_plot = df[df['experiment_file_name'] == plot_title] sns.lineplot(df_plot['index_pos'], y=df_plot['val_acc_last']) plt.xticks(np.arange(0,100,5)) plt.ylim(0.67,0.76) plt.title(plot_title) # how to plot? 
plot_acc_over_time('Random Search') plot_acc_over_time('SigOpt-A') plot_acc_over_time('SigOpt-B') def accumulate(series): series = list(series) cum_series = [series[0]] for i in range(1, len(series)): cum_series.append(max(cum_series[i-1], series[i])) return cum_series def plot_best_acc_over_time(plot_title): plt.figure(figsize=(12,6)) df_plot = df[df['experiment_file_name'] == plot_title].sort_values('index_pos') df_plot['cum_acc'] = accumulate(df_plot['val_acc_last']) sns.lineplot(df_plot['index_pos'], y=df_plot['cum_acc']) plt.xticks(np.arange(0,101,5)) plt.ylim(0.71,0.76) plt.title(plot_title) plot_best_acc_over_time('Random Search') plot_best_acc_over_time('SigOpt-A') plot_best_acc_over_time('SigOpt-B') # list top 5 values of each # show best values def show_best(experiment): df_exp = df[df['experiment_file_name'] == experiment].sort_values('val_acc_last', ascending=False)[:5] return df_exp[['index_pos', 'learning_rate', 'density', 'val_acc_last']] show_best('Random Search') show_best('SigOpt-A') show_best('SigOpt-B') """ Explanation: ## Analysis End of explanation """
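# Added sketch (not part of the original analysis): a compact side-by-side of the best
# and mean validation accuracy reached by each search strategy, using the same `df`.
df.groupby('experiment_file_name')['val_acc_last'].agg(['max', 'mean'])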
dogrdon/native_ad_data
analysis/native_ad_analysis.ipynb
gpl-3.0
import pandas as pd from datetime import datetime import dateutil import matplotlib.pyplot as plt from IPython.core.display import display, HTML import re from urllib.parse import urlparse import json """ Explanation: Performing Clean-up and Analysis on Native Ad Data Scraped "From Around the Web" End of explanation """ data = pd.read_csv('../data/in/native_ad_data.csv') data.head() """ Explanation: Data Load and Cleaning End of explanation """ data['headline'] = data['headline'].apply(lambda x: re.sub('(?<=[a-z])\.?([A-Z](.*))' , '', x.strip())) data.head() """ Explanation: As a side note, the headlines from zergnet all have some newlines we need to get rid of and they appear to have concatenated the headline with the provider. So let's clean those up. End of explanation """ data['img_file'] = data['img_file'].apply(lambda x: re.sub('\.\/imgs\/' , '', str(x).strip())) """ Explanation: OK, that's better. The img_file column values also have ./imgs/ appended to the front of each file name. Let's get rid of those: End of explanation """ for col in data.columns: print((col, sum(data[col].isnull()))) """ Explanation: Now, let's check, do we have any null values? End of explanation """ data.describe() """ Explanation: For now only the orig_article column has nulls, as we had not collected those consistently End of explanation """ data['img_host'] = data['img'].apply(lambda x: urlparse(x).netloc) data['link_host'] = data['final_link'].apply(lambda x: urlparse(x).netloc) """ Explanation: Already we can see some interesting trends here. Out of 129399 unique records, only 18022 of the headlines are unique, but 43315 of the links are unique and 23866 of the image files are unique (assuming for sure that there were issues with downloading images). So it seems already that there are content links which might reuse the same headline, or image for different destination articles. Also, because we want to inspect the hosts from which the articles and images are coming from, let's parse those out in the data. Data Preparation End of explanation """ left = ['http://www.politico.com/magazine/', 'https://www.washingtonpost.com/', 'http://www.huffingtonpost.com/', 'http://gothamist.com/news', 'http://www.metro.us/news', 'http://www.politico.com/politics', 'http://www.nydailynews.com/news', 'http://www.thedailybeast.com/'] right = ['http://www.breitbart.com', 'http://www.rt.com', 'https://nypost.com/news/', 'http://www.infowars.com/', 'https://www.therebel.media/news', 'http://observer.com/latest/'] center = ['http://www.ibtimes.com/', 'http://www.businessinsider.com/', 'http://thehill.com'] tabloid = ['http://tmz.com', 'http://www.dailymail.co.uk/', 'https://downtrend.com/', 'http://reductress.com/', 'http://preventionpulse.com/', 'http://elitedaily.com/', 'http://worldstarhiphop.com/videos/'] def get_classification(source): if source in left: return 'left' if source in right: return 'right' if source in center: return 'center' if source in tabloid: return 'tabloid' data['source_class'] = data['source'].apply(lambda x: get_classification(x)) data.head() """ Explanation: Next, let's classify each site by a very relaxed set of tags based on perceived political bias. I might be a little off on some, I referenced https://www.allsides.com/ where possible, but that was not entirely helpful in all cases. Otherwise, I just went with my own idea of where I felt a site fell on the political spectrum (e.g., left, right, or center). 
There is also a tag for tabloids, or primarily sites that probably don't really have an editorial perspective so much as a desire to publish whatever gets the most traffic. End of explanation """ deduped = data.drop_duplicates(subset=['headline', 'link', 'img', 'provider', 'source', 'img_file', 'final_link'], keep=False) deduped.describe() """ Explanation: Now let's remove duplicates based on a subset of the columns using pandas' drop_duplicates for DataFrames End of explanation """ for col in deduped.columns: print((col, sum(deduped[col].isnull()))) """ Explanation: And let's just check on those null values again... End of explanation """ (43630/129399)*100 """ Explanation: Out of curiousity, as we're only left with 43630 records after deduping, let's take a look at the rate of success for our record collection. End of explanation """ deduped['headline'].groupby(deduped['img']).value_counts().nlargest(10) """ Explanation: Crud, doing a harvest yields results where only 33% of our sample is worth examining further. Data Exploration Let's get the top 10 headlines grouped by img End of explanation """ deduped['headline'].value_counts().nlargest(10) """ Explanation: But hang on. let's just see what the top headlines are. There's certainly overlap, but it's not a one to one relationship between headlines and their images (or at least maybe it's the same image, but coming from a different URL). End of explanation """ deduped['source'].value_counts().nlargest(25) """ Explanation: Note: perhaps something we will want to look into is how many different headline, image permutations there are. I am particularly interested in the reuse of images across different headlines. And how are our sources distributed? End of explanation """ deduped['source_class'].value_counts() """ Explanation: TMZ is a bit over-represented here And what about by classification End of explanation """ deduped.groupby(['provider', 'source_class'])['source'].value_counts() """ Explanation: Looks like the over-representation of TMZ is pushing on Tabloids a bit. Not terribly even between left, right, and center, either. Let's take a look at the sources again as broken down by bother provider and our classification. 
End of explanation """ IMG_MAX=5 topimgs_center = deduped['img'][deduped['source_class'].isin(['center'])].value_counts().nlargest(IMG_MAX).index.tolist() bottomimgs_center = deduped['img'][deduped['source_class'].isin(['center'])].value_counts().nsmallest(IMG_MAX).index.tolist() topimgs_left = deduped['img'][deduped['source_class'].isin(['left'])].value_counts().nlargest(IMG_MAX).index.tolist() bottomimgs_left = deduped['img'][deduped['source_class'].isin(['left'])].value_counts().nsmallest(IMG_MAX).index.tolist() topimgs_right = deduped['img'][deduped['source_class'].isin(['right'])].value_counts().nlargest(IMG_MAX).index.tolist() bottomimgs_right = deduped['img'][deduped['source_class'].isin(['right'])].value_counts().nsmallest(IMG_MAX).index.tolist() topimgs_tabloid = deduped['img'][deduped['source_class'].isin(['tabloid'])].value_counts().nlargest(IMG_MAX).index.tolist() bottomimgs_tabloid = deduped['img'][deduped['source_class'].isin(['tabloid'])].value_counts().nsmallest(IMG_MAX).index.tolist() for i in topimgs_center: displaystring = '<img src={} width="200"/>'.format(i) display(HTML(displaystring)) for i in bottomimgs_center: displaystring = '<img src={} width="200"/>'.format(i) display(HTML(displaystring)) for i in topimgs_left: displaystring = '<img src={} width="200"/>'.format(i) display(HTML(displaystring)) for i in bottomimgs_left: displaystring = '<img src={} width="200"/>'.format(i) display(HTML(displaystring)) for i in topimgs_right: displaystring = '<img src={} width="200"/>'.format(i) display(HTML(displaystring)) for i in bottomimgs_right: displaystring = '<img src={} width="200"/>'.format(i) display(HTML(displaystring)) for i in topimgs_tabloid: displaystring = '<img src={} width="200"/>'.format(i) display(HTML(displaystring)) for i in bottomimgs_tabloid: displaystring = '<img src={} width="200"/>'.format(i) display(HTML(displaystring)) """ Explanation: OK so what are the most frequent and least images per classification? End of explanation """ deduped_date_idx = deduped.copy(deep=False) deduped_date_idx['date'] = pd.to_datetime(deduped_date_idx.date) deduped_date_idx.set_index('date',inplace=True) """ Explanation: Yawn! I have to admit this isnt's as interesting as I thought it might be. Explore over time Next perhaps let's explore trends over time. First we'll want to make a version of the Data Frame that is indexed by date End of explanation """ "Start: {} - End: {}".format(deduped_date_idx.index.min(), deduped_date_idx.index.max()) """ Explanation: See what dates we're working with End of explanation """ deduped_date_idx['2017-03-01':'2017-07-07'].groupby('source_class').resample('M').size().plot(kind='bar') plt.show() """ Explanation: Let's examine the distribution of the classifications over time End of explanation """ deduped_date_idx['2017-03-01':'2017-07-07'].groupby(['provider']).resample('M').size().plot(kind='bar') plt.show() """ Explanation: I think what we're mostly seeing here is that our scraper was most active during the month of June. Let's see the same distribution for provider. 
End of explanation """ (deduped_date_idx[deduped_date_idx['headline'].str.contains('Trump')]['2017-03-01':'2017-07-07']).groupby('source_class').resample('M').size().plot(title="Headlines Containing 'Trump' By Month and Classification", kind='bar', color="pink") plt.show() (deduped_date_idx[deduped_date_idx['headline'].str.contains('Clinton')]['2017-03-01':'2017-07-07']).groupby('source_class').resample('M').size().plot(title="Headlines Containing 'Clinton' By Month and Classification", kind='bar', color="gray") plt.show() (deduped_date_idx[deduped_date_idx['headline'].str.contains('Hillary')]['2017-03-01':'2017-07-07']).groupby('source_class').resample('M').size().plot(title="Headlines Containing 'Hillary' By Month and Classification" ,kind='bar', color="gray") plt.show() (deduped_date_idx[deduped_date_idx['headline'].str.contains('Obama')]['2017-03-01':'2017-07-07']).groupby('source_class').resample('M').size().plot(title="Headlines Containing 'Obama' By Month and Classification", kind='bar') plt.show() """ Explanation: Same, we're seeing that our results are biased towards June. What about if we check all results mentioning certain people End of explanation """ (deduped_date_idx['2017-03-27':'2017-07-07'])['headline'].value_counts().nlargest(15) (deduped_date_idx['2017-03-27':'2017-07-07'])['headline'].value_counts().nsmallest(15) deduped['headline'][deduped['source_class'].isin(['center'])].value_counts().nlargest(25) deduped['headline'][deduped['source_class'].isin(['center'])].value_counts().nsmallest(25) deduped['headline'][deduped['source_class'].isin(['left'])].value_counts().nlargest(25) deduped['headline'][deduped['source_class'].isin(['left'])].value_counts().nsmallest(25) deduped['headline'][deduped['source_class'].isin(['right'])].value_counts().nlargest(25) deduped['headline'][deduped['source_class'].isin(['right'])].value_counts().nsmallest(25) deduped['headline'][deduped['source_class'].isin(['tabloid'])].value_counts().nlargest(25) deduped['headline'][deduped['source_class'].isin(['tabloid'])].value_counts().nsmallest(25) """ Explanation: Again, seeing more of a trend around our data collection. There is an interesting trend that Trump articles are appearing on way more Tabloid articles than we might expect. Obama is appearing a lot on Right classified site articles, but again this is for June, so might just be an artifact of increased data collection. Finally, we see way more results for "Hillary" than we do "Clinton", and most of those are on Tabloid sites in April. And let's check out some bucketed headline trends, both largest and smallest overall and for the various classifications. End of explanation """ def imgs_from_headlines(headline): """ A function to spit out all the different images used for a headline, assuming there's no more than 50/headline """ all_images = deduped['img'][deduped['headline'].isin([headline])].value_counts().nlargest(50).index.tolist() for i in all_images: displaystring = '<img src={} width="200"/>'.format(i) display(HTML(displaystring)) imgs_from_headlines("Trump Voters Shocked After Watching This Leaked Video") imgs_from_headlines("What Tiger Woods' Ex-Wife Looks Like Now Left Us With No Words") imgs_from_headlines("Nicole Kidman's Yacht Is Far From You'd Expect") imgs_from_headlines("He Never Mentions His Son, Here's Why") imgs_from_headlines("Do This Tonight to Make Fungus Disappear by Morning (Try Today)") """ Explanation: Finally, we wanted to see if any headlines had more than one image. Let's check a few. 
End of explanation """ timestamp = datetime.now().strftime('%Y-%m-%d-%H_%M') datefile = '../data/out/{}_native_ad_data_deduped.csv'.format(timestamp) deduped.to_csv(datefile, index=False) """ Explanation: Well, that was edifying. Export the data End of explanation """ img_json_data = {} for index, row in deduped.iterrows(): img_json_data[row['img_file']] = {'url':row['img'], 'dates':[], 'sources':[], 'providers':[], 'classifications':[], 'headlines':[], 'locations':[], } print(len(img_json_data.keys())) for index, row in deduped.iterrows(): record = img_json_data[row['img_file']] if row['date'] not in record['dates']: record['dates'].append(row['date']) if row['headline'] not in record['headlines']: record['headlines'].append(row['headline']) if row['provider'] not in record['providers']: record['providers'].append(row['provider']) if row['source_class'] not in record['classifications']: record['classifications'].append(row['source_class']) if row['source'] not in record['sources']: record['sources'].append(row['source']) if row['final_link'] not in record['locations']: record['locations'].append(row['final_link']) for i in list(img_json_data.keys())[0:5]: print(img_json_data[i]) hl_json_data = {} for index, row in deduped.iterrows(): hl_json_data[row['headline']] = {'img_urls':[], 'dates':[], 'sources':[], 'providers':[], 'classifications':[], 'imgs':[], 'locations':[], } print(len(hl_json_data.keys())) for index, row in deduped.iterrows(): record = hl_json_data[row['headline']] if row['img'] not in record['img_urls']: record['img_urls'].append(row['img']) if row['date'] not in record['dates']: record['dates'].append(row['date']) if row['img_file'] not in record['imgs']: record['imgs'].append(row['img_file']) if row['provider'] not in record['providers']: record['providers'].append(row['provider']) if row['source_class'] not in record['classifications']: record['classifications'].append(row['source_class']) if row['source'] not in record['sources']: record['sources'].append(row['source']) if row['final_link'] not in record['locations']: record['locations'].append(row['final_link']) for i in list(hl_json_data.keys())[0:5]: print(i, " = " ,hl_json_data[i]) def to_json_file(json_data, prefix): filename = "../data/out/{}_grouped_data.json".format(prefix) with open(filename, 'w') as outfile: json.dump(json_data, outfile, indent=4) to_json_file(img_json_data, "images") to_json_file(hl_json_data, "headlines") """ Explanation: Finally, let's generate a json file where each item is an individual image, and for each image we are listing out all the original sources, dates, headlines, classifications, and final locations for it. End of explanation """
dvkonst/ml_mipt
task_5/hw1_Modules.ipynb
gpl-3.0
class Module(object): def __init__ (self): self.output = None self.gradInput = None self.training = True """ Basically, you can think of a module as of a something (black box) which can process `input` data and produce `ouput` data. This is like applying a function which is called `forward`: output = module.forward(input) The module should be able to perform a backward pass: to differentiate the `forward` function. More, it should be able to differentiate it if is a part of chain (chain rule). The latter implies there is a gradient from previous step of a chain rule. gradInput = module.backward(input, gradOutput) """ def forward(self, input): """ Takes an input object, and computes the corresponding output of the module. """ return self.updateOutput(input) def backward(self,input, gradOutput): """ Performs a backpropagation step through the module, with respect to the given input. This includes - computing a gradient w.r.t. `input` (is needed for further backprop), - computing a gradient w.r.t. parameters (to update parameters while optimizing). """ self.updateGradInput(input, gradOutput) self.accGradParameters(input, gradOutput) return self.gradInput def updateOutput(self, input): """ Computes the output using the current parameter set of the class and input. This function returns the result which is stored in the `output` field. Make sure to both store the data in `output` field and return it. """ # The easiest case: # self.output = input # return self.output pass def updateGradInput(self, input, gradOutput): """ Computing the gradient of the module with respect to its own input. This is returned in `gradInput`. Also, the `gradInput` state variable is updated accordingly. The shape of `gradInput` is always the same as the shape of `input`. Make sure to both store the gradients in `gradInput` field and return it. """ # The easiest case: # self.gradInput = gradOutput # return self.gradInput pass def accGradParameters(self, input, gradOutput): """ Computing the gradient of the module with respect to its own parameters. No need to override if module has no parameters (e.g. ReLU). """ pass def zeroGradParameters(self): """ Zeroes `gradParams` variable if the module has params. """ pass def getParameters(self): """ Returns a list with its parameters. If the module does not have parameters return empty list. """ return [] def getGradParameters(self): """ Returns a list with gradients with respect to its parameters. If the module does not have parameters return empty list. """ return [] def training(self): """ Sets training mode for the module. Training and testing behaviour differs for Dropout, BatchNorm. """ self.training = True def evaluate(self): """ Sets evaluation mode for the module. Training and testing behaviour differs for Dropout, BatchNorm. """ self.training = False def __repr__(self): """ Pretty printing. Should be overrided in every module if you want to have readable description. """ return "Module" """ Explanation: Module is an abstract class which defines fundamental methods necessary for a training a neural network. You do not need to change anything here, just read the comments. End of explanation """ class Sequential(Module): """ This class implements a container, which processes `input` data sequentially. `input` is processed by each module (layer) in self.modules consecutively. The resulting array is called `output`. """ def __init__ (self): super(Sequential, self).__init__() self.modules = [] def add(self, module): """ Adds a module to the container. 
""" self.modules.append(module) def updateOutput(self, input): """ Basic workflow of FORWARD PASS: y_0 = module[0].forward(input) y_1 = module[1].forward(y_0) ... output = module[n-1].forward(y_{n-2}) Just write a little loop. """ # Your code goes here. ################################################ # module = self.modules[0] # y_curr = module.forward(input) # for i in range(1, len(self.modules)): # y_curr = self.modules[i].forward(y_curr) # self.output = y_curr # return self.output # # self.modules[0].output = self.modules[0].forward(input) # for i in range(1, len(self.modules)): # self.modules[i].output = self.modules[i].forward(self.modules[i-1].output) # self.output = self.modules[-1].output self.y = [] self.y.append(self.modules[0].forward(input)) for i in range(1, len(self.modules)): self.y.append(self.modules[i].forward(self.y[-1])) self.output = self.y[-1] return self.output def backward(self, input, gradOutput): """ Workflow of BACKWARD PASS: g_{n-1} = module[n-1].backward(y_{n-2}, gradOutput) g_{n-2} = module[n-2].backward(y_{n-3}, g_{n-1}) ... g_1 = module[1].backward(y_0, g_2) gradInput = module[0].backward(input, g_1) !!! To ech module you need to provide the input, module saw while forward pass, it is used while computing gradients. Make sure that the input for `i-th` layer the output of `module[i]` (just the same input as in forward pass) and NOT `input` to this Sequential module. !!! """ # Your code goes here. ################################################ # self.modules[-1].gradInput = self.modules[-1].backward(self.modules[-2].output, gradOutput) # for i in range(len(self.modules) - 2, 0, -1): # self.modules[i].gradInput = self.modules[i].backward(self.modules[i-1].output, self.modules[i+1].gradInput) # i = 0 # self.modules[0].gradInput = self.modules[0].backward(input, self.modules[i+1].gradInput) # self.gradInput = self.modules[0].gradInput self.gradInput = self.modules[-1].backward(self.y[-2], gradOutput) for i in range(len(self.modules) - 2, 0, -1): self.gradInput = self.modules[i].backward(self.y[i-1], self.gradInput) self.gradInput = self.modules[0].backward(input, self.gradInput) return self.gradInput def zeroGradParameters(self): for module in self.modules: module.zeroGradParameters() def getParameters(self): """ Should gather all parameters in a list. """ return [x.getParameters() for x in self.modules] def getGradParameters(self): """ Should gather all gradients w.r.t parameters in a list. """ return [x.getGradParameters() for x in self.modules] def __repr__(self): string = "".join([str(x) + '\n' for x in self.modules]) return string def __getitem__(self,x): return self.modules.__getitem__(x) """ Explanation: Sequential container Define a forward and backward pass procedures. End of explanation """ class Linear(Module): """ A module which applies a linear transformation A common name is fully-connected layer, InnerProductLayer in caffe. The module should work with 2D input of shape (n_samples, n_feature). """ def __init__(self, n_in, n_out): super(Linear, self).__init__() # This is a nice initialization stdv = 1./np.sqrt(n_in) self.W = np.random.uniform(-stdv, stdv, size = (n_out, n_in)) self.b = np.random.uniform(-stdv, stdv, size = n_out) self.gradW = np.zeros_like(self.W) self.gradb = np.zeros_like(self.b) def updateOutput(self, input): # Your code goes here. 
################################################ # N = input.shape[0] # newx = input.reshape((N,-1)) self.output = input.dot(self.W.T) + self.b return self.output def updateGradInput(self, input, gradOutput): # Your code goes here. ################################################ # x, dout = input, gradOutput # N = x.shape[0] # D = np.prod(x.shape[1:]) # x2 = np.reshape(x, (N, D)) # dx2 = np.dot(dout, w.T) # N x D # dw = np.dot(x2.T, dout) # D x M # db = np.dot(dout.T, np.ones(N)) # M x 1 # dx = np.reshape(dx2, x.shape) # self.gradInput = dx, dw, db #FIXME ? # self.gradb = np.sum(gradOutput,axis = 0) self.gradInput = gradOutput.dot(self.W)#.reshape(*input.shape) # self.gradW = input.reshape((input.shape[0],-1)).T.dot(gradOutput) return self.gradInput def accGradParameters(self, input, gradOutput): # Your code goes here. ################################################ self.gradb = np.sum(gradOutput,axis = 0) self.gradW = gradOutput.T.dot(input) # self.gradW = input.reshape((input.shape[0],-1)).T.dot(gradOutput) # pass def zeroGradParameters(self): self.gradW.fill(0) self.gradb.fill(0) def getParameters(self): return [self.W, self.b] def getGradParameters(self): return [self.gradW, self.gradb] def __repr__(self): s = self.W.shape q = 'Linear %d -> %d' %(s[1],s[0]) return q input_dim = 3 output_dim = 2 x = np.random.randn(5, input_dim) w = np.random.randn(output_dim, input_dim) b = np.random.randn(output_dim) dout = np.random.randn(5, output_dim) linear = Linear(input_dim, output_dim) def update_W_matrix(new_W): linear.W = new_W return linear.forward(x) def update_bias(new_b): linear.b = new_b return linear.forward(x) dx = linear.backward(x, dout) dx_num = eval_numerical_gradient_array(lambda x: linear.forward(x), x, dout) dw_num = eval_numerical_gradient_array(update_W_matrix, w, dout) db_num = eval_numerical_gradient_array(update_bias, b, dout) print 'Testing Linear_backward function:' print 'dx error: ', rel_error(dx_num, dx) print 'dw error: ', rel_error(dw_num, linear.gradW) print 'db error: ', rel_error(db_num, linear.gradb) """ Explanation: Layers input: batch_size x n_feats1 output: batch_size x n_feats2 End of explanation """ class SoftMax(Module): def __init__(self): super(SoftMax, self).__init__() def updateOutput(self, input): # start with normalization for numerical stability self.output = np.subtract(input, input.max(axis=1, keepdims=True)) # Your code goes here. ################################################ self.output = np.exp(self.output) # out_sum = self.output.sum(axis=1, keepdims=True) self.output = np.divide(self.output, self.output.sum(axis=1, keepdims=True)) return self.output def updateGradInput(self, input, gradOutput): # Your code goes here. 
################################################ # N = self.output.shape[0] # self.gradInput = self.output.copy() # self.gradInput[np.arange(N).astype(np.int), gradOutput.astype(np.int)] -= 1 # self.gradInput /= N batch_size, n_feats = self.output.shape a = self.output.reshape(batch_size, n_feats, -1) b = self.output.reshape(batch_size, -1, n_feats) self.gradInput = np.multiply(gradOutput.reshape(batch_size, -1, n_feats), np.subtract(np.multiply(np.eye(n_feats), a), np.multiply(a, b))).sum(axis=2) return self.gradInput def __repr__(self): return "SoftMax" soft_max = SoftMax() x = np.random.randn(5, 3) dout = np.random.randn(5, 3) dx_numeric = eval_numerical_gradient_array(lambda x: soft_max.forward(x), x, dout) dx = soft_max.backward(x, dout) # The error should be around 1e-10 print 'Testing SoftMax grad:' print 'dx error: ', rel_error(dx_numeric, dx) """ Explanation: This one is probably the hardest but as others only takes 5 lines of code in total. - input: batch_size x n_feats - output: batch_size x n_feats End of explanation """ class Dropout(Module): def __init__(self, p=0.5): super(Dropout, self).__init__() self.p = p self.mask = None def updateOutput(self, input): # Your code goes here. ################################################ self.mask = np.random.binomial(1, self.p, input.shape) if self.training else np.ones(input.shape) self.output = input*self.mask return self.output def updateGradInput(self, input, gradOutput): # Your code goes here. ################################################ self.gradInput = gradOutput*self.mask return self.gradInput def __repr__(self): return "Dropout" """ Explanation: Implement dropout. The idea and implementation is really simple: just multimply the input by $Bernoulli(p)$ mask. This is a very cool regularizer. In fact, when you see your net is overfitting try to add more dropout. While training (self.training == True) it should sample a mask on each iteration (for every batch). When testing this module should implement identity transform i.e. self.output = input. input: batch_size x n_feats output: batch_size x n_feats End of explanation """ class ReLU(Module): def __init__(self): super(ReLU, self).__init__() def updateOutput(self, input): self.output = np.maximum(input, 0) return self.output def updateGradInput(self, input, gradOutput): self.gradInput = np.multiply(gradOutput , input > 0) return self.gradInput def __repr__(self): return "ReLU" """ Explanation: Activation functions Here's the complete example for the Rectified Linear Unit non-linearity (aka ReLU): End of explanation """ class LeakyReLU(Module): def __init__(self, slope = 0.03): super(LeakyReLU, self).__init__() self.slope = slope def updateOutput(self, input): # Your code goes here. ################################################ # self.output = np.maximum(input, input*self.slope) self.output = input.copy() self.output[self.output < 0] *= self.slope return self.output def updateGradInput(self, input, gradOutput): # Your code goes here. ################################################ # self.gradInput = np.multiply(gradOutput, input > 0) #FIXME self.gradInput = gradOutput.copy() self.gradInput[input < 0] *= self.slope return self.gradInput def __repr__(self): return "LeakyReLU" """ Explanation: Implement Leaky Rectified Linear Unit. Expriment with slope. 
End of explanation """ class Criterion(object): def __init__ (self): self.output = None self.gradInput = None def forward(self, input, target): """ Given an input and a target, compute the loss function associated to the criterion and return the result. For consistency this function should not be overrided, all the code goes in `updateOutput`. """ return self.updateOutput(input, target) def backward(self, input, target): """ Given an input and a target, compute the gradients of the loss function associated to the criterion and return the result. For consistency this function should not be overrided, all the code goes in `updateGradInput`. """ return self.updateGradInput(input, target) def updateOutput(self, input, target): """ Function to override. """ return self.output def updateGradInput(self, input, target): """ Function to override. """ return self.gradInput def __repr__(self): """ Pretty printing. Should be overrided in every module if you want to have readable description. """ return "Criterion" """ Explanation: Criterions Criterions are used to score the models answers. End of explanation """ class MSECriterion(Criterion): def __init__(self): super(MSECriterion, self).__init__() def updateOutput(self, input, target): self.output = np.sum(np.power(input - target,2)) / input.shape[0] return self.output def updateGradInput(self, input, target): self.gradInput = (input - target) * 2 / input.shape[0] return self.gradInput def __repr__(self): return "MSECriterion" """ Explanation: The MSECriterion, which is basic L2 norm usually used for regression, is implemented here for you. End of explanation """ class ClassNLLCriterion(Criterion): def __init__(self): a = super(ClassNLLCriterion, self) super(ClassNLLCriterion, self).__init__() def updateOutput(self, input, target): # Use this trick to avoid numerical errors eps = 1e-15 input_clamp = np.clip(input, eps, 1 - eps) # Your code goes here. ################################################ # N = input_clamp.shape[0] # self.output = -np.sum(np.log(input_clamp[np.arange(N).astype(np.int), target.astype(np.int)]+1e-8)) / N self.output = -np.sum(np.multiply(target, np.log(input_clamp))) / len(input) return self.output def updateGradInput(self, input, target): # Use this trick to avoid numerical errors input_clamp = np.maximum(1e-15, np.minimum(input, 1 - 1e-15) ) # Your code goes here. ################################################ self.gradInput = np.subtract(input_clamp, target) / len(input) return self.gradInput def __repr__(self): return "ClassNLLCriterion" """ Explanation: You task is to implement the ClassNLLCriterion. It should implement multiclass log loss. Nevertheless there is a sum over y (target) in that formula, remember that targets are one-hot encoded. This fact simplifies the computations a lot. Note, that criterions are the only places, where you divide by batch size. End of explanation """
bbglab/adventofcode
2015/ferran/day12.ipynb
mit
with open('inputs/input12.txt') as f_input: s = next(f_input).rstrip() import re def sum_numbers(s): p = re.compile('[-]?[\d]+') numbers = list(map(int, p.findall(s))) return sum(numbers) sum_numbers(s) """ Explanation: Day 12: JSAbacusFramework.io Day 12.1 End of explanation """ def transform_reds(s): q = re.compile('\"[\w]+\"\:\"red\"') return q.sub('R', s) """ Explanation: Day 12.2 A function to transform terms of the form "[key]":"red" into a single character 'R'. End of explanation """ def regions_to_erase(s): regions = [] curr_depth = 0 last_sink = {} red = None for i, c in enumerate(s): if c == '{': curr_depth += 1 if red is None: last_sink[curr_depth] = i elif c == 'R': ignore = True if red is None: red = curr_depth elif c == '}': if red is not None: if curr_depth == red: regions.append([last_sink[curr_depth], i]) red = None curr_depth -= 1 return regions """ Explanation: Track the regions to ignore: when an 'R' is found at depth d we keep this information; we ignore the span between the last $[d-1,d]$ transition (sink down) and the next $[d,d-1]$ transition (float up). Those regions will be erased. End of explanation """ def nest_regions(regions): nested = [] for i, bounds in enumerate(regions): include = True for a in regions[i + 1:]: if a[0] < bounds[0]: include = include & False if include: nested.append(bounds) return nested """ Explanation: Regions to erase may come out nested. If one region to erase is included inside another, we will ignore the smaller one. End of explanation """ def pruned_sum(s): t = transform_reds(s) nested_regions = nest_regions(regions_to_erase(t)) last_bound = 0 pruned = '' for i, bounds in enumerate(nested_regions): pruned += t[last_bound: bounds[0]] last_bound = bounds[1] + 1 pruned += t[last_bound:] return sum_numbers(pruned) """ Explanation: Gather all the functions into a main pruned_sum() End of explanation """ def test(): assert(pruned_sum('[1,2,3]') == 6) assert(pruned_sum('[1,{"c":"red","b":2},3]') == 4) assert(pruned_sum('{"d":"red","e":[1,2,3,4],"f":5}') == 0) assert(pruned_sum('[1,{"c":"red","b":2},3]') == 4) assert(pruned_sum('[1,"red",5]') == 6) test() """ Explanation: Test End of explanation """ pruned_sum(s) """ Explanation: Solution End of explanation """
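import json

def json_sum(node, ignore_red=False):
    """Recursively sum every number; optionally skip any object with a "red" value."""
    if isinstance(node, int):
        return node
    if isinstance(node, list):
        return sum(json_sum(item, ignore_red) for item in node)
    if isinstance(node, dict):
        if ignore_red and 'red' in node.values():
            return 0
        return sum(json_sum(value, ignore_red) for value in node.values())
    # strings (and anything else) contribute nothing
    return 0

document = json.loads(s)
json_sum(document), json_sum(document, ignore_red=True)
"""
Explanation: Cross-check
Since the puzzle input is valid JSON, an alternative sketch is to parse it with the json module and walk the resulting structure recursively, skipping any object that has "red" among its values. The two numbers should match the answers of parts one and two above; this is only a verification aid, not part of the regex/scanning solution.
End of explanation
"""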
rayjustinhuang/DataAnalysisandMachineLearning
Logistic Regression.ipynb
mit
# Import necessary libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.linear_model import LogisticRegression from sklearn.cross_validation import train_test_split, cross_val_score from sklearn import metrics """ Explanation: Predicting Grad School Admission with Logistic Regression A simple implementation of logistic regression for machine learning in Python to create a model that predicts the admission of candidates into graduate schools. Uses data from UCLA. Credit to the examples from yhat at http://blog.yhat.com/posts/logistic-regression-and-python.html and dataschool at http://nbviewer.jupyter.org/gist/justmarkham/6d5c061ca5aee67c4316471f8c2ae976 for being starting points and good guides (though the examples are not strictly followed). End of explanation """ # As we import the data, we rename the "Rank" column to "Prestige" to avoid confusion with the rank method of pandas df = pd.read_csv("binary.csv", header = 0, names = ["Admit", 'GRE', 'GPA', 'Prestige']) df.head() """ Explanation: The Dataset The data from UCLA (found at http://www.ats.ucla.edu/stat/data/binary.csv and originally used in this example: http://www.ats.ucla.edu/stat/r/dae/logit.htm) contains 4 columns: * admit - a binary variable describing if the student was admitted into grad school or not * gre - the student's Graduate Record Examination (GRE) score * gpa - the student's grade point average (GPA) * rank - the prestige of the student's undergraduate school, ranked from 1 to 4 The columns will be renamed to "Admit," "GRE," "GPA," and "Prestige" as we import the data to make them more human-friendly. Note that "rank" is renamed to "Prestige" to avoid confusion with a method of pandas. End of explanation """ # Basic summary of the data df.describe() # Generate a cross-tabulation (frequency table by default) of the factors; here we use prestige pd.crosstab(df['Admit'], df['Prestige'], rownames=['Admission']) """ Explanation: Initial Exploratory Data Analysis We take a look at basic summary statistics, a cross-tabulation, and a histogram to get a general idea of the contents of the data. End of explanation """ # Generate histograms sns.set_color_codes('muted') df.hist(color='g') plt.show() """ Explanation: Based on the cruss-tabulation above, it appears that prestige is a significant factor in admission, with those in schools of rank 1 having more admits than not, and those from schools of rank 4 being largely rejected. End of explanation """ # Dummy code the rank variable dummy_ranks = pd.get_dummies(df['Prestige'], prefix="Prestige") dummy_ranks.head() """ Explanation: Preprocessing the Data While the data is already very analysis-friendly, we still have to change the categorial variable (prestige) into a binary one to be able to create a logistic regression model. End of explanation """ columns1 = ['Admit', 'GRE', 'GPA'] data1 = df[columns1] columns2 = ['Prestige_1','Prestige_2','Prestige_3'] data2 = dummy_ranks[columns2] data = pd.merge(data1, data2, how="outer", left_index=True, right_index=True) data """ Explanation: Given that prestige is a categorical value, we perform dummy coding to convert the values into binary variables. 
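One detail worth noting: with an intercept in the model, keeping all four dummy columns would make them perfectly collinear (they always sum to one), so one level should be dropped and treated as the baseline. That is why only Prestige_1 through Prestige_3 are kept in the next cell, leaving rank 4 as the reference category. An equivalent way to express the same selection (assuming the column names produced by get_dummies) is:

```python
# drop the reference level explicitly; equivalent to selecting Prestige_1..3 below
dummy_ranks.drop('Prestige_4', axis=1).head()
```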
End of explanation """ # Separate independent and dependent variables X = data.ix[:,1:] y = data['Admit'] # Create a logistic regression model initial = LogisticRegression(C = 1000, random_state=0) initial.fit(X,y) # Check model accuracy print("Accuracy Score:", initial.score(X,y)) # What percentage of students actually got into grad school print("Actual probability of admission:", y.mean()) """ Explanation: Logistic Regression We will use logistic regression to predict the probability that a particular student gets into grad school. End of explanation """ # View coefficients column_names = list(X.columns) coefficients = np.transpose(initial.coef_) intercept = initial.intercept_ Coeffs = pd.DataFrame(coefficients, column_names, columns=['Coefficients']) Coeffs.append(pd.DataFrame(intercept,['Intercept'], columns=['Coefficients'])) """ Explanation: If you were guessing "no," you would be right around 68.25% of the time. Our model is more accurate than just guessing "no" by around 2.5%. Our model is significantly better than random guessing. To be more precise, it is about 20.75% better than just guessing 50/50. End of explanation """ # Split data into training and test sets, using 30% of the data as the test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0) # Fit the logistic regression with lambda = 10^-3 lr = LogisticRegression(C = 1000, random_state=0) lr.fit(X_train, y_train) # View predictions predicted = lr.predict(X_test) print(predicted) # View class probabilities probabilities = lr.predict_proba(X_test) print(probabilities) """ Explanation: The coefficients above are telling of the value of the data in the dataset. Every additional point in a candidate's GRE score improves their chance of admission by 0.002; every unit increase in GPA increases a candidate's chance by 0.803. The prestige coefficients are interpreted as showing that being from a school of rank 1 increases your chance of going to grad school by 1.509 versus a student from a rank 4 school. Differences in chances can be determined by subtracting the prestige 1 coefficient from the prestige coefficient of another rank, e.g., being from a school of rank 1 increases your chance of admission by around 0.6662 (calculated from 1.508653-0.842366) versus a student from a rank 2 school. It is important to note that the information mentioned regarding the log odds is contextual to our model. Modeling Using a Training and a Test Set In the real world, we will likely need to create a machine learning model that can take any set of predictor variables and spit out a probability of admission, which means we won't have the privilege of creating a logit model based on an entirely known set of data. We will now create a logistic regression model based on one training set and one test set, with 70% of the data going into the training set and 30% going into the test set, in order to be able to construct a model and test its accuracy on data that was not used to create it. End of explanation """ # Check accuracy print("Accuracy Score:", metrics.accuracy_score(y_test, predicted)) """ Explanation: Model Evaluation We now evaluate our logistic regression using some common metrics for assessing model quality. 
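Because only about a third of the students in the data were admitted, it can also be worth asking the split to preserve that class balance in both subsets. Newer versions of scikit-learn's train_test_split accept a stratify argument for exactly this; the line below is only a hedged variation on the split used next, not something this notebook relies on.

```python
# keep the admit/non-admit ratio roughly equal in the training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
```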
End of explanation """ # Print confusion matrix and classification report print("Confusion Matrix:\n",metrics.confusion_matrix(y_test, predicted)) print("\nClassification Report:\n",metrics.classification_report(y_test,predicted)) """ Explanation: The accuracy score here is slightly (around 0.083%) better than the optimized logistic regression without the training/test split. Using a well-chosen (completely random) subset of the data, we were able to create a model whose accuracy actually exceeded that of the model created using all of the data. Performance Visualization We use a confusion matrix, classification report, and ROC curve to get a better view of the performance of our model. End of explanation """ fpr, tpr, thresholds = metrics.roc_curve(y_test, probabilities[:,1]) results = pd.DataFrame({'False Positive Rate': fpr, 'True Positive Rate': tpr}) plt.plot(fpr,tpr, color='g', label="Model") plt.plot([0, 1], [0, 1], color='gray', linestyle='--', label="Baseline (Random Guessing)") plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC Curve') plt.legend() plt.show() print("Area Under the Curve:", metrics.roc_auc_score(y_test, probabilities[:,1])) """ Explanation: The confusion matrix shows that out of 82 non-admits, our model got 77 of those right, while 5 of those were false positives. This very good hit rate for 0's is reflected in the high recall of 0.94 for 0's in the classification report. However, the performance of the model is not as good at predicting admits, with only 8 out of 38 admissions correctly being predicted by the model. Again, this is reflected in the low recall 0.21 for 1's. Looking at precision, 72% of 0's are indeed 0's, and 62% of identified 1's are actual 1's. In total, 85 out of 120 results were correctly predicted by the model. Plotting the ROC Curve In order to more ably visualize the effectiveness of our model and support our existing analysis, we use an ROC curve. While scikit-learn already selects a certain balance (0.5 by default for binary classifiers; can be adjusted via the class_weight argument in LogisticRegression) of performance metrics (precision, recall, etc.), it is still good to get a view of the performance tradeoffs inherent in our model, as well as to gain insight for potential model tuning in the future. End of explanation """ fivefold = cross_val_score(lr, X, y, scoring='accuracy', cv=5) print("Score per fold:", fivefold) print("Mean score:", fivefold.mean()) print("Standard deviation:", fivefold.std()) """ Explanation: As the plot above shows, while our logistic regression model is not really that good -- the area under the curve is calculated to be 0.6784 -- in accordance with the results earlier, it still does better than random guessing. Also note that, in alignment with the 0.5 threshhold used by scikit-learn by default, our true positive rate (recall) of 0.71 matches up with the true positive rate in the graph when the false positive rate is 0.5. Checking Model Prediction Performance We assess the quality our modeling above -- specifically, how effectively it will likely hold up when exponsed to unseen data -- by using cross-validation. 
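Concretely, five-fold cross-validation splits the data into five parts, fits the model on four of them and scores it on the held-out fifth, rotating until every part has served as the test fold; the mean and spread of the five scores are what we report below. The same helper can score other metrics as well, for example a threshold-independent one (an optional variation, not used in what follows):

```python
# area under the ROC curve instead of plain accuracy
cross_val_score(lr, X, y, scoring='roc_auc', cv=5).mean()
```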
End of explanation """ from sklearn import preprocessing # Isolate columns to scale toscale = X[['GRE','GPA']].astype(float) scaledX = preprocessing.scale(toscale) scaleddata = pd.DataFrame(scaledX, columns=['GRE','GPA']) # Join scaled data with categorical rank columns scaledX = scaleddata.join(data2) scaledX.head() improve1 = cross_val_score(lr, scaledX, y, scoring='accuracy', cv=5) print("Score per fold:", improve1) print("Mean score:", improve1.mean()) print("Standard deviation:", improve1.std()) """ Explanation: Using five-fold cross-validation on our current model results in a similar accuracy score as the one previously derived, which shows that the model we arrived at earlier is not biased toward the training set and will likely generalize well to new data. Improving the Model We will attempt to improve our model by using a variety of techniques, including feature scaling, class weighting, and tuning our hyperparameter C. Our current model will be treated as a baseline. Feature Scaling One aspect of our data is that GRE and GPA scores vary significantly in magnitude (GRE varies from 220 to 800 while GPA varies from 2.26 to 4.0, though both appear to be shaped like normal distributions). Scaling these features may improve the accuracy of our machine learning model. End of explanation """ lrweighted = LogisticRegression(C = 1000, random_state=0, class_weight={0:0.505,1:0.495}) improve2 = cross_val_score(lrweighted, scaledX, y, scoring='accuracy', cv=5) print("Score per fold:", improve2) print("Mean score:", improve2.mean()) print("Standard deviation:", improve2.std()) """ Explanation: The accuracy of our model does not change, but the standard deviation improves a little. This means that our improved model should provide slightly better, or at the very least, more consistent performance. Correcting for Class Imbalance Based on our confusion matrix, the model appeared to be quick to assign values of "1" to actual 0's. By modifying the weighting of the classes ever so slightly, from the default weight of 1 each to adding slightly more weight to 0's, false positives should be penalized more, and we give the model a little more breathing room to make mistakes in favor of providing 1's (while the data most certainly shows that it's more likely to get a 0 than a 1, our model still appears to predict too much in favor of 0). End of explanation """ tens = [10**i for i in range(-5,6)] for i in tens: if i == 1000: continue testlr = LogisticRegression(C = i, random_state=0, class_weight={0:0.505,1:0.495}) testcrossval = cross_val_score(testlr, scaledX, y, scoring='accuracy', cv=5) print('For C = {}:'.format(i)) print(' Score per fold:', testcrossval) print(' Mean score:', testcrossval.mean()) print(' Standard deviation:', testcrossval.std()) """ Explanation: Our mean score shows a slight improvement. Our standard deviation is slightly higher than that of the previous, feature-scaled model, but it is still lower than our original model. Hyperparameter Tuning We will check if results change based on the $\lambda$ parameter used in regularization. Note that in scikit-learn's logistic regression, $C = \frac{1}{\lambda}$. 
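The loop in the next cell sweeps the regularization strength by hand. The same sweep can be expressed with scikit-learn's grid-search utility; the sketch below is optional, and its import path depends on the library version (sklearn.grid_search in releases contemporary with sklearn.cross_validation, sklearn.model_selection in newer ones).

```python
from sklearn.grid_search import GridSearchCV

grid = GridSearchCV(LogisticRegression(random_state=0, class_weight={0: 0.505, 1: 0.495}),
                    param_grid={'C': [10**i for i in range(-5, 6)]},
                    scoring='accuracy', cv=5)
grid.fit(scaledX, y)
print("Best C:", grid.best_params_['C'], "with accuracy:", grid.best_score_)
```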
End of explanation """ # Create new train and test sets and fit our revised model to it X_train2, X_test2, y_train2, y_test2 = train_test_split(scaledX, y, test_size = 0.3, random_state = 0) newlr = LogisticRegression(C = 1000, random_state=0, class_weight={0:0.505,1:0.495}) newlr.fit(X_train2, y_train2) # Check for metrics on the new predicted probabilities newpredictions = newlr.predict(X_test2) newprobabilities = newlr.predict_proba(X_test2) print("Accuracy Score:", newlr.score(X_test2, y_test2),"\n") print("Confusion Matrix:\n",metrics.confusion_matrix(y_test2, newpredictions)) print("\nClassification Report:\n",metrics.classification_report(y_test2, newpredictions)) """ Explanation: Given that $C = \frac{1}{\lambda}$, it makes sense that, after a certain value of $C$ (in this case, 100), the model no longer improves because the penalty to the logistic regression objective function is minimal. As such, we will not need to change our current $C$ value. Testing the Revised Model We now check our new model using a training and test set. End of explanation """ # Plot a new ROC curve for the revised model fpr2, tpr2, thresholds2 = metrics.roc_curve(y_test2, newprobabilities[:,1]) results2 = pd.DataFrame({'False Positive Rate': fpr2, 'True Positive Rate': tpr2}) plt.plot(fpr,tpr,color='darkgray', label="Original Model") plt.plot(fpr2,tpr2, color='g', label="Revised Model") plt.plot([0, 1], [0, 1], color='gray', linestyle='--', label="Baseline (Random Guessing)") plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC Curve for Revised Model') plt.legend() plt.show() print("Area Under the Curve:", metrics.roc_auc_score(y_test2, newprobabilities[:,1])) """ Explanation: In alignment with our expectations based on our model tuning, all metrics have shown an improvement over our original model. In comparison with our original model, the predictions for non-admits are the same, and we now have two more correctly classified admits than in the previous model, which is obviously an improvement. End of explanation """
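# Hypothetical applicant (values invented for illustration): GRE 700, GPA 3.7,
# undergraduate school of rank 2. The raw scores are standardized with the same
# mean/std (ddof=0, as preprocessing.scale uses) as the training features.
gre_scaled = (700 - toscale['GRE'].mean()) / toscale['GRE'].std(ddof=0)
gpa_scaled = (3.7 - toscale['GPA'].mean()) / toscale['GPA'].std(ddof=0)
applicant = np.array([[gre_scaled, gpa_scaled, 0, 1, 0]])  # Prestige_1, Prestige_2, Prestige_3

print("Predicted probability of admission:", newlr.predict_proba(applicant)[0][1])
"""
Explanation: As a closing illustration, the revised model can be asked for the admission probability of a single made-up applicant. The GRE and GPA values here are invented for the example, and the feature order must match the columns of scaledX (scaled GRE, scaled GPA, then the three prestige dummies).
End of explanation
"""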
jgoppert/iekf_analysis
Temperature Calibration.ipynb
bsd-3-clause
import sympy
sympy.init_printing()

Theta = sympy.Matrix(sympy.symbols('theta_0:3_0:4')).reshape(3, 4)

def Y(n):
    return sympy.Matrix(sympy.symbols('G_x:z_0:{:d}'.format(n+1))).T.reshape(3, n+1)

def C(n):
    return sympy.ones(n+1, 1)

def T(n):
    return sympy.Matrix(sympy.symbols('T_0:{:d}'.format(n+1)))

def T2(n):
    return T(n).multiply_elementwise(T(n))

def T3(n):
    return T2(n).multiply_elementwise(T(n))

def X(n):
    return C(n).row_join(T(n)).row_join(T2(n)).row_join(T3(n)).T
"""
Explanation: Recursive Algorithm
End of explanation
"""
X(0)*X(0).T
X(1)*X(1).T
dX = X(1)*X(1).T - X(0)*X(0).T
dX
Y(0)*X(0).T
Y(1)*X(1).T
dYXT = Y(1)*X(1).T - Y(0)*X(0).T
dYXT
"""
Explanation: This lets us derive a recursive form.
$Y = \Theta X$
$Y X^T = \Theta X X^T$
We accumulate $Y X^T$ and $X X^T$ since they are of fixed size and there is a recursion relation, as shown below. At the end we perform the inverse. This is the same as a pseudo-inverse solution but done recursively.
$\Theta = Y X^T (X X^T)^{-1}$
End of explanation
"""
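# Purely numeric illustration of the recursion with synthetic data: the true
# coefficients, temperatures and noise level below are all made up.
import numpy as np

np.random.seed(0)
theta_true = np.random.randn(3, 4)
temperatures = np.linspace(-1.0, 1.0, 200)

YXT = np.zeros((3, 4))
XXT = np.zeros((4, 4))
for Tk in temperatures:
    xk = np.array([[1.0], [Tk], [Tk**2], [Tk**3]])       # 4x1 regressor
    yk = theta_true.dot(xk) + 0.01 * np.random.randn(3, 1)
    YXT += yk.dot(xk.T)                                   # running sum of Y X^T
    XXT += xk.dot(xk.T)                                   # running sum of X X^T

theta_hat = YXT.dot(np.linalg.inv(XXT))                   # invert once, at the end
print('max abs coefficient error: {}'.format(np.abs(theta_hat - theta_true).max()))
"""
Explanation: A numeric sketch of the same idea on made-up data: only the fixed-size accumulators $Y X^T$ (3x4) and $X X^T$ (4x4) are updated per sample, and the single inverse is taken at the end, recovering the synthetic coefficients up to the injected noise.
End of explanation
"""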
jeffcarter-github/MachineLearningLibrary
MachineLearningLibrary/NeuralNetworks/CNN_MNIST_Keras_Tensorflow.ipynb
mit
from __future__ import print_function import matplotlib.pyplot as plt %matplotlib notebook """ Explanation: This notebook walks through training a CNN Model on the MNIST data using Keras and Tensorflow... Load Data and Reshape Build Model Train / Test Build interactive OpenCV GUI for playing import ploting library... End of explanation """ from keras.datasets import mnist # load data... (X_train, y_train), (X_test, y_test) = mnist.load_data() # check dimensions... print('Train: ', X_train.shape, y_train.shape) print('Test: ', X_test.shape, y_test.shape) """ Explanation: Import the MNIST dataset using the keras api End of explanation """ # select a number [0, 60000)... idx = 1000 # plot image... plt.figure() plt.title('Number: %s'%y_train[idx]) plt.imshow(X_train[idx], cmap='gray') """ Explanation: Looks like we have 60k images of 28, 28 pixels. These images are single-channel, i.e. black and white... If these were color images, then we would see dimensions of (60000, 28, 28, 3)... 3 channels for Red-Green-Blue (RGB) or Blue-Green-Red (BGR), depending on the order of the color channels... Show an image and check data... End of explanation """ X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], X_train.shape[2], 1).astype('float32') / 255. X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], X_test.shape[2], 1).astype('float32') / 255. print(X_train.shape) """ Explanation: Image Processing... Reshape (28, 28) to (28, 28, 1)... and Normalized Image Data... from uint8 to float32 over the range [0,1] End of explanation """ # import to_categorial function that does the one-hot encoding... from keras.utils import to_categorical # encode both training and testing data... y_train = to_categorical(y_train, 10) y_test = to_categorical(y_test, 10) y_train[0] """ Explanation: now we have explicity created a one-channel dataset... and normalized it between [0, 1]... alternatively, you might normalize it more correctly as Gaussian distributed about zero with a variance of one... this would help with training but for this example, as you'll see, it doesn't really matter... Encode numbers from 0-9 into 10-dimensional vectors... this is called one-hot encoding... i.e. 0 -> [1, 0, 0, ..., 0] and 1 -> [0, 1, 0, 0, ..., 0], etc. End of explanation """ from keras.models import Sequential from keras.layers import Conv2D, MaxPool2D, Dense, Dropout, Flatten img_shape = X_train[0].shape print(img_shape) model = Sequential() # Convolutional Section... model.add(Conv2D(32, (3, 3), activation='relu', input_shape=img_shape)) model.add(Conv2D(32, (3, 3), activation='relu')) model.add(MaxPool2D((2, 2))) model.add(Dropout(rate=0.25)) model.add(Conv2D(64, (3, 3), activation='relu', input_shape=img_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPool2D((2, 2))) model.add(Dropout(rate=0.25)) # Fully Connected Section... model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(rate=0.25)) model.add(Dense(10, activation='softmax')) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.summary() """ Explanation: Build CNN Model... End of explanation """ n_epochs = 2 model.fit(X_train, y_train, batch_size=32, epochs=n_epochs, verbose=True) loss, accuracy = model.evaluate(X_test, y_test, batch_size=32) print('Test Accuracy: ', accuracy) # save model for retrieval at later date... model.save('./MNIST_CNN') """ Explanation: Train model... 
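The next cell fits the network for two epochs over mini-batches of 32 images and then scores it on the held-out test set. If you want to watch generalization while training, Keras's fit can also carve a validation set out of the training data; the call below is an optional variation, not what the next cell uses.

```python
# hold back 10% of the training images to report validation loss/accuracy per epoch
model.fit(X_train, y_train, batch_size=32, epochs=2, validation_split=0.1, verbose=True)
```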
End of explanation """ import cv2 import numpy as np """ Explanation: Build interactive notepad... End of explanation """ def record_location(event, x, y, flags, param): '''callback function that draws a circle at the point x, y...''' if flags == cv2.EVENT_FLAG_LBUTTON and event == cv2.EVENT_MOUSEMOVE: cv2.circle(img, (x,y), 10, (255, 255, 255), -1) img = np.zeros((256, 256, 3), np.uint8) cv2.namedWindow('image') cv2.setMouseCallback('image', record_location) while(1): cv2.imshow('image',img) k = cv2.waitKey(1) & 0xFF if k == ord('m'): mode = not mode elif k == 27: break cv2.destroyAllWindows() # copy one color channel and normalize values... _img = img[:,:,0] / 255.0 # resize image to (28, 28) _img = cv2.resize(_img, (28, 28), interpolation=cv2.INTER_AREA).reshape(1, 28, 28, 1) p = model.predict(_img) print(p) plt.figure() plt.title('Guess: %s' %p.argmax()) plt.imshow(_img[0][:,:,0], cmap='gray') """ Explanation: The code below will create an OpenCV popup window... the window can be closed using the 'esc' key... and we can draw in the window by holding the left-mouse button and moving the mouse within the window... End of explanation """
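# The model saved above can be restored later without rebuilding the architecture.
# This assumes the save cell has already been run in the same working directory.
from keras.models import load_model

restored = load_model('./MNIST_CNN')
loss, accuracy = restored.evaluate(X_test, y_test, batch_size=32)
print('Restored model accuracy: ', accuracy)
"""
Explanation: A short follow-up sketch: reload the model saved earlier with model.save('./MNIST_CNN') and confirm it reproduces the same test accuracy, so the trained network can be reused by the interactive notepad without retraining.
End of explanation
"""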
ioggstream/python-course
connexion-101/notebooks/05-reusing-and-bundling.ipynb
agpl-3.0
# Exercise: creating a bundle from a $ref file
#
# You can resolve dependencies and create a bundle file with
!pip install openapi_resolver

# Exercise: create a bundle from the previous file with
!python -m openapi_resolver /code/notebooks/oas3/ex-05-01-bundle.yaml
"""
Explanation: Reusing and bundling
Our strategy to standardize default responses, schemas and other components between different API providers is to provide those definitions in a shared and versioned file, like
https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml
API providers can then:
- $reference reusable components
- eventually create a single bundled file containing all the resolved references (e.g. with openapi_resolver)
With a set of common components, API designers can create better interfaces and ask themselves questions like:
- am I considering enough error responses?
- can I reuse already existing schemas?
- should I implement a new schema for this object?
Reusable components in OAS3
Supported reusable components can be:
- schemas: data types and objects
- parameters: request parameters, which may be defined in headers, query and path
- responses: http responses
- securitySchemes: security requirements to be applied to a given path
NOTE: in this course we won't go in depth on all the possibilities of OAS3, which you can see on the OAS website.
Exercise: replacing definitions with $refs
Open the complete file ex-05-01-bundle.yaml and replace as many definitions as possible with references from the shared definitions.yaml.
End of explanation
"""
# Check yaml file content.
import yaml
from pathlib import Path

content = Path('anchors.yaml').read_text()
print(content)

ret = yaml.safe_load(content)
assert ret['anchored_content'], ret['other_anchor']
print(ret['foo'])
print(ret['bar'])
"""
Explanation: YAML anchors are your friends
YAML has a very nice feature named anchors. They allow you to define and reference given portions of a YAML file.
```
# the following &anchor stores the foo value
a: &this_is_an_anchor foo
# *star dereferences the anchor
b: *this_is_an_anchor
```
See anchors.yaml
End of explanation
"""
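# Self-contained illustration (no external file needed): an inline anchor and
# its alias resolve to the same parsed mapping. The keys and values are made up.
import yaml

doc = (
    "defaults: &common\n"
    "  retries: 3\n"
    "  timeout: 5\n"
    "other: *common\n"
)
parsed = yaml.safe_load(doc)
assert parsed['defaults'] == parsed['other']
parsed
"""
Explanation: If you want to play with anchors without opening anchors.yaml, the snippet above builds a tiny YAML document inline and checks that the alias (*common) yields exactly the same mapping as the anchored node (&common). The same mechanism is what lets you factor out repeated fragments before promoting them to shared, $ref-erenced components.
End of explanation
"""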