Finally, the multiples of 7, starting at 49 (the first multiple of 7 greater than 7 that's left!) | primes[49::7] = [None] * len(primes[49::7]) # The right side is a list of Nones, of the necessary length.
print(primes) # What have we done? | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
What's left? A lot of Nones and the prime numbers up to 100. We have successfully sieved out all the nonprime numbers in the list, using just four sieving steps (and setting 0 and 1 to None manually).
But there's a lot of room for improvement, from beginning to end!
The format of the end result is not so nice.
We had to sieve each step manually. It would be much better to have a function prime_list(n) which would output a list of primes up to n without so much supervision.
The memory usage will be large, if we need to store all the numbers up to a large n at the beginning.
We solve these problems in the following way.
We will use a list of booleans rather than a list of numbers. The ending list will have a True value at prime indices and a False value at composite indices. This reduces the memory usage and increases the speed.
A where function (explained soon) will make the desired list of primes after everything else is done.
We will proceed through the sieving steps algorithmically rather than entering each step manually.
Here is a somewhat efficient implementation of the Sieve in Python. | from math import sqrt  # Needed for the sieving bound below.

def isprime_list(n):
    '''
    Return a list of length n+1
    with Trues at prime indices and Falses at composite indices.
    '''
    flags = [True] * (n+1)  # A list [True, True, True,...] to start.
    flags[0] = False  # Zero is not prime.  So its flag is set to False.
    flags[1] = False  # One is not prime.  So its flag is set to False.
    p = 2  # The first prime is 2.  And we start sieving by multiples of 2.
    while p <= sqrt(n):  # We only need to sieve by p if p <= sqrt(n).
        if flags[p]:  # We sieve the multiples of p if flags[p]=True.
            flags[p*p::p] = [False] * len(flags[p*p::p])  # Sieves out multiples of p, starting at p*p.
        p = p + 1  # Try the next value of p.
    return flags
print(isprime_list(100)) | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
If you look carefully at the list of booleans, you will notice a True value at the 2nd index, the 3rd index, the 5th index, the 7th index, etc. The indices where the values are True are precisely the prime indices. Since booleans can be stored in less memory than any other data type (as little as one bit per boolean), your computer can carry out the isprime_list(n) function even when n is very large.
To be more precise, there are 8 bits in a byte. There are 1024 bytes (about 1000) in a kilobyte. There are 1024 kilobytes in a megabyte. There are 1024 megabytes in a gigabyte. Therefore, a gigabyte of memory is enough to store about 8 billion bits. That's enough to store the result of isprime_list(n) when n is about 8 billion. Not bad! And your computer probably has 4 or 8 or 12 or 16 gigabytes of memory to use.
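A quick check of that arithmetic (this cell is just a sanity check, not part of the sieve):

```python
bits_per_gigabyte = 8 * 1024**3   # 8 bits per byte, 1024**3 bytes per gigabyte.
print(bits_per_gigabyte)          # 8589934592 -- about 8.6 billion bits.
```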
To transform the list of booleans into a list of prime numbers, we create a function called where. This function uses another Python technique called list comprehension. We discuss this technique later in this lesson, so just use the where function as a tool for now, or read about list comprehension if you're curious. | def where(L):
    '''
    Take a list of booleans as input and
    output the list of indices where True occurs.
    '''
    return [n for n in range(len(L)) if L[n]]
| P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Combined with the isprime_list function, we can produce long lists of primes. | print(where(isprime_list(100))) | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Let's push it a bit further. How many primes are there between 1 and 1 million? We can figure this out in three steps:
Create the isprime_list.
Use where to get the list of primes.
Find the length of the list of primes.
But it's better to do it in two steps.
Create the isprime_list.
Sum the list! (Note that True is 1, for the purpose of summation!) | sum(isprime_list(1000000)) # The number of primes up to a million!
%timeit isprime_list(10**6) # 1000 ms = 1 second.
%timeit sum(isprime_list(10**6)) | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
This isn't too bad! It takes a fraction of a second to identify the primes up to a million, and a smaller fraction of a second to count them! But we can do a little better.
The first improvement is to take care of the even numbers first. If we count carefully, then the sequence 4,6,8,...,n (ending at n-1 if n is odd) has the floor of (n-2)/2 terms. Thus the line flags[4::2] = [False] * ((n-2)//2) will set all the flags to False in the sequence 4,6,8,10,... From there, we can begin sieving by odd primes starting with 3.
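As a quick reassurance that the count is right, we can compare the slice length that Python computes against the formula (this check is not part of the sieve itself):

```python
for n in [10, 11, 100, 101]:              # A few even and odd test values.
    flags = [True] * (n+1)
    assert len(flags[4::2]) == (n-2)//2   # The slice 4,6,8,... has (n-2)//2 terms.
print('All slice lengths match the formula.')
```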
The next improvement is that, since we've already sieved out all the even numbers (except 2), we don't have to sieve by even multiples again. So when sieving by multiples of 3, we don't have to sieve out 9, 12, 15, 18, 21, etc. We can just sieve out 9, 15, 21, etc. When p is an odd prime, this can be taken care of with the code flags[p*p::2*p] = [False] * len(flags[p*p::2*p]). | from math import sqrt  # Needed for the sieving bound below.

def isprime_list(n):
    '''
    Return a list of length n+1
    with Trues at prime indices and Falses at composite indices.
    '''
    flags = [True] * (n+1)  # A list [True, True, True,...] to start.
    flags[0] = False  # Zero is not prime.  So its flag is set to False.
    flags[1] = False  # One is not prime.  So its flag is set to False.
    flags[4::2] = [False] * ((n-2)//2)  # Sieve out the even numbers greater than 2 all at once.
    p = 3
    while p <= sqrt(n):  # We only need to sieve by p if p <= sqrt(n).
        if flags[p]:  # We sieve the multiples of p if flags[p]=True.
            flags[p*p::2*p] = [False] * len(flags[p*p::2*p])  # Sieves out odd multiples of p, starting at p*p.
        p = p + 2  # Try the next value of p.  Note that we only need to consider odd p!
    return flags
%timeit sum(isprime_list(10**6)) # How much did this speed it up? | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Another modest improvement is the following. In the code above, the program counts the terms in sequences like 9,15,21,27,..., in order to set them to False. This is accomplished with the length command len(flags[p*p::2*p]). But that length computation is a bit too intensive. A bit of algebraic work shows that the length is given formulaically in terms of p and n by the formula:
$$\mathrm{len} = \left\lfloor \frac{n - p^2 - 1}{2p} \right\rfloor + 1$$
(Here $\lfloor x \rfloor$ denotes the floor function, i.e., the result of rounding down.) Putting this into the code yields the following. | from math import sqrt  # Needed for the sieving bound below.

def isprime_list(n):
    '''
    Return a list of length n+1
    with Trues at prime indices and Falses at composite indices.
    '''
    flags = [True] * (n+1)  # A list [True, True, True,...] to start.
    flags[0] = False  # Zero is not prime.  So its flag is set to False.
    flags[1] = False  # One is not prime.  So its flag is set to False.
    flags[4::2] = [False] * ((n-2)//2)  # Sieve out the even numbers greater than 2 all at once.
    p = 3
    while p <= sqrt(n):  # We only need to sieve by p if p <= sqrt(n).
        if flags[p]:  # We sieve the multiples of p if flags[p]=True.
            flags[p*p::2*p] = [False] * ((n-p*p-1)//(2*p)+1)  # Sieves out multiples of p, starting at p*p.
        p = p + 2  # Try the next value of p.
    return flags
%timeit sum(isprime_list(10**6)) # How much did this speed it up? | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
That should be pretty fast! It should be under 100 ms (one tenth of one second!) to determine the primes up to a million, and on a newer computer it should be under 50ms. We have gotten pretty close to the fastest algorithms that you can find in Python, without using external packages (like SAGE or sympy). See the related discussion on StackOverflow... the code in this lesson was influenced by the code presented there.
Exercises
Prove that the length of range(p*p, n, 2*p) equals $\lfloor \frac{n - p^2 - 1}{2p} \rfloor + 1$.
A natural number $n$ is called squarefree if no perfect square except 1 divides $n$. Write a function squarefree_list(n) which outputs a list of booleans: True if the index is squarefree and False if the index is not squarefree. For example, if you execute squarefree_list(12), the output should be [False, True, True, True, False, True, True, True, False, False, True, True, False]. Note that the False entries are located at the indices 0, 4, 8, 9, 12. These natural numbers have perfect square divisors besides 1.
Your DNA contains about 3 billion base pairs. Each "base pair" can be thought of as a letter, A, T, G, or C. How many bits would be required to store a single base pair? In other words, how might you convert a sequence of booleans into a letter A,T,G, or C? Given this, how many megabytes or gigabytes are required to store your DNA? How many people's DNA would fit on a thumb-drive?
<a id='analysis'></a>
Data analysis
Now that we can produce a list of prime numbers quickly, we can do some data analysis: some experimental number theory to look for trends or patterns in the sequence of prime numbers. Since Euclid (about 300 BCE), we have known that there are infinitely many prime numbers. But how are they distributed? What proportion of numbers are prime, and how does this proportion change over different ranges? As theoretical questions, these belong to the field of analytic number theory. But it is hard to know what to prove without doing a bit of experimentation. And so, at least since Gauss (read Tschinkel's article about Gauss's tables) started examining his extensive tables of prime numbers, mathematicians have been carrying out experimental number theory.
Analyzing the list of primes
Let's begin by creating our data set: the prime numbers up to 1 million. | primes = where(isprime_list(1000000))
len(primes) # Our population size. A statistician might call it N.
primes[-1] # The last prime in our list, just before one million.
type(primes) # What type is this data?
print(primes[:100]) # The first hundred prime numbers. | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
To carry out serious analysis, we will use the method of list comprehension to place our population into "bins" for statistical analysis. Our first type of list comprehension has the form [x for x in LIST if CONDITION]. This produces the list of all elements of LIST satisfying CONDITION. It is similar to list slicing, except we pull out terms from the list according to whether a condition is true or false.
For example, let's divide the (odd) primes into two classes. Red primes will be those of the form 4n+1. Blue primes will be those of the form 4n+3. In other words, a prime p is red if p%4 == 1 and blue if p%4 == 3. And the prime 2 is neither red nor blue. | redprimes = [p for p in primes if p%4 == 1] # Note the [x for x in LIST if CONDITION] syntax.
blueprimes = [p for p in primes if p%4 == 3]
print('Red primes:',redprimes[:20]) # The first 20 red primes.
print('Blue primes:',blueprimes[:20]) # The first 20 blue primes.
print("There are {} red primes and {} blue primes, up to 1 million.".format(len(redprimes), len(blueprimes))) | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
This is pretty close! It seems like prime numbers are about evenly distributed between red and blue. Their remainder after division by 4 is about as likely to be 1 as it is to be 3. In fact, it is proven that asymptotically the ratio between the number of red primes and the number of blue primes approaches 1. However, Chebyshev noticed a persistent slight bias towards blue primes along the way.
Some of the deepest conjectures in mathematics relate to the prime counting function $\pi(x)$. Here $\pi(x)$ is the number of primes between 1 and $x$ (inclusive). So $\pi(2) = 1$ and $\pi(3) = 2$ and $\pi(4) = 2$ and $\pi(5) = 3$. One can compute a value of $\pi(x)$ pretty easily using a list comprehension. | def primes_upto(x):
    return len([p for p in primes if p <= x]) # List comprehension recovers the primes up to x.
primes_upto(1000) # There are 168 primes between 1 and 1000. | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Now we graph the prime counting function. To do this, we use a list comprehension, and the visualization library called matplotlib. For graphing a function, the basic idea is to create a list of x-values, a list of corresponding y-values (so the lists have to be the same length!), and then we feed the two lists into matplotlib to make the graph.
We begin by loading the necessary packages. | import matplotlib # A powerful graphics package.
import numpy # A math package
import matplotlib.pyplot as plt # A plotting subpackage in matplotlib. | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Now let's graph the function $y = x^2$ over the domain $-2 \leq x \leq 2$ for practice. As a first step, we use numpy's linspace function to create an evenly spaced set of 11 x-values between -2 and 2. | x_values = numpy.linspace(-2,2,11) # The argument 11 is the *number* of terms, not the step size!
print(x_values)
type(x_values) | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
You might notice that the format looks a bit different from a list. Indeed, if you check type(x_values), it's not a list but something else called a numpy array. Numpy is a package that excels with computations on large arrays of data. On the surface, it's not so different from a list. The numpy.linspace command is a convenient way of producing an evenly spaced list of inputs.
The big difference is that operations on numpy arrays are interpreted differently than operations on ordinary Python lists. Try the two commands for comparison. | [1,2,3] + [1,2,3]
x_values + x_values
y_values = x_values * x_values # How is multiplication interpreted on numpy arrays?
print(y_values) | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Now we use matplotlib to create a simple line graph. | %matplotlib inline
plt.plot(x_values, y_values)
plt.title('The graph of $y = x^2$') # The dollar signs surround the formula, in LaTeX format.
plt.ylabel('y')
plt.xlabel('x')
plt.grid(True)
plt.show()
| P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Let's analyze the graphing code a bit more. See the official pyplot tutorial for more details.
```python
%matplotlib inline
plt.plot(x_values, y_values)
plt.title('The graph of $y = x^2$') # The dollar signs surround the formula, in LaTeX format.
plt.ylabel('y')
plt.xlabel('x')
plt.grid(True)
plt.show()
```
The first line contains the magic %matplotlib inline. We have seen a magic word before, in %timeit. Magic words can call another program to assist. So here, the magic %matplotlib inline calls matplotlib for help, and places the resulting figure within the notebook.
The next line plt.plot(x_values, y_values) creates a plot object based on the data of the x-values and y-values. It is an abstract sort of object, behind the scenes, in a format that matplotlib understands. The following lines set the title of the plot and the axis labels, and turn on a grid. The last line plt.show() renders the plot as an image in your notebook. There's an infinite variety of graphs that matplotlib can produce -- see the gallery for more! Other graphics packages include bokeh and seaborn; the latter extends matplotlib.
Analysis of the prime counting function
Now, to analyze the prime counting function, let's graph it. To make a graph, we will first need a list of many values of x and many corresponding values of $\pi(x)$. We do this with two commands. The first might take a minute to compute. | x_values = numpy.linspace(0,1000000,1001) # The numpy array [0,1000,2000,3000,...,1000000]
pix_values = numpy.array([primes_upto(x) for x in x_values]) # [FUNCTION(x) for x in LIST] syntax | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
We created an array of x-values as before. But the creation of an array of y-values (here, called pix_values to stand for $\pi(x)$) probably looks strange. We have done two new things!
We have used a list comprehension [primes_upto(x) for x in x_values] to create a list of y-values.
We have used numpy.array(LIST) syntax to convert a Python list into a numpy array.
First, we explain the list comprehension. Instead of pulling out values of a list according to a condition, with [x for x in LIST if CONDITION], we have created a new list by applying a function to each element of a list. The syntax, used above, is [FUNCTION(x) for x in LIST]. These two methods of list comprehension can be combined, in fact. The most general syntax for list comprehension is [FUNCTION(x) for x in LIST if CONDITION].
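For instance, combining both forms in a single comprehension:

```python
[p*p for p in primes if p < 20]   # Squares of the primes below 20: [4, 9, 25, 49, 121, 169, 289, 361]
```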
Second, a list comprehension can be carried out on a numpy array, but the result is a plain Python list. It will be better to have a numpy array instead for what follows, so we use the numpy.array() function to convert the list into a numpy array. | type(numpy.array([1,2,3])) # For example. | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Now we have two numpy arrays: the array of x-values and the array of y-values. We can make a plot with matplotlib. | len(x_values) == len(pix_values) # These better be the same, or else matplotlib will be unhappy.
%matplotlib inline
plt.plot(x_values, pix_values)
plt.title('The prime counting function')
plt.ylabel('$\pi(x)$')
plt.xlabel('x')
plt.grid(True)
plt.show() | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
In this range, the prime counting function might look nearly linear. But if you look closely, there's a subtle downward bend. This is more pronounced in smaller ranges. For example, let's look at the first 10 x-values and y-values only. | %matplotlib inline
plt.plot(x_values[:10], pix_values[:10]) # Look closer to 0.
plt.title('The prime counting function')
plt.ylabel('$\pi(x)$')
plt.xlabel('x')
plt.grid(True)
plt.show() | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
It still looks almost linear, but there's a visible downward bend here. How can we see this bend more clearly? If the graph were linear, its equation would have the form $\pi(x) = mx$ for some fixed slope $m$ (since the graph does pass through the origin). Therefore, the quantity $\pi(x)/x$ would be constant if the graph were linear.
Hence, if we graph $\pi(x) / x$ on the y-axis and $x$ on the x-axis, and the result is nonconstant, then the function $\pi(x)$ is nonlinear. | m_values = pix_values[1:] / x_values[1:] # We start at 1, to avoid a division by zero error.
%matplotlib inline
plt.plot(x_values[1:], m_values)
plt.title('The ratio $\pi(x) / x$ as $x$ varies.')
plt.xlabel('x')
plt.ylabel('$\pi(x) / x$')
plt.grid(True)
plt.show() | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
That is certainly not constant! The decay of $\pi(x) / x$ is not so different from $1 / \log(x)$, in fact. To see this, let's overlay the graphs. We use the numpy.log function, which computes the natural logarithm of its input (and allows an entire array as input). | %matplotlib inline
plt.plot(x_values[1:], m_values, label='$\pi(x)/x$') # The same as the plot above.
plt.plot(x_values[1:], 1 / numpy.log(x_values[1:]), label='$1 / \log(x)$') # Overlay the graph of 1 / log(x)
plt.title('The ratio of $\pi(x) / x$ as $x$ varies.')
plt.xlabel('x')
plt.ylabel('$\pi(x) / x$')
plt.grid(True)
plt.legend() # Turn on the legend.
plt.show() | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
The shape of the decay of $\pi(x) / x$ is very close to $1 / \log(x)$, but it looks like there is an offset. In fact, there is, and it is pretty close to $1 / \log(x)^2$. And that is close, but again there's another little offset, this time proportional to $2 / \log(x)^3$. This goes on forever, if one wishes to approximate $\pi(x) / x$ by an "asymptotic expansion" (not a good idea, it turns out).
The closeness of $\pi(x) / x$ to $1 / \log(x)$ is expressed in the prime number theorem:
$$\lim_{x \rightarrow \infty} \frac{\pi(x)}{x / \log(x)} = 1.$$ | %matplotlib inline
plt.plot(x_values[1:], m_values * numpy.log(x_values[1:]) ) # Should get closer to 1.
plt.title('The ratio $\pi(x) / (x / \log(x))$ approaches 1... slowly')
plt.xlabel('x')
plt.ylabel('$\pi(x) / (x / \log(x)) $')
plt.ylim(0.8,1.2)
plt.grid(True)
plt.show() | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Comparing the graph to the theoretical result, we see that the ratio $\pi(x) / (x / \log(x))$ does approach $1$, but very slowly (see the graph above!).
A much stronger result relates $\pi(x)$ to the "logarithmic integral" $\mathrm{li}(x)$. The Riemann hypothesis is equivalent to the statement
$$\left\vert \pi(x) - \mathrm{li}(x) \right\vert = O(\sqrt{x} \log(x)).$$
In other words, the error if one approximates $\pi(x)$ by $\mathrm{li}(x)$ is bounded by a constant times $\sqrt{x} \log(x)$. The logarithmic integral function isn't part of Python or numpy, but it is in the mpmath package. If you have this package installed, then you can try the following. | from mpmath import li
print(primes_upto(1000000)) # The number of primes up to 1 million.
print(li(1000000)) # The logarithmic integral of 1 million. | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Not too shabby!
Prime gaps
As a last bit of data analysis, we consider the prime gaps. These are the numbers that occur as differences between consecutive primes. Since all primes except 2 are odd, all prime gaps are even except for the 1-unit gap between 2 and 3. There are many unsolved problems about prime gaps; the most famous might be that a gap of 2 occurs infinitely often (as in the gaps between 3,5 and between 11,13 and between 41,43, etc.).
Once we have our data set of prime numbers, it is not hard to create a data set of prime gaps. Recall that primes is our list of prime numbers up to 1 million. | len(primes) # The number of primes up to 1 million.
primes_allbutlast = primes[:-1] # This excludes the last prime in the list.
primes_allbutfirst = primes[1:] # This excludes the first (i.e., with index 0) prime in the list.
primegaps = numpy.array(primes_allbutfirst) - numpy.array(primes_allbutlast) # Numpy is fast!
print(primegaps[:100]) # The first hundred prime gaps! | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
What have we done? It is useful to try out this method on a short list. | L = [1,3,7,20] # A nice short list.
print(L[:-1])
print(L[1:]) | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Now we have two lists of the same length. The gaps in the original list L are the differences between terms of the same index in the two new lists. One might be tempted to just subtract, e.g., with the command L[1:] - L[:-1], but subtraction is not defined for lists.
Fortunately, by converting the lists to numpy arrays, we can use numpy's term-by-term subtraction operation. | L[1:] - L[:-1] # This will give a TypeError. You can't subtract lists!
numpy.array(L[1:]) - numpy.array(L[:-1]) # That's better. See the gaps in the list [1,3,7,20] in the output. | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Now let's return to our primegaps data set. It contains all the gap-sizes for primes up to 1 million. | print(len(primes))
print(len(primegaps)) # This should be one less than the number of primes. | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
As a last example of data visualization, we use matplotlib to produce a histogram of the prime gaps. | max(primegaps) # The largest prime gap that appears!
%matplotlib inline
plt.figure(figsize=(12, 5)) # Makes the resulting figure 12in by 5in.
plt.hist(primegaps, bins=range(1,115)) # Makes a histogram with one bin for each possible gap from 1 to 114.
plt.ylabel('Frequency')
plt.xlabel('Gap size')
plt.grid(True)
plt.title('The frequency of prime gaps, for primes up to 1 million')
plt.show() | P3wNT Notebook 3.ipynb | MartyWeissman/Python-for-number-theory | gpl-3.0 |
Approximation of the J-function taken from [1] with
$$
J(\mu) \approx \left(1 - 2^{-H_1\cdot (2\mu)^{H_2}}\right)^{H_3}
$$
and its inverse function can be easily found as
$$
\mu = J^{-1}(I) \approx \frac{1}{2}\left(-\frac{1}{H_1}\log_2\left(1-I^{\frac{1}{H_3}}\right)\right)^{\frac{1}{H_2}}
$$
with $H_1 = 0.3073$, $H_2=0.8935$, and $H_3 = 1.1064$.
[1] F. Schreckenbach, Iterative Decoding of Bit-Interleaved Coded Modulation, PhD thesis, TU Munich, 2007 | import numpy as np  # Assumed to be imported earlier in the notebook; repeated here so this cell stands alone.

H1 = 0.3073
H2 = 0.8935
H3 = 1.1064

def J_fun(mu):
    I = (1 - 2**(-H1*(2*mu)**H2))**H3
    return I

def invJ_fun(I):
    if I > (1 - 1e-10):  # Clip near I = 1 to avoid taking log2(0) below.
        return 100
    mu = 0.5*(-(1/H1) * np.log2(1 - I**(1/H3)))**(1/H2)
    return mu | SC468/LDPC_Optimization_AWGN.ipynb | kit-cel/wt | gpl-2.0 |
The following function solves the optimization problem that returns the best $\lambda(Z)$ for a given BI-AWGN channel quality $E_s/N_0$, corresponding to a $\mu_c = 4\frac{E_s}{N_0}$, for a regular check node degree $d_{\mathtt{c}}$, and for a maximum variable node degree $d_{\mathtt{v},\max}$. This optimization problem is derived in the lecture as
$$
\begin{aligned}
& \underset{\lambda_1,\ldots,\lambda_{d_{\mathtt{v},\max}}}{\text{maximize}} & & \sum_{i=1}^{d_{\mathtt{v},\max}}\frac{\lambda_i}{i} \\
& \text{subject to} & & \lambda_1 = 0 \\
& & & \lambda_i \geq 0, \quad \forall i \in \{2,3,\ldots,d_{\mathtt{v},\max}\} \\
& & & \sum_{i=2}^{d_{\mathtt{v},\max}}\lambda_i = 1 \\
& & & \sum_{i=2}^{d_{\mathtt{v},\max}}\lambda_i\cdot J\left(\mu_c + (i-1)J^{-1}\left(\frac{j}{D}\right)\right) > 1 - J\left(\frac{1}{d_{\mathtt{c}}-1}J^{-1}\left(1-\frac{j}{D}\right)\right),\quad \forall j \in \{1,\ldots, D\} \\
& & & \lambda_2 \leq \frac{e^{\frac{\mu_c}{4}}}{d_{\mathtt{c}}-1}
\end{aligned}
$$
If this optimization problem is feasible, then the function returns the polynomial $\lambda(Z)$ as a coefficient array where the first entry corresponds to the largest exponent ($\lambda_{d_{\mathtt{v},\max}}$) and the last entry to the lowest exponent ($\lambda_1$). If the optimization problem has no solution (e.g., it is infeasible), then the empty vector is returned. | def find_best_lambda(mu_c, v_max, dc):
    # quantization of the EXIT chart
    D = 500
    I_range = np.arange(0, D, 1)/D
    # Linear Programming model, maximize the target expression
    model = pulp.LpProblem("Finding best lambda problem", pulp.LpMaximize)
    # definition of variables: v_max entries \lambda_i between 0 and 1 (implicitly enforcing constraint 2)
    v_lambda = pulp.LpVariable.dicts("lambda", range(v_max), 0, 1)
    # objective function
    cv = 1/np.arange(v_max, 0, -1)
    model += pulp.lpSum(v_lambda[i]*cv[i] for i in range(v_max))
    # constraints
    # constraint 1: no variable nodes of degree 1
    model += v_lambda[v_max-1] == 0
    # constraint 3: the lambda_i must sum to 1
    model += pulp.lpSum(v_lambda[i] for i in range(v_max)) == 1
    # constraint 4: fixed point condition at each of the D discrete values
    for myI in I_range:
        model += pulp.lpSum(v_lambda[j] * J_fun(mu_c + (v_max-1-j)*invJ_fun(myI)) for j in range(v_max)) - 1 + J_fun(1/(dc-1)*invJ_fun(1-myI)) >= 0
    # constraint 5: stability condition
    model += v_lambda[v_max-2] <= np.exp(mu_c/4)/(dc-1)
    model.solve()
    if model.status != 1:
        r_lambda = []
    else:
        r_lambda = [v_lambda[i].varValue for i in range(v_max)]
    return r_lambda | SC468/LDPC_Optimization_AWGN.ipynb | kit-cel/wt | gpl-2.0 |
As an example, we consider the case of optimization carried out in the lecture after 10 iterations, where we have $\mu_c = 3.8086$ and $d_{\mathtt{c}} = 14$ with $d_{\mathtt{v},\max}=16$ | best_lambda = find_best_lambda(3.8086, 16, 14)
print(np.poly1d(best_lambda, variable='Z')) | SC468/LDPC_Optimization_AWGN.ipynb | kit-cel/wt | gpl-2.0 |
In the following, we provide an interactive widget that allows you to choose the parameters of the optimization yourself and get the best possible $\lambda(Z)$. Additionally, the EXIT chart is plotted to visualize the good fit of the obtained degree distribution. | def best_lambda_interactive(mu_c, v_max, dc):
    # get the lambda polynomial from the optimization
    p_lambda = find_best_lambda(mu_c, v_max, dc)
    # if the optimization is successful, compute the rate and show the plot
    if not p_lambda:
        print('Optimization infeasible, no solution found')
    else:
        design_rate = 1 - 1/(dc * np.polyval(np.polyint(p_lambda), 1))
        if design_rate <= 0:
            print('Optimization feasible, but no code with positive rate found')
        else:
            print("Lambda polynomial:")
            print(np.poly1d(p_lambda, variable='Z'))
            print("Design rate r_d = %1.3f" % design_rate)
            # Plot EXIT chart
            print("EXIT Chart:")
            plot.figure(3)
            x = np.linspace(0, 1, num=100)
            y_v = [np.sum([p_lambda[j] * J_fun(mu_c + (v_max-1-j)*invJ_fun(xv)) for j in range(v_max)]) for xv in x]
            y_c = [1 - J_fun((dc-1)*invJ_fun(1-xv)) for xv in x]
            plot.plot(x, y_v, '#7030A0')
            plot.plot(y_c, x, '#008000')
            plot.axis('equal')
            plot.gca().set_aspect('equal', adjustable='box')
            plot.xlim(0, 1)
            plot.ylim(0, 1)
            plot.grid()
            plot.show()
interactive_plot = interactive(best_lambda_interactive, \
mu_c=widgets.FloatSlider(min=0.5,max=8,step=0.01,value=3, continuous_update=False, description=r'\(\mu_c\)',layout=widgets.Layout(width='50%')), \
v_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\(d_{\mathtt{v},\max}\)'), \
dc = widgets.IntSlider(min=3,max=20,step=1,value=4, continuous_update=False, description=r'\(d_{\mathtt{c}}\)'))
output = interactive_plot.children[-1]
output.layout.height = '400px'
interactive_plot | SC468/LDPC_Optimization_AWGN.ipynb | kit-cel/wt | gpl-2.0 |
Now, we carry out the optimization over a wide range of check node degrees $d_{\mathtt{c}}$ for a given channel quality $\mu_c$ and find the largest possible rate. | def find_best_rate(mu_c, dv_max, dc_max):
    c_range = np.arange(3, dc_max+1)
    rates = np.zeros_like(c_range, dtype=float)
    # loop over all check node degrees, with a progress bar
    f = widgets.FloatProgress(min=0, max=np.size(c_range))
    display(f)
    for index, dc in enumerate(c_range):
        f.value += 1
        p_lambda = find_best_lambda(mu_c, dv_max, dc)
        if p_lambda:
            design_rate = 1 - 1/(dc * np.polyval(np.polyint(p_lambda), 1))
            if design_rate >= 0:
                rates[index] = design_rate
    # find the largest rate
    largest_rate_index = np.argmax(rates)
    best_lambda = find_best_lambda(mu_c, dv_max, c_range[largest_rate_index])
    print("Found best code of rate %1.3f for check node degree %d" % (rates[largest_rate_index], c_range[largest_rate_index]))
    print("Corresponding lambda polynomial")
    print(np.poly1d(best_lambda, variable='Z'))
    # Plot a curve with all obtained results
    plot.figure(4, figsize=(10,3))
    plot.plot(c_range, rates, 'b--s', color=(0, 0.59, 0.51))
    plot.plot(c_range[largest_rate_index], rates[largest_rate_index], 'rs')
    plot.xlim(3, dc_max)
    plot.xticks(range(3, dc_max+1))
    plot.ylim(0, 1)
    plot.xlabel('$d_{\mathtt{c}}$')
    plot.ylabel('design rate $r_d$')
    plot.grid()
    plot.show()
    return rates[largest_rate_index]
interactive_optim = interactive(find_best_rate, \
mu_c=widgets.FloatSlider(min=0.1,max=10,step=0.01,value=2, continuous_update=False, description=r'\(\mu_c\)',layout=widgets.Layout(width='50%')), \
dv_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\(d_{\mathtt{v},\max}\)'), \
dc_max = widgets.IntSlider(min=3, max=40, step=1, value=22, continuous_update=False, description=r'\(d_{\mathtt{c},\max}\)'))
output = interactive_optim.children[-1]
output.layout.height = '400px'
interactive_optim | SC468/LDPC_Optimization_AWGN.ipynb | kit-cel/wt | gpl-2.0 |
Running a binary search to find a code with a given target rate for the AWGN channel | target_rate = 0.7
dv_max = 16
dc_max = 22
T_Delta = 0.01
mu_c = 10
Delta_mu = 10
while Delta_mu >= T_Delta:
    print('Running optimization for mu_c = %1.5f, corresponding to Es/N0 = %1.2f dB' % (mu_c, 10*np.log10(mu_c/4)))
    rate = find_best_rate(mu_c, dv_max, dc_max)
    if rate > target_rate:
        mu_c = mu_c - Delta_mu / 2
    else:
        mu_c = mu_c + Delta_mu / 2
    Delta_mu = Delta_mu / 2 | SC468/LDPC_Optimization_AWGN.ipynb | kit-cel/wt | gpl-2.0 |
Software Engineering for Data Scientists
Manipulating Data with Python
CSE 599 B1
Today's Objectives
1. Opening & Navigating the IPython Notebook
2. Simple Math in the IPython Notebook
3. Loading data with pandas
4. Cleaning and Manipulating data with pandas
5. Visualizing data with pandas
1. Opening and Navigating the IPython Notebook
We will start today with the interactive environment that we will be using often through the course: the IPython/Jupyter Notebook.
We will walk through the following steps together:
Download miniconda (be sure to get the Python 3.5 version) and install it on your system (hopefully you have done this before coming to class)
Use the conda command-line tool to update your package listing and install the IPython notebook:
Update conda's listing of packages for your system:
$ conda update conda
Install IPython notebook and all its requirements
$ conda install ipython-notebook
Navigate to the directory containing the course material. For example:
$ cd ~/courses/CSE599/
You should see a number of files in the directory, including these:
$ ls
...
Breakout-Simple-Math.ipynb
CSE599_Lecture_2.ipynb
...
Type ipython notebook in the terminal to start the notebook
$ ipython notebook
If everything has worked correctly, it should automatically launch your default browser
Click on CSE599_Lecture_2.ipynb to open the notebook containing the content for this lecture.
With that, you're set up to use the IPython notebook!
2. Simple Math in the IPython Notebook
Now that we have the IPython notebook up and running, we're going to do a short breakout exploring some of the mathematical functionality that Python offers.
Please open Breakout-Simple-Math.ipynb, find a partner, and make your way through that notebook, typing and executing code along the way.
3. Loading data with pandas
With this simple Python computation experience under our belt, we can now move to doing some more interesting analysis.
Python's Data Science Ecosystem
In addition to Python's built-in modules like the math module we explored above, there are also many often-used third-party modules that are core tools for doing data science with Python.
Some of the most important ones are:
numpy: Numerical Python
Numpy is short for "Numerical Python", and contains tools for efficient manipulation of arrays of data.
If you have used other computational tools like IDL or MatLab, Numpy should feel very familiar.
scipy: Scientific Python
Scipy is short for "Scientific Python", and contains a wide range of functionality for accomplishing common scientific tasks, such as optimization/minimization, numerical integration, interpolation, and much more.
We will not look closely at Scipy today, but we will use its functionality later in the course.
pandas: Labeled Data Manipulation in Python
Pandas is short for "Panel Data", and contains tools for doing more advanced manipulation of labeled data in Python, in particular with a columnar data structure called a Data Frame.
If you've used the R statistical language (and in particular the so-called "Hadley Stack"), much of the functionality in Pandas should feel very familiar.
matplotlib: Visualization in Python
Matplotlib started out as a Matlab plotting clone in Python, and has grown from there in the 15 years since its creation. It is the most popular data visualization tool currently in the Python data world (though other recent packages are starting to encroach on its monopoly).
Installing Pandas & friends
Because the above packages are not included in Python itself, you need to install them separately. While it is possible to install these from source (compiling the C and/or Fortran code that does the heavy lifting under the hood) it is much easier to use a package manager like conda. All it takes is to run
$ conda install numpy scipy pandas matplotlib
and (so long as your conda setup is working) the packages will be downloaded and installed on your system.
Loading Data with Pandas | import pandas | PreFall2018/02-Python-and-Data/Lecture-Python-and-Data.ipynb | UWSEDS/LectureNotes | bsd-2-clause |
Because we'll use it so much, we often import under a shortened name using the import ... as ... pattern: | import pandas as pd | PreFall2018/02-Python-and-Data/Lecture-Python-and-Data.ipynb | UWSEDS/LectureNotes | bsd-2-clause |
Now we can use the read_csv command to read the comma-separated-value data:
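The course's data file is not included in this excerpt, so as a sketch, the file name below is a stand-in; substitute the CSV distributed with the course:

```python
trips = pd.read_csv('trips.csv')   # 'trips.csv' is a hypothetical name, not the actual course file.
```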
Viewing Pandas Dataframes
The head() and tail() methods show us the first and last rows of the data
The shape attribute shows us the number of elements:
The columns attribute gives us the column names
The index attribute gives us the index names
The dtypes attribute gives the data types of each column:
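Since the course data isn't bundled here, the following sketch builds a tiny stand-in DataFrame (the column names tripduration and gender are assumptions for illustration) so that each command can run:

```python
import pandas as pd   # Already imported above; repeated so this cell stands alone.

# A tiny stand-in for the course data; the columns are hypothetical.
trips = pd.DataFrame({'tripduration': [620, 1440, 300],
                      'gender': ['Male', 'Female', 'Male']})

print(trips.head())    # The first rows (all three, for this tiny frame).
print(trips.tail(2))   # The last two rows.
print(trips.shape)     # (3, 2): three rows, two columns.
print(trips.columns)   # The column names.
print(trips.index)     # RangeIndex(start=0, stop=3, step=1)
print(trips.dtypes)    # One dtype per column: int64 for tripduration, object for gender.
```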
4. Manipulating data with pandas
Here we'll cover some key features of manipulating data with pandas
Access columns by name using square-bracket indexing:
Mathematical operations on columns happen element-wise:
Columns can be created (or overwritten) with the assignment operator.
Let's create a tripminutes column with the number of minutes for each trip
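Continuing the stand-in frame built above (treating tripduration as seconds is an assumption):

```python
print(trips['tripduration'])                        # Square brackets select a column by name.
trips['tripminutes'] = trips['tripduration'] / 60   # Element-wise division; assignment creates the new column.
print(trips['tripminutes'])                         # 10.33..., 24.0, 5.0
```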
Working with Times
One trick to know when working with columns of times: pandas' DatetimeIndex provides a convenient interface for them:
With it, we can extract, the hour of the day, the day of the week, the month, and a wide range of other views of the time:
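A sketch with the stand-in frame (the starttime strings are invented):

```python
trips['starttime'] = ['2015-10-01 08:15:00', '2015-10-01 17:40:00', '2015-10-02 08:05:00']
times = pd.DatetimeIndex(trips['starttime'])   # Parses the strings into timestamps.
print(times.hour)        # [ 8 17  8]
print(times.dayofweek)   # Monday=0, ..., Sunday=6 -- here [3 3 4]: Thursday, Thursday, Friday.
print(times.month)       # [10 10 10]
```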
Simple Grouping of Data
The real power of Pandas comes in its tools for grouping and aggregating data. Here we'll look at value counts and the basics of group-by operations.
Value Counts
Pandas includes an array of useful functionality for manipulating and analyzing tabular data.
We'll take a look at two of these here.
The pandas.value_counts function returns statistics on the unique values within each column.
We can use it, for example, to break down rides by gender:
Or to break down rides by age:
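Our three-row stand-in frame has no age column, but its gender breakdown looks like this:

```python
print(pd.value_counts(trips['gender']))   # Male: 2, Female: 1 for the stand-in frame.
```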
What else might we break down rides by?
Group-by Operation
One of the killer features of the Pandas dataframe is the ability to do group-by operations.
You can visualize the group-by like this (image borrowed from the Python Data Science Handbook) | from IPython.display import Image
Image('split_apply_combine.png') | PreFall2018/02-Python-and-Data/Lecture-Python-and-Data.ipynb | UWSEDS/LectureNotes | bsd-2-clause |
So, for example, we can use this to find the average length of a ride as a function of time of day:
The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)
<data object>.groupby(<grouping values>).<aggregate>()
You can even group by multiple values: for example we can look at the trip duration by time of day and by gender:
The unstack() operation can help make sense of this type of multiply-grouped data:
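Here is how those pieces fit together on the stand-in frame (the column names are still the invented ones from above):

```python
trips['hour'] = pd.DatetimeIndex(trips['starttime']).hour

# <data object>.groupby(<grouping values>).<aggregate>()
print(trips.groupby('hour')['tripminutes'].mean())   # Mean trip length per hour of day.

# Grouping by two keys produces a hierarchically indexed result...
by_hour_gender = trips.groupby(['hour', 'gender'])['tripminutes'].mean()
print(by_hour_gender)

# ...which unstack() pivots into a table: hours as rows, genders as columns.
# Combinations with no rides show up as NaN.
print(by_hour_gender.unstack())
```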
5. Visualizing data with pandas
Of course, looking at tables of data is not very intuitive.
Fortunately Pandas has many useful plotting functions built-in, all of which make use of the matplotlib library to generate plots.
Whenever you do plotting in the IPython notebook, you will want to first run this magic command which configures the notebook to work well with plots: | %matplotlib inline | PreFall2018/02-Python-and-Data/Lecture-Python-and-Data.ipynb | UWSEDS/LectureNotes | bsd-2-clause |
Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data:
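For example, continuing the stand-in frame:

```python
trips.groupby('hour')['tripminutes'].count().plot()   # pandas hands the drawing off to matplotlib.
```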
Adjusting the Plot Style
The default formatting is not very nice; I often make use of the Seaborn library for better plotting defaults.
First you'll have to
$ conda install seaborn
and then you can do this: | import seaborn
seaborn.set() | PreFall2018/02-Python-and-Data/Lecture-Python-and-Data.ipynb | UWSEDS/LectureNotes | bsd-2-clause |
Steps 4 & 5: Sample data from setting similar to data and record classification accuracy | accuracy = np.zeros((len(S), len(classifiers), 2), dtype=np.dtype('float64'))
for idx1, s in enumerate(S):
    s0 = s/2
    s1 = s/2
    g0 = 1 * (np.random.rand(r, r, s0) > 1-p0)
    g1 = 1 * (np.random.rand(r, r, s1) > 1-p1)
    mbar0 = 1.0*np.sum(g0, axis=(0,1))
    mbar1 = 1.0*np.sum(g1, axis=(0,1))
    X = np.array((np.append(mbar0, mbar1), np.append(mbar0/(r**2), mbar1/(r**2)))).T
    y = np.append(np.zeros(s0), np.ones(s1))
    for idx2, cla in enumerate(classifiers):
        X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
        clf = cla.fit(X_train, y_train)
        loo = LeaveOneOut(len(X))
        scores = cross_validation.cross_val_score(clf, X, y, cv=loo)
        accuracy[idx1, idx2,] = [scores.mean(), scores.std()]
        print("Accuracy of %s: %0.2f (+/- %0.2f)" % (names[idx2], scores.mean(), scores.std() * 2))
print accuracy | code/classification_simulation.ipynb | Upward-Spiral-Science/grelliam | apache-2.0 |
Step 6: Plot Accuracy versus N | font = {'weight' : 'bold',
'size' : 14}
import matplotlib
matplotlib.rc('font', **font)
plt.figure(figsize=(8,5))
plt.errorbar(S, accuracy[:,0,0], yerr = accuracy[:,0,1]/np.sqrt(S), hold=True, label=names[0])
plt.errorbar(S, accuracy[:,1,0], yerr = accuracy[:,1,1]/np.sqrt(S), color='green', hold=True, label=names[1])
plt.errorbar(S, accuracy[:,2,0], yerr = accuracy[:,2,1]/np.sqrt(S), color='red', hold=True, label=names[2])
plt.errorbar(S, accuracy[:,3,0], yerr = accuracy[:,3,1]/np.sqrt(S), color='black', hold=True, label=names[3])
plt.errorbar(S, accuracy[:,4,0], yerr = accuracy[:,4,1]/np.sqrt(S), color='brown', hold=True, label=names[4])
plt.xscale('log')
plt.xlabel('Number of Samples')
plt.xlim((0,2100))
plt.ylim((-0.05, 1.05))
plt.ylabel('Accuracy')
plt.title('Gender Classification of Simulated Data')
plt.axhline(1, color='red', linestyle='--')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig('../figs/general_classification.png')
plt.show() | code/classification_simulation.ipynb | Upward-Spiral-Science/grelliam | apache-2.0 |
Step 7: Apply technique to data | # Initializing dataset names
dnames = list(['../data/KKI2009'])
print "Dataset: " + ", ".join(dnames)
# Getting graph names
fs = list()
for dd in dnames:
    fs.extend([root+'/'+file for root, dir, files in os.walk(dd) for file in files])
fs = fs[1:]

def loadGraphs(filenames, rois, printer=False):
    A = np.zeros((rois, rois, len(filenames)))
    for idx, files in enumerate(filenames):
        if printer:
            print "Loading: " + files
        g = ig.Graph.Read_GraphML(files)
        tempg = g.get_adjacency(attribute='weight')
        A[:,:,idx] = np.asarray(tempg.data)
    return A
# Load X
X = loadGraphs(fs, 70)
print X.shape
# Load Y
ys = csv.reader(open('../data/kki42_subjectinformation.csv'))
y = [y[5] for y in ys]
y = [1 if x=='F' else 0 for x in y[1:]]
xf = 1.0*np.sum(1.0*(X>0), axis=(0,1))
features = np.array((xf, xf/( 70**2 * 22))).T
accuracy=np.zeros((len(classifiers),2))
for idx, cla in enumerate(classifiers):
    X_train, X_test, y_train, y_test = cross_validation.train_test_split(features, y, test_size=0.4, random_state=0)
    clf = cla.fit(X_train, y_train)
    loo = LeaveOneOut(len(features))
    scores = cross_validation.cross_val_score(clf, features, y, cv=loo)
    accuracy[idx,] = [scores.mean(), scores.std()]
    print("Accuracy of %s: %0.2f (+/- %0.2f)" % (names[idx], scores.mean(), scores.std() * 2)) | code/classification_simulation.ipynb | Upward-Spiral-Science/grelliam | apache-2.0 |
Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementaion by running the following: | # Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print 'Testing affine_forward function:'
print 'difference: ', rel_error(out, correct_out) | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking. | # Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print 'Testing affine_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db) | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following: | # Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print 'Testing relu_forward function:'
print 'difference: ', rel_error(out, correct_out) | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking: | x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print 'Testing relu_backward function:'
print 'dx error: ', rel_error(dx_num, dx) | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
"Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass: | from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print 'Testing affine_relu_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db) | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following: | num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print 'Testing svm_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print '\nTesting softmax_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx) | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation. | N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-2
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print 'Testing initialization ... '
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print 'Testing test-time forward pass ... '
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print 'Testing training loss (no regularization)'
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
    print 'Running numeric gradient check with reg = ', reg
    model.reg = reg
    loss, grads = model.loss(X, y)
    for name in sorted(grads):
        f = lambda _: model.loss(X, y)[0]
        grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
        print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])) | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set. | # model = TwoLayerNet()
# solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
input_dim=3*32*32
hidden_dim=100
num_classes=10
weight_scale=1e-3
reg=0.0
model = TwoLayerNet(input_dim=input_dim, hidden_dim=hidden_dim, num_classes=num_classes,
weight_scale=weight_scale, reg=reg)
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=10, batch_size=100,
print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show() | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less. | N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
    print 'Running check with reg = ', reg
    model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
                              reg=reg, weight_scale=5e-2, dtype=np.float64)
    loss, grads = model.loss(X, y)
    print 'Initial loss: ', loss
    for name in sorted(grads):
        f = lambda _: model.loss(X, y)[0]
        grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
        print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])) | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs. | # TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-2
weight_scale = 1e-2
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show() | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs. | # TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-2
weight_scale = 6e-2
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show() | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net?
Answer:
It's much harder to find the right weight initialization and learning rate for the five-layer net. As the network grows deeper, more activations tend to die, which kills the gradient flowing backward.
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
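As a reference, here is a minimal standalone sketch of one common formulation (the names are illustrative; the assignment's version keeps the velocity in a config dict):
python
import numpy as np

def sgd_momentum_step(w, dw, v, learning_rate=1e-2, mu=0.9):
    # Integrate the velocity using the gradient, then integrate the position.
    v = mu * v - learning_rate * dw
    return w + v, v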
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8. | from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print 'next_w error: ', rel_error(next_w, expected_next_w)
print 'velocity error: ', rel_error(expected_velocity, config['velocity']) | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster. | num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print 'running with ', update_rule
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.iteritems():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show() | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
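As a reference, here is a minimal standalone sketch of the core updates (the function names and hyperparameter defaults are illustrative; the assignment's versions in cs231n/optim.py use a config dict instead):
python
import numpy as np

def rmsprop_step(w, dw, cache, learning_rate=1e-2, decay_rate=0.99, eps=1e-8):
    # Keep a moving average of squared gradients; scale the step by its square root.
    cache = decay_rate * cache + (1 - decay_rate) * dw**2
    return w - learning_rate * dw / (np.sqrt(cache) + eps), cache

def adam_step(w, dw, m, v, t, learning_rate=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # First/second moment estimates with bias correction (t is the step number, from 1).
    m = beta1 * m + (1 - beta1) * dw
    v = beta2 * v + (1 - beta2) * dw**2
    mt = m / (1 - beta1**t)
    vt = v / (1 - beta2**t)
    return w - learning_rate * mt / (np.sqrt(vt) + eps), m, v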
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015. | # Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print 'next_w error: ', rel_error(expected_next_w, next_w)
print 'cache error: ', rel_error(expected_cache, config['cache'])
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print 'next_w error: ', rel_error(expected_next_w, next_w)
print 'v error: ', rel_error(expected_v, config['v'])
print 'm error: ', rel_error(expected_m, config['m']) | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules: | learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print 'running with ', update_rule
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.iteritems():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show() | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models. | best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the  #
# best_model variable. #
################################################################################
num_train = data['X_train'].shape[0]
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
dropout=0.1
model = FullyConnectedNet([100, 100, 100], weight_scale=5e-2, use_batchnorm=True, dropout=dropout)
update_rule = 'adam'
learning_rate = 1e-3
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rate
},
verbose=True)
solver.train()
best_model = model
################################################################################
# END OF YOUR CODE #
################################################################################ | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. | X_test = data['X_test']
y_test = data['y_test']
X_val = data['X_val']
y_val = data['y_val']
y_test_pred = np.argmax(best_model.loss(X_test), axis=1)
y_val_pred = np.argmax(best_model.loss(X_val), axis=1)
print 'Validation set accuracy: ', (y_val_pred == y_val).mean()
print 'Test set accuracy: ', (y_test_pred == y_test).mean() | solutions/levin/assignment2/FullyConnectedNets.ipynb | machinelearningnanodegree/stanford-cs231 | mit |
Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively. | def model_inputs(real_dim, z_dim):
    # One possible implementation: placeholders for the real images and the latent vector z
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z | gan_mnist/Intro_to_GANs_Exercises.ipynb | ktmud/deep-learning | mit |
Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one ourselves. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
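For example, a small helper built from tf.maximum might look like this (a sketch; the helper name is our own):
python
import tensorflow as tf

def leaky_relu(x, alpha=0.01):
    # For x >= 0 the max returns x itself; for x < 0 it returns alpha * x.
    return tf.maximum(alpha * x, x)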
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
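Since the raw MNIST pixels lie in [0, 1], this rescaling is a single line (the training loop below does exactly this):
python
batch_images = batch_images * 2 - 1  # map [0, 1] pixels onto [-1, 1] to match tanh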
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope. | def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out:
'''
    # One possible implementation
    with tf.variable_scope('generator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(z, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        # Logits and tanh output
        logits = tf.layers.dense(h1, out_dim, activation=None)
        out = tf.tanh(logits)
return out | gan_mnist/Intro_to_GANs_Exercises.ipynb | ktmud/deep-learning | mit |
Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope. | def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
    # One possible implementation
    with tf.variable_scope('discriminator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(x, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)
return out, logits | gan_mnist/Intro_to_GANs_Exercises.ipynb | ktmud/deep-learning | mit |
Hyperparameters | # Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1 | gan_mnist/Intro_to_GANs_Exercises.ipynb | ktmud/deep-learning | mit |
Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier. | tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)

# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output

# Discriminator network here; the fake pass reuses the real pass's weights
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator. | # Calculate losses
# One possible implementation, with label smoothing on the real labels
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake

g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately. | # Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]

d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Training | batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f) | gan_mnist/Intro_to_GANs_Exercises.ipynb | ktmud/deep-learning | mit |
Training loss
Here we'll check out the training losses for the generator and discriminator. | %matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend() | gan_mnist/Intro_to_GANs_Exercises.ipynb | ktmud/deep-learning | mit |
These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. | _ = view_samples(-1, samples) | gan_mnist/Intro_to_GANs_Exercises.ipynb | ktmud/deep-learning | mit |
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples]) | gan_mnist/Intro_to_GANs_Exercises.ipynb | ktmud/deep-learning | mit |
Questions:
Exercise 1a
What are the leakage factors of the aquifer system? | print('The leakage factors of the aquifers are:')
print(ml.aq.lab) | notebooks/timml_notebook1_sol.ipynb | Hugovdberg/timml | mit |
Exercise 1b
What is the head at the well? | print('The head at the well is:')
print(w.headinside()) | notebooks/timml_notebook1_sol.ipynb | Hugovdberg/timml | mit |
Exercise 1c
Create a contour plot of the head in the three aquifers. Use a window with lower left hand corner $(x,y)=(−3000,−3000)$ and upper right hand corner $(x,y)=(3000,3000)$. Notice that the heads in the three aquifers are almost equal at three times the largest leakage factor. | ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[0, 1, 2], levels=10,
legend=True, figsize=figsize) | notebooks/timml_notebook1_sol.ipynb | Hugovdberg/timml | mit |
Exercise 1d
Create a contour plot of the head in aquifer 1 with labels along the contours. Labels are added when the labels keyword argument is set to True. The number of decimal places can be set with the decimals keyword argument, which is zero by default. | ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[1], levels=np.arange(30, 45, 1),
labels=True, legend=['layer 1'], figsize=figsize) | notebooks/timml_notebook1_sol.ipynb | Hugovdberg/timml | mit |
Exercise 1e
Create a contour plot with a vertical cross-section below it. Start three pathlines from $(x,y)=(-2000,-1000)$ at levels $z=-120$, $z=-60$, and $z=-10$. Try a few other starting locations. | win=[-3000, 3000, -3000, 3000]
ml.plot(win=win, orientation='both', figsize=figsize)
ml.tracelines(-2000 * ones(3), -1000 * ones(3), [-120, -60, -10], hstepmax=50,
win=win, orientation='both')
ml.tracelines(0 * ones(3), 1000 * ones(3), [-120, -50, -10], hstepmax=50,
win=win, orientation='both') | notebooks/timml_notebook1_sol.ipynb | Hugovdberg/timml | mit |
Exercise 1f
Add an abandoned well that is screened in both aquifer 0 and aquifer 1, located at $(x, y) = (100, 100)$, and create a contour plot of all aquifers near the well (from (-200,-200) to (200,200)). What are the discharge and the head at the abandoned well? Note that you have to solve the model again! | ml = ModelMaq(kaq=[10, 20, 5],
z=[0, -20, -40, -80, -90, -140],
c=[4000, 10000])
w = Well(ml, xw=0, yw=0, Qw=10000, rw=0.2, layers=1)
Constant(ml, xr=10000, yr=0, hr=20, layer=0)
Uflow(ml, slope=0.002, angle=0)
wabandoned = Well(ml, xw=100, yw=100, Qw=0, rw=0.2, layers=[0, 1])
ml.solve()
ml.contour(win=[-200, 200, -200, 200], ngr=50, layers=[0, 2],
levels=20, color=['C0', 'C1', 'C2'], legend=True, figsize=figsize)
print('The head at the abandoned well is:')
print(wabandoned.headinside())
print('The discharge at the abandoned well is:')
print(wabandoned.discharge()) | notebooks/timml_notebook1_sol.ipynb | Hugovdberg/timml | mit |
We will use the pymongo library for Python. We load it below. | import pymongo
from pymongo import MongoClient | mongo/sesion4.ipynb | dsevilla/bdge | mit |
The connection is started with MongoClient on the host described in the docker-compose.yml file (mongo). | client = MongoClient("mongo",27017)
client
client.list_database_names() | mongo/sesion4.ipynb | dsevilla/bdge | mit |
Format: 7zipped
Files:
badges.xml
UserId, e.g.: "420"
Name, e.g.: "Teacher"
Date, e.g.: "2008-09-15T08:55:03.923"
comments.xml
Id
PostId
Score
Text, e.g.: "@Stu Thompson: Seems possible to me - why not try it?"
CreationDate, e.g.:"2008-09-06T08:07:10.730"
UserId
posts.xml
Id
PostTypeId
1: Question
2: Answer
ParentID (only present if PostTypeId is 2)
AcceptedAnswerId (only present if PostTypeId is 1)
CreationDate
Score
ViewCount
Body
OwnerUserId
LastEditorUserId
LastEditorDisplayName="Jeff Atwood"
LastEditDate="2009-03-05T22:28:34.823"
LastActivityDate="2009-03-11T12:51:01.480"
CommunityOwnedDate="2009-03-11T12:51:01.480"
ClosedDate="2009-03-11T12:51:01.480"
Title=
Tags=
AnswerCount
CommentCount
FavoriteCount
posthistory.xml
Id
PostHistoryTypeId
- 1: Initial Title - The first title a question is asked with.
- 2: Initial Body - The first raw body text a post is submitted with.
- 3: Initial Tags - The first tags a question is asked with.
- 4: Edit Title - A question's title has been changed.
- 5: Edit Body - A post's body has been changed, the raw text is stored here as markdown.
- 6: Edit Tags - A question's tags have been changed.
- 7: Rollback Title - A question's title has reverted to a previous version.
- 8: Rollback Body - A post's body has reverted to a previous version - the raw text is stored here.
- 9: Rollback Tags - A question's tags have reverted to a previous version.
- 10: Post Closed - A post was voted to be closed.
- 11: Post Reopened - A post was voted to be reopened.
- 12: Post Deleted - A post was voted to be removed.
- 13: Post Undeleted - A post was voted to be restored.
- 14: Post Locked - A post was locked by a moderator.
- 15: Post Unlocked - A post was unlocked by a moderator.
- 16: Community Owned - A post has become community owned.
- 17: Post Migrated - A post was migrated.
- 18: Question Merged - A question has had another, deleted question merged into itself.
- 19: Question Protected - A question was protected by a moderator
- 20: Question Unprotected - A question was unprotected by a moderator
- 21: Post Disassociated - An admin removes the OwnerUserId from a post.
- 22: Question Unmerged - A previously merged question has had its answers and votes restored.
PostId
RevisionGUID: At times more than one type of history record can be recorded by a single action. All of these will be grouped using the same RevisionGUID
CreationDate: "2009-03-05T22:28:34.823"
UserId
UserDisplayName: populated if a user has been removed and no longer referenced by user Id
Comment: This field will contain the comment made by the user who edited a post
Text: A raw version of the new value for a given revision
If PostHistoryTypeId = 10, 11, 12, 13, 14, or 15 this column will contain a JSON encoded string with all users who have voted for the PostHistoryTypeId
If PostHistoryTypeId = 17 this column will contain migration details of either "from <url>" or "to <url>"
CloseReasonId
1: Exact Duplicate - This question covers exactly the same ground as earlier questions on this topic; its answers may be merged with another identical question.
2: off-topic
3: subjective
4: not a real question
7: too localized
postlinks.xml
Id
CreationDate
PostId
RelatedPostId
PostLinkTypeId
1: Linked
3: Duplicate
users.xml
Id
Reputation
CreationDate
DisplayName
EmailHash
LastAccessDate
WebsiteUrl
Location
Age
AboutMe
Views
UpVotes
DownVotes
votes.xml
Id
PostId
VoteTypeId
1: AcceptedByOriginator
2: UpMod
3: DownMod
4: Offensive
5: Favorite - if VoteTypeId = 5 UserId will be populated
6: Close
7: Reopen
8: BountyStart
9: BountyClose
10: Deletion
11: Undeletion
12: Spam
13: InformModerator
CreationDate
UserId (only for VoteTypeId 5)
BountyAmount (only for VoteTypeId 9)
Databases are created as soon as they are named. Either dot notation or dictionary notation can be used. The same applies to collections. | db = client.stackoverflow
db = client['stackoverflow']
db | mongo/sesion4.ipynb | dsevilla/bdge | mit |
Databases are made up of a set of collections. Each collection groups together a set of objects (documents) of the same type, although, as we saw in the theory sessions, each document may have a different set of attributes. | posts = db.posts
posts | mongo/sesion4.ipynb | dsevilla/bdge | mit |
Importing the CSV files. For now, we create a separate collection for each one. Later we will study how to optimize access using aggregation. | import os
import os.path as path
from urllib.request import urlretrieve
def download_csv_upper_dir(baseurl, filename):
file = path.abspath(path.join(os.getcwd(),os.pardir,filename))
if not os.path.isfile(file):
urlretrieve(baseurl + '/' + filename, file)
baseurl = 'http://neuromancer.inf.um.es:8080/es.stackoverflow/'
download_csv_upper_dir(baseurl, 'Posts.csv')
download_csv_upper_dir(baseurl, 'Users.csv')
download_csv_upper_dir(baseurl, 'Tags.csv')
download_csv_upper_dir(baseurl, 'Comments.csv')
download_csv_upper_dir(baseurl, 'Votes.csv')
import csv
from datetime import datetime
def csv_to_mongo(file, coll):
"""
    Load a CSV file into Mongo. file specifies the file and coll the collection
    within the database. Columns whose names contain 'date' are interpreted
    as dates.
"""
    # Convert every element that can be converted into a number
def to_numeric(d):
try:
return int(d)
except ValueError:
try:
return float(d)
except ValueError:
return d
def to_date(d):
"""To ISO Date. If this cannot be converted, return NULL (None)"""
try:
return datetime.strptime(d, "%Y-%m-%dT%H:%M:%S.%f")
except ValueError:
return None
coll.drop()
with open(file, encoding='utf-8') as f:
        # The csv.reader() call creates an iterator over the CSV file
reader = csv.reader(f, dialect='excel')
# Se leen las columnas. Sus nombres se usarán para crear las diferentes columnas en la familia
columns = next(reader)
# Las columnas que contienen 'Date' se interpretan como fechas
func_to_cols = list(map(lambda c: to_date if 'date' in c.lower() else to_numeric, columns))
docs=[]
for row in reader:
row = [func(e) for (func,e) in zip(func_to_cols, row)]
docs.append(dict(zip(columns, row)))
coll.insert_many(docs)
csv_to_mongo('../Posts.csv',db.posts)
csv_to_mongo('../Users.csv',db.users)
csv_to_mongo('../Votes.csv',db.votes)
csv_to_mongo('../Comments.csv',db.comments)
csv_to_mongo('../Tags.csv',db.tags)
posts.count_documents({}) | mongo/sesion4.ipynb | dsevilla/bdge | mit
The collection API in Python can be found here: https://api.mongodb.com/python/current/api/pymongo/collection.html. Most books and references show Mongo being used from JavaScript, since the MongoDB shell accepts that language. The syntax differs slightly from Python's; the link above covers the Python version.
Index creation
For the map-reduce and aggregation processes to perform better, we will create indexes on the attributes that will be used as keys... Beware: if they are not created, queries can take a very long time. | (
db.posts.create_index([('Id', pymongo.HASHED)]),
db.comments.create_index([('Id', pymongo.HASHED)]),
db.users.create_index([('Id', pymongo.HASHED)])
) | mongo/sesion4.ipynb | dsevilla/bdge | mit |
Map-Reduce
MongoDB includes two APIs for processing and searching documents: the Map-Reduce API and the aggregation API. We will look at Map-Reduce first. Manual: https://docs.mongodb.com/manual/aggregation/#map-reduce | from bson.code import Code
map = Code(
'''
function () {
emit(this.OwnerUserId, 1);
}
''')
reduce = Code(
'''
function (key, values)
{
return Array.sum(values);
}
''')
results = posts.map_reduce(map, reduce, "posts_by_userid")
posts_by_userid = db.posts_by_userid
list(posts_by_userid.find()) | mongo/sesion4.ipynb | dsevilla/bdge | mit |
A query can be added to specify which elements we want to work on (query):
The map_reduce function can take a series of keyword arguments, the same ones specified in the documentation:
query: Restricts the data that is processed
sort: Sorts the input documents by some key
limit: Limits the number of results
out: Specifies the output collection and other options. We will look at it below.
etc.
The out parameter specifies the collection in which the map-reduce results will be stored. By default, the source collection. (All the parameters are here: https://docs.mongodb.com/manual/reference/command/mapReduce/#mapreduce-out-cmd). In the map_reduce() operation we can specify the output collection, but we can also add a final out={...} parameter.
There are several possibilities for out:
replace: Replaces the collection, if there is one, with the specified one (e.g.: out={ "replace" : "coll" }).
merge: Merges with the existing collection, replacing any existing documents with the generated ones.
reduce: If a document with the same _id already exists in the collection, the reduce function is applied to merge both documents and produce a new one.
Below, when solving the exercise of creating post_comments with map-reduce, we will see how these possibilities are used.
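For example, reusing the map and reduce functions defined above, the two non-default modes look like this (a sketch; the target collection name is the one created earlier):
python
# Merge the results into the existing collection, overwriting clashing documents:
posts.map_reduce(map, reduce, out={'merge': 'posts_by_userid'})
# Or combine documents that share an _id by applying the reduce function to both:
posts.map_reduce(map, reduce, out={'reduce': 'posts_by_userid'})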
There are also collection-specific operations, such as count(), groupby() and distinct(): | db.posts.distinct('Score') | mongo/sesion4.ipynb | dsevilla/bdge | mit
EXERCISE (solved): Using the Map-Reduce API, build a collection 'post_comments', where a 'Comments' field is added to each Post containing the list of all the comments that refer to that Post.
We will walk through the solution to this exercise so it can serve as an example for the ones you have to implement next. First of all, a map/reduce operation can only be executed over a single collection, so it can only contain results from that collection. Therefore, the whole exercise cannot be completed with a single map/reduce operation.
So, to begin with, it seems useful to group together all the comments made on a particular Post. In each comment, the PostId attribute is a reference to the Post it refers to.
How the map() and reduce() operations are built matters. First, the map() function runs for every document (or for every document matching the condition, if the query= modifier is used). However, the reduce() function will not run unless there is more than one element associated with the same key.
Therefore, the output of the map() function must have the same shape as that of the reduce() function. In our case, it is a JSON object of the form:
{ type: 'comment', comments: [ comment1, comment2 ] }
Note that, in the case where only the map() function runs, the object has the same composition, but with an array of just one element (comment): itself. | from bson.code import Code
comments_map = Code('''
function () {
emit(this.PostId, { type: 'comment', comments: [this]});
}
''')
comments_reduce = Code('''
function (key, values) {
comments = [];
values.forEach(function(v) {
if ('comments' in v)
comments = comments.concat(v.comments)
})
return { type: 'comment', comments: comments };
}
''')
db.comments.map_reduce(comments_map, comments_reduce, "post_comments")
list(db.post_comments.find()[:10]) | mongo/sesion4.ipynb | dsevilla/bdge | mit |
This shows that, in general, the data schema in MongoDB would not look like this from the start.
After the first map/reduce step, we have to build the final collection that associates each Post with its comments. Since we previously built the post_comments collection indexed by the Post Id, we can now use a map/reduce run that merges the data in post_comments with the data in posts.
We will run the second map/reduce over posts, so that the result is complete even for Posts that do not appear in any comments and will therefore have an empty comments attribute.
In this case, we must make the map() function produce documents that are also indexed by the Id attribute, and, since there is only one per Id, the reduce() function will not run for them alone. It will only run to merge both collections, so the reduce() function must be prepared to merge objects of type "comment" with Posts. In any case, as can be seen, it is also valid even when called with only a Post-type object. Finally, the map() function initially prepares each Post object with an empty list of comments. | posts_map = Code("""
function () {
this.comments = [];
emit(this.Id, this);
}
""")
posts_reduce = Code("""
function (key, values) {
comments = []; // The set of comments
obj = {}; // The object to return
values.forEach(function(v) {
if (v['type'] === 'comment')
comments = comments.concat(v.comments);
else // Object
{
obj = v;
// obj.comments will always be there because of the map() operation
comments = comments.concat(obj.comments);
}
})
// Finalize: Add the comments to the object to return
obj.comments = comments;
return obj;
}
""")
db.posts.map_reduce(posts_map, posts_reduce, out={'reduce' : 'post_comments'})
list(db.post_comments.find()[:10]) | mongo/sesion4.ipynb | dsevilla/bdge | mit |
Aggregation Framework
Aggregation framework: https://docs.mongodb.com/manual/reference/operator/aggregation/. And here is an interesting presentation on the topic: https://www.mongodb.com/presentations/aggregation-framework-0?jmp=docs&_ga=1.223708571.1466850754.1477658152
<video style="width:100%;" src="https://docs.mongodb.com/manual/_images/agg-pipeline.mp4" controls> </video>
Projection: | respuestas = db['posts'].aggregate( [ {'$project' : { 'Id' : True }}, {'$limit': 20} ])
list(respuestas) | mongo/sesion4.ipynb | dsevilla/bdge | mit |
Lookup! | respuestas = posts.aggregate( [
{'$match': { 'Score' : {'$gte': 40}}},
{'$lookup': {
'from': "users",
'localField': "OwnerUserId",
'foreignField': "Id",
'as': "owner"}
}
])
list(respuestas) | mongo/sesion4.ipynb | dsevilla/bdge | mit |
$lookup produces an array with all the results. The $arrayElemAt operator accesses the first element of that array. | respuestas = db.posts.aggregate( [
{'$match': { 'Score' : {'$gte': 40}}},
{'$lookup': {
'from': "users",
'localField': "OwnerUserId",
'foreignField': "Id",
'as': "owner"}
},
{ '$project' :
{
'Id' : True,
'Score' : True,
'username' : {'$arrayElemAt' : ['$owner.DisplayName', 0]},
'owner.DisplayName' : True
}}
])
list(respuestas) | mongo/sesion4.ipynb | dsevilla/bdge | mit |
$unwind can also be used. It "unfolds" each row into one row per element of the array. In this case, since we know the array contains only one element, there will be only one row per original row, but without the array. Finally, whichever field we want can be projected. | respuestas = db.posts.aggregate( [
{'$match': { 'Score' : {'$gte': 40}}},
{'$lookup': {
'from': "users",
'localField': "OwnerUserId",
'foreignField': "Id",
'as': "owner"}
},
{ '$unwind': '$owner'},
{ '$project' :
{
'username': '$owner.DisplayName'
}
}
])
list(respuestas) | mongo/sesion4.ipynb | dsevilla/bdge | mit |
Example: implementing query RQ4
As an example of a complex query with the Aggregation Framework, here is a possible solution to query RQ4: | RQ4 = db.posts.aggregate( [
{ "$match" : {"PostTypeId": 2}},
{'$lookup': {
'from': "posts",
'localField': "ParentId",
'foreignField': "Id",
'as': "question"
}
},
{
'$unwind' : '$question'
},
{
'$project' : { 'OwnerUserId': True,
'OP' : '$question.OwnerUserId'
}
},
{
'$group' : {'_id' : {'min' : { '$min' : ['$OwnerUserId' , '$OP'] },
'max' : { '$max' : ['$OwnerUserId' , '$OP'] }},
'pairs' : {'$addToSet' : { '0q': '$OP', '1a': '$OwnerUserId'}}
}
},
{
'$project': {
'pairs' : True,
'npairs' : { '$size' : '$pairs'}
}
},
{
'$match' : { 'npairs' : { '$eq' : 2}}
}
])
RQ4 = list(RQ4)
RQ4 | mongo/sesion4.ipynb | dsevilla/bdge | mit |
The explanation is as follows:
Only the answers are selected
The posts collection is accessed to retrieve the question's data
Next, only the asking user and the answering user are projected
The most imaginative step is the grouping. The goal is for both pairs of users related as asker -> answerer and vice versa to end up under the same key. To achieve this, the maximum and minimum of the two user identifiers are taken and a key is built with both numbers in the same positions. That way, both combinations of asking and answering user fall under the same key. A set is also used (in pairs), so that identical asker/answerer combinations are only added once.
We are only interested in those tuples whose set of question/answer pairs has exactly two elements (in one element one of the two users asked and the other answered, and in the other element vice versa).
The Map-Reduce implementation can be done following the same idea.
If we want to keep a reference to the questions and answers involved in the conversation, one more field can be added that stores all the questions together with their considered answers. | RQ4 = db.posts.aggregate( [
{'$match': { 'PostTypeId' : 2}},
{'$lookup': {
'from': "posts",
'localField': "ParentId",
'foreignField': "Id",
'as': "question"}
},
{
'$unwind' : '$question'
},
{
'$project' : {'OwnerUserId': True,
'QId' : '$question.Id',
'AId' : '$Id',
'OP' : '$question.OwnerUserId'
}
},
{
'$group' : {'_id' : {'min' : { '$min' : ['$OwnerUserId' , '$OP'] },
'max' : { '$max' : ['$OwnerUserId' , '$OP'] }},
'pairs' : {'$addToSet' : { '0q':'$OP', '1a': '$OwnerUserId'}},
'considered_pairs' : { '$push' : {'QId' : '$QId', 'AId' : '$AId'}}
}
},
{
'$project': {
'pairs' : True,
'npairs' : { '$size' : '$pairs'},
'considered_pairs' : True
}
},
{
'$match' : { 'npairs' : { '$eq' : 2}}
}
])
RQ4 = list(RQ4)
RQ4
(db.posts.find_one({'Id': 238}), db.posts.find_one({'Id': 243}),
db.posts.find_one({'Id': 222}), db.posts.find_one({'Id': 223})) | mongo/sesion4.ipynb | dsevilla/bdge | mit |
Example query: average time from when a question is asked until its first answer
Let's see how to compute the average time from when a question is asked until its first answer is given. In this case, the answers can be used to point back to the question they correspond to. Questions without any answer will therefore not be considered, which is reasonable. However, the map function must also store the questions in order to compute the shortest time (the first answer). | from bson.code import Code
# The map function will group all the answers, but it also needs the questions
mapcode = Code("""
function () {
if (this.PostTypeId == 2)
emit(this.ParentId, {q: null, a: {Id: this.Id, CreationDate: this.CreationDate}, diff: null})
else if (this.PostTypeId == 1)
emit(this.Id, {q: {Id: this.Id, CreationDate: this.CreationDate}, a: null, diff: null})
}
""")
reducecode = Code("""
function (key, values) {
    q = null // Question
    a = null // Answer with the date closest to the question
values.forEach(function(v) {
        if (v.q != null) // Question
q = v.q
        if (v.a != null) // Answer
{
if (a == null || v.a.CreationDate < a.CreationDate)
a = v.a
}
})
mindiff = null
if (q != null && a != null)
mindiff = a.CreationDate - q.CreationDate;
return {q: q, a: a, diff: mindiff}
}
""")
db.posts.map_reduce(mapcode, reducecode, "min_response_time")
mrt = list(db.min_response_time.find())
from pandas.io.json import json_normalize
df = json_normalize(mrt)
df.index=df["_id"]
df
df['value.diff'].plot() | mongo/sesion4.ipynb | dsevilla/bdge | mit |
This only computes the minimum time from each question to its first answer. We would then have to apply what we saw in other examples to compute the average. With aggregation, shown next, the average can indeed be computed in a relatively simple way: | min_answer_time = db.posts.aggregate([
{"$match" : {"PostTypeId" : 2}},
{
'$group' : {'_id' : '$ParentId',
# 'answers' : { '$push' : {'Id' : "$Id", 'CreationDate' : "$CreationDate"}},
'min' : {'$min' : "$CreationDate"}
}
},
{ "$lookup" : {
'from': "posts",
'localField': "_id",
'foreignField': "Id",
'as': "post"}
},
{ "$unwind" : "$post"},
{"$project" :
{"_id" : True,
"min" : True,
#"post" : True,
"diff" : {"$subtract" : ["$min", "$post.CreationDate"]}}
},
# { "$sort" : {'_id' : 1} }
{
"$group" : {
"_id" : None,
"avg" : { "$avg" : "$diff"}
}
}
])
min_answer_time = list(min_answer_time)
min_answer_time | mongo/sesion4.ipynb | dsevilla/bdge | mit |
Vertex SDK: AutoML training image object detection model for export to edge
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online_export_edge.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online_export_edge.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online_export_edge.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to create an image object detection model with Google Cloud AutoML and export it as an Edge model.
Dataset
The dataset used for this tutorial is the Salads category of the OpenImages dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese.
Objective
In this tutorial, you create an AutoML image object detection model from a Python script using the Vertex SDK, and then export the model as an Edge model in TFLite format. Alternatively, you can create models with AutoML using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
Export the Edge model from the Model resource to Cloud Storage.
Download the model locally.
Make a local prediction.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of the Vertex SDK for Python. | import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Tutorial
Now you are ready to start creating your own AutoML image object detection model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage. | IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv" | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Quick peek at your data
This tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. | if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Create the Dataset
Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes. | dataset = aip.ImageDataset.create(
display_name="Salads" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box,
)
print(dataset.resource_name) | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: An image classification model.
object_detection: An image object detection model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
model_type: The type of model for deployment.
CLOUD: Deployment on Google Cloud
CLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.
CLOUD_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on Google Cloud.
MOBILE_TF_VERSATILE_1: Deployment on an edge device.
MOBILE_TF_HIGH_ACCURACY_1:Optimized for accuracy over latency for deployment on an edge device.
MOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.
base_model: (optional) Transfer learning from existing Model resource -- supported for image classification only.
The instantiated object is the DAG (directed acyclic graph) for the training job. | dag = aip.AutoMLImageTrainingJob(
display_name="salads_" + TIMESTAMP,
prediction_type="object_detection",
multi_label=False,
model_type="MOBILE_TF_LOW_LATENCY_1",
base_model=None,
)
print(dag) | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
budget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour).
disable_early_stopping: If True, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 20 minutes.
dataset=dataset,
model_display_name="salads_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=20000,
disable_early_stopping=False,
) | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project. | # Get model resource ID
models = aip.Model.list(filter="display_name=salads_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation) | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |