video_id | text | start_second | end_second | url | title | thumbnail |
---|---|---|---|---|---|---|
HGYYEUSm-0Q | chain that gives us one family of models of which the main example is the generative stochastic network and then finally if we would like to draw samples directly we have models like generative adversarial networks or deep moment matching networks which are both examples of models that can draw samples directly but don't necessarily represent a density function so now let's look at | 1,005 | 1,028 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1005s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | each of these in a little bit more detail and describe exactly what the advantages and disadvantages of them are and why you might want to be in one branch of the tree or another so first fully visible belief networks are the most mathematically straightforward they use the chain rule of probability to decompose the probability distribution over a vector into a product over each | 1,028 | 1,050 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1028s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | of the members of the vector we write down a probability distribution for the distribution over X 1 and then we multiply that by the distribution over X 2 given X 1 and then X 3 given X 1 and X 2 and so on until we finally have a distribution over the final member of the vector given all of the other members of the vector so this goes back to a paper by Brendan Frey in 1996 but | 1,050 | 1,073 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1050s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | has had several other advancements in the meantime the current most popular member of this model family is the pixel CNN and I show here some samples of elephants that it generated the primary disadvantage of this approach is that generating a sample is very slow each time we want to sample a different X I from the vector X we need to run the model again and these n different times | 1,073 | 1,101 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1073s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | that we run the model cannot be parallelized each of these operations of sampling another X is dependent on all of the earlier X I values and that means that there's really no choice but to schedule them one after another regardless of how much bandwidth we have available one other smaller drawback is that the generation process is not guided by a latent code many of the | 1,101 | 1,127 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1101s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | other models that we study have a latent code that we can sample first that describes the entire vector to be generated and then the rest of the process involves translating that vector into something that lies in the data space and that allows us to do things like have embeddings that are useful for semi-supervised learning or generating samples that have particular properties | 1,127 | 1,150 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1127s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | that we're interested in fully visible belief networks don't do this out of the box but there are different extensions of them that can enable these abilities one very recent example of a fully visible belief net is WaveNet and it shows both some of the advantages and some of the disadvantages of these fully visible belief networks first because the optimization process is very | 1,150 | 1,174 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1150s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | straightforward it's just minimizing a cost function with no approximation to that cost function it's very effective and generates really amazing samples but the disadvantage is that the sample generation is very slow in particular it takes about two minutes to generate one second of audio and that means that barring some major improvement in the way that we're able to run the model | 1,174 | 1,196 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1174s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | it's not going to be able to be used for interactive dialogue any time soon even though it is able to generate very good lifelike audio waveforms the other major family of explicit tractable density models is the family of models based on the change of variables where we begin with a simple distribution like a Gaussian and we use a non-linear function to transform that distribution | 1,196 | 1,222 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1196s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | into another space so we transform from a latent space to on this slide the space of natural images the main drawback to this approach is that the transformation must be carefully designed to be invertible and to have a tractable Jacobian and in fact a tractable determinant of the Jacobian in particular this requirement says that the latent variables must have the same | 1,222 | 1,246 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1222s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | dimensionality as the data space so if we want to generate 3,000 pixels we need to have 3,000 latent variables it makes it harder to design the model to have exactly the capacity that we would like to have another major family of models is those that have intractable density functions but then use tractable approximations to those density functions currently one of the most popular | 1,246 | 1,273 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1246s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | members of this family is the variational auto-encoder the basic idea is to write down a density function log P of X where the density is intractable because we need to marginalize out a random variable Z Z is a vector of latent variables that provide a hidden code describing the input image and because of the process of marginalizing these variables out to recover simply the distribution over X | 1,273 | 1,301 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1273s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | is intractable we're forced to use instead a variational approximation this variational approximation introduces a distribution Q over the latent variable Z and to the extent that this distribution Q is closer to the true posterior over the latent variables we're able to make a bound that becomes tighter and tighter and does a better job of lower bounding the true density | 1,301 | 1,326 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1301s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | unfortunately this model is only asymptotically consistent if this Q distribution is perfect otherwise there's a gap between the lower bound and the actual density so even if the optimizer is perfect and even if we have infinite training data we are not able to recover exactly the distribution that was used to generate the data in practice we observe that variational autoencoders are very good | 1,326 | 1,351 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1326s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | at obtaining high likelihood but they tend to produce lower quality samples and in particular the samples are often relatively blurry another major family of models is the Boltzmann machine these also have an explicit density function that is not actually tractable in this case the Boltzmann machine is defined by an energy function and the probability of a particular state is proportional to e to | 1,351 | 1,379 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1351s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | the negative of the energy in order to convert this to an actual probability distribution it's standard to renormalize by dividing by the sum over all the different states and that sum becomes intractable we're able to approximate it using Monte Carlo methods but those Monte Carlo methods often suffer from problems like failing to mix between different modes and in general Monte | 1,379 | 1,400 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1379s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | Carlo methods especially Markov chain Monte Carlo methods perform very poorly in high dimensional spaces because the Markov chains break down for very large images we don't really see Boltzmann machines applied to tasks like modeling ImageNet images they perform very well on small data sets like MNIST but then have never really scaled all of these different | 1,400 | 1,424 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1400s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | observations about the other members of the family tree bring us to generative adversarial networks and explain the design requirements that I had in mind when I thought of this model first they use a latent code that describes everything that's generated later they have this property in common with other models like variational autoencoders and Boltzmann machines but it's an | 1,424 | 1,446 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1424s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | advantage that they have over fully visible belief networks they're also asymptotically consistent if you're able to find the equilibrium point of the game defining a generative adversarial network you're guaranteed that you've actually recovered the true distribution that generates the data modulo sample complexity issues so if you have infinite training data you do | 1,446 | 1,468 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1446s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | eventually recover the correct distribution there are no Markov chains needed neither to train the generative adversarial Network nor to draw samples from it and I felt like that was an important requirement based on the way that the Markov chains had seemed to hold back restricted Boltzmann machines today we've started to see some models that use Markov chains more successfully | 1,468 | 1,490 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1468s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | and I'll describe those later in the talk but that was one of my primary motivations for designing this particular model family finally a major advantage of generative adversarial networks is that they are often regarded as producing the best samples compared to other models like variational autoencoders in the past few months we've started to see other models like | 1,490 | 1,509 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1490s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | pixel CNNs competing with them and it's now somewhat difficult to say which is the best because we don't have a good way of quantifying exactly how good a set of samples are that concludes my description of the different families of generative models and how they relate to each other and how generative adversarial networks are situated in this family of generative models so I'll move on to | 1,509 | 1,535 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1509s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | describing exactly how generative adversarial networks actually work the basic framework is that we have two different models and they're adversaries of each other in the sense of game theory there's a game that has well-defined payoff functions and each of the two players tries to determine how they can get the most payoff possible within this game there are two different networks one of them | 1,535 | 1,563 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1535s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | is called the generator and it is the primary model that we're interested in learning the generator is the model that actually generates samples that are intended to resemble those that were in the training distribution the other model is the discriminator the discriminator is not really necessary after we've finished the training process at least not in the original development | 1,563 | 1,585 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1563s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | of generative adversarial networks there are some ways of getting some extra use out of the discriminator but in the basic setup we can think of the discriminator as a tool that we use during training that can be discarded as soon as training is over the role of the discriminator is to inspect a sample and say whether that sample looks real or fake so the training process consists of | 1,585 | 1,608 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1585s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | sampling images or other kinds of data from the training set and then running the discriminator on those inputs the discriminator is any kind of differentiable function that has parameters that we can learn with gradient descent so we usually represent it as a deep neural network but in principle it could be other kinds of models when the discriminator is applied | 1,608 | 1,633 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1608s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | to images that come from the training set its goal is to output a value that is near one representing a high probability that the input was real rather than fake but half the time we also apply the discriminator to examples that are in fact fake in this case we begin by sampling the latent vector Z in this case we sample Z from the prior distribution over latent variables so Z | 1,633 | 1,659 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1633s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | is essentially a vector of unstructured noise it's a source of randomness that allows the generator to output a wide variety of different vectors we then apply the generator to the input vector Z the generator function is a differentiable function that has parameters that can be learned by gradient descent similar to the discriminator function and we usually | 1,659 | 1,684 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1659s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | represent the generator as being a deep neural network though once again it could be any other kind of model that satisfies those differentiability properties after we have applied G to Z we obtain a sample from the model and ideally this will resemble actual samples from the data set though early in learning it will not after we've obtained that sample we apply the | 1,684 | 1,709 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1684s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | discriminator function D again and this time the goal of the discriminator D is to output a value D of G of Z that is near zero I'm sorry I realized there's a mistake in the slide actually it's backwards the discriminator wants to make the value in this case be near zero and the generator would like to make it be near one so the discriminator would like to reject these | 1,709 | 1,732 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1709s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | samples as being fake while the generator would like to fool the discriminator into thinking that they're real you can think of the generator and the discriminator as being a little bit like counterfeiters and police the counterfeiters are trying to make money that looks realistic and the police are trying to correctly identify counterfeit money and reject it without accidentally | 1,732 | 1,756 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1732s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | rejecting real money as the two adversaries are forced to compete against each other the counterfeiters must become better and better if they want to fool the police and eventually they're forced to make counterfeit money that is identical to real money similarly in this framework the generator must eventually learn to make samples that come from the distribution | 1,756 | 1,778 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1756s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | that generated the data so let's look at the generator network in a little bit more detail we can think of the generator network as being a very simple graphical model shown on the left there's a vector of latent variables Z and there's a vector of observed variables X and depending on the model architecture we usually have every member of X depend on every member of Z | 1,778 | 1,803 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1778s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | so every member of X depends on every member of Z so I've drawn this as just a simple vector-valued model where we see one edge you could also imagine expanding it into a graph of scalar variables where it would be a bipartite directed graph the main reason that generative adversarial networks are relatively simple to train is that we never actually try to infer | 1,803 | 1,827 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1803s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | the probability distribution over Z given X instead we sample values of Z from the prior and then we sample values of X from P of x given Z because that's ancestral sampling in a directed graphical model it's very efficient in particular we accomplish this ancestral sampling by applying the function G to the input variable Z one of the very nice things about the | 1,827 | 1,854 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1827s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | generative adversarial networks framework is that there are not really any requirements other than differentiability on G unlike nonlinear ICA there is no requirement that Z have the same dimension as X for example or Boltzmann machines which require energy functions that are tractable and have different tractable conditional distributions we don't actually need to | 1,854 | 1,880 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1854s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | be careful to design models that have multiple different conditionals that are all tractable in this case we only really need to make one conditional distribution tractable there are a few properties that we'd like to be able to guarantee that impose a few extra requirements on G in particular if we want to be sure that we're able to recover the training distribution we | 1,880 | 1,901 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1880s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | need to make sure that Z has a higher dimension than X or at least an equal dimension this is just to make sure that we aren't forced to represent only a low dimensional manifold in X space an interesting thing is that it's actually possible to train the generator network even if we don't provide support across all of X space if we make Z be lower dimensional than X then we obtain a low | 1,901 | 1,929 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1901s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | dimensional manifold that assigns no probability whatsoever to most points in X space but we're still able to train the model using the discriminator as a guide that's kind of an unusual quirk that sets this framework apart from the methods that are based on maximizing a density function those would break if we evaluated the logarithm of zero density | 1,929 | 1,952 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1929s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | so the training procedure is to choose an optimization algorithm you can pick your favorite one I usually like to use Adam these days and then repeatedly sample two different minibatches of data one of these is a minibatch of training examples that you draw from the data set and the other minibatch is a set of input values Z that we sample from the prior and then feed to the generator we | 1,952 | 1,979 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1952s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | then run gradient descent on both of the players costs simultaneously in one optional variant we can also run the update for the discriminator more often than we run the update for the generator I personally usually just use one update for each player each player has its own cost and the choice of the cost determines exactly how the training algorithm proceeds there are many | 1,979 | 2,006 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=1979s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | different ways of specifying the cost the simplest one is to use a minimax game where we have a cost function J superscript D defining the cost for the discriminator and then the cost for the generator is just the negative of the cost for the discriminator so you can think of this as having a single value that the discriminator is trying to maximize and | 2,006 | 2,030 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2006s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | the generator is trying to minimize so what exactly is this value that the two players are fighting over it's simply the cross-entropy between the discriminator's predictions and the correct labels in the binary classification task of discriminating real data from fake data so we have one term where we're feeding in data and the discriminator is trying to | 2,030 | 2,054 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2030s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | maximize the log probability of assigning one to the data and then we have another term where the discriminator is aiming to maximize the log probability of assigning 0 to the fake samples when we look for an equilibrium point to a game it's different than minimizing a function we're actually looking for a saddle point of J superscript D and if we're able to | 2,054 | 2,077 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2054s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | successfully find this saddle point the whole procedure resembles minimizing the Jensen-Shannon divergence between the data and the distribution represented by the model so as our first exercise which will be accompanied by a little five-minute break we're going to study what the discriminator does when the discriminator plays this game at the top of the slide I've shown the cost | 2,077 | 2,103 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2077s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | function that the discriminator is going to minimize and the exercise is to determine what the solution to D of X is written in terms of the data distribution and the generator distribution you'll also find that you need to make a few assumptions in order to make a clean solution to this exercise so I'll give you about five minutes to work on this exercise or if | 2,103 | 2,126 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2103s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | you don't want to do the exercise feel free to talk with your neighbors or just take a break for a minute so that you don't need to remain attentive for too many consecutive minutes I'm also happy to take questions from the mic during this time if anyone's interested yeah over there yeah my question is what prevents the generator from always generating the same image you see what I mean it could | 2,126 | 2,155 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2126s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | just lazily learn to always generate one single realistic image and be fine with this yeah that's a good question and it's an important part of ongoing research in generative adversarial networks essentially if we're able to correctly play this minimax game then the generator is not able to consistently fool the discriminator by always generating the same sample the | 2,155 | 2,182 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2155s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | discriminator would learn to recognize that individual sample and reject it as being fake in practice it's difficult to find a true equilibrium point of this game and one of the failure modes is actually to generate samples that have too little diversity to them and because of that we're having to study ways to improve our ability to find the equilibrium okay thanks | 2,182 | 2,217 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2182s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | did I yeah over here okay so I'm on your left actually yeah here I'm raising my hand okay so I'm actually learning a bit of GANs as well and variational autoencoders and I see certain resemblances in terms of sampling in this Z space in what case should I when generating samples use a GAN and in what cases should I use variational autoencoders Thanks if your goal is to obtain a high likelihood then | 2,217 | 2,249 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2217s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | you would be better off using a variational autoencoder if your goal is to obtain realistic samples then you would usually be better off using a generative adversarial network rather than a variational autoencoder you can kind of see this in the cost function the generative adversarial network is designed to fool the discriminator into thinking that its samples are realistic and the | 2,249 | 2,271 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2249s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | variational autoencoder is designed to maximize the likelihood how do you sample from the data is it just a uniform distribution or that's also a really good question and I think one that is a topic of ongoing research the naive way of implementing the algorithm and the one that everyone does so far is to sample uniformly from the training data and also to sample uniformly from the z | 2,271 | 2,298 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2271s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | space but you could imagine the importance sampling could give us big improvements in particular most of the points that we train the generator on are wasted because we're usually going to sample from points that are doing pretty well and what we'd really like to do is find points that are doing very badly or maybe points that lie on the boundary between two modes in order to adjust | 2,298 | 2,323 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2298s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | those boundaries so you could imagine that as a procedure for doing importance sampling where we visit latent codes that yield more important aspects of the learning process and then reweight those samples to correct for the bias of the sampling procedure that could actually lead to an improvement so I just have one quick question I'm surprised well extremely impressed | 2,323 | 2,353 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2323s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | by this beautiful algorithm but one thing that I'm rather confused by is why don't strange artifacts appear in the representation that the generator creates and once it's created by the generator if it has any sort of non visually relevant artifact whether it is a non smoothness then that would just mean the discriminator is set up to just win does that make | 2,353 | 2,388 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2353s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | sense yeah that makes sense so there are unusual artifacts that appear in samples created by the generator and in a lot of cases we're fortunate that those artifacts are somewhat compatible with the blind spots in the discriminator one example is if we use a convolutional generator the generator is somewhat inclined to produce unusual tile patterns there's a really good blog post | 2,388 | 2,415 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2388s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | by Augustus Odena Vincent Dumoulin and Chris Olah I'm sorry if I've forgotten any of the authors in that list about the checkerboard patterns that appear when you use deconvolution with large stride in the generator though the good news is that the discriminator is also using convolution presumably with similar stride and so it might actually become blind to the same grid patterns that the | 2,415 | 2,439 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2415s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | generator creates the best answer exactly right but more generally there are a lot of artifacts that come out of the generator that don't really seem all that relevant to the sample creation process and the discriminator spends a lot of its time learning to reject patterns that ideally it would just you know not ever have to encounter in the first place like on MNIST is a very | 2,439 | 2,465 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2439s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | simple data set with just handwritten digits on a background if you look at the weights that the discriminator learns in the first layer they often look a little bit like a Fourier basis so early on in learning they're realizing that the generator often makes a lot of high-frequency stuff and the data doesn't really have that frequency and so the discriminator is looking at this | 2,465 | 2,486 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2465s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | whole spectrum of different frequencies in order to figure out if there's too much of different bands present or not really it seems like it would be much better for the generator to go straight to making pen strokes and the discriminator go straight to paying attention to pen strokes instead of spending all of its time policing exactly how sharp the transitions | 2,486 | 2,507 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2486s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | between neighboring pixels are so I just wanna understand this objective function a little bit better if you fix the generator so that it just does negative sampling or rather let me ask what is the relation between this objective function and a negative sampling approach the kind that are used with like word2vec oh negative sampling for word2vec I haven't | 2,507 | 2,536 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2507s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | really thought about that one connection to negative sampling is when training Boltzmann machines we generate samples from the model in order to estimate the gradient of the log partition function and we call that the negative phase you can think of the generative adversarial network training procedure as being almost entirely negative phase the generator only really learns from the | 2,536 | 2,556 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2536s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | samples it makes and that makes it a little bit like when you carve a statue out of marble you only ever remove things rather than adding things it's kind of a unique peculiarity of this particular training process so in the interest of time I think I should move on to the solution to this exercise but I'll continue taking more questions probably most of them at the next | 2,556 | 2,576 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2556s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | exercise break okay yeah okay so yeah I'll take your question next when I come to exercise two so the solution to exercise 1 and as you recall if you were paying attention to the questions rather than to the exercise we're looking for the optimal discriminator function D of X in terms of P data and P generator to solve this it's best to assume that both P data and P generator | 2,576 | 2,606 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2576s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | are nonzero everywhere if we don't make that assumption then there's this issue that some points in the discriminator's input space might never be sampled during its training process and then those particular inputs would not really have a defined behavior because they're just never trained but if you make those relatively weak assumptions we can then | 2,606 | 2,630 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2606s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | just solve for the functional derivatives where we regard D of X as being almost like this infinite dimensional vector where every x value index is a different member of the vector and we're just solving for a big vector like we're used to doing with calculus so in this case we take the derivative with respect to a particular D of X output value of the cost function | 2,630 | 2,653 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2630s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | and we set it equal to zero it's pretty straightforward to take those derivatives and then from there it's straightforward algebra to solve this stationarity condition and what we get is that the optimal discriminator function is the ratio between P data of X and the sum of P data of X and P model of X so this is the main mathematical technique that sets generative adversarial | 2,653 | 2,678 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2653s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | networks apart from the other models that I described in the family tree some of them use techniques like lower bounds some of them use techniques like Markov chains generative adversarial networks use supervised learning to estimate a ratio of densities and essentially this is the property that makes them really unique supervised learning is able to in the ideal | 2,678 | 2,705 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2678s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | limit of infinite data and perfect optimization it's able to recover exactly the function that we want and the way that it breaks down is different from the other approximations it can suffer from under fitting if the optimizer is not perfect and it can suffer from overfitting if the training data is limited and it doesn't learn to generalize very well from that training | 2,705 | 2,726 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2705s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | data so far I've described everything in terms of a minimax game where there's a single value function and one player tries to maximize it and the other player tries to minimize it we can actually make the game a little bit more complicated where each player has its own independently parameterised cost so in all the different versions of the game we pretty much always want the | 2,726 | 2,748 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2726s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | discriminator to be using the standard version of the game where it's just trying to be a good binary classifier but there are many different things we might consider doing with the generator in particular one really big problem with the minimax game is that when the discriminator becomes too smart the gradient for the generator goes away one of the really nice properties of the | 2,748 | 2,770 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2748s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | cross entropy loss function that we use to train sigmoid classifiers and softmax classifiers is that whenever the classifier is making a mistake whenever it's choosing the wrong class the gradient is guaranteed to be nonzero the gradient of the cross entropy with respect to the logits approaches 1 as the probability assigned to the correct class approaches zero so we can never | 2,770 | 2,795 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2770s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | get in a situation where the classifier is unable to learn due to a lack of gradient either it has gradient and it's making a mistake or it lacks gradient and it's perfect so the discriminator has this particular property but unfortunately if we negate the discriminators cost then the generator has the opposite of that property whenever the generator is failing to | 2,795 | 2,817 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2795s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | fool the discriminator completely then it has no gradient because the output of the discriminator has saturated what we can do is instead of flipping the sign of the discriminators cost we can flip the order of the arguments to the cross-entropy function specifically this means that rather than trying to minimize the log probability of the correct answer we have the generator try | 2,817 | 2,841 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2817s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | to maximize the log probability of the wrong answer both of these cost functions are monotonically decreasing in the same direction but they're steep in different places at this point it's no longer possible to describe the equilibrium with just a single loss function and the motivations for this particular cost are far more heuristic we don't have a good theoretical | 2,841 | 2,864 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2841s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | argument that this place is the Nash equilibrium in the right place but in practice we see that this cost function behaves similar to the minimax cost function early in learning and then later in learning when the minimax function would start to have trouble with saturation and a lack of gradient this cost function continues to learn rapidly so this is the default cost | 2,864 | 2,884 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2864s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | function I usually advocate that most people use even though it's not quite as theoretically appealing generative adversarial networks did not really scale to very large inputs when my co-authors and I first developed them and eventually they were scaled to large images using a hand-designed process called LAPGAN that used a Laplacian pyramid to separate the image into | 2,884 | 2,909 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2884s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | multiple scales and generate each scale independently but more recently the way that they are usually used is following an architecture that was introduced in a collaboration between a start-up called indico and Facebook AI Research this architecture is called the DCGAN architecture for deep convolutional generative adversarial networks even in the original paper generative adversarial | 2,909 | 2,933 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2909s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | networks were deep and convolutional but this paper placed greater emphasis on having multiple convolutional layers and using techniques that were invented after the original development of generative adversarial networks such as batch normalization so in particular when we generate images we might wonder exactly what we should do to increase the resolution as we move through a | 2,933 | 2,954 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2933s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | convolutional network the answer from the DCGAN architecture is just to use a stride of greater than one when using the deconvolution operator another important contribution of the DCGAN paper is to show that it's important to use batch normalization at every layer except for the last layer of the generator network that makes the learning process much more stable and | 2,954 | 2,977 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2954s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | since then GANs have been applied to a wide range of large image generation tasks DCGANs showed that you can generate really good images of bedrooms in particular many different data sets that have a small number of output modes work really well with DCGAN style architectures so here we can see that we're getting realistic beds blankets windows cabinets and so on and that we | 2,977 | 3,005 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=2977s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | have quite a variety of different kinds of lighting and all the different sources of lighting are rendered in a very nice realistic way another domain where generative adversarial networks work well because the number of outputs is restricted is the domain of images of faces DCGANs were shown to work very well on faces and in particular they showed that the | 3,005 | 3,028 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3005s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | latent code is actually very useful for representing faces many of you have probably seen the result that language models that have word embeddings can have properties where the word embedding for Queen if you subtract the word embedding for female and add the word embedding for male gives us a word embedding very close to the word embedding for King so you can actually | 3,028 | 3,051 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3028s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | do algebra in latent space and have it correspond to semantics the authors of the DCGAN paper showed that generative adversarial networks provide a similar property for images in particular if we take the word or the image embedding for images of a man with glasses and subtract the embedding for images of a man and add the embedding for images of a woman we obtain the embedding that | 3,051 | 3,077 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3051s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | corresponds to images of women with glasses all of the images in this slide were generated by the network none of them are training data they all come from decoding different embeddings so this shows that we're able to do algebra in latent space and have that algebra correspond to semantic properties just like with language models but what's even more exciting than language models | 3,077 | 3,099 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3077s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | is that we're actually able to decode this latent variable to a rich high dimensional image where all the different thousands of pixels are actually arranged correctly in relation to each other in the case of language models we only had to find an embedding that was really close to the embedding for the word King but we didn't have to actually map from the embedding to some | 3,099 | 3,121 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3099s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | kind of complicated data space so here we've shown we can go one step further and actually accomplish that mapping task when we try to understand exactly how generative adversarial networks work one thing that's important to think about is whether the particular choice of divergence that we minimize is really important and in the past I and several other people have argued that generative | 3,121 | 3,145 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3121s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | adversarial networks made good samples and obtained bad likelihood because of the divergence that we chose I no longer believe that and I'm going to give you an argument now that the divergence doesn't matter but I will start by explaining to you why you might think that it should so if we maximize the likelihood of the data that's equivalent to | 3,145 | 3,167 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3145s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | minimizing the KL divergence between the data distribution and the model distribution and that's shown on the left in this panel here the data distribution is represented by the blue curves where we have a bimodal data distribution for this example the model distribution is represented by the dashed green curve and in this particular demonstration I'm assuming | 3,167 | 3,190 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3167s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | that the model is a Gaussian with a single mode so it's not able to represent the data distribution correctly so this is what the maximum likelihood solution to this problem would give us the Gaussian ends up averaging out the two different modes the KL divergence is not actually symmetric maximum likelihood corresponds to minimizing the KL divergence with the | 3,190 | 3,211 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3190s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | data on the left and the model on the right but we can actually flip that around we can minimize the KL divergence with the model on the left and the data on the right and when we do that we get a different result where instead of averaging out the two modes the model as shown in the panel on the right here will choose one of the modes we can think of KL data comma model as saying | 3,211 | 3,233 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3211s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | that the model should put probability mass everywhere that the data puts probability mass and we can think of KL model comma data as saying that the model should not put probability mass anywhere that the data does not put probability mass on the left it's really important to have some mass on both peaks on the right it's really important to never generate a sample in the valley | 3,233 | 3,257 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3233s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | between the two peaks because none of the data ever actually occurs there both of these are perfectly legitimate approaches to generative modeling and you can choose one or the other based on whichever task you are using and what the design requirements for that task are the loss that we traditionally use with generative adversarial networks mostly because it was the thing that | 3,257 | 3,276 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3257s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | popped into my head in a bar as Aaron mentioned is pretty similar to the divergence on the right but since that night I've realized that it's possible to use other divergences and several papers by other people have been published on how to use other divergences and I now no longer think that the choice of divergence explains why we get really | 3,276 | 3,296 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3276s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | good samples and don't get as good of likelihood so here's how you can actually get maximum likelihood out of a generative adversarial network where you approximately minimize the KL divergence between data and model rather than model and data for the discriminator Network you use the same cost function as before which is just the binary classification task and for the generator network we | 3,296 | 3,321 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3296s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | now sample from the generator and then we penalize it according to e to the value of the logits of the discriminator and if the discriminator is optimal this has the same expected gradient with respect to the parameters as the KL divergence between the data and the model does so it's approximating maximum likelihood by using supervised learning to estimate a ratio that would be intractable if we | 3,321 | 3,347 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3321s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | were to evaluate the maximum likelihood criterion directly in general we can think of these different costs as being like reward functions we can kind of think of the generator net as being a reinforcement learning agent where it takes actions and we reward its actions depending on the way that the environment responds the thing that makes this particular reinforcement | 3,347 | 3,370 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3347s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | learning setup a little unusual is that part of the environment is another learning agent in particular the discriminator all these different costs have one thing in common you can compute the cost using only the output of the discriminator and then for every sample you just give a reward that depends on exactly what the discriminator did so if we look at a graph of the cost that | 3,370 | 3,397 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3370s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | the generator incurs as a function of the output of the discriminator we can see that all these different costs decrease as we move from left to right essentially that's saying that if you make the discriminator think that the samples that the generator created are real then you incur a very low cost we can see the way that they saturate in | 3,397 | 3,419 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3397s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) |
HGYYEUSm-0Q | places and also we can see how sampling along these curves can give us very different variance in the estimate of the gradient the green curve that lies the highest is the heuristically motivated cost which is designed not to saturate when the generator is making a mistake so if you look at the very extreme left where the discriminator is outputting zeros where the discriminator | 3,419 | 3,440 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=3419s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
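
For reference, the chain-rule factorization that fully visible belief networks use over a vector x with n entries, as described near the start of this section, can be written as:

$$ p_{\text{model}}(\mathbf{x}) = p_{\text{model}}(x_1)\,\prod_{i=2}^{n} p_{\text{model}}\big(x_i \mid x_1, \ldots, x_{i-1}\big) $$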
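A minimal sketch of why sampling from such a model is inherently sequential; `model_conditional` is a hypothetical stand-in for a trained PixelCNN-style conditional over binary entries, not an actual library call:

```python
import numpy as np

def sample_fvbn(model_conditional, n, seed=0):
    """Ancestral sampling from a fully visible belief network over a binary vector x.

    model_conditional(prefix) is a hypothetical callable returning the Bernoulli
    parameter p(x_i = 1 | x_1, ..., x_{i-1}). Each of the n calls depends on the
    previously sampled entries, so the calls cannot be parallelized.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=np.int64)
    for i in range(n):                   # n sequential model evaluations
        p_i = model_conditional(x[:i])   # conditioned on all earlier samples
        x[i] = rng.random() < p_i
    return x
```

Each iteration must wait for the previous one, which is the slowness of sample generation discussed in the talk.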
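The change-of-variables family transforms a simple latent density through an invertible function x = g(z); its density is tractable only when the determinant of the Jacobian is, which is the design constraint mentioned in the talk:

$$ p_x(\mathbf{x}) = p_z\big(g^{-1}(\mathbf{x})\big)\,\left|\det\frac{\partial g^{-1}(\mathbf{x})}{\partial \mathbf{x}}\right| $$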
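The variational bound behind the variational autoencoder: introducing a distribution q(z | x) over the latent variables gives a lower bound on log p(x) that tightens as q approaches the true posterior:

$$ \log p_{\text{model}}(\mathbf{x}) \;\ge\; \mathbb{E}_{\mathbf{z}\sim q(\mathbf{z}\mid\mathbf{x})}\big[\log p_{\text{model}}(\mathbf{x}\mid\mathbf{z})\big] - D_{\mathrm{KL}}\big(q(\mathbf{z}\mid\mathbf{x})\,\|\,p(\mathbf{z})\big) $$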
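The Boltzmann machine density and the partition function whose sum over all states makes the likelihood intractable:

$$ p(\mathbf{x}) = \frac{\exp\big(-E(\mathbf{x})\big)}{Z}, \qquad Z = \sum_{\mathbf{x}'} \exp\big(-E(\mathbf{x}')\big) $$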
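The minimax game described in the middle of the talk, written out: the discriminator's cost is the cross-entropy of the real-versus-fake classification task, the generator's cost is its negation, and solving exercise 1 under the stated assumptions gives the optimal discriminator:

$$ J^{(D)} = -\tfrac{1}{2}\,\mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\big[\log D(\mathbf{x})\big] - \tfrac{1}{2}\,\mathbb{E}_{\mathbf{z}}\big[\log\big(1 - D(G(\mathbf{z}))\big)\big], \qquad J^{(G)} = -J^{(D)} $$

$$ D^{*}(\mathbf{x}) = \frac{p_{\text{data}}(\mathbf{x})}{p_{\text{data}}(\mathbf{x}) + p_{\text{model}}(\mathbf{x})} $$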
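The three generator costs contrasted later in the talk; one standard way of writing them is below, with the minimax cost, the heuristic non-saturating cost that flips the cross-entropy target rather than the sign, and the cost whose expected gradient matches maximum likelihood when the discriminator is optimal (sigma^{-1}(D) denotes the discriminator's logits):

$$ J^{(G)}_{\text{minimax}} = \tfrac{1}{2}\,\mathbb{E}_{\mathbf{z}}\big[\log\big(1 - D(G(\mathbf{z}))\big)\big], \quad J^{(G)}_{\text{heuristic}} = -\tfrac{1}{2}\,\mathbb{E}_{\mathbf{z}}\big[\log D(G(\mathbf{z}))\big], \quad J^{(G)}_{\text{ML}} = -\tfrac{1}{2}\,\mathbb{E}_{\mathbf{z}}\big[\exp\big(\sigma^{-1}(D(G(\mathbf{z})))\big)\big] $$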
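A compact sketch of the training procedure described in the talk (simultaneous gradient steps on both players' costs), using PyTorch and the non-saturating generator cost; the toy Gaussian data, network sizes, and Adam hyperparameters are illustrative assumptions rather than details from the talk:

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 8, 2, 128

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()  # cross-entropy on the discriminator's logits

for step in range(1000):
    # Minibatch of "real" data (a toy Gaussian stands in for the training set).
    x_real = torch.randn(batch_size, data_dim) + 2.0
    # Minibatch of latent codes z sampled from the prior, mapped through the generator.
    z = torch.randn(batch_size, latent_dim)
    x_fake = G(z)

    # Discriminator step: binary classification of real versus fake samples.
    d_loss = bce(D(x_real), torch.ones(batch_size, 1)) + \
             bce(D(x_fake.detach()), torch.zeros(batch_size, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: non-saturating cost, i.e. maximize log D(G(z)).
    g_loss = bce(D(x_fake), torch.ones(batch_size, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

As in the talk, one update per player per iteration is used; running extra discriminator updates is an optional variant.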
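The two directions of KL divergence contrasted in the discussion of maximum likelihood: the first forces the model to put mass everywhere the data does, the second penalizes putting mass where the data has none:

$$ D_{\mathrm{KL}}\big(p_{\text{data}}\,\|\,p_{\text{model}}\big) = \mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\!\left[\log\frac{p_{\text{data}}(\mathbf{x})}{p_{\text{model}}(\mathbf{x})}\right], \qquad D_{\mathrm{KL}}\big(p_{\text{model}}\,\|\,p_{\text{data}}\big) = \mathbb{E}_{\mathbf{x}\sim p_{\text{model}}}\!\left[\log\frac{p_{\text{model}}(\mathbf{x})}{p_{\text{data}}(\mathbf{x})}\right] $$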
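A rough sketch of a DCGAN-style generator in the spirit described in the talk, increasing resolution with stride-2 transposed convolutions and using batch normalization in every layer except the output layer; the channel counts and the 64x64 RGB output size are illustrative assumptions:

```python
import torch.nn as nn

def dcgan_generator(latent_dim=100, base_channels=64):
    """DCGAN-style generator: upsample the latent code with stride-2 transposed
    convolutions; batch norm everywhere except the final output layer."""
    return nn.Sequential(
        nn.ConvTranspose2d(latent_dim, base_channels * 8, 4, 1, 0, bias=False),
        nn.BatchNorm2d(base_channels * 8), nn.ReLU(True),         # 4x4
        nn.ConvTranspose2d(base_channels * 8, base_channels * 4, 4, 2, 1, bias=False),
        nn.BatchNorm2d(base_channels * 4), nn.ReLU(True),         # 8x8
        nn.ConvTranspose2d(base_channels * 4, base_channels * 2, 4, 2, 1, bias=False),
        nn.BatchNorm2d(base_channels * 2), nn.ReLU(True),         # 16x16
        nn.ConvTranspose2d(base_channels * 2, base_channels, 4, 2, 1, bias=False),
        nn.BatchNorm2d(base_channels), nn.ReLU(True),             # 32x32
        nn.ConvTranspose2d(base_channels, 3, 4, 2, 1, bias=False),
        nn.Tanh(),                                                # 64x64 RGB in [-1, 1]
    )
```

The latent input z is expected to have shape (N, latent_dim, 1, 1) so the first transposed convolution projects it to a 4x4 feature map.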