Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://www.youtube.com/watch?v=HGYYEUSm-0Q

Transcript segment; bracketed timestamps give the offset into the video in seconds.
[t=3440s] ...is successfully rejecting the generator samples, this cost function has a high derivative value, so the model is able to learn rapidly early on, when its samples do not yet look realistic. Then if we move downward in the series of plots, the blue curve, the minimax curve, is the one that we originally used to design this model framework and the one that's the easiest to analyze using the minimax theorem. This curve is relatively flat most of the way across and starts to curve down gently as the samples become more realistic. And then finally the maximum likelihood cost, which has the negation of an exponential function in it, is very flat on the left side but then shoots off exponentially downward as we get very far to the right. So we can see that we would actually incur very high variance in the estimate of the gradient if we were to use that particular function, because almost all the gradient comes from a single member of the minibatch, whichever one is the most realistic. Because of that, we don't usually use the maximum likelihood cost with generative adversarial networks; we use one of the other costs that has nicer saturation properties and nicer variance properties. But it is a perfectly legitimate cost.
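As a rough sketch of the three generator costs being compared here (my own illustration, not code from the tutorial; it assumes the discriminator returns a pre-sigmoid logit l for each generated sample, so D(G(z)) = sigmoid(l)):

    import numpy as np

    def generator_costs(fake_logits):
        """Three generator costs, given pre-sigmoid discriminator outputs
        l = logit(D(G(z))) on a minibatch of generated samples."""
        def softplus(l):
            return np.logaddexp(0.0, l)   # log(1 + exp(l)), computed stably
        # Minimax cost: E[log(1 - D(G(z)))] = E[-softplus(l)].
        # Nearly flat (tiny gradient) while the discriminator confidently rejects samples.
        minimax = np.mean(-softplus(fake_logits))
        # Heuristic non-saturating cost: -E[log D(G(z))] = E[softplus(-l)].
        # Steep early in training, when samples are easy to reject.
        non_saturating = np.mean(softplus(-fake_logits))
        # Maximum likelihood cost: -E[exp(l)], the negated exponential mentioned above.
        # Almost all of its gradient comes from the single most realistic sample,
        # hence the very high variance.
        max_likelihood = np.mean(-np.exp(fake_logits))
        return minimax, non_saturating, max_likelihood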
[t=3516s] And when we go ahead and use that cost to train: there are actually a few other ways of approximating the KL divergence, but none of the different ways of approximating the KL divergence give us blurry samples like we get with a VAE. We used to think that the VAE was using the KL divergence and got blurry samples, and GANs were using the reverse KL divergence and got sharp samples, but now that we're able to do both divergences with GANs, we see that we get sharp samples both ways. My interpretation of this is that it is the approximation strategy of using supervised learning to estimate the density ratio that leads to the samples being very sharp, and that something about the variational bound is what leads to the samples for the VAE being blurry. There's one other possibility, which is that the model architectures we use for generative adversarial nets are usually a little bit different. VAEs usually are conditionally Gaussian and usually have an isotropic Gaussian at the output layer; generative adversarial networks don't need to have any particular conditional distribution that you can evaluate, so the last layer is often just a linear layer, which would look kind of like a Gaussian distribution with a complete covariance matrix instead of a restricted covariance matrix. So it's possible that that complete covariance matrix at the last layer removes some of the blurriness. But we no longer think that the choice of the divergence is really important to understanding how generative adversarial networks behave.
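For reference (my notation, not a slide from the talk), the two divergences being contrasted are

\[
D_{\mathrm{KL}}(p_{\mathrm{data}} \,\|\, p_{\mathrm{model}})
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{model}}(x)}\right],
\qquad
D_{\mathrm{KL}}(p_{\mathrm{model}} \,\|\, p_{\mathrm{data}})
  = \mathbb{E}_{x \sim p_{\mathrm{model}}}\!\left[\log \frac{p_{\mathrm{model}}(x)}{p_{\mathrm{data}}(x)}\right],
\]

where maximum likelihood corresponds to minimizing the first (forward) direction, and the second (reverse) direction penalizes the model for putting mass where the data has little, so it tends to drop modes rather than blur them.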
[t=3612s] Earlier I showed you a family tree of different generative models, and I said we're going to pretend that all of them do maximum likelihood, and clearly they don't actually do that. Now that we've seen how generative adversarial networks work in a little bit more detail, we can actually start to describe exactly how it is that they compare to some of the more similar generative models. In particular, noise contrastive estimation is a procedure for fitting many different generative models, including Boltzmann machines and other different types of generator nets, and noise contrastive estimation uses exactly the same value function that we use as the value function for the minimax game for generative adversarial nets. So a lot of people look at this and think maybe these two methods are almost the same thing, and I myself wondered about that for a little while. It turns out that actually this same value function also appears for maximum likelihood, if you look at it the right way. What this value function consists of is: on the left we have a term where we sample values from the data and we measure the log discriminator function; on the right we sample values from a generator function and we measure the log of one minus the discriminator function. It turns out that the differences between noise contrastive estimation, maximum likelihood estimation, and generative adversarial nets all revolve around exactly what the generator, the discriminator, and the learning process are.
[t=3706s] For generative adversarial networks, the discriminator is just a neural network that we parameterize directly; the function D of x is just directly implemented. For both noise contrastive estimation and maximum likelihood estimation, the discriminator is a ratio between the model that we're learning and the sum of the model density and the generator density. That probably got a little bit confusing right there: what is this model that we are learning, and how is it different from the generator? Well, it turns out that for noise contrastive estimation, the generator is used as a source of reference noise, and the model learns to tell samples apart from noise by assigning higher density to the data. So noise contrastive estimation might consist of generating samples from a Gaussian distribution and then training this discriminator function to tell whether a given input comes from the Gaussian distribution or from the data distribution, and it implements that discriminator function by actually implementing an explicit tractable density over the data and by accessing an explicit tractable density over the generator that creates the noise.
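In symbols (my rendering of the value function being described, not copied from a slide), all three methods share

\[
V(D) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
     + \mathbb{E}_{x \sim p_{\mathrm{gen}}}\!\left[\log\!\left(1 - D(x)\right)\right],
\]

and for noise contrastive estimation and this view of maximum likelihood the discriminator is the fixed functional form

\[
D(x) = \frac{p_{\mathrm{model}}(x)}{p_{\mathrm{model}}(x) + p_{\mathrm{gen}}(x)},
\]

whereas for generative adversarial networks D is a directly parameterized neural network.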
[t=3777s] Can I ask a question? Yeah, go ahead.

Because you have this nice slide there: my name is Jürgen Schmidhuber, from the Swiss AI Lab, and I was wondering whether you can relate these very interesting GAN games to the other adversarial network that we had back in 1992, where you had two types of network fighting each other, also playing a minimax game, where one of them tried to minimize an error function that the others were maximizing. It was not exactly like that, but it was very similar in many ways, because there you had an image coming in, and then you had these code layers, like in an autoencoder, and then you tried to find a representation, an initially random representation, of the image. But then for each of these units in the code layer there was a predictor which tried to predict this code unit from the other guys in the code layer, and the predictors tried to minimize the error, while the feature detectors, the code units, tried to maximize it, trying to become as unpredictable as possible. Now this is closely related to coming up with this reference noise vector that you just mentioned, because of course then, in the code layer, you basically get, in the ideal case, a factorial code, where each of these units is statistically independent of the other units but still tells you a lot about the image. So you can still attach an autoencoder to that and then get a generative distribution: you just wake up the code layer units and randomly activate them according to their probabilities, their factorial code, which means that you get images that just reflect the original distribution of the images. So in many ways very similar, but in other ways different, and I was wondering whether you have comments on the similarities and differences of these old adversarial networks.
[t=3882s] Yeah, so Jürgen has asked me whether I have any comment on the similarities and differences here, but he's in fact aware of my opinion, because we've corresponded about this by email before. I mean, I don't exactly appreciate the public confrontation. If you want to form your own opinion about whether predictability minimization is the same thing as generative adversarial networks, you're welcome to read the paper. One of the NIPS reviewers requested that we add a description of predictability minimization to the generative adversarial networks paper, and we added our comments on the extent to which we think they are similar, which is that they're not particularly similar, to the NIPS final copy, just for completeness.

However, I reacted to exactly these changes, and then you did not comment; it's not sure that you commented or reacted to these confrontations. Yeah, so there are comments which you did not address. I think I would still prefer to use my tutorial to teach about generative adversarial networks; if people want to read about predictability minimization, please do. Sir, just in order to make sure what we will have: so in the related work section, the comments have been added to the NIPS paper.
[t=3982s] So, returning to the comparison to noise contrastive estimation, which is far more similar to generative adversarial networks than predictability minimization is, in that they have exactly the same value function: we find that for noise contrastive estimation the learning of the final generative model occurs in the discriminator, and for the generative adversarial network the learning occurs in the generator. That's one way that they're different from each other, and it has consequences on exactly what they are able to do. An interesting thing is that maximum likelihood estimation also turns out to use this same value function and can also be interpreted as having a discriminative function inside it. The difference between noise contrastive estimation and maximum likelihood estimation is that for noise contrastive estimation the noise distribution is fixed and never changes throughout training. If we choose to use a Gaussian noise distribution as the reference distribution, then in practice learning tends to slow down relatively quickly, once the model has learned to create samples that are easily distinguishable from a Gaussian. In maximum likelihood estimation, we take the parameters of the model distribution and we copy them into the noise distribution, and we do this before each step begins.
[t=4053s] So in some ways the maximum likelihood estimation procedure can be seen as the model constantly trying to learn its own shortcomings and distinguish its own samples from the data. In the generative adversarial networks approach, we constantly update the generator network by following the gradient on its parameters. All three of these approaches constantly follow the gradient on the parameters of the discriminator. So we can see the way that we get some computational savings relative to maximum likelihood by looking at the corners that both noise contrastive estimation and generative adversarial networks cut. For noise contrastive estimation, it's clear that the main corner we cut is that we never update the noise distribution, and that eliminates a lot of computation right there. For generative adversarial networks, the way that we're able to cut a corner is that we don't need to make sure that there's an exact correspondence between a density and a sampler. For maximum likelihood, if we're going to follow this particular implementation of maximum likelihood, we need to be able to sample from the model when we evaluate the term on the right, but we also need to be able to evaluate densities of the model in order to evaluate the D function, and we need to perform computations that convert between that density representation and the sampling procedure. Generative adversarial networks only ever sample from G and only ever evaluate D; there's no need to perform these transitions from densities to sampling procedures, and that provides a lot of computational savings.
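A compact way to summarize this comparison (my own summary of the discussion above, not a slide from the talk):

\[
\begin{aligned}
\text{GAN:} &\quad D \text{ is a directly parameterized network; the generator } G \text{ is what learns.}\\
\text{NCE:} &\quad D(x) = \frac{p_{\mathrm{model}}(x)}{p_{\mathrm{model}}(x) + p_{\mathrm{gen}}(x)}, \text{ with } p_{\mathrm{gen}} \text{ a fixed noise distribution; the model inside } D \text{ is what learns.}\\
\text{MLE:} &\quad \text{the same } D \text{ as NCE, but } p_{\mathrm{gen}} \text{ is set to a copy of } p_{\mathrm{model}} \text{ before each step.}
\end{aligned}
\]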
[t=4167s] So I've completed this section of our roadmap on exactly how it is that generative adversarial networks are able to work from a theoretical point of view, and now I'll move on to a few tips and tricks that should help you make them work better in your own practical applied work. The first really big tip is that labels turn out to really improve the subjective sample quality a lot. As far as I know, this was first observed by Emily Denton and her collaborators at NYU and Facebook AI Research, where they showed that, back when generative adversarial networks didn't work very well at all, you could actually get them to work really well if you made them class conditional. Mehdi Mirza and Simon Osindero had developed a conditional version of the generative adversarial network, where you could give some input value that should control what output should come out, and Emily and her collaborators showed that if you use the class label as that input, you could then create an output image from that class, and these images would be much better than if you had just learned the density over images to begin with.
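As a minimal sketch of the class-conditional conditioning being described (my own simplified code; the real architectures differ), the label is typically fed in alongside the noise vector:

    import numpy as np

    def conditional_gan_inputs(z, y, num_classes):
        """Build class-conditional inputs: concatenate a one-hot label y onto the
        noise vector z for the generator (the discriminator receives the label in
        an analogous way).  Shapes: z is (batch, noise_dim), y is (batch,) of ints."""
        one_hot = np.eye(num_classes)[y]                  # (batch, num_classes)
        g_input = np.concatenate([z, one_hot], axis=1)    # generator sees (z, y)
        return g_input, one_hot

    # usage sketch:
    # z = np.random.randn(64, 100)
    # y = np.random.randint(0, 10, size=64)
    # g_in, y_onehot = conditional_gan_inputs(z, y, num_classes=10)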
[t=4231s] Another thing is that even if you don't want to go fully to the level where you have a class conditional model, you can learn a joint distribution over X and Y, and even if at sample time you don't provide an input Y to request a specific kind of sample, the samples that come out will be better. Tim Salimans and I did this in our paper that we'll be showing at the poster session tonight; it's not a key contribution of our paper, but it's one of the tricks that we use to get better images. One of the caveats about using this trick is that you need to keep in mind that there are now three different categories of models that shouldn't be directly compared to each other: models that are trained entirely without labels, models that are class conditional, and models that are not class conditional but that benefited from the use of labels to guide the training somewhat. It wouldn't really be fair to make a class conditional model and then say that it's strictly superior to some model that didn't use labels to improve its samples at all.
[t=4292s] Another tip that can really help a lot is a technique that I call one-sided label smoothing, and we also introduced this in the paper with Tim that we're showing tonight. The basic idea of one-sided label smoothing is that usually, when you train the discriminator, you're training it to output hard ones on the data and hard zeros on the fake samples, but it's much better if you train it to output a soft value like 0.9 on the data; on the fake samples it should still strive to output zeros. That's why it's called one-sided: we only smooth the side that's on the data. What this will do is, you can think of it as introducing some kind of leak probability that sometimes the data has been mislabeled, that we accidentally gave you something fake and said it was real. In particular, this will reduce the confidence of the model somewhat, so that it will not predict really extreme values. It's important not to smooth the generator samples, and we can see this by working out what the optimal discriminator is. If we smooth by replacing the positive targets with one minus alpha and replacing the negative targets with beta, then we see that we get this ratio of densities again, where in the numerator we have one minus alpha times the data distribution and beta times the model distribution. Because this value in the numerator determines where the output of the discriminator function is large, and therefore determines where the generator wants to steer its samples, we need to make sure that the second term does not appear in the numerator; otherwise we would reinforce the current behavior of the generator. If the generator is making lots of weird pictures of grids, and we assign beta times p_model to those weird pictures of grids in the discriminator, it will just ask the generator to keep making weird pictures of grids forever, and the gradient near those images will not steer it away from them. That's why we always set beta to zero and only smooth using the alpha term, the one on the data side.
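In symbols (my rendering of the argument above), smoothing the positive targets to 1 - alpha and the negative targets to beta gives an optimal discriminator of the form

\[
D^{*}(x) \;=\; \frac{(1-\alpha)\, p_{\mathrm{data}}(x) \;+\; \beta\, p_{\mathrm{model}}(x)}
                    {p_{\mathrm{data}}(x) \;+\; p_{\mathrm{model}}(x)},
\]

so setting beta to zero removes p_model from the numerator and keeps the discriminator's peaks from reinforcing wherever the generator currently places its mass.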
[t=4409s] We didn't invent label smoothing; we're just advocating the one-sided use of it, just for the discriminator. Label smoothing dates back to the 1980s, I'm not sure where it originated, and Christian Szegedy and his collaborators showed that it works really well for regularizing Inception models. One of the really nice properties I've observed for it, compared to weight decay, is this: weight decay will actually reduce the training accuracy of your model; it will actually cause the model to make classification mistakes by shrinking the weights until it's not possible to make the correct classification anymore, if you turn up the weight decay coefficient enough. Label smoothing will not actually introduce mistakes; it will just reduce the confidence of the correct classifications, but it will never actually steer the model toward an incorrect classification. So for generative adversarial networks, this allows the discriminator to still more or less know which direction is real data and which direction is fake data, but it doesn't result in it misguiding the generator, and it gets rid of really large gradients; it gets rid of behaviors where the discriminator linearly extrapolates to decide that if you move a little bit in one direction, then moving very far in that direction will give you more and more realistic samples.
[t=4476s] It's important to use batch normalization in most layers of the model. I won't go into batch normalization in detail, but the idea is that you take a full batch of input samples and you normalize the features of the network by subtracting the mean of those features across the whole batch and dividing by their standard deviation. This makes the learning process a lot better conditioned. Unfortunately, the use of these normalization constants that are computed across a whole minibatch can induce correlations between different samples generated in the same minibatch. So I'm showing you a grid of sixteen examples in the top image that were all in one batch, and then the next grid of sixteen samples is all in another batch, same generator model in both cases. The only reason that there seems to be a common theme in all the examples in each image is that they're using the same mean and standard deviation normalizing constants, and in this case the model has kind of pathologically learned to have its output depend a lot more on the precise randomly sampled value of that mean and that standard deviation than on the individual values in the code. So in the top we see a lot of very orange images, and in the bottom we see a lot of very green images.
[t=4573s] To fix that problem, we came up with two different versions of batch normalization that actually process every example in the same way. The simplest of these is what we call reference batch normalization, where you just pick a reference batch of examples at the start of training and never change them, and you always compute the mean and the standard deviation of the features on those reference images and then use them to normalize the different images that you train on. It means that every image throughout all of training is normalized using the statistics from the same reference batch, and there's no longer this random jitter as we resample the images that are used to create the normalizing statistics. Unfortunately, because we always use the same images, we can start to overfit to that particular reference batch. To partially resolve that, we introduced a technique called virtual batch normalization. The basic idea here is that every time you want to normalize an example x, you normalize it using statistics computed both on the reference batch and on the example x itself, added to that batch.
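A small sketch of the two variants just described (my own simplified code; real implementations also learn a per-feature scale and shift):

    import numpy as np

    def reference_batch_norm(x, ref_batch, eps=1e-5):
        """Normalize features x (batch, dim) with the mean/std of a fixed reference
        batch, so every example is normalized the same way and samples within a
        minibatch are not coupled through shared statistics."""
        mu = ref_batch.mean(axis=0)
        sigma = ref_batch.std(axis=0)
        return (x - mu) / (sigma + eps)

    def virtual_batch_norm(x_i, ref_batch, eps=1e-5):
        """Normalize a single example x_i (a 1-D feature vector) using statistics
        of the reference batch with x_i itself appended, which reduces overfitting
        to the reference batch."""
        virtual = np.concatenate([ref_batch, x_i[None, :]], axis=0)
        mu = virtual.mean(axis=0)
        sigma = virtual.std(axis=0)
        return (x_i - mu) / (sigma + eps)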
[t=4644s] A lot of people ask me questions about how to balance the generator and the discriminator, and whether they need to be carefully adjusted to make sure that neither one of them wins. In reality, I usually find that the discriminator wins, and I also believe that this is a good thing. The way the theory works is all based on assuming that the discriminator will converge to its optimal distribution, where it correctly estimates the ratios that we're interested in, and we really want the discriminator to do a good job of that. In some cases you can get problems where, if the discriminator gets really good at rejecting generator samples, the generator doesn't have a gradient anymore. Some people have an instinct to fix that problem by making the discriminator less powerful, but I think that's the wrong way of going about it. I think the right way is to use things like one-sided label smoothing to reduce how extreme the gradients from the discriminator are, and also to use things like the heuristic non-saturating cost instead of the minimax cost, and that will make sure that you can still get a learning signal even when the discriminator is able to reject most of the samples. There are a few other things you can do to try to make sure that the coordination between the generator and the discriminator works out correctly. In particular, we really want the discriminator to always do a good job of estimating that ratio; we want the discriminator to be really up to date and to have fit really well to the latest changes to the generator. That motivates running the update on the discriminator more often than the update on the generator. Some people still do this; I don't usually find that it works that well in practice, and I can't really explain why it doesn't work very well. All the theory suggests that it should be the right thing to do, but that particular approach doesn't seem to consistently yield an obvious payoff.
[t=4749s] We're now coming to the most exciting part of the roadmap, which is the research frontiers in generative adversarial networks. Can I get a quick check on how much time I have left? Okay, yes. The biggest research frontier in generative adversarial networks is confronting the non-convergence problem. Usually when we train deep models we are minimizing a cost function, and so we're using an optimization algorithm to perform minimization. There are a lot of things that can go wrong with minimization, especially when you're training a deep model: you can approach a saddle point rather than a minimum; you can approach a local minimum rather than a global minimum, though we're starting to become skeptical that local minima are as much of a problem as we used to think they were; and you can have all kinds of other things like bad conditioning, high variance in the gradient, and so on. But for the most part you're pretty much going to go down a hill until eventually you stop somewhere, unless your hyperparameters are really bad, and you don't usually need to worry that your optimization algorithm will fail to even converge. In the case of looking for an equilibrium of a game, it is actually pretty difficult to guarantee that you will eventually converge to a specific equilibrium point, or even that you will stop in some particular location that isn't a great equilibrium. So to start looking at exactly how this works, we're going to do another exercise, where we're going to analyze a minimax game and see what gradient descent does. For this game we have a scalar variable x and a scalar variable y, and we have a value function x times y. Basically, one player controls x and would like to minimize this value function; the other player controls y and would like to maximize it. The exercise is to figure out whether this value function has an equilibrium anywhere, and if so, where that equilibrium is, and then to look at the dynamics of gradient descent, analyzing gradient descent as a continuous-time process, and determine what the trajectory that gradient descent follows looks like on this particular problem.
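In symbols, the game being analyzed is

\[
\min_{x}\;\max_{y}\; V(x, y) = x\,y,
\]

with one player choosing x to minimize V and the other choosing y to maximize it.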
[t=4869s] I can take a few more questions while people work on this one.

Question: now you have GANs that generate really, really nice results and train on a lot of data; I think, for example, there's the video GAN work presented here that's trained on 27 terabytes of video. So the thing I'm wondering is: nobody has looked at all these videos, so how can you know that the GAN is not generating near duplicates? Is there any theoretical motivation, is it related to overfitting, and are people trying near-duplicate search to see whether it's just very good at compressing this data instead of generating?

Yeah, so duplicating a training example would definitely be a form of overfitting. It's not something that we really believe happens in generative adversarial networks, though we don't have a strong theoretical guarantee that it doesn't happen. One thing I can point out is that the generator never actually gets to see a training example directly; it only gets to see the gradients coming from the discriminator. So the discriminator would need to perfectly memorize a training example and then communicate it into the generator via the gradient. Another thing is that, because we have this problem with fitting games, finding the equilibria like people are analyzing in the exercise right now, we tend to underfit rather than overfit.
[t=4965s] I'd be really quite happy if we started to overfit consistently. But it's actually pretty difficult to measure how much we're overfitting, because you wouldn't really expect the model to perfectly copy a training example; it's more likely that it would mostly copy the training example and then change a few small things about it. We do things like look for nearest neighbors: we generate samples and then find the most similar training example in terms of Euclidean distance. But it's really easy to make a small change that causes a gigantic difference in Euclidean distance, so it can be hard to tell whether that's actually eliminating the duplicates or not. It's also worth mentioning that in many cases generative adversarial nets aren't even necessarily compressing the data; sometimes we actually train them with more parameters than there are floating-point values in the original dataset. We're just converting it into a form where you can get infinitely many samples in a computationally efficient way. But yeah, we are usually compressing, as you said.
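As a small illustration of the nearest-neighbor check just mentioned (my own code, and a deliberately naive implementation):

    import numpy as np

    def nearest_training_examples(samples, train_data):
        """For each generated sample, return the index of and distance to the
        closest training example in Euclidean distance.  Inputs are flattened
        to vectors; this builds the full pairwise distance matrix, so it is
        only suitable for small sets."""
        s = samples.reshape(len(samples), -1).astype(np.float64)
        t = train_data.reshape(len(train_data), -1).astype(np.float64)
        d2 = ((s[:, None, :] - t[None, :, :]) ** 2).sum(axis=-1)
        return d2.argmin(axis=1), np.sqrt(d2.min(axis=1))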
[t=5031s] And so my question is: right now, in, for example, the vanilla GANs, you're taking noise, you're doing a kind of noise shaping in a sense, and then you're reconstructing some signal, some image, in its native space, in its native basis. So our question is: what do you think of actually doing the generation in a more sparsified basis for those types of signals, for example maybe a cosine basis, or even the coefficients of some dictionary? Do you think that it might make the learning of the GANs easier, or do you think it might not matter, or something like that? So I was just curious: should the output of the generator network be a set of bases, or, for example, coefficients of some, say, natural basis, maybe a Fourier basis or some wavelet basis or a dictionary or something? Just wondering whether that makes any difference to the learning, whether it makes it easier, because you can put some more priors on these.

As a member of the deep learning cult, I'm not allowed to hand-engineer anything. The closest thing I've done to what you're suggesting is that my co-author Bing Xu, on the original generative adversarial nets paper, was able to train a really good generator net on the Toronto Face Dataset by doing layer-wise pre-training. I wasn't able to get the deep, jointly trained model to fit that dataset very well back then; my guess is it would probably work now that we have batch norm, which we didn't have back then. You can view what Bing did as being a little bit like what you're suggesting, because when you train the output layer of the generator in that training step, it learns essentially a dictionary that looks a little bit like wavelet dictionaries, and then when you start training the deeper layers of the generator, those layers are essentially learning to output wavelet coefficients. So I do think that would help.
[t=5121s] Yeah? Question: can I use the GANs, after they are trained, to create more synthetic data for another classifier? The idea being that after the GANs are trained, I've kind of captured the probability distribution of my input, and I can use them to automatically generate more images, like how we normally use dataset augmentation on the images, something like that.

Yeah, so my former intern Chen Qi Chen, whom I mentored when I was at Google: I don't want to disclose his project, but I'll tell you that he's doing something cool related to that, and if you talk to him he can decide whether he wants to disclose it or not; I don't think I'm giving away anything about what he's done by saying that. I've also had a lot of other people tell me that sometimes, when they're evaluating a generator network to see how well it's doing, one test they'll run is that they create a synthetic dataset using the generator, train a classifier on that new dataset, and then use it to classify the real test set. If that classifier is able to classify the real test set, they take that as evidence that their generator was pretty good, if it could be used to make a fake training set. There are a few downsides to that procedure: for example, if you were generating one mode way too often, but you were still generating all the other modes occasionally, your classifier might still be pretty good even though your generative model is screwed up. But it does basically seem to work.
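A sketch of that evaluation procedure (my own code; the choice of scikit-learn logistic regression here is just an example, not something prescribed in the talk):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def evaluate_generator_by_classification(x_synthetic, y_synthetic, x_test_real, y_test_real):
        """Train a classifier only on generator samples (with their conditioning
        labels) and report its accuracy on the real test set; higher accuracy is
        taken as evidence that the synthetic data captures the real classes."""
        clf = LogisticRegression(max_iter=1000)
        clf.fit(x_synthetic.reshape(len(x_synthetic), -1), y_synthetic)
        return clf.score(x_test_real.reshape(len(x_test_real), -1), y_test_real)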
[t=5219s] So in the interest of time, I think I'll move on to the solution of the exercise, but there'll be one more exercise, so you'll get to ask a few more questions. So, the solution to this exercise, where we're looking at the value function x times y, with x and y just scalars: there is actually an equilibrium point, where x is 0 and y is 0. When they're both 0, each of them causes the gradient on the other to go away. We can then look at the gradient descent dynamics by analyzing it as a continuous-time system. If we actually evaluate the gradients, dx/dt is negative y and dy/dt is positive x; the sign difference is because one of them is trying to minimize the value function and one of them is trying to maximize it. If we then go ahead and solve this differential equation to find the trajectories, there's a lot of different ways of doing it, depending on exactly which pattern-matching technique you're most comfortable with. My particular approach is to differentiate the second equation with respect to t, and then I get that d^2y/dt^2 is negative y. I recognize from that that we're looking at a sinusoidal basis of solutions, and from that you can guess and check the corresponding coefficients. We get that we have this circular orbit, where the only real thing that changes exactly what the circle looks like is the initial conditions. So if you initialize right on the origin, you'll stay on the origin, but if you initialize off the origin, you never get any closer to it: gradient descent goes into an orbit and oscillates forever rather than converging. And this is continuous-time gradient descent, with an infinitesimal step size; if we use a larger step size, then it can actually spiral outward forever.
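As a small check of this (my own illustration), the continuous-time solution is x(t) = x_0 cos t - y_0 sin t, y(t) = x_0 sin t + y_0 cos t, a circle around the origin, while a finite step size multiplies the squared radius by (1 + lr^2) on every simultaneous update:

    import numpy as np

    def simultaneous_gradient_descent(x0=1.0, y0=0.0, lr=0.1, steps=200):
        """Simultaneous gradient steps on V(x, y) = x*y:
        x descends its gradient (dV/dx = y), y ascends its gradient (dV/dy = x).
        The radius sqrt(x^2 + y^2) grows by a factor sqrt(1 + lr^2) each step,
        so the trajectory spirals outward instead of converging to (0, 0)."""
        x, y = x0, y0
        radii = []
        for _ in range(steps):
            gx, gy = y, x                      # partial derivatives of V
            x, y = x - lr * gx, y + lr * gy    # simultaneous update
            radii.append(np.hypot(x, y))
        return radii

    # radii = simultaneous_gradient_descent()
    # radii[0] < radii[-1] is True: the orbit drifts outward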
[t=5332s] So there are actually conditions you can check to see whether or not simultaneous gradient descent will converge, and they involve the complex eigenvalues of a matrix of second derivatives. I won't go into it, because it's not really the kind of thing that makes for a nice talk, but the long and short of it is that the generative adversarial nets game does not satisfy the main sufficient condition for convergence. That doesn't mean that they don't converge; it means that we don't know whether they converge or not according to the main criterion we can look at. It seems like in practice they do converge sometimes and they don't other times, and we don't have a great understanding of why they do or don't. The most important thing to understand is that simultaneous gradient descent is not really an algorithm for looking for equilibria of a game; it sometimes does that, but that's not really its purpose. And the most important research direction in generative adversarial nets is to find an algorithm that does find equilibria in these high-dimensional, continuous, non-convex spaces. It's important to mention that if we were able to optimize the generative adversarial network in function space, if we were able to update the density function corresponding to the generator, and the discriminator's beliefs about the generator, directly, then we could actually use convexity in function space to prove that simultaneous gradient descent converges for that particular problem. The reason this breaks down is that we don't actually update the densities directly; we update the G and D functions that do the sampling and the ratio estimation, and on top of that we represent G and D using parametric functions, deep neural networks, where the actual output values of G and D are very non-convex functions of the parameters, and so that causes us to lose all of our guarantees for convergence.
[t=5447s] The main way we see this affect the generative adversarial networks game is that we get behaviors like oscillation, where the generator continually makes very different samples from one step to another but doesn't ever actually converge to producing a nice, consistent set of samples. In particular, the worst form of non-convergence, and one that happens particularly often, is what we call mode collapse, where the generator starts to make only one sample, or one similar theme of related samples. It usually doesn't output exactly the same image over and over again, but it might do something like this: every image it creates is a picture of the same dog, and the dog is in different positions or has different objects in the background; or we might see every sample it makes be a beach scene, for example. It is essentially generating too few things. The reason mode collapse happens particularly often for the generative adversarial nets game is that the game is a little bit pathological in the way we specify the value function. In particular, if we look at the minimax version, the min-max and the max-min do different things. If we do the min-max, where we put the discriminator in the inner loop and maximize over it there, then we're guaranteed to converge to the correct distribution. In practice we don't actually do the maximization in the inner loop; we do gradient descent on both players simultaneously. If we put G in the inner loop, that actually corresponds to a pathological version of the game where the generator learns to place all of its mass on the single point that the discriminator currently finds to be most likely.
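In symbols (paraphrasing the point above, not a slide):

\[
\min_{G}\,\max_{D}\, V(G, D) \;\neq\; \max_{D}\,\min_{G}\, V(G, D)
\]

in general: with D in the inner loop the outer minimization drives the generator toward the data distribution, while with G in the inner loop the generator simply concentrates its mass wherever the current discriminator assigns the highest probability of being real.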
[t=5541s] So Luke Metz and his collaborators produced a really nice visualization of this in their recent paper submitted to ICLR, where we have this target distribution, shown in the middle of the slide, which has several different modes in two-dimensional space, and then over time, as we move left to right and train a generative adversarial network, we see how we learn to sample from different modes of that distribution, but we don't ever actually get multiple modes at the same time. This is because simultaneous gradient descent can sometimes behave a little bit like min-max and a little bit like max-min, and we're just unlucky enough that it often behaves more like max-min and does the thing we don't want. Some people have explained mode collapse in terms of the fact that we use the reverse KL loss, the one I described earlier when I said that I don't believe the reverse KL loss is what explains why we get sharp samples. Because the reverse KL loss would prefer to choose a single mode rather than average out two different modes, it does superficially seem like it might explain why we get mode collapse, but I don't think that it is actually the explanation in this case. For one thing, if we use the forward KL we still get mode collapse in many cases. Also, the reverse KL divergence does not say that we should collapse to a single mode; it says that if our model is not able to represent every mode and to put sharp divisions between them, then it should discard modes rather than blur modes, but it would still prefer to have as many modes as the model can represent. With generative adversarial networks, what we usually see is a collapse to a much smaller number of modes than the model can represent.
[t=5655s] That makes me believe that the problem is really that we're doing max-min, rather than that we're using the wrong cost. We often see that generative adversarial networks work best on tasks that are conditional, where we take an input and map it to some output, and we're reasonably happy with the result as long as the output looks acceptable; in particular, we may not really notice if there's low diversity in the output. For example, with sentence-to-image generation, as long as we get an image that actually resembles the sentence, we're pretty happy with the output, even if there isn't that much diversity in it. Scott Reed and his collaborators have recently shown that for these sentence-to-image tasks, generative adversarial networks seem to produce samples that are much less diverse than those produced by other models. In the panel on the right, we can see how the sentence "a man in an orange jacket with sunglasses and a hat skis down a hill" gives three different images of a man in essentially the same pose when we use a generative adversarial network, but using the model developed in this paper it's possible to get greater diversity in the output. One way we can try to reduce the mode collapse problem is to introduce what Tim Salimans calls minibatch features. These are features that look at the entire minibatch of samples when examining a single sample; if that sample is too close to the other members of the minibatch, then it can be rejected as having collapsed to a single mode.
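As a rough sketch of the minibatch-feature idea (my own simplification; the actual minibatch discrimination feature used in the paper is more elaborate), the discriminator gets, for each sample, a measure of how close it is to the rest of its minibatch:

    import numpy as np

    def minibatch_closeness_feature(features):
        """For each sample in a minibatch of intermediate discriminator features
        (shape: batch x dim), compute the mean distance to the other samples.
        Appending this value to the discriminator's features lets it detect
        batches whose samples are suspiciously similar, a sign of mode collapse."""
        diffs = features[:, None, :] - features[None, :, :]
        dists = np.sqrt((diffs ** 2).sum(axis=-1))          # pairwise distances
        n = len(features)
        mean_dist_to_others = dists.sum(axis=1) / max(n - 1, 1)
        return mean_dist_to_others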
[t=5746s] This procedure led to much better image quality on CIFAR-10; we're now able to see all ten of the different classes of images in CIFAR-10. On the left I show you the training data, so you can see that this data is not particularly beautiful to start with: it's 32 by 32 pixels, so it's relatively low resolution, and you can see that there are things like cars