video_id | text | start_second | end_second | url | title | thumbnail |
---|---|---|---|---|---|---|
HGYYEUSm-0Q | airplanes horses and so on in the panel on the right we have a GAN trained with minibatch features and it is now successfully able to generate many different recognizable classes like cars and horses and so on previous generative adversarial networks on CIFAR-10 would usually give only photo-texture blobs that would look like regions of grass and regions of sky | 5,767 | 5,790 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5767s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | regions of water but would not usually have recognizable object classes in them on ImageNet the object classes are not as recognizable but if we go through and cherry-pick examples we can see some relatively nice recognizable images where we get many different kinds of animals like dogs and maybe koalas and birds and so on if we look at some of the problems that arise with this | 5,790 | 5,815 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5790s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | sampling procedure we can see some of the amusing things that convolutional networks get wrong one thing in particular is that I think probably due to the way that pooling works in the convolutional network the network is usually testing whether some feature is absent or present but not testing how many times it occurs so we tend to get multiple heads in one image or animals | 5,815 | 5,836 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5815s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | that have more than one face on the same head we also often get problems where the perspective of an image is greatly reduced and I think this might be due to the network not having enough long range connections between different pixels in the image so it's hard for it to tell that things like foreshortening ought to happen in particular the picture of the gray and orange dog looks literally like | 5,836 | 5,859 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5836s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | a cubist painting to me where you know the Cubists intentionally removed the perspective some of them also just look like we've taken an animal and skinned it and laid its fur out flat on the ground and then taken an axis-aligned photo of it we also see a lot of problems where individual details are great but the global structure is wrong like there's this cow that is both | 5,859 | 5,881 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5859s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | quadrupedal and bipedal there's a dog whose eyes are different sizes from each other and a cat that has like a lamprey mouth we also often just see animals that don't really seem to have legs that they just sort of vanished into fur blobs that often conveniently end at the edge of the image so that the network doesn't need to provide the legs so did anybody notice anything that | 5,881 | 5,906 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5881s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | actually looked real in these samples Aaron yeah so the cat was real to test your discriminator network good job Aaron another really promising way to reduce the mode collapse problem besides minibatch features is called unrolled GANs this was recently introduced by Google Brain and was submitted to ICLR and I guess it's worth mentioning that a few other people had suggested | 5,906 | 5,938 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5906s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | doing this for a few years beforehand so it is an idea that was floating around in the ether a little bit I imagine some people in the audience are probably thinking like oh I told people about that but Brain was the first to go ahead and get it to really work really well revisiting the same visualization that we saw earlier the unrolled GAN is able to actually get all the different | 5,938 | 5,959 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5938s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | modes so the way that unrolling works is that to really make sure that we're doing min-max rather than max-min we actually use that maximization operation in the inner loop as part of the computational graph that we backprop through so instead of having a single fixed copy of the discriminator we build a complete TensorFlow graph describing K steps of the learning process for the | 5,959 | 5,985 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5959s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | discriminator so the generator network is essentially looking into the future and predicting where the discriminator will be several steps later and because it's the generator looking into the future rather than the discriminator looking into the future we're actually setting a direction for that min-max problem we're saying that it's max over the discriminator in the inner loop and | 5,985 | 6,007 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5985s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | then min over the generator in the outer loop and that very elegantly gets us around the mode collapse problem another really big important research direction for generative adversarial networks is figuring out how to evaluate them this is actually a problem that's broader than just generative adversarial networks it's a problem for generative models across the board models with good likelihood | 6,007 | 6,028 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6007s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | can produce bad samples models with good samples can actually have a very bad likelihood and then even when we talk about good samples and bad samples there's not really a very effective way to quantify how good a sample is there's a really good paper called a note on the evaluation of generative models that walks through a lot of corner cases to clearly explain all the | 6,028 | 6,050 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6028s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | problems with the different metrics that we have available today and then for generative adversarial networks these problems are compounded by the fact that it's actually pretty hard to estimate the likelihood there is a paper based on estimating the likelihood in submission to ICLR though so that problem might be cleared up pretty soon once we have more | 6,050 | 6,066 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6050s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | experience with that particular methodology another research frontier is figuring out how to use discrete outputs with generative adversarial networks I described earlier that the only real condition we impose on the generator network is that it be differentiable and that's a pretty weak criterion but unfortunately it means that we can't really generate sequences of characters | 6,066 | 6,088 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6066s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | or words because those are discrete and if the output is discrete then the function isn't differentiable you can imagine a few ways around this one is you could use the REINFORCE algorithm to do policy gradients and use that to train the generator network there are also the recently introduced techniques based on the Gumbel distribution for doing relaxations that allow you to train | 6,088 | 6,111 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6088s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | discrete variables or finally you could do the old-fashioned thing that we used to do I saw Geoff Hinton on Thursday and he was mentioning to me how this reminds him a lot of the way that Boltzmann machines were really bad at generating continuous values so what we did there is we would preprocess continuous values to convert them into a binary space and then we'd use Boltzmann machines from | 6,111 | 6,132 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6111s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | there so you could do the same thing in reverse with generative adversarial networks you could have a model that converts these binary values to continuous values and then use generative adversarial networks from there you could for example train a word embedding model and then have a generative adversarial network that produces word embeddings rather than directly producing discrete | 6,132 | 6,151 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6132s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | words one very interesting extension of the discriminator is to actually make it recognize different classes and this allows us to participate in an important research area of semi-supervised learning with generative adversarial networks originally generative adversarial networks used just a binary output value that said whether things are real or fake but if we add extra | 6,151 | 6,177 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6151s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | outputs saying which class they belong to and then having one fake class we are able to then take the discriminator and use it to classify data after we've finished training the whole process and because it's learned to reject lots of fake data it actually gets regularized really well using this approach Tim Salimans and I and our other collaborators at OpenAI were able to | 6,177 | 6,200 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6177s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | set the state of the art on several different recognition tasks with very few labeled examples on MNIST CIFAR-10 and SVHN another important research direction is learning to make the code interpretable Peter Chen's InfoGAN paper here at NIPS actually shows how we can learn a code where different elements of the code correspond to specific semantically meaningful | 6,200 | 6,224 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6200s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | variables like the position of an image another research direction is connections to reinforcement learning recent papers have shown that generative adversarial networks can be interpreted as an actor-critic method or used for imitation learning or interpreted as inverse reinforcement learning finally if we're able to come up with a good algorithm for finding equilibria in | 6,224 | 6,252 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6224s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | games we can apply that algorithm to many other places besides generative adversarial networks things like robust optimization literally playing games like chess and checkers resisting adversarial examples guaranteeing privacy against an attacker who wants to thwart your privacy and all of these different application areas are all examples of games that arise in | 6,252 | 6,276 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6252s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | artificial intelligence and might be improved by the same kinds of techniques that could help us to improve generative adversarial networks we're very close to out of time but I'll give you five minutes to do this exercise and I'll answer the last set of questions during the exercise this exercise is jumping back a little bit to earlier how I described that there's a different cost | 6,276 | 6,299 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6276s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | function that you can use for maximum likelihood and generative adversarial networks and I think this is a really good closing exercise because it really drives home the point that the key mathematical tool generative adversarial networks give you is the ability to estimate a ratio and to see how the ratio estimation works you are going to derive the maximum | 6,299 | 6,320 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6299s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | likelihood learning rule in particular we have a cost function for the generator network which is an expectation of X sampled from the generator and then applying f of X and we want to figure out what f of X should be to make this cost function give us a maximum likelihood as a hint you should first start by showing the following that the derivatives with respect to the | 6,320 | 6,344 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6320s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | parameters of the cost function are given by this expectation of f of X multiplied by the derivatives of the likelihood and if you'd like you could actually just take that as a given and skip to the last step at the very end what you do is you should figure out what f of X should be given this fact about the gradients if you can choose the right f of X you can | 6,344 | 6,366 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6344s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | get the maximum likelihood gradient so I'll give you a few minutes to work on that and I'll take a few questions and then I'll conclude so in your previous talk about the generative network you mentioned that there is an important assumption that the function should be differentiable what if the function is not differentiable because in some areas such | 6,366 | 6,399 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6366s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | as informatics the data is some categorical label not a numerical value so it's not differentiable so in that situation how do you generate synthetic data using a GAN network so there haven't been any papers actually solving that problem yet I talked about this a few slides earlier and my recommendations are to try the REINFORCE algorithm to do | 6,399 | 6,424 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6399s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | policy gradients with discrete actions to try the concrete distribution and Gumbel-softmax which are two papers that were recently released about how to train models with discrete outputs or to convert the problem into a continuous space where generative adversarial networks can be applied so the strength of GANs is that they're very powerful in capturing the modes of the distribution | 6,424 | 6,453 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6424s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | right but it's not really truly understanding what images are as in you know you start from z to generate x right so the question is you know if you increase the image size presumably the modes of the distribution are going to increase exponentially so ultimately you know if you have a you know practically this may not be a solution this may not be | 6,453 | 6,479 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6453s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | a problem maybe we just care about hundred by hundred pixel images but suppose I'm interested in two thousand by two thousand pixel images you know if I truly understand what images are how images are generated you know there is no difference between a hundred by a hundred and two thousand by two thousand I can you know build that ultimate machine my question is about like way down the | 6,479 | 6,502 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6479s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | future I mean at the end of the day you are capturing modes of the distribution but this mode is going to explode if you go to larger images so at some point you know the modes of the model also have an exponential explosion as you use a bigger convolutional net so I mean I don't want to repeat the same structure I mean the question is the modes of the distribution right at the | 6,502 | 6,529 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6502s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | end at the end of the day you are capturing the modes of the distribution yeah but a larger model can capture more modes I guess the nice thing about natural images is that when you increase the resolution you're looking at a different level of detail but within the same level of detail the same structure is repeated all across the image so let's say that we've been studying 64 by | 6,529 | 6,552 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6529s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | 64 images and we couldn't really see the individual hairs like the hairs in an animal's fur and then we move up to a higher resolution we can see their fur at the higher resolution we don't need to relearn the distribution over images of fur at every pixel separately we learn one level of detail that can be replicated across the whole image and we generate different z values at every x | 6,552 | 6,580 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6552s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | and y coordinate that randomly decide you know the fine details of the fur like which angle it should be pointed in and things like that why do you think in practice GANs don't scale well when you go to larger images oh well you might be surprised by what comes in a few slides yeah I think I should probably move toward the conclusion now so recalling exercise 3 | 6,580 | 6,604 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6580s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | we're looking to design this f of X this cost function that's applied for every example generated by the generator in order to recover the maximum likelihood gradient we start by showing this property that we can write down the gradient of the generator in terms of an expectation where the expectation is taken with respect to generator samples and we multiply f of X by a | 6,604 | 6,628 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6604s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | likelihood gradient that's relatively straightforward to show the basic step is to turn the expectation into an integral use Leibniz's rule which means you have to make a few assumptions about the structure of the distribution involved and then finally we take advantage of our earlier assumption that the generator distribution is nonzero everywhere that allows us to say that | 6,628 | 6,651 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6628s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | the derivatives of p g are equal to p g times the derivatives of log p g so that gives us this nice expression where we can get gradients of the likelihood in terms of samples that came out of the generator but what we would really like is gradients of the likelihood in terms of samples that came from the data so the way that we're able to do that is importance sampling we have this f of X | 6,651 | 6,677 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6651s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | coefficient that we're able to multiply by each of the gradients and we can fix the problem that we're sampling from the generator when we want to sample from the data by setting f of X to be P data over P generator and this means that we'll have kind of bad variance in our samples because we're sampling from the generator and then rewriting everything to make it look | 6,677 | 6,696 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6677s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | like we sampled from the data but in theory this is unbiased from there it takes a little bit of algebra to figure out exactly how we should take the discriminator and implement this ratio we recall that the optimal discriminator gives us this ratio of p data over p data plus p generator and doing a little bit more algebra we can rearrange that to say that we need to | 6,696 | 6,718 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6696s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | set f of X to negative e to the logits this is maybe a lot to absorb right now but I think it's pretty intuitive once you've worked through it slowly on your own once and it gives you an idea of how you can take this ratio that the discriminator gives you and build lots of other things with it so to conclude the talk I'd like to show you some really exciting new | 6,718 | 6,743 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6718s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | results that came out using generative adversarial networks and that kind of addressed the last question we had about whether generative adversarial networks scale to very large images a new model just came out last week I seem to have this curse that every time I have to give a talk about something an important new result comes out right as I have finished my slides so I desperately made | 6,743 | 6,766 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6743s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | some new slides on the plane on the way here plug and play generative networks or generative models sorry make 256 by 256 high-resolution images of all thousand classes from ImageNet and have very good sample diversity the basic idea is to combine adversarial training moment matching in a latent space denoising autoencoders and Monte Carlo sampling using the gradient and | 6,766 | 6,796 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6766s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | the really cool thing is they also work for captioning or inverse captioning where you generate images by giving an input sentence that describes the image overall the basic technique is to follow a Markov chain that moves around in the direction of the gradient of the logarithm of P of x and y with Y marginalized out you can use denoising auto-encoders to estimate the required | 6,796 | 6,823 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6796s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | gradient but to make the denoising auto-encoder create really good images the auto encoder needs to be trained with several different losses one of those losses is the adversarial networks loss and that forces it to make images that look very realistic as well as images that are close to the original data in l2 space this confirms some of the tips that I gave earlier in the talk | 6,823 | 6,845 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6823s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | for example in the tips and tricks section I said that you often get much better results if you include class labels we see here that plug-and-play generative models don't make nearly as recognizable images if we generate samples without the class we also see that the adversarial loss is a really important component of this new system if you look at the reconstructions of the denoising | 6,845 | 6,866 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6845s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | auto-encoder we begin on the left with the raw data in the middle we show the reconstructed image and on the right we show the reconstruction that you get if you train the model without the adversarial network loss so adversarial learning has contributed a lot to the overall quality of this current state of the art model so in conclusion I guess I'd hope that everyone remembers that | 6,866 | 6,888 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6866s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | generative adversarial networks are models that use supervised learning to approximate intractable costs by estimating ratios and that they can simulate many different cost functions including the one that's used for maximum likelihood the most important research frontier in generative adversarial networks is figuring out how to find Nash equilibria in high | 6,888 | 6,909 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6888s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
HGYYEUSm-0Q | dimensional non-convex continuous games and finally generative adversarial networks are an important component of the current state of the art in image generation and are now able to make high resolution images with high diversity from many different classes and that concludes my talk and I believe that we're out of time for questions because we already took several of them in the | 6,909 | 6,930 | https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=6909s | Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial) | |
9JpdAg6uMXs | um thank you all for coming this is a massive room so today we will have six great invited talks a panel discussion and a selection of posters and spotlight presentations I don't have much to say but welcome Ian Goodfellow from OpenAI he will be giving the first talk of today an introduction to generative adversarial networks thank you good morning thank you everybody for coming | 0 | 34 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=0s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | I guess I'll explain first a little bit what my goals for this talk are I know there's a lot of different people here at the workshop and the main purpose of the talk is just to give everyone a little bit of context so that you know what adversarial training is what generative adversarial networks are if you were at my tutorial on Monday you probably will have seen a lot of these | 34 | 56 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=34s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | slides before but I'm also going to throw in a few new ideas just so that you feel like you've got something extra for your time but this talk is mostly for the people who have just arrived at the workshop and needed some context so this workshop is about adversarial training and the phrase adversarial training is a phrase whose usage is in flux and I don't claim exclusive | 56 | 80 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=56s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | ownership of the phrase but to avoid confusion I thought I'd comment a little bit on how the phrase has been used before and how it's mostly used now so I first used the phrase adversarial training in a paper called explaining and harnessing adversarial examples and in that context I used it to refer to the process of training a neural network to correctly classify | 80 | 102 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=80s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | adversarial examples by training the network on adversarial examples today other people have started using the phrase adversarial training for lots of different areas almost any situation where we train a model in a worst case scenario where the worst case inputs are provided either by another model or by an optimization algorithm so the phrase adversarial training now applies to lots of | 102 | 126 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=102s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | ideas that are both new and old the way that we use the phrase adversarial training now it could apply to things like an agent playing a game against a copy of itself like Arthur Samuel's checkers player back in the 1950s so it's important to recognize that when we use the phrase adversarial training today we're not only referring to things that were invented recently but the | 126 | 149 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=126s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | usage has expanded to encompass a lot of older things that also had other names like robust optimization most of the day's workshop is about a specific kind of adversarial training which is training of generative adversarial networks in the context of generative adversarial networks both players in the game are neural networks and the goal is to learn to generate data that | 149 | 175 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=149s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | resembles the data that was in the training set the reason that we call the training process for a generative adversarial network adversarial training is that the worst case input for one of these networks is generated by the other player and so one of the players is always trained to do as well as possible on the worst possible input it's worth mentioning that there are other works | 175 | 199 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=175s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | going on in the space of adversarial training where the goal is still to train on adversarial examples inputs that were maybe created by an optimization algorithm to confuse the model and you will see some posters about that here there's also some work about that in the reliable ml workshop but I hope that clears up any confusion about the term adversarial training so | 199 | 223 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=199s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | generative adversarial networks are mostly intended to solve the task of generative modeling the idea behind generative modeling is that we have a collection of training examples usually of large high dimensional examples such as images or audio waveforms most of the time we'll use images as the running scenario that we show pictures of in slides because it's much easier to show a | 223 | 247 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=223s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | picture of an image than to play an audio waveform but everything that we describe for images applies to more or less any other kind of data so there are two things you might ask for a generative model to do one is what we call density estimation we're given a large collection of examples we want to find the probability density function that describes those examples but another thing we | 247 | 268 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=247s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | might do is try to learn a function or a program that can generate more samples from that same training distribution so I show that on the lower row here where we have a collection of many different training examples in this case photos from the imagenet data set and we'd like to create a lot more of those photos and we create those photos in a random way where the model is actually | 268 | 290 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=268s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | generating photos that have never been seen before but come from the same data distribution in this case the images on the right are actually just more examples from the image net data set generative models are not yet good enough to make this quality of images but that's the goal that we're striving toward the particular approach that generative adversarial networks take | 290 | 311 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=290s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | to generative modeling is to have two different agents playing a game against each other one of these agents is a generator network which tries to generate data and the other agent is a discriminator network that examines data and estimates whether it is real or fake the goal of the generator is to fool the discriminator and as both players get better and better at their job over time | 311 | 333 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=311s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | eventually the generator is forced to create data that is as realistic as possible data that comes from the same distribution as the training data the way that the training process works is that first we sample some image from the training data set like the face that we show on the left we call this image X it's just the name of the input to the model and then the first player is this | 333 | 358 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=333s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | discriminator network which we represent with a capital D the discriminator network is a differentiable function that has parameters that control the shape of the function in other words it's usually a neural network we then apply the function D to the image X and in this case the goal of D is to make D of X be very close to one signifying that X is a real example that came from the training | 358 | 384 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=358s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | set in the other half of the training process we sample some random noise Z from a prior distribution over latent variables in our generative model you can think of Z as just a sort of randomness that allows the generator to output many different images instead of outputting only one realistic image after we've sampled the input noise Z we apply the generator function just like | 384 | 411 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=384s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | the discriminator the generator is a differentiable function controlled by some set of parameters in other words it's usually a deep neural network after applying the function G to the input noise Z we obtain a value of x sampled in this case from the model like the face on the right this sample X will hopefully be reasonably similar to the data distribution but might have some | 411 | 437 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=411s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | small problems with it that the discriminator could detect in this case we've shown a slightly grainy noisy image of a face suggesting that this grain and noise is a feature that the discriminator might use to detect that the image is fake we apply the discriminator function to the fake example that we pulled from the generator and in this case the discriminator tries to make its output D | 437 | 461 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=437s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | of G of Z be near zero earlier when we used the discriminator on real data we wanted D of X to be near one and now the discriminator wants D of G of Z to be near zero to signify that the input is fake simultaneously the generator is competing against the discriminator trying to make D of G of Z approach one we can think of the generator and the discriminator as being a little bit like | 461 | 489 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=461s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | counterfeiters and police the police would like to allow people with real money to safely spend their money without being punished but would like to also catch counterfeit money and remove it from circulation and punish the counterfeiters simultaneously the counterfeiters would like to fool the police and successfully use their money but if the counterfeiters are not very | 489 | 511 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=489s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | good at making fake money they'll get caught so over time the police learn to be better and better at catching counterfeit money and the counterfeiters learn to be better and better at producing it so in the end we can actually use game theory to analyze this situation we find that if both the police and the counterfeiters or in other words if both the discriminator and the generator | 511 | 534 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=511s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | have unlimited capabilities the Nash equilibrium of this game corresponds to the generator producing perfect samples that come from the same distribution as the training data in other words the counterfeiters are producing counterfeit money that is indistinguishable from real money and at that point the discriminator or in other words the police cannot actually distinguish | 534 | 557 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=534s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | between the two sources of data and simply says that every input has probability one-half of being real and probability one-half of being fake we can formally describe the learning process using what's called a minimax game so we have a cost function for the discriminator that we call J superscript D which is just the normal cross entropy cost associated with the binary | 557 | 581 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=557s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | classification problem of telling real data from fake data we have one mini batch of real data drawn from the data set and one mini batch of fake data drawn from the generator and then if we use this minimax formulation of the game then the cost for the generator is just the negation of the cost for the discriminator the equilibrium of this game is a saddle point of J superscript | 581 | 604 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=581s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
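The minimax cost described in these segments can be written out as a small numeric sketch. Everything below is an illustrative assumption rather than code from the talk: the toy discriminator is a plain sigmoid and the toy generator simply shifts the sampled noise.

```python
import math
import random

def discriminator_cost(D, G, real_batch, noise_batch):
    # J^D = -1/2 E_x[log D(x)] - 1/2 E_z[log(1 - D(G(z)))]
    real_term = sum(math.log(D(x)) for x in real_batch) / len(real_batch)
    fake_term = sum(math.log(1.0 - D(G(z))) for z in noise_batch) / len(noise_batch)
    return -0.5 * real_term - 0.5 * fake_term

def generator_cost_minimax(D, G, real_batch, noise_batch):
    # in the minimax formulation the generator's cost is just -J^D
    return -discriminator_cost(D, G, real_batch, noise_batch)

# hypothetical toy players: D squashes its input through a sigmoid,
# G shifts the sampled noise toward a different region
D = lambda x: 1.0 / (1.0 + math.exp(-x))
G = lambda z: z - 2.0

random.seed(0)
real = [random.gauss(1.0, 0.5) for _ in range(100)]
noise = [random.gauss(0.0, 1.0) for _ in range(100)]
print(discriminator_cost(D, G, real, noise))
```

One sanity check on this cost: a discriminator that outputs one-half everywhere, the equilibrium behavior described earlier, pays exactly log 2.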
9JpdAg6uMXs | D and finding this saddle point resembles the process of minimizing the Jensen-Shannon divergence between the data and the model we can use that to actually prove that we'll recover the correct data distribution if we go to the equilibrium of the game we can analyze what the discriminator does as it plays this game and we see exactly what it is that allows generative | 604 | 627 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=604s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | adversarial networks to be effective the basic idea is that if you take the derivatives of the minimax game's value function with respect to the outputs of the discriminator we can actually solve for the optimal function that the discriminator should learn this function turns out to be the ratio between P data of X and P data of X plus P model of X you can do a little bit of algebra on | 627 | 651 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=627s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | that to rearrange it and you get P data of x over P model of X so we're learning a ratio between the density that the real data is drawn from and the density that the model currently represents estimating that ratio allows us to compute a lot of different divergences like the Jensen-Shannon divergence and the KL divergence between the data and model that are used for training with | 651 | 675 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=651s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
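The optimal-discriminator identity stated here can be checked numerically. The density values below are made-up numbers for a single point x; the only thing the sketch verifies is the algebra from the talk, namely that the pointwise cross-entropy minimizer is p_data / (p_data + p_model) and that it encodes the ratio p_data / p_model.

```python
import math

# illustrative densities at a single point x (not values from the talk)
p_data, p_model = 0.30, 0.10

def pointwise_cost(d):
    # per-point contribution to the cross entropy the discriminator minimizes
    return -(p_data * math.log(d) + p_model * math.log(1.0 - d))

# brute-force the minimizer over a fine grid in (0, 1)
best_d = min((pointwise_cost(i / 10000.0), i / 10000.0)
             for i in range(1, 10000))[1]

d_star = p_data / (p_data + p_model)   # the claimed optimum, here 0.75
ratio = best_d / (1.0 - best_d)        # recovers p_data / p_model
print(best_d, ratio)   # ratio comes out close to 3.0 = p_data / p_model
```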
9JpdAg6uMXs | maximum likelihood so the key insight of generative adversarial networks is to use supervised learning to estimate a ratio that we need to be able to do unsupervised learning there are also a variety of other papers by Shakir Mohamed and his collaborators and Sebastian Nowozin and his collaborators that talk a lot about the different divergences that you can learn with these kinds of techniques and | 675 | 697 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=675s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | how this estimation procedure compares to other techniques that have also been developed in the statistical estimation literature previously but the basic idea right here is that we're able to learn this ratio so far I've described everything in terms of the minimax game I personally recommend that you don't use exactly that formulation you use a slightly different formulation | 697 | 722 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=697s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | where the generator has its own separate cost and the idea is that rather than minimizing the discriminator's payoff the generator should maximize the probability that the discriminator makes a mistake the nice thing about this formulation is that the generator is much less likely to suffer from the vanishing gradients problem but this is more of a practical tip and trick rather | 722 | 745 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=722s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
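The vanishing-gradients argument behind this recommendation can be made concrete with two one-line derivatives. This is a sketch of the standard reasoning, not code from the talk: early in training the discriminator confidently rejects fakes, so D(G(z)) is near zero, and only the non-saturating cost still delivers a strong gradient there.

```python
def minimax_grad(d):
    # minimax generator minimizes log(1 - D(G(z))); d/dd log(1 - d) = -1/(1 - d)
    return -1.0 / (1.0 - d)

def non_saturating_grad(d):
    # non-saturating generator minimizes -log D(G(z)); d/dd -log d = -1/d
    return -1.0 / d

d = 0.001  # the discriminator confidently calls the sample fake
print(abs(minimax_grad(d)))         # roughly 1: the signal has saturated
print(abs(non_saturating_grad(d)))  # roughly 1000: a strong learning signal
```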
9JpdAg6uMXs | than a strong theoretical recommendation and some of the other speakers you'll see today might actually give other advice so it's kind of an open question about exactly which tips and tricks work the best one of the really cool things about generative adversarial Nets is that you can do arithmetic on the z vectors that drive the output of the model we can think of Z as a set of | 745 | 770 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=745s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | latent variables that describe what is going to appear in the image and so Alec Radford the co-organizer of this workshop and his collaborators showed that you can actually take Z vectors corresponding to pictures of a man with glasses the Z vector for a picture of a man and the Z vector for a picture of a woman and if you subtract the vector for men from the vector for men with glasses | 770 | 795 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=770s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | and you add the vector for women you'll actually get a vector that describes woman with glasses and when you decode small jitters of that vector you get many different pictures of a woman wearing glasses a lot of you may have seen a similar result before with language models where the word embedding for Queen could be used to do arithmetic where if you subtract off the word | 795 | 819 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=795s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | embeddings for female and add the word embedding for male you get a vector that is very close to the word embedding for King in this case Alec and his collaborators have a slightly more exciting result because they not only show that the arithmetic works in vector space but also that the vector can be decoded to a high dimensional realistic image with many different pixels all set | 819 | 843 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=819s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
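The latent arithmetic described here is plain coordinate-wise vector arithmetic on the z codes. The three-dimensional codes below are made-up illustrative values (the real DCGAN demo uses high-dimensional codes averaged over several images of each concept):

```python
# hypothetical averaged latent codes for three image groups (made-up values)
z_man_glasses = [0.9, 0.2, 0.7]
z_man         = [0.9, 0.2, 0.1]
z_woman       = [0.1, 0.8, 0.1]

def latent_arithmetic(a, b, c):
    # a - b + c, coordinate-wise, as in the "man with glasses" demo
    return [ai - bi + ci for ai, bi, ci in zip(a, b, c)]

z_woman_glasses = latent_arithmetic(z_man_glasses, z_man, z_woman)
print(z_woman_glasses)  # decode this (plus small jitters) through G
```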
9JpdAg6uMXs | correctly in the case of language modeling the final result was a vector that was very near the word for King but there was no need to decode that vector into some kind of extremely complicated observation set that corresponds to a king probably the biggest issue with generative adversarial networks and to some extent with other forms of adversarial training is that the | 843 | 870 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=843s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | training process does not always converge most of deep learning consists of minimizing a single cost function but the basic idea of adversarial training is that we have two different players who are adversaries and each of them is minimizing their own cost function when we minimize a single cost function that's called optimization and it's unusual for us to have a major problem | 870 | 894 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=870s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | with non convergence we might get unlucky and converge to a location that we don't like such as a saddle point with a high cost function value but we'll usually at least converge to some general region when we play a game with two players and each of them is simultaneously trying to minimize their own cost we might never actually approach the equilibrium of the game in | 894 | 918 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=894s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | particular one of the worst forms of non convergence that we see with generative adversarial networks is what we call mode collapse or if you're in on a little joke in our first paper we also called it the Helvetica scenario sometimes the basic idea behind mode collapse is that when we use the minimax formulation of the game what we'd really like to see is minimization over | 918 | 942 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=918s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | G in the outer loop and maximization over D in the inner loop if we do this min max problem applied to the value function V we are guaranteed to actually recover the training distribution but if we swap the order of the max and the min we get a different result in fact if we minimize over G in the inner loop the generator has no incentive to do anything other than map all inputs Z to | 942 | 968 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=942s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | the same output X and that output X is the point that is currently considered most likely to be real rather than fake by the current value of the discriminator so we really want to do min max and not max min which one are we actually doing the way that we train models we do simultaneous gradient descent on both players' costs and that looks very symmetric it doesn't naturally | 968 | 991 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=968s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | prioritize one direction of the min max or max min in practice we find that we often see results that look an awful lot like max min unfortunately with G in the inner loop so using some very nice visualizations from Luke Metz and his collaborators we see here that if we have a target distribution we'd like to learn with several different modes in two dimensions the | 991 | 1,019 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=991s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
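The non-convergence of simultaneous gradient descent on a game can be seen on the simplest possible example. The bilinear game V(x, y) = x·y below is a standard illustration and is not from the talk: one player descends on V, the other ascends, and the iterates spiral away from the equilibrium at the origin instead of converging to it.

```python
# simultaneous gradient descent on the two-player game V(x, y) = x * y,
# where x minimizes V and y maximizes V; the unique equilibrium is (0, 0)
x, y, lr = 1.0, 1.0, 0.1
r0 = (x * x + y * y) ** 0.5
for _ in range(100):
    gx, gy = y, x                      # dV/dx = y, dV/dy = x
    x, y = x - lr * gx, y + lr * gy    # both players update simultaneously
r100 = (x * x + y * y) ** 0.5
print(r0, r100)  # the distance from the equilibrium grows every step
```

Each update multiplies the distance from the origin by sqrt(1 + lr²) > 1, so the players orbit outward rather than settling at the equilibrium, a toy version of the oscillation GAN training can exhibit.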
9JpdAg6uMXs | training procedure shown in the bottom row of images actually visits one mode after another instead of learning to visit all of the different modes so what's going on is that the generator will identify some mode that the discriminator believes is highly likely and place all of its mass there and then the discriminator learns not to be fooled by the generator going to that | 1,019 | 1,041 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1019s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | one particular location and instead of learning that the generator ought to go to multiple locations the generator moves on to a different location until the discriminator learns to reject that one too one way that we can try to mitigate the mode collapse problem is with the use of what we call mini-batch features this is introduced in the paper that we presented on Monday night from | 1,041 | 1,065 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1041s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | open AI where the basic idea is to add extra features to the discriminator so the discriminator can look at an entire mini batch of data and if all the different samples in the mini batch are very similar then the discriminator can realize that mode collapse is happening and reject those samples as being fake on the CIFAR-10 dataset this approach allowed us to learn samples | 1,065 | 1,087 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1065s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
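A simplified sketch in the spirit of the minibatch-features idea: the paper itself uses learned pairwise-distance features, while the batch-standard-deviation variant below is a cruder stand-in chosen here only for illustration. A collapsed minibatch of near-identical samples yields extra features near zero, which the discriminator can learn to flag.

```python
def minibatch_std_feature(batch):
    # append the across-batch standard deviation of each coordinate as
    # extra features every sample in the batch can carry to the discriminator
    n = len(batch)
    dims = len(batch[0])
    feats = []
    for j in range(dims):
        col = [sample[j] for sample in batch]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        feats.append(var ** 0.5)
    return [sample + feats for sample in batch]

diverse   = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
collapsed = [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]
print(minibatch_std_feature(diverse)[0])    # std features clearly nonzero
print(minibatch_std_feature(collapsed)[0])  # std features collapse to zero
```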
9JpdAg6uMXs | that show all the different object classes in CIFAR-10 for the first time on the left I show you what the training data looks like for CIFAR-10 you can see that it's not that beautiful to start with because they are only 32 by 32 pixel images so the resolution is very low on the right we see the samples that come from the model and you see that you can actually recognize horses ships | 1,087 | 1,109 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1087s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | airplanes and so on and cars so that we actually have the real object classes recognizably occurring within this data set on ImageNet there are a thousand classes so it's much more difficult to resist the mode collapse problem on ImageNet our model mostly produces samples that have kind of the texture of photographs but don't necessarily have rich class structure we do occasionally | 1,109 | 1,136 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1109s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | get rich class structure if I show you some very cherry-picked examples we're able to make lots of different pictures of things like dogs spiders koalas bears and birds and so on we still see a lot of problems with the model though in particular we often see problems with counting we think that this might be something to do with the architecture of our convolutional | 1,136 | 1,157 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1136s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | network that it's able to test whether a feature is absent or present but it doesn't necessarily test how many times that feature occurs so we see things like this giraffe head with four eyes this dog with something like six legs or this kind of three-headed monkey thing or you know stacks of puppies rather than a single puppy or a cat with one and a half faces we also often see | 1,157 | 1,185 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1157s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | problems with perspective where the model generates images that are extremely flat in particular the image on the lower left looks to me like somebody skinned a dog you know like a bearskin rug and then took a picture with the camera looking straight down at it on the ground while the picture in the lower middle looks to me literally like a cubist painting where in the cubism | 1,185 | 1,206 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1185s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |
9JpdAg6uMXs | movement artists intentionally removed all the perspective from an image and rearranged the objects to show us different parts from different angles but represented the entire thing as flat in many cases we see images that are really quite nice but have some problem with the global structure a lot of the time this just consists of images of animals where we don't actually get | 1,206 | 1,229 | https://www.youtube.com/watch?v=9JpdAg6uMXs&t=1206s | Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI | |