Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
Source: https://www.youtube.com/watch?v=Z6rxFNMGdn0
Transcript excerpt (6:41 to 47:02).
Ian Goodfellow: ...experiment to tell whether a person is a zombie or not. And similarly, I don't know how you could run an experiment to tell whether an advanced AI system had become conscious in the sense of qualia or not.

Lex Fridman: But in the more practical sense, almost like self-attention, you think consciousness and cognition can, in an impressive way, emerge from current types of architectures?

Ian Goodfellow: Yes. Or if you think of consciousness in terms of self-awareness, and just making plans based on the fact that the agent itself exists in the world, reinforcement learning algorithms are already more or less forced to model the agent's effect on the environment. So that more limited version of consciousness is already something that we get limited versions of with reinforcement learning algorithms, if they're trained well.

Lex Fridman: But you say limited. The big question really is how you jump from limited to human level, and whether it's possible. Even just building common-sense reasoning seems to be exceptionally difficult. So if we scale things up, if we get much better at supervised learning, if we get better at labeling, if we get bigger datasets and more compute, do you think we'll start to see really impressive things that go from limited to something like echoes of human-level cognition?

Ian Goodfellow: I think so, yeah. I'm optimistic about what can happen just with more computation and more data. I do think it'll be important to get the right kind of data. Today, most of the machine learning systems we train are mostly trained on one type of data for each model. But the human brain gets all of our different senses, and we have many different experiences, like riding a bike, driving a car, talking to people, reading. I think when you get that kind of integrated dataset working with a machine learning model that can actually close the loop and interact, we may find that algorithms not so different from what we have today learn really interesting things when you scale them up a lot and train them on a large amount of multimodal data.

Lex Fridman: So multimodal is really interesting. But within one mode of data, and you work on adversarial examples, what about selecting better which are the difficult cases, the ones most useful to learn from?
Ian Goodfellow: Oh yeah. Like, could we get a whole lot of mileage out of designing a model that's resistant to adversarial examples, or something like that?

Lex Fridman: Right. That's the question.

Ian Goodfellow: My thinking on that has evolved a lot over the last few years. When I first started to really invest in studying adversarial examples, I was thinking of it mostly as: adversarial examples reveal a big problem with machine learning, and we would like to close the gap between how machine learning models respond to adversarial examples and how humans respond. After studying the problem more, I still think that adversarial examples are important. I think of them now more as a security liability than as an issue that necessarily shows there is something uniquely wrong with machine learning as opposed to humans.

Lex Fridman: Also, do you see them as a tool to improve the performance of the system? Not on the security side, but literally just accuracy.

Ian Goodfellow: I do see them as a kind of tool on that side, but maybe not quite as much as I used to think. We've started to find that there's a trade-off between accuracy on adversarial examples and accuracy on clean examples. Back in 2014, when I did the first adversarially trained classifier that showed resistance to some kinds of adversarial examples, it also got better at the clean data on MNIST. And that's something we've replicated several times on MNIST: when we train against weak adversarial examples, MNIST classifiers get more accurate. So far that hasn't really held up on other datasets, and it hasn't held up when we train against stronger adversaries. It seems like when you confront a really strong adversary, you tend to have to give something up.
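To make that trade-off concrete, here is a minimal sketch of adversarial training with the fast gradient sign method in PyTorch. The model, the epsilon, and the equal weighting of clean and adversarial loss are illustrative assumptions, not the original experimental setup.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.25):
    # Fast gradient sign method: nudge each input in the direction that
    # increases the loss, with per-pixel perturbation bounded by eps.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.25):
    # Train on clean and adversarial examples together. Against a weak
    # adversary this improved clean MNIST accuracy; against stronger
    # adversaries, clean accuracy tends to be traded away.
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```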
Lex Fridman: Interesting. This is such a compelling idea, because it feels like that's how we humans learn: the difficult cases. We try to think of what we'd screw up, and then we make sure we fix that.

Ian Goodfellow: Yeah. Also, in a lot of branches of engineering, you do a worst-case analysis and make sure that your system will work in the worst case, and then that guarantees that it'll work in all of the messy average cases that happen when you go out into a really randomized world.

Lex Fridman: With driving, with autonomous vehicles, there seems to be a desire to think adversarially, to try to figure out how to mess up the system, and if you can be robust to all those difficult cases, then that's a hand-wavy, empirical way to show that your system is safe.

Ian Goodfellow: Yes. Today, most adversarial example research isn't really focused on a particular use case, but there are a lot of different use cases where you'd like to make sure that the adversary can't interfere with the operation of your system. Like in finance: if you have an algorithm making trades for you, people go to a lot of effort to obfuscate their algorithm. That's partly to protect their IP, because you don't want to research and develop a profitable trading algorithm and then have somebody else capture the gains. But it's at least partly because you don't want people to make adversarial examples that fool your algorithm into making bad trades. Or, I guess, one area that's been popular in the academic literature is speech recognition. If you use speech recognition to hear an audio waveform and then turn that into a command that a phone executes for you, you don't want a malicious adversary to be able to produce audio that gets interpreted as malicious commands, especially if a human in the room doesn't realize that something like that is happening.

Lex Fridman: In speech recognition, has there been much success in being able to create adversarial examples that fool the system?

Ian Goodfellow: Yeah, actually. I guess the first work that I'm aware of is a paper called "Hidden Voice Commands" that came out in 2016, I believe. They were able to show that they could make sounds that are not understandable by a human, but are recognized as the target phrase that the attacker wants the phone to recognize. Since then, things have gotten a little bit better on the attacker side and worse on the defender side. It's become possible to make sounds that sound like normal speech, but are actually interpreted as a different sentence than the human hears. The level of perceptibility of the adversarial perturbation is still kind of high. When you listen to the recording, it sounds like there's some noise in the background, like rustling sounds, but those rustling sounds are actually the adversarial perturbation that makes the phone hear a completely different sentence.
Lex Fridman: Yeah, that's so fascinating. Peter Norvig mentioned that you're writing the deep learning chapter for the fourth edition of the "Artificial Intelligence: A Modern Approach" book. How do you even begin summarizing the field of deep learning in one chapter?

Ian Goodfellow: Well, in my case, I waited like a year before I actually wrote anything. Even having written a full-length textbook before, it's still pretty intimidating to try to start writing just one chapter that covers everything. One thing that helped me make that plan was actually the experience of having written the full book before, and then watching how the field changed after the book came out. I realized there were a lot of topics that were maybe extraneous in the first book, and just seeing what stood the test of a few years of being published, and what seems a little bit less important to have included now, helped me pare down the topics I wanted to cover. It's also really nice that the field has now kind of stabilized, to the point where some core ideas from the 1980s are still used today. When I first started studying machine learning, almost everything from the 1980s had been rejected, and now some of it has come back. So the stuff that's really stood the test of time is what I focused on putting into the book.

There's also, I guess, two different philosophies about how you might write a book. One philosophy is you try to write a reference that covers everything. The other philosophy is you try to provide a high-level summary that gives people the language to understand a field and tells them what the most important concepts are. The first deep learning book that I wrote with Yoshua and Aaron was somewhere between the two philosophies, in that it's trying to be both a reference and an introductory guide. Writing this chapter for Russell and Norvig's book, I was able to focus more on just a concise introduction of the key concepts, and the language you need to read about them more. In a lot of cases, I actually just wrote paragraphs that said: here's a rapidly evolving area that you should pay attention to. It's pointless to try to tell you what the latest and best version of a learning-to-learn model is. I can point you to a paper that's recent right now, but there isn't a whole lot of reason to delve into exactly what's going on with the latest learning-to-learn approach, or the latest module produced by a learning-to-learn algorithm. You should know that learning-to-learn is a thing, and that it may very well be the source of the latest and greatest convolutional net or recurrent net module that you would want to use in your latest project. But there isn't a lot of point in trying to summarize exactly which architecture and which learning approach got to which level of performance.

Lex Fridman: So you maybe focus more on the basics of the methodology: from backpropagation, to feed-forward and recurrent neural networks, to convolutional networks, that kind of thing.

Ian Goodfellow: Yeah, yeah.
Lex Fridman: If I were to ask you... I remember I took an algorithms and data structures course, and the professor asked, "What is an algorithm?" and yelled at everybody, in a good way, that nobody was answering it correctly. It was a graduate course; everybody knew what an algorithm was, but they weren't able to answer it well. Let me ask you, in that same spirit: what is deep learning?

Ian Goodfellow: I would say deep learning is any kind of machine learning that involves learning parameters of more than one consecutive step. Shallow learning is things where you learn a lot of operations that happen in parallel. You might have a system that makes multiple steps, like hand-designed feature extractors, but really only one step is learned. Deep learning is anything where you have multiple operations in sequence. That includes the things that are really popular today, like convolutional networks and recurrent networks, but it also includes some of the things that have died out, like Boltzmann machines, where we weren't using backpropagation. Today I hear a lot of people define deep learning as gradient descent applied to these differentiable functions, and I think that's a legitimate usage of the term; it's just different from the way that I use the term myself.
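As a toy illustration of that definition (the shapes and the fixed feature map below are made-up examples), the distinction is in how many consecutive steps are learned, not in how many steps exist overall:

```python
import torch
import torch.nn as nn

# "Shallow" in this sense: a hand-designed, fixed feature step
# followed by a single learned step.
fixed_features = lambda x: torch.cat([x, x ** 2], dim=1)  # not learned
shallow = nn.Linear(2 * 784, 10)                          # one learned step

# "Deep" in this sense: two or more consecutive steps whose
# parameters are all learned.
deep = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # learned step 1
    nn.Linear(256, 10),              # learned step 2
)

x = torch.randn(32, 784)
shallow_out = shallow(fixed_features(x))
deep_out = deep(x)
```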
Lex Fridman: So what's an example of deep learning that is not gradient descent on differentiable functions? Not specifically, perhaps, but more, even looking into the future, what's your thought about that space of approaches?

Ian Goodfellow: Yeah, so I tend to think of machine learning algorithms as decomposed into really three different pieces. There's the model, which can be something like a neural net or a Boltzmann machine or a recurrent model, and that basically just describes: how do you take data, how do you take parameters, and what function do you use to make a prediction given the data and the parameters? Another piece of the learning algorithm is the optimization algorithm. Not every algorithm can really be described in terms of optimization, but: what's the algorithm for updating the parameters, or updating whatever the state of the network is? And then the last piece is the dataset: how do you actually represent the world as it comes into your machine learning system?

I think of deep learning as telling us something about what the model looks like. Basically, to qualify as deep, I say that it just has to have multiple layers. That can be multiple steps in a feed-forward differentiable computation; that can be multiple layers in a graphical model. There are a lot of ways that you could satisfy me that something has multiple steps that are each parameterized separately. I think of gradient descent as being all about that other piece, the how-do-you-actually-update-the-parameters piece. So you could imagine having a deep model, like a convolutional net, and training it with something like evolution or a genetic algorithm, and I would say that still qualifies as deep learning.
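A small sketch of that point, under made-up details: the model below is deep by this definition (two consecutive learned steps), but the update rule is a toy evolution strategy with no gradients anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params():
    # Two consecutive learned steps, so "deep" under this definition.
    return [rng.normal(0, 0.1, (784, 64)), rng.normal(0, 0.1, (64, 10))]

def forward(params, x):
    h = np.maximum(0, x @ params[0])  # learned step 1 + ReLU
    return h @ params[1]              # learned step 2

def loss(params, x, y):
    logits = forward(params, x)
    logits = logits - logits.max(axis=1, keepdims=True)
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()

def evolution_step(params, x, y, pop=20, sigma=0.02):
    # No derivatives: sample perturbed offspring and keep the best.
    best, best_loss = params, loss(params, x, y)
    for _ in range(pop):
        cand = [p + sigma * rng.normal(size=p.shape) for p in params]
        cand_loss = loss(cand, x, y)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss
```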
Ian Goodfellow: And then, in terms of models that aren't necessarily differentiable, I guess Boltzmann machines are probably the main example of something where you can't really take a derivative and use that for the learning process, but you can still argue that the model has many steps of processing that it applies when you run inference in the model.

Lex Fridman: So it's the steps of processing that are key. Geoff Hinton suggests that we need to throw away backpropagation and start all over. What do you think about that? What could an alternative direction of training neural networks look like?

Ian Goodfellow: I don't know that backpropagation is going to go away entirely. Most of the time, when we decide that a machine learning algorithm isn't on the critical path to research for improving AI, the algorithm doesn't die; it just becomes used for some specialized set of things. A lot of algorithms, like logistic regression, don't seem that exciting to AI researchers who are working on things like speech recognition or autonomous cars today, but there's still a lot of use for logistic regression, in things like analyzing really noisy data in medicine and finance, or making really rapid predictions in really time-limited contexts. So I think backpropagation and gradient descent are around to stay, but they may not end up being everything that we need to get to real human-level or superhuman AI.

Lex Fridman: Backpropagation has been around for a few decades. Are you optimistic about us, as a community, being able to discover something better?

Ian Goodfellow: Yeah, I am. I think we likely will find something that works better. You could imagine things like having stacks of models, where some of the lower-level models predict parameters of the higher-level models, and so at the top level you're not learning in terms of literally calculating gradients, but just predicting how different values will perform. You can kind of see that already in some areas, like Bayesian optimization, where you have a Gaussian process that predicts how well different parameter values will perform. We already use those kinds of algorithms for things like hyperparameter optimization, and in general we know a lot of things other than backprop that work really well for specific problems. The main thing we haven't found is a way of taking one of these other non-backprop-based algorithms and having it really advance the state of the art on an AI-level problem.
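A minimal version of that Gaussian-process picture, using scikit-learn; the hyperparameter, the observed scores, and the candidate grid are all invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Observed (log10 learning rate -> validation accuracy) pairs; illustrative.
lrs = np.log10([1e-4, 1e-3, 1e-2, 1e-1]).reshape(-1, 1)
scores = np.array([0.71, 0.88, 0.93, 0.62])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(lrs, scores)

# Predict how well unseen values will perform without running them,
# then try the candidate with the best optimistic estimate.
candidates = np.linspace(-5, 0, 100).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)
next_lr = 10 ** candidates[np.argmax(mean + std)].item()
```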
Ian Goodfellow: But I wouldn't be surprised if eventually we find that some of these algorithms, even ones that already exist, not necessarily a new one, could be customized to do something really interesting at the level of cognition. One system that we really don't have working quite right yet is short-term memory. We have things like LSTMs, they're called long short-term memory, but they still don't do quite what a human does with short-term memory. Gradient descent, to learn a specific fact, has to do multiple steps on that fact. If I tell you the meeting today is at 3 p.m., I don't need to say over and over again, "It's at 3 p.m., it's at 3 p.m., it's at 3 p.m., it's at 3 p.m.," for you to do a gradient step on each one. You just hear it once and you remember it. There's been some work on things like self-attention, and attention-like mechanisms like the neural Turing machine, that can write to memory cells and update themselves with facts like that right away. But I don't think we've really nailed it yet, and that's one area where I'd imagine that new optimization algorithms, or different ways of applying existing optimization algorithms, could give us a way of just lightning-fast updating the state of a machine learning system to contain a specific fact like that, without needing to have it presented over and over and over again.
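As an entirely toy illustration of that contrast (this is not the neural Turing machine, and every detail here is invented), an outer-product key-value memory can store a fact in a single write, where gradient descent would need many repeated presentations:

```python
import numpy as np

d = 64
memory = np.zeros((d, d))

def embed(token):
    # Stand-in for a learned encoder; fixed random unit vectors here.
    token_rng = np.random.default_rng(abs(hash(token)) % (2**32))
    v = token_rng.normal(size=d)
    return v / np.linalg.norm(v)

def write(key, value):
    # One-shot Hebbian-style write: a single update stores the fact.
    global memory
    memory += np.outer(embed(value), embed(key))

def read(key):
    return memory @ embed(key)

write("meeting_time", "3pm")          # said once, stored at once
recalled = read("meeting_time")
print(recalled @ embed("3pm"))        # ~1.0: the fact was recovered
```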
Lex Fridman: Some of the success of symbolic systems in the '80s is that they were able to assemble these kinds of facts better. But there's a lot of expert input required, and it's very limited in that sense. Do you ever look back to that as something we'll have to return to eventually, sort of dust off the book from the shelf and think about how we build knowledge representation? Will we have to use graph searches, and first-order logic, and entailment, and things like that?

Ian Goodfellow: Yeah, exactly. In my particular line of work, which has mostly been machine learning security, and also generative modeling, I haven't usually found myself moving in that direction. For generative models, I could see a little bit of it being useful, if you had something like a differentiable knowledge base, or some other kind of knowledge base where it's possible for some of our fuzzier machine learning algorithms to interact with it.

Lex Fridman: I mean, a neural network is kind of like that. It's a differentiable knowledge base of sorts.

Ian Goodfellow: Yeah. But if we had a really easy way of giving feedback to machine learning models, that would clearly help a lot with generative models. You could imagine one way of getting there would be to get a lot better at natural language processing. Another way would be to take some kind of knowledge base and figure out a way for it to actually interact with a neural network.

Lex Fridman: Being able to have a chat with a neural network.

Ian Goodfellow: Yes. One thing in generative models we see a lot today is you'll get things like faces that are not symmetrical, like people that have two eyes that are different colors. There are people with eyes that are different colors in real life, but not nearly as many of them as you tend to see in machine-learning-generated data. So if you had a knowledge base that could contain the facts that people's faces are generally approximately symmetric, and that eye color is especially likely to be the same on both sides, being able to just inject that hint into the machine learning model, without it having to discover that itself after studying a lot of data, would be a really useful feature. I could see a lot of ways of getting there without bringing back some of the 1980s technology, but I also see some ways that you could imagine extending the 1980s technology to play nice with neural nets and have it help get there.
Lex Fridman: Awesome. So you talked about the story of you coming up with the idea of GANs at a bar with some friends. You were arguing that GANs, generative adversarial networks, would work, and the others didn't think so. Then you went home, at midnight, coded it up, and it worked. So if I was a friend of yours at the bar, I would also have doubts. It's a really nice idea, but I'm very skeptical that it would work. What was the basis of their skepticism, and what was the basis of your intuition for why it should work?

Ian Goodfellow: I don't want to be someone who goes around promoting alcohol for the science, but in this case I do actually think that drinking helped a little bit. When your inhibitions are lowered, you're more willing to try out things that you wouldn't try out otherwise. So I have noticed, in general, that I'm less prone to shooting down some of my own ideas when I've had a little bit to drink. I think if I had had that idea at lunchtime, I probably would have thought: it's hard enough to train one neural net; you can't train a second neural net in the inner loop of the outer neural net. That was basically my friends' reaction, that trying to train two neural nets at the same time would be too hard.

Lex Fridman: So it was more about the training process. My skepticism would be: I'm sure you could train it, but the thing it would converge to would not be able to generate anything reasonable, any kind of reasonable realism.

Ian Goodfellow: Yeah. So part of what all of us were thinking about when we had this conversation was deep Boltzmann machines, which a lot of us in the lab, including me, were a big fan of at the time. They involve two separate processes running at the same time. One of them is called the positive phase, where you load data into the model and tell the model to make the data more likely. The other is called the negative phase, where you draw samples from the model and tell the model to make those samples less likely. In a deep Boltzmann machine, it's not trivial to generate a sample; you have to actually run an iterative process that gets better and better samples, coming closer and closer to the distribution the model represents. So during the training process, you're always running these two systems at the same time: one that's updating the parameters of the model, and another one that's trying to generate samples from the model. And they worked really well on things like MNIST, but a lot of us in the lab, including me, had tried to get deep Boltzmann machines to scale past MNIST, to things like generating color photos, and we just couldn't get the two processes to stay synchronized. So when I had the idea for GANs, a lot of people thought that the discriminator would have more or less the same problem as the negative phase in the Boltzmann machine: that trying to train the discriminator in the inner loop, you just couldn't get it to keep up with the generator in the outer loop, and that it would prevent the whole thing from converging to anything useful.

Lex Fridman: I share that intuition.

Ian Goodfellow: Yeah.

Lex Fridman: But it turned out not to be the case.
Ian Goodfellow: A lot of the time with machine learning algorithms, it's really hard to predict ahead of time how well they'll actually perform. You have to just run the experiment and see what happens. And I would say I still, today, don't have one factor I can put my finger on and say, "This is why GANs worked for photo generation and deep Boltzmann machines don't." There are a lot of theory papers showing that, under some theoretical settings, the GAN algorithm does actually converge. But those settings are restricted enough that they don't necessarily explain the whole picture, in terms of all the results that we see in practice.

Lex Fridman: So, taking a step back: can you, in the same way as we talked about deep learning, tell me what generative adversarial networks are?
Ian Goodfellow: Yeah. So generative adversarial networks are a particular kind of generative model. A generative model is a machine learning model that can train on some set of data. Say you have a collection of photos of cats, and you want to generate more photos of cats, or you want to estimate a probability distribution over cats, so you can ask how likely it is that some new image is a photo of a cat. GANs are one way of doing this. Some generative models are good at creating new data; other generative models are good at estimating that density function and telling you how likely particular pieces of data are to come from the same distribution as the training data. GANs are more focused on generating samples than on estimating the density function. There are some kinds of GANs, like FlowGAN, that can do both, but mostly GANs are about generating samples, generating new photos of cats that look realistic. And they do that completely from scratch. It's analogous to human imagination: when a GAN creates a new image of a cat, it's using a neural network to produce a cat that has not existed before. It isn't doing something like compositing photos together; you're not literally taking the eye off of one cat and the ear off of another cat. It's more of a digestive process, where the neural net trains on a lot of data, comes up with some representation of the probability distribution, and generates entirely new cats.

There are a lot of different ways of building a generative model. What's specific to GANs is that we have a two-player game, in the game-theoretic sense, and as the players in this game compete, one of them becomes able to generate realistic data. The first player is called the generator. It produces output data, such as images, for example; at the start of the learning process, it'll just produce completely random images. The other player is called the discriminator. The discriminator takes images as input and guesses whether they're real or fake. You train it both on real data, photos that come from your training set, actual photos of cats, where you train it to say that those are real, and on images that come from the generator network, where you train it to say that those are fake. As the two players compete in this game, the discriminator tries to become better at recognizing whether images are real or fake, and the generator becomes better at fooling the discriminator into thinking that its outputs are real. And you can analyze this through the language of game theory and find that there's a Nash equilibrium where the generator has captured the correct probability distribution. So, in the cat example, it makes perfectly realistic cat photos, and the discriminator is unable to do better than random guessing, because all the samples coming from both the data and the generator look equally likely to have come from either source.
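In code, the game described here looks roughly like the minimal PyTorch sketch below. The architectures, sizes, and learning rates are placeholder assumptions for illustration, not the original paper's setup.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: (batch, 784) flattened images
    batch = real.size(0)
    fake = G(torch.randn(batch, 64))

    # Discriminator: say "real" on data, "fake" on generator samples.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator into saying "real".
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

At the equilibrium described above, the discriminator's output carries no information: it assigns probability one half to data and samples alike.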
Lex Fridman: So do you ever sit back, and does it just blow your mind that this thing works, that it's able to estimate that density function well enough to generate realistic images? Do you ever sit back and think, how does this even work? This is quite incredible, especially where GANs have gone in terms of realism.

Ian Goodfellow: Yeah. And not just to flatter my own work, but generative models, all of them, have this property that if they really did what we asked them to do, they would do nothing but memorize the training data. For models that are based on maximizing the likelihood, the way that you obtain the maximum likelihood for a specific training set is to assign all of your probability mass to the training examples and nowhere else. For GANs, the game is played using a training set, so the way that you become unbeatable in the game is to literally memorize training examples. One of my former interns, Vaishnavh Nagarajan, wrote a paper where he showed that it's actually hard for the generator to memorize the training data, hard in a statistical-learning-theory sense: you can actually give reasons why it would require quite a lot of learning steps, and a lot of observations of different latent variables, before you could memorize the training data. That still doesn't really explain why, when you produce samples that are new, you get compelling images rather than just garbage that's different from the training set. And I don't think we really have a good answer for that, especially if you think about how many possible images are out there and how few images the generative model sees during training. It seems just unreasonable that generative models create new images as well as they do, especially considering that we're basically training them to memorize rather than generalize.

I think part of the answer is, there's a paper called "Deep Image Prior" where they show that you can take a convolutional net and you don't even need to learn the parameters of it at all; you just use the model architecture, and it's already useful for things like inpainting images. I think that shows us that the convolutional network architecture captures something really important about the structure of images, and we don't need to actually use learning to capture all of that information. That would imply that it would be much harder to make generative models in other domains. So far we're able to make reasonable speech models and things like that, but to be honest, we haven't actually explored a whole lot of different datasets all that much. We don't, for example, see a lot of deep learning models of, say, biology datasets, where you have lots of microarrays measuring the amounts of different enzymes and things like that. So we may find that some of the progress that we've seen for images and speech turns out to rely really heavily on the model architecture. We were able to do what we did for vision by trying to reverse-engineer the human visual system, and maybe it'll turn out that we can't just use that same trick for arbitrary kinds of data.
Lex Fridman: Right, so there are aspects of the human vision system, the hardware of it, that make it, without learning, without cognition, just really effective at detecting the patterns we see in the visual world. That's really interesting. So, as a quick big-picture overview: in your view, what types of GANs are there, and what other generative models besides GANs are there?

Ian Goodfellow: Yeah, so it's maybe a little bit easier to start with what kinds of generative models there are other than GANs. Most generative models are likelihood-based, where, to train them, you have a model that tells you how much probability it assigns to a particular example, and you just maximize the probability assigned to all the training examples. It turns out that it's hard to design a model that can create really complicated images or really complicated audio waveforms, and still have it be possible to estimate the likelihood function from a computational point of view. For most interesting models that you would just write down intuitively, it turns out to be almost impossible to calculate the amount of probability they assign to a particular point. So there are a few different schools of generative models in the likelihood family. One approach is to very carefully design the model so that it is computationally tractable to measure the density it assigns to a particular point. So there are things like autoregressive models, like PixelCNN. Those basically break down the probability distribution into a product over every single feature: for an image, you estimate the probability of each pixel given all of the pixels that came before it.
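That factorization is just the chain rule of probability, p(x) = product over i of p(x_i | x_<i). Here is a hedged sketch of how a model in this family uses it; the `model` below is a placeholder that must only condition each position on earlier pixels (for example via masked convolutions), and the 256 output bins assume 8-bit pixel values.

```python
import torch

def log_likelihood(model, x):
    # Density evaluation is parallel: one forward pass scores every pixel
    # given its predecessors, so log p(x) = sum_i log p(x_i | x_<i).
    logits = model(x)                              # (batch, n_pixels, 256)
    log_p = torch.log_softmax(logits, dim=-1)
    picked = log_p.gather(-1, x.long().unsqueeze(-1))
    return picked.squeeze(-1).sum(dim=1)           # (batch,)

@torch.no_grad()
def sample(model, n_pixels=784):
    # Sampling is sequential: one pixel at a time, which is why it is slow.
    x = torch.zeros(1, n_pixels)
    for i in range(n_pixels):
        probs = torch.softmax(model(x)[0, i], dim=-1)
        x[0, i] = torch.multinomial(probs, 1).float()
    return x
```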
Ian Goodfellow: There are tricks where, if you want to measure the density function, you can actually calculate the density for all of the pixels more or less in parallel. Generating the image still tends to require you to go one pixel at a time, and that can be very slow, but there are, again, tricks for doing this in a hierarchical pattern where you can keep the runtime under control.

Lex Fridman: And the quality of the images it generates, putting runtime aside, is pretty good?

Ian Goodfellow: They're reasonable, yeah. I would say a lot of the best results are from GANs these days, but it can be hard to tell how much of that is based on who's studying which type of algorithm, if that makes sense.

Lex Fridman: The amount of effort invested in it.

Ian Goodfellow: Yeah, or the kind of expertise. A lot of people who've traditionally been excited about graphics or art and things like that have gotten interested in GANs, and to some extent it's hard to tell: are GANs doing better because they have a lot of graphics and art experts behind them, or are GANs doing better because they're more computationally efficient, or are GANs doing better because they prioritize the realism of samples over the accuracy of the density function? I think all of those are potentially valid explanations, and it's hard to tell.

Lex Fridman: So can you give a brief history of GANs, from the first 2014 paper on?

Ian Goodfellow: Yeah, so a few highlights. In the first paper, we just showed that GANs basically work. If you look back at the samples we had, now they look terrible. On the CIFAR-10 dataset, you can't even recognize objects in them.

Lex Fridman: Your paper used CIFAR-10?

Ian Goodfellow: We used MNIST, which is little handwritten digits.
Ian Goodfellow: We also used the Toronto Face Database, which is small grayscale photos of faces, and there we did have recognizable faces. My colleague Bing Xu put together the first GAN face model for that paper. We also had the CIFAR-10 dataset, which is things like very small, 32-by-32-pixel images of cars and cats and dogs. For that, we didn't get recognizable objects. But all the deep learning people back then were really used to looking at these failed samples and kind of reading them like tea leaves. And people who were used to reading the tea leaves recognized that our tea leaves at least looked different. Maybe not necessarily better, but there was something unusual about them, and that got a lot of us excited.

One of the next really big steps was LAPGAN, by Emily Denton and Soumith Chintala at Facebook AI Research, where they actually got really good high-resolution photos working with GANs for the first time. They had a complicated system where they generated the image starting at low resolution and then scaled up to high resolution, but they were able to get it to work. And then in 2015, I believe later that same year, Alec Radford, Soumith Chintala, and Luke Metz published the DCGAN paper, which stands for "deep convolutional GAN." It's kind of a non-unique name, because these days basically all GANs, and even some before that, were deep and convolutional, but they just kind of picked a name for a really great recipe where they were able to, using only one model, instead of a multi-step process, actually generate realistic images of faces and things like that. That was sort of the beginning of the Cambrian explosion of GANs. Like, once you got animals that had a backbone, you suddenly got lots of different versions of fish, and four-legged animals, and things like that. So DCGAN became kind of the backbone for many different models that came out.

Lex Fridman: It's used as a baseline even still?

Ian Goodfellow: Yeah, yeah.
Ian Goodfellow: And from there, I would say some interesting things we've seen are, well, there's a lot you can say about how the quality of standard image-generation GANs has increased, but what's maybe more interesting on an intellectual level is how the things you can use GANs for have also changed. One thing is that you can use them to learn classifiers without having to have class labels for every example in your training set. That's called semi-supervised learning. My colleague at OpenAI, Tim Salimans, who's at Brain now, wrote a paper called "Improved Techniques for Training GANs." I'm a co-author on this paper, but I can't claim any credit for this particular part. One thing he showed in the paper is that you can take the GAN discriminator and use it as a classifier that actually tells you: this image is a cat, this image is a dog, this image is a car, this image is a truck. Not just to say whether the image is real or fake, but, if it is real, to say specifically what kind of object it is. And he found that you can train these classifiers with far fewer labeled examples than traditional classifiers.

Lex Fridman: So, with semi-supervision, based not just on your discrimination ability but on your ability to classify, you're going to converge much faster to being effective at being a discriminator.

Ian Goodfellow: Yeah. So, for example, for the MNIST dataset, you want to look at an image of a handwritten digit and say whether it's a 0, a 1, a 2, and so on. To get down to less than 1% error required around 60,000 examples, until maybe about 2014 or so. In 2016, with this semi-supervised GAN project, Tim was able to get below 1% error using only 100 labeled examples. So that was about a 600x decrease in the amount of labels that he needed. He's still using more images than that, but he doesn't need to have each of them labeled as, you know, this one's a 1, this one's a 2, and so on.
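A sketch of that trick from "Improved Techniques for Training GANs," as I read it: the discriminator becomes a K-way classifier, and "real vs. fake" is derived from its class logits, so unlabeled and generated images still provide training signal. Layer sizes and the data pipeline here are illustrative, and the paper's feature-matching generator loss is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10  # number of object classes (e.g., ten digits)
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, K))

def discriminator_loss(x_labeled, y, x_unlabeled, x_fake):
    # Supervised part: ordinary cross-entropy on the few labeled images.
    supervised = F.cross_entropy(D(x_labeled), y)

    # Unsupervised part: treat logsumexp of the K class logits as a
    # "realness" logit, so real-vs-fake training needs no labels.
    def log_d(x):  # log D(x), where D(x) = Z / (Z + 1) and z = log Z
        z = torch.logsumexp(D(x), dim=1)
        return z - F.softplus(z)

    def log_one_minus_d(x):  # log(1 - D(x)) = -softplus(z)
        return -F.softplus(torch.logsumexp(D(x), dim=1))

    unsupervised = -log_d(x_unlabeled).mean() - log_one_minus_d(x_fake).mean()
    return supervised + unsupervised
```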
Lex Fridman: Then, for GANs to be able to generate recognizable objects, objects from a particular class, you still need labeled data, because you need to know what it means to be a particular class: cat, dog. How do you think we can move away from that?

Ian Goodfellow: Yeah, some researchers at Brain Zurich actually just released a really great paper on semi-supervised GANs, where the goal isn't to classify; it's to make recognizable objects despite not having a lot of labeled data. They were working off of DeepMind's BigGAN project, and they showed that they can match the performance of BigGAN using only 10%, I believe, of the labels. BigGAN was trained on the ImageNet dataset, which is about 1.2 million images, and had all of them labeled. This latest project from Brain Zurich shows that they're able to get away with only having about 10% of the images labeled. And they do that essentially using a clustering algorithm, where the discriminator learns to assign the objects to groups. Then this understanding, that objects can be grouped into similar types, helps it to form more realistic ideas of what should be appearing in the image, because it knows that every image it creates has to come from one of these archetypal groups, rather than just being some arbitrary image. If you train a GAN with no class labels, you tend to get things that look sort of like grass, or water, or brick, or dirt, but without necessarily a lot going on in them. And I think that's partly because, if you look at a large ImageNet image, the object doesn't necessarily occupy the whole image. And so you learn to create realistic sets of pixels, but you don't necessarily learn that the object is the star of the show, and that you want it to be in every image you make.

Lex Fridman: I've heard you talk about the horse-to-zebra CycleGAN mapping, and how it's thought-provoking that horses are usually on grass and zebras are usually on drier terrain, so when you're doing that kind of generation, you're going to end up generating greener horses or whatever. So those are connected together; it's not just...

Ian Goodfellow: Yeah, yeah.

Lex Fridman: You're not able to segment; it's generating the scene together rather than the segments separately. So, are there other types of games you've come across, in your mind, that neural networks can play with each other, to...