video_id: string (length 11)
text: string (361–490 chars)
start_second: int64 (0–11.3k)
end_second: int64 (18–11.3k)
url: string (48–52 chars)
title: string (0–100 chars)
thumbnail: string (0–52 chars)
Z6rxFNMGdn0
to be able to solve problems. Yeah, the one that I spend most of my time on is security. You can model most interactions as a game where there are attackers trying to break your system, and you or the defender trying to build a resilient system. There's also domain adversarial learning, which is an approach to domain adaptation that looks really a lot like GANs. The authors
2,822
2,849
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2822s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
had the idea before the GAN paper came out; their paper came out a little bit later, and, you know, they were very nice and cited the GAN paper, but I know that they actually had the idea before it came out. Domain adaptation is when you want to train a machine learning model in one setting, called a domain, and then deploy it in another domain later, and you would like it to
2,849
2,871
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2849s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
perform well in the new domain, even though the new domain is different from what it was trained on. So for example, you might want to train on a really clean image dataset like ImageNet, but then deploy on users' phones, where the user is taking, you know, pictures in the dark, or pictures while moving quickly, or just pictures that aren't really centered or composed all that well.
2,871
2,893
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2871s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
When you take a normal machine learning model, it often degrades really badly when you move to the new domain, because it looks so different from what the model was trained on. Domain adaptation algorithms try to smooth out that gap, and the domain adversarial approach is based on training a feature extractor where the features have the same statistics regardless of which domain
2,893
2,913
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2893s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
you extracted them from. So in the domain adversarial game, you have one player that's a feature extractor and another player that's a domain recognizer. The domain recognizer wants to look at the output of the feature extractor and guess which of the two domains the features came from, so it's a lot like the real-versus-fake discriminator in GANs. And then the feature extractor you
2,913
2,935
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2913s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
can think of as loosely analogous to the generator in GANs, except what it's trying to do here is both fool the domain recognizer into not knowing which domain the data came from and also extract features that are good for classification. So at the end of the day, in the cases where it works out, you can actually get features that work about the same in both domains.
2,935
2,960
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2935s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
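The game described here is essentially domain-adversarial training (DANN, Ganin et al.). Below is a minimal sketch of how the two players can be wired together in PyTorch; the gradient-reversal trick and all module names are illustrative assumptions, not something from the conversation.

```python
# A minimal sketch of domain-adversarial feature learning, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient: the feature extractor is trained to *fool* the
        # domain recognizer, while the recognizer itself trains normally.
        return -ctx.lamb * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
label_classifier = nn.Linear(256, 10)   # task head (e.g., 10 classes)
domain_recognizer = nn.Linear(256, 2)   # guesses: source domain or target domain

def dann_losses(x, y, domain, lamb=1.0):
    feats = feature_extractor(x)
    task_loss = F.cross_entropy(label_classifier(feats), y)
    rev = GradientReversal.apply(feats, lamb)
    dom_loss = F.cross_entropy(domain_recognizer(rev), domain)
    # Minimizing the sum trains the recognizer to guess the domain while the
    # extractor learns features with the same statistics in both domains.
    return task_loss + dom_loss
```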
Z6rxFNMGdn0
Sometimes this has a drawback, where in order to make things work the same in both domains, it just gets worse at the first one. But there are a lot of cases where it actually works out well on both. Do you think GANs could be useful in the context of data augmentation? Yeah, one thing you could hope for with GANs is, you could imagine: I've got a limited training set and I'd like to make more
2,960
2,983
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2960s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
training data to train something else, like a classifier. You could train a GAN on the training set and then create more data, and then maybe the classifier would perform better on the test set after training on that bigger GAN-generated dataset. So that's the simplest version of something you might hope would work. I've never heard of that particular approach working, but I think there are
2,983
3,006
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=2983s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
some closely related things that I think could work in the future, and some that actually already have worked. So if you think a little bit about what we'd be hoping for if we used the GAN to make more training data: we're hoping that the GAN will generalize to new examples better than the classifier would have generalized if it was trained on the same data,
3,006
3,025
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3006s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
and I don't know of any reason to believe that the GAN would generalize better than the classifier would. But what we might hope for is that the GAN could generalize differently from a specific classifier. So one thing I think is worth trying, that I haven't personally tried but someone could try, is: what if you trained a whole lot of different generative models on the same
3,025
3,045
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3025s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
training set, created samples from all of them, and then trained a classifier on that? Because each of the generative models might generalize in a slightly different way, they might capture many different axes of variation that one individual model wouldn't, and then the classifier can capture all of those ideas by training on all of their data. So it'd be a little bit like making an ensemble of classifiers.
3,045
3,065
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3045s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
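A rough sketch of that untried idea, assuming PyTorch: sample from several independently trained generators and pool the samples into one training set. `generators` and `label_fn` are hypothetical stand-ins (e.g., a list of conditional-GAN generators and a function that recovers labels from the conditioning).

```python
# Pool samples from several generative models into one synthetic training set.
import torch

def pooled_synthetic_dataset(generators, label_fn, n_per_model=10_000, z_dim=128):
    xs, ys = [], []
    for g in generators:                  # each g maps noise z -> image batch
        z = torch.randn(n_per_model, z_dim)
        with torch.no_grad():
            x = g(z)
        xs.append(x)
        ys.append(label_fn(x))            # e.g., labels from a conditional GAN
    # Each model may capture different axes of variation; pooling them is
    # loosely analogous to ensembling classifiers.
    return torch.cat(xs), torch.cat(ys)
```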
Z6rxFNMGdn0
An ensemble of GANs, in a way. Yeah, in a way, I think that could generalize better. The other thing that GANs are really good for is not necessarily generating new data that's exactly like what you already have, but generating new data that has different properties from the data you already had. One thing that you can do is create differentially private
3,065
3,088
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3065s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
data. So suppose that you have something like medical records, and you don't want to train a classifier on the medical records and then publish the classifier, because someone might be able to reverse-engineer some of the medical records you trained on. There's a paper from Casey Greene's lab that shows how you can train a GAN using differential privacy, and then the samples from the GAN
3,088
3,108
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3088s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
still have the same differential privacy guarantees as the parameters of the GAN. So you can make fake patient data for other researchers to use, and they can do almost anything they want with that data because it doesn't come from real people, and the differential privacy mechanism gives you clear guarantees on how much the original people's data has been protected. That's really interesting, actually.
3,108
3,130
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3108s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
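As a simplified illustration of what "training a GAN using differential privacy" can look like, the sketch below applies DP-SGD-style updates to the discriminator, the only player that touches real records: per-example gradient clipping plus Gaussian noise. This is not the exact recipe of the paper mentioned above; in practice you would use a library such as Opacus with a proper privacy accountant.

```python
# DP-SGD-style discriminator step: clip each per-example gradient, then add noise.
import torch

def dp_discriminator_step(disc, opt, loss_fn, batch, clip_norm=1.0, noise_mult=1.1):
    accum = [torch.zeros_like(p) for p in disc.parameters()]
    for x in batch:                      # micro-batches of size 1
        disc.zero_grad()
        loss_fn(disc, x.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in disc.parameters()]
        total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (total.item() + 1e-6))  # clip per-example grad
        for a, g in zip(accum, grads):
            a.add_(g, alpha=scale)
    for p, a in zip(disc.parameters(), accum):
        noise = torch.randn_like(a) * noise_mult * clip_norm
        p.grad = (a + noise) / len(batch)  # noisy, averaged gradient
    opt.step()
```

The generator never sees real data directly, only the discriminator's (privatized) signal, which is why its samples inherit the guarantee.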
Z6rxFNMGdn0
I haven't heard you talk about that before. In terms of fairness, I've seen your talk from AAAI: how can adversarial machine learning help models be more fair with respect to sensitive variables? Yeah, there was a paper from Amos Storkey's lab about how to learn machine learning models that are incapable of using specific variables. So, say, for example, you
3,130
3,155
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3130s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
wanted to make predictions that are not affected by gender. It isn't enough to just leave gender out of the input to the model; you can often infer gender from a lot of other characteristics. Like, say that you have the person's name but you're not told their gender: well, if their name is Ian, they're kind of obviously a man. So what you'd like to do is make a
3,155
3,174
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3155s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
machine learning model that can still take in a lot of different attributes and make a really accurate, informed prediction, but be confident that it isn't reverse-engineering gender or another sensitive variable internally. You can do that using something very similar to the domain adversarial approach, where you have one player that's a feature extractor and another
3,174
3,196
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3174s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
player that's a feature analyzer, and you want to make sure that the feature analyzer is not able to guess the value of the sensitive variable that you're trying to keep private. Right, yeah, I love this approach: with those features, you're not able to infer these sensitive variables. It's quite brilliant and simple, actually.
3,196
3,219
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3196s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
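Structurally, this is the same game as the domain-adversarial sketch shown earlier. Reusing `feature_extractor`, `label_classifier`, and `GradientReversal` from that sketch, only the adversary's target changes, from the domain ID to the sensitive variable; again, all names are illustrative.

```python
# Fairness variant: the adversary tries to recover the sensitive variable s.
import torch.nn as nn
import torch.nn.functional as F

sensitive_head = nn.Linear(256, 2)  # e.g., tries to guess gender from features

def fair_losses(x, y, s, lamb=1.0):
    feats = feature_extractor(x)                  # from the earlier sketch
    task_loss = F.cross_entropy(label_classifier(feats), y)
    rev = GradientReversal.apply(feats, lamb)     # reversed grads hit the extractor
    adv_loss = F.cross_entropy(sensitive_head(rev), s)
    # If the adversary cannot beat chance at recovering s, the features carry
    # little usable information about the sensitive variable.
    return task_loss + adv_loss
```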
Z6rxFNMGdn0
Another way I think that GANs in particular could be used for fairness would be to make something like a CycleGAN, where you can take data from one domain and convert it into another. We've seen CycleGAN turning horses into zebras; we've seen other unsupervised GANs made by Ming-Yu Liu doing things like turning day photos into night photos. I think for fairness you could imagine taking
3,219
3,246
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3219s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
records for people in one group and transforming them into analogous people in another group, and testing to see if they're treated equitably across those two groups. There are a lot of things that'd be hard to get right to make sure that the conversion process itself is fair, and I don't think it's anywhere near something that we could actually use yet. But if you could design that
3,246
3,266
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3246s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
conversion process very carefully, it might give you a way of doing audits, where you say: what if we took people from this group and converted them into equivalent people in another group; does the system actually treat them how it ought to? That's also really interesting. You know, in the popular press, and in general in our imagination, you think, well, GANs are able to generate
3,266
3,291
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3266s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
data, and you start to think about deepfakes, or being able to sort of maliciously generate data that fakes the identity of other people. Is this something of a concern to you? Is this something, if you look 10, 20 years into the future, that pops up in your work, in the work of the community that's working on generative models? I'm a lot less concerned about 20
3,291
3,315
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3291s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
years from now than the next few years. I think there will be a kind of bumpy cultural transition as people encounter this idea that there can be very realistic videos and audio that aren't real. I think 20 years from now, people will mostly understand that you shouldn't believe something is real just because you saw a video of it; people will expect to see that it's been
3,315
3,335
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3315s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
cryptographically signed, or has some other mechanism to make them believe the content is real. There are already people working on this; like, there's a startup called Truepic that provides a lot of mechanisms for authenticating that an image is real. They're maybe not quite up to having a state actor try to evade their verification techniques, but it's
3,335
3,361
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3335s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
something people are already working on, and I think we'll get right eventually. So you think authentication will eventually win out, so being able to authenticate that this is real and this is not? Yeah. As opposed to GANs just getting better and better, or generative models being able to get better and better, to where the nature of what is real... I don't think we'll ever be able to
3,361
3,382
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3361s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
look at the pixels of a photo and tell you for sure that it's real or not real. And I think it would actually be somewhat dangerous to rely on that approach too much: if you make a really good fake detector, and then someone's able to fool your fake detector, and your fake detector says this image is not fake, then it's even more credible than if you'd never made a fake detector in
3,382
3,405
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3382s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
the first place. What I do think we'll get to is systems that we can kind of use behind the scenes to make estimates of what's going on, and maybe not, like, use them in court for a definitive analysis. I also think we will likely get better authentication systems where, you know, imagine if every phone cryptographically signs everything that comes out of it.
3,405
3,430
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3405s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
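The signing scheme being sketched here is easy to illustrate. Below is a toy version using the Python `cryptography` package's Ed25519 primitives; the payload layout (image bytes plus a timestamp) is my own assumption, not a description of any real phone's implementation.

```python
# Toy image-signing scheme: a device-held key signs image bytes + timestamp.
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # on a real phone: hardware-embedded

def sign_image(image_bytes: bytes):
    payload = image_bytes + f"|{time.time()}".encode()  # image plus timestamp
    return payload, device_key.sign(payload)

def verify_image(payload: bytes, signature: bytes) -> bool:
    try:
        device_key.public_key().verify(signature, payload)
        return True     # signed by this device's key, unaltered since signing
    except InvalidSignature:
        return False    # forged, or modified after signing
```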
Z6rxFNMGdn0
You wouldn't be able to conclusively tell that an image was real, but you would be able to tell: somebody who knew the appropriate private key for this phone was actually able to sign this image and upload it to this server at this timestamp. So you could imagine, maybe, you make phones that have the private keys hardware-embedded in them. If, like, a state security agency really wants to infiltrate the company,
3,430
3,459
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3430s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
they could probably, you know, plant a private key of their choice, or break open the chip and learn the private key, or something like that. But it would make it a lot harder for an adversary with fewer resources to fake things. For most of us, yeah. Okay, so you mentioned the beer and the bar and the new idea: you were able to come up with this new idea pretty quickly and
3,459
3,482
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3459s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
implement it pretty quickly. Do you think there are still many such groundbreaking ideas in deep learning that could be developed so quickly? Yeah, I do think that there are a lot of ideas that can be developed really quickly. GANs were probably a little bit of an outlier on the whole, like, one-hour timescale, but just in terms of, like, low-resource ideas where you do something really
3,482
3,504
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3482s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
different on the algorithm scale and get a big payback: I think it's not as likely that you'll see that in terms of things like core machine learning technologies, like a better classifier or a better reinforcement learning algorithm or a better generative model. If I had the GAN idea today, it would be a lot harder to prove that it was useful than it was back in 2014, because I would need to get
3,504
3,529
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3504s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
it running on something like ImageNet or CelebA at high resolution. You know, those take a while to train; you couldn't train it in an hour and know that it was something really new and exciting. Back in 2014, training on MNIST was enough. But there are other areas of machine learning where I think a new idea could actually be developed really quickly with low resources. What's
3,529
3,553
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3529s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
your intuition about what areas of machine learning are ripe for this? Yeah, so I think fairness and interpretability are areas where we just really don't have any idea how anything should be done yet. Like, for interpretability, I don't think we even have the right definitions, and even just defining a really useful concept, where you don't even need to run any experiments, could have a huge impact on
3,553
3,579
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3553s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
the field. We've seen that, for example, in differential privacy: Cynthia Dwork and her collaborators made this technical definition of privacy, where before, a lot of things were really mushy, and then with that definition you could actually design randomized algorithms for accessing databases and guarantee that they preserved individual people's privacy in a mathematically quantitative sense.
3,579
3,600
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3579s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
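For reference, the definition being credited to Dwork and collaborators can be stated in one line: a randomized mechanism $M$ is $\varepsilon$-differentially private if, for all pairs of databases $D, D'$ differing in one person's record, and all sets of outputs $S$,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S].
```

Informally: no single person's record can change the distribution of the algorithm's output by more than a factor of $e^{\varepsilon}$.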
Z6rxFNMGdn0
Right now, we all talk a lot about how interpretable different machine learning algorithms are, but it's really just people's opinion, and everybody probably has a different idea of what interpretability means in their head. If we could define some concept related to interpretability that's actually measurable, that would be a huge leap forward, even without a new
3,600
3,621
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3600s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
algorithm that increases that quantity. And also, once we had the definition of differential privacy, it was fast to get the algorithms that guaranteed it. So you could imagine, once we have definitions of good concepts in interpretability, we might be able to provide the algorithms that have the interpretability guarantees quickly. What do you think it takes to build a
3,621
3,646
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3621s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
system with human-level intelligence, as we quickly venture into the philosophical? So, artificial general intelligence: what do you think? I think that it definitely takes better environments than we currently have for training agents, in that we want them to have a really wide diversity of experiences. I also think it's going to take really a lot of computation; it's
3,646
3,671
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3646s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
hard to imagine exactly how much. So you're optimistic about simulation, simulating a variety of environments, as the path forward? I think it's a necessary ingredient, yeah. I don't think that we're going to get to artificial general intelligence by training on fixed datasets, or by thinking really hard about the problem. I think that the agent really needs to interact and have a variety of
3,671
3,696
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3671s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
experiences within the same lifespan. And today we have many different models that can each do one thing, and we tend to train them on one dataset or one RL environment. Sometimes there are actually papers about getting one set of parameters to perform well in many different RL environments, but we don't really have anything like an agent that goes seamlessly from one type of
3,696
3,721
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3696s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
experience to another and really integrates all the different things that it does over the course of its life. When we do see multi-environment agents, they tend to be in similar environments; like, all of them are playing, like, an action-based video game. We don't really have an agent that goes from, you know, playing a video game
3,721
3,743
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3721s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
to, like, reading The Wall Street Journal, to predicting how effective a molecule will be as a drug, or something like that. What do you think is a good test for intelligence, in your view? There have been a lot of benchmarks, starting with Alan Turing, natural conversation being a good benchmark for intelligence. What would make Ian Goodfellow sit back and be really damn
3,743
3,772
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3743s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
impressed? If a system was able to accomplish something that doesn't take a lot of glue from human engineers. So imagine that, instead of having to go to the CIFAR website and download CIFAR-10 and then write a Python script to parse it and all that, you could just point an agent at the CIFAR-10 problem, and it downloads and extracts the data and trains a model and starts giving you
3,772
3,800
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3772s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
predictions. I feel like something that doesn't need to have every step of the pipeline assembled for it definitely understands what it's doing. Is AutoML moving in that direction, or are you thinking way bigger? AutoML has mostly been moving toward: once we've built all the glue, can the machine learning system design the architecture really well? So I'm
3,800
3,825
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3800s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
saying, like, if something knows how to pre-process the data so that it successfully accomplishes the task, then it would be very hard to argue that it doesn't truly understand the task in some fundamental sense. And I don't necessarily know that that's, like, the philosophical definition of intelligence, but that's something that would be really cool to build, that would
3,825
3,843
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3825s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
be really useful and would impress me, and would convince me that we've made a step forward in real AI. So you give it, like, the URL for Wikipedia, and then the next day expect it to be able to solve CIFAR-10? Or, like, you type in a paragraph explaining what you want it to do, and it figures out what web searches it should run and downloads all the necessary ingredients. So you have a
3,843
3,871
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3843s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
very clear, calm way of speaking, no "ums", easy to edit. I've seen comments where both you and I have been identified as potentially being robots. If you had to prove to the world that you are indeed human, how would you do it? I can understand thinking that I'm a robot. It's the flip side of the Turing test, I think. Yeah, yeah, the prove-you're-a-human test.
3,871
3,903
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3871s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
I mean, is there something that's truly unique, in your mind? I suppose it does go back to just natural language again, just being able to talk. So, proving that I'm not a robot with today's technology: yeah, that's pretty straightforward. Like, my conversation today hasn't veered off into, you know, talking about the stock market or something, because of my training data. But I think it's more
3,903
3,927
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3903s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
generally, trying to prove that something is real from the content alone is incredibly hard. That's one of the main things I've gotten out of my GAN research: you can simulate almost anything, and so you have to really step back to a separate channel to prove that something is real. So, like, I guess I should have had myself stamped on a blockchain when I was born or something, but I
3,927
3,947
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3927s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
didn't do that, so according to my own research methodology, there's just no way to know at this point. So, last question: what problem stands out for you that you're really excited about challenging in the near future? I think resistance to adversarial examples, figuring out how to make machine learning secure against an adversary who wants to interfere with it and control it,
3,947
3,968
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3947s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
is one of the most important things researchers today could solve. In all domains? Image, language, driving? I guess I'm most concerned about domains we haven't really encountered yet. Like, imagine twenty years from now, when we're using advanced AIs to do things we haven't even thought of yet. Like, if you asked people what the important problems in security of phones were
3,968
3,994
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3968s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
in, like, 2002, I don't think we would have anticipated that we'd be using them for, you know, nearly as many things as we're using them for today. I think it's going to be like that with AI: you can kind of try to speculate about where it's going, but really, the business opportunities that end up taking off will be hard to predict ahead of time. What you can predict ahead of time is
3,994
4,015
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=3994s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
that almost anything you can do with machine learning, you would like to make sure that people can't get it to do what they want, rather than what you want, just by showing it a funny QR code or a funny input pattern. And you think the set of methodologies to do that can be bigger than any one domain? I think so, yeah. Like, one methodology that I think is, it's not a specific
4,015
4,040
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=4015s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
methodology, but, like, a category of solutions that I'm excited about today, is making dynamic models that change every time they make a prediction. So right now, we tend to train models and then, after they're trained, we freeze them, and we just use the same rule to classify everything that comes in from then on. That's really a sitting duck from a security point of view: if you
4,040
4,062
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=4040s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
Z6rxFNMGdn0
always output the same answer for the same input, then people can just run inputs through until they find a mistake that benefits them, and then they use the same mistake over and over and over again. I think having a model that updates its predictions, so that it's harder to predict what you're going to get, will make it harder for an adversary to really take control of the
4,062
4,084
https://www.youtube.com/watch?v=Z6rxFNMGdn0&t=4062s
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
https://i.ytimg.com/vi/Z…axresdefault.jpg
eYgPJ_7BkEw
Hi, today we're looking at FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence, by Kihyuk Sohn, David Berthelot, and others of Google Research. So this paper concerns semi-supervised learning. What does semi-supervised learning mean? In semi-supervised learning you have a dataset of labeled samples, so you have this dataset of x's and corresponding y
0
32
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=0s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
labels, but this dataset sometimes is very small. Then you have a much bigger dataset of unlabeled examples, just x's with no labels. So you don't know what the labels of the unlabeled examples are, but what you would like to do is use this really large dataset in order to help you with learning the association between the data points and the labels. So for
32
66
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=32s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
example, in this case, you would have something like an image classification dataset, and I'm going to take the example here of medical data. So you have pictures of lungs; let's draw a lung here. That is an ugly lung. You have pictures of lungs, and whether or not they have, like, a tumor in them. Medical data is very hard to get, especially labeled medical data,
66
96
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=66s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
because, first of all, you need the data itself, but then you also need, like, at least one but ideally, like, three radiologists to look at whether or not this is a good or a bad image and label it. So it's usually very expensive to collect that data, but you might have plenty of unlabeled data; you might just be able to go through some database and
96
123
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=96s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
find, like, anonymized, undiagnosed lung scans lying around somewhere. The same with other images: labeling images is pretty human-intensive, but the internet contains, like, a whole bunch of unlabeled images. So the task of semi-supervised learning is: how do you use this unlabeled dataset in order to make your classification on the labeled dataset easier? And FixMatch
123
153
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=123s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
combines two approaches to this in a smart way, namely the consistency and the confidence approach. So we'll jump right into the method. Basically, what you want to do is say: my loss that I optimize consists of two parts, namely a supervised loss, which is your classic classification loss, plus an
153
185
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=153s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
unsupervised loss, and then you have, like, some sort of a trade-off parameter in front. Now, your supervised loss here, this is just the cross-entropy, let's call it H, between your predicted labels and the actual true labels. And the predicted labels, say, they can be, you know, kind of a distribution over labels. Now the magic, of course, is
185
211
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=185s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
here in the unsupervised loss, and this unsupervised loss is what's described here in this part. So the unsupervised loss is going to be this H between p and q, and we'll see what p and q are. For the unsupervised loss you of course want to start with an unlabeled example; then you have the same sample go into two different pipelines.
211
243
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=211s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
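Written out (following the notation of the FixMatch paper, with $\alpha$ the weak and $\mathcal{A}$ the strong augmentation, $q_b = p_\theta(y \mid \alpha(u_b))$ the prediction on the weak view, and $\hat{q}_b = \arg\max q_b$ the pseudo-label), the two-part loss is roughly:

```latex
\ell \;=\; \underbrace{\frac{1}{B}\sum_{b=1}^{B} H\big(y_b,\; p_\theta(y \mid \alpha(x_b))\big)}_{\text{supervised}}
\;+\; \lambda_u \,\underbrace{\frac{1}{\mu B}\sum_{b=1}^{\mu B} \mathbf{1}\big[\max_c (q_b)_c \ge \tau\big]\; H\big(\hat{q}_b,\; p_\theta(y \mid \mathcal{A}(u_b))\big)}_{\text{unsupervised}}
```

The indicator with threshold $\tau$ is the "confidence" part, discussed further below.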
eYgPJ_7BkEw
In the first pipeline, up here, what you do is you so-called weakly augment it. And here we're dealing with images, so we have to talk about image augmentation. Image augmentation has long been used in supervised learning; it's kind of a cheat to give you more training data. So if you have an image, let's say, of a famous cat, you can obtain more training data, for example, by
243
277
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=243s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
random cropping. So you can random-crop; let's say we just take this bottom-right corner here, and then we enlarge it to the original size. Then it is still sort of a cat, it's just a part of a cat. But usually that helps, because you say, okay, my image dataset is just pictures of animals, so it's entirely conceivable that someone held the camera like this, or like this.
277
306
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=277s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
So technically, in terms of generalizing to a test set, both of these data points should be valid, so I'm just going to add both to my training data. You can see how from one training data point you can get many training data points just by doing this cropping. What you can also do is flip it left-right; you just swap the pixels left to right. And usually, a
306
334
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=306s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
cat that has a little dark spot here is still a cat when it has the little dark spot over there, but to your classifier those are two different samples. So you can do many of those things, and they have two kinds of augmentations: what they call weakly augmented and strongly augmented. So in the weakly augmented pipeline, I think they just crop and
334
359
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=334s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
shift, and they rotate, or something like this. So you can see here, this horsey here, it's cropped about here, then it is turned slightly to the left, and then, yeah, I think that's it. So they crop, they rotate, and then they also flip horizontally at random, like 50 percent of the time. These are what's called weakly augmented. The goal here is
359
391
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=359s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
just to kind of obtain a bit more training data. All right, so you run this through your model, through your classification model, as you would a regular sample, and you get a prediction. Now from your prediction you can take the highest prediction here, and that is going to be your pseudo-label. So this is p of y, the distribution that you estimate. And
391
421
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=391s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
this, if you just take the max, is going to be your y-hat, and this is what they call a pseudo-label; you'll see why it is called a pseudo-label. So the other pipeline here is the strong augmentation pipeline. In weak augmentation, we just wanted to get some more training data; in strong augmentation, the goal is to really screw up that picture to the point where
421
448
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=421s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
it's still, you know, recognizable as the same class, but you can see here the augmentations go wild. You play around with the color, with the hue; you play around with the light intensity, with the contrast. You can do many, many things. You can see this image looks basically nothing like that image, but you can still kind of recognize it as a horse. But the strongly
448
478
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=448s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
augmented data is much more distorted than the weakly augmented data, and that's the point. So you also send the strongly augmented data through the model, and again you get a prediction. And now the trick is: you take the label from over here, and you take that as if it were the true label, and you form a loss.
478
511
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=478s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
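The two pipelines just described are easy to sketch with torchvision. FixMatch itself uses RandAugment or CTAugment plus Cutout for the strong branch; the exact transforms and parameters below are illustrative, not the paper's.

```python
from torchvision import transforms

# Weak: small random shifts (padded crop) and a 50% horizontal flip.
weak_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomCrop(32, padding=4),
    transforms.ToTensor(),
])

# Strong: the same, plus heavy color/contrast/geometry distortions.
strong_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomCrop(32, padding=4),
    transforms.RandAugment(num_ops=2, magnitude=10),
    transforms.ToTensor(),
])
```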
eYgPJ_7BkEw
The prediction is the model's prediction, and this thing here, which also comes from the model, is treated as if it were the true label. That's why it's called a pseudo-label: because it is a label that you produce from the model itself. Now, of course, if these were the same picture, it would be kind of pointless; that's why, you see, there need to be a weakly and a strongly
511
536
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=511s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
augmented pipeline. I am pretty sure, if you want a more basic version of this, make this one just clean, so no augmentation, and make this one augmented. That's how you can think of it. The fact that there is weak and, here, strong augmentation, I think, is just your classic trick to get more training data. But in essence, you can think of it as: this here is the clean
536
564
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=536s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
thing; you just want to produce a label, and then you want an augmented version of the image to have the same label. Now, you can think shortly about what this model learns if you just have this. I think the important thing is always to remember that there are two components here: first, there is the supervised loss; this is the important one ultimately, because we have
564
590
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=564s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
the true labels. And then, second, there is the unsupervised loss, which is just an auxiliary loss that is supposed to kind of tune our model to the nature of the data. So don't forget that this down here just concerns the unsupervised part of that loss. So if you think about what the model actually learns whenever you train it like this, it basically learns to revert this
590
622
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=590s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
strong augmentation. It basically says: hey model, whenever I give you a weakly augmented image and I distort it heavily, I want the label to be the same. So the model basically learns that, whatever the image, the model at the end of the training will be able to map any strongly augmented picture to
622
661
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=622s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
the same class as a weakly augmented picture, if it comes from the same source. So the model basically learns to ignore these kinds of augmentations. That's what this loss over here does: it basically says, these sorts of augmentations, these sorts of distortions of images, please ignore those, because I always want you to output the same label here in the prediction as if I had not
661
693
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=661s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
distorted, or only weakly distorted, the image. So that's what you have to keep in mind: this loss is designed to make the model not distinguish between differently augmented versions of the same image. And interestingly, that really seems to help with the supervised loss. My kind of hypothesis is that all these methods, what they're kind of
693
721
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=693s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
trying to do, is to just tune the neural network to, let's say, the orders of magnitude of the input data, and also to the kinds of augmentations that the humans come up with. And that's a very important point: the augmentations, and here we said, you know, it's kind of a rotation and the crop, the kind of augmentation really seems to play a role.
721
749
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=721s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
So this paper finds that on CIFAR-10, where the state of the art, I believe, is something like ninety-six, ninety-seven percent accuracy, with just two hundred and fifty labeled examples (the usual dataset size is about fifty thousand) it goes to 94.9 percent, so almost 95 percent accuracy, with the state of the art being like ninety-seven.
749
780
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=749s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
This is incredible with just two hundred and fifty labeled examples. Crazy, right? And with only four labels per class, it gets 88.6 percent; so with just forty images with labels, they get 88.6 percent accuracy, compared to the 97 percent that you get with, like, 50,000 images. That is pretty cool, simply by having all the
780
817
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=780s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
other images not labeled but pseudo-labeled and consistency-regularized. So the two things that are combined by FixMatch are, again, consistency regularization, which basically means that the model should output similar predictions when fed perturbed versions of the same image. They're really forthcoming that they are not the ones
817
845
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=817s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
who invented this; they just combine consistency regularization with pseudo-labeling. Now, the pseudo-labeling they have also not invented. Pseudo-labeling leverages the idea that we should use the model itself to obtain artificial labels for unlabeled data. We've seen a lot of papers in the last few months or years where, like, the teacher teaches the student, and then the
845
872
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=845s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
student teaches the teacher model again, and so on. So they simply combine the two methods in a clever way. They have one last thing that is not in this drawing: namely, they only use the pseudo-label, they have a break right here, they only use the pseudo-label if the confidence, this p of y here, is above a certain threshold. So they don't take all the pseudo-labels; they only take the ones where the model is fairly sure.
872
905
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=872s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
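Put together, the pseudo-labeling-with-a-threshold step can be condensed to a few lines of PyTorch. In this sketch, `model`, `weak`, and `strong` are assumed to exist elsewhere; tau = 0.95 is the threshold the paper uses for CIFAR-10.

```python
import torch
import torch.nn.functional as F

def fixmatch_unsup_loss(model, u, weak, strong, tau=0.95):
    with torch.no_grad():
        q = F.softmax(model(weak(u)), dim=-1)   # prediction on the weak view
    conf, pseudo = q.max(dim=-1)                # confidence and pseudo-label
    mask = (conf >= tau).float()                # keep only confident pseudo-labels
    loss = F.cross_entropy(model(strong(u)), pseudo, reduction="none")
    return (mask * loss).mean()                 # masked consistency loss
```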
eYgPJ_7BkEw
They actually have an ablation study where they show that this thresholding is reasonably important. And if you go down here, where they say ablation study, oh yeah, something I also find cool: if you just give one image per class, that's ten images that are labeled, and
905
934
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=905s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
it still gets like 78 percent accuracy. I think the images are chosen as good representations of their class, but still, one image per class: pretty cool. An important part of this is the ablation study, where they say, okay, we want to tease apart why this semi-supervised learning technique works so well, and they find several important factors. They find, for
934
967
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=934s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
example, that their augmentation strategy is extremely important. So how they augment the images is very important: you see here the error of this 4.8% on the 250-label split; if you change up the augmentation strategies, your error gets higher. And so they say: we use this Cutout, and we measure the effect of Cutout; we find that both Cutout and CT-
967
1,012
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=967s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
Augment are required to obtain the best performance; removing either results in a comparable increase in error rate. We've seen before, for example, that they went from some 93-point-something percent, from the previous state-of-the-art in semi-supervised learning, to 94-point-something percent, and here they find that simply changing the
1,012
1,041
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=1012s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
augmentation strategy changes the error by more than a percent. So you can just see this in the context of what's important here. They say, again, the ratio of unlabeled data seems pretty important: we observe a significant decrease in error rates by using a large amount of unlabeled data. Then the optimizer and learning rate schedule seem to be very important as
1,041
1,071
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=1041s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
well, in that they say SGD with momentum works much better than Adam, and then they use this decreasing learning rate schedule, this cosine learning rate schedule. So there seem to be a lot of things, a lot of hyperparameters, that are fairly important here, and you can see that the gains are substantial sometimes, but they aren't, like, through-the-roof substantial, where
1,071
1,107
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=1071s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
you can make a good argument that it is unclear how much really comes from this clever combination that FixMatch proposes, and how much also just comes from whether or not you set the hyperparameters correctly, and exactly how much computation you are able to throw at selecting your hyperparameters. So that seems to be a bit of a pain point for me.
1,107
1,143
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=1107s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
They also say: we find that tuning the weight decay is exceptionally important for low-label regimes; choosing a value that is just one order of magnitude larger or smaller than optimal can cost ten percentage points or more. And so all of that seems to me like this kind of research, where you're nibbling for half or single percentage points in accuracy while a
1,143
1,177
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=1143s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
eYgPJ_7BkEw
single misstep in the choice of a hyperparameter might cost you ten times that gain, is a bit sketchy. Now, I recognize they get numbers like no one else has gotten before, but where exactly the gains come from, and whether the gains really come from this architecture or actually just more from throwing compute at it, I don't know. All right, with that, I hope you enjoyed this, and I invite you
1,177
1,209
https://www.youtube.com/watch?v=eYgPJ_7BkEw&t=1177s
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
https://i.ytimg.com/vi/e…Ew/hqdefault.jpg
nv6oFDp6rNQ
Hi there, today we'll look at Hopfield Networks is All You Need, by researchers from the Johannes Kepler University in Linz and the University of Oslo. So, on a high level, this paper proposes a new type of Hopfield network that generalizes modern Hopfield networks from binary patterns to continuous patterns, and then shows that the retrieval update rule of these new Hopfield networks
0
27
https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=0s
Hopfield Networks is All You Need (Paper Explained)
https://i.ytimg.com/vi/n…axresdefault.jpg
nv6oFDp6rNQ
is equivalent to the attention mechanism that's used in modern transformers. It's actually a more general formulation of the attention mechanism, and therefore it can be used to do a variety of things to improve modern deep learning. It also has a companion paper, where it applies this to some immunology research and achieves state of the art in a task that is specifically suited to this type of attention.
27
54
https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=27s
Hopfield Networks is All You Need (Paper Explained)
https://i.ytimg.com/vi/n…axresdefault.jpg
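The update rule in question, for a matrix $X$ of stored (continuous) patterns and a state/query $\xi$, is

```latex
\xi^{\text{new}} \;=\; X \,\operatorname{softmax}\!\big(\beta \, X^{\top} \xi\big),
```

which has the same form as transformer attention, $\operatorname{softmax}(QK^{\top}/\sqrt{d_k})\,V$, with $\beta$ playing the role of $1/\sqrt{d_k}$.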
nv6oFDp6rNQ
All right, let's dive in together. We'll go over what this paper does, what it proposes, and so on. If you like videos like this, consider subscribing, you know, sharing it out, and I hope you're enjoying this. Also, thanks to my Discord community for, you know, very helpfully bringing me
54
83
https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=54s
Hopfield Networks is All You Need (Paper Explained)
https://i.ytimg.com/vi/n…axresdefault.jpg
nv6oFDp6rNQ
up to speed on this paper; super interesting discussions there. If you're not on our Discord yet, I invite you to join; it's fun. Okay, so what is a Hopfield network? A Hopfield network is a pretty old-style conceptualization of a neural network. In a Hopfield network, what your goal would be, you can conceptualize it as a bit of a neural network.
83
116
https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=83s
Hopfield Networks is All You Need (Paper Explained)
https://i.ytimg.com/vi/n…axresdefault.jpg
nv6oFDp6rNQ
So let's say we have five neurons, or something like this. What your goal would be is to have a neural network where you can store so-called patterns, and a pattern in this case would be a binary string of size five, so for example 1 0 1 0 0, or 1 1 0 1 0. And you'd have a list of these patterns, and what your goal would be is to store
116
145
https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=116s
Hopfield Networks is All You Need (Paper Explained)
https://i.ytimg.com/vi/n…axresdefault.jpg
nv6oFDp6rNQ
these patterns in the neural network, such that, and here, you know, we'll just consider everything to be sort of connected to everything else, you can kind of store patterns inside this neural network by adjusting the weights somehow. So this, as I said, is kind of an old model. You store, you adapt the
145
173
https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=145s
Hopfield Networks is All You Need (Paper Explained)
https://i.ytimg.com/vi/n…axresdefault.jpg
nv6oFDp6rNQ
weights such that you store these patterns. And what does it mean for a pattern to be stored? If you have stored a pattern, you will then be able to retrieve it, and you retrieve a pattern in these kinds of old-style Hopfield networks by providing a partial pattern. So what you'll say is, for example, I want a pattern that starts with 1 1 0, and you give that to the network and
173
200
https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=173s
Hopfield Networks is All You Need (Paper Explained)
https://i.ytimg.com/vi/n…axresdefault.jpg
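The storage-and-retrieval behaviour described in this last segment is easy to reproduce for the classic binary case. Below is a minimal NumPy sketch, using the usual ±1 convention instead of the transcript's 0/1 bits and the standard Hebbian outer-product rule; the five-bit patterns are the ones from the example above.

```python
import numpy as np

def store(patterns):
    # Hebbian outer-product rule; patterns: (n_patterns, n_neurons) in {-1, +1}
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def retrieve(W, cue, steps=10):
    x = cue.copy()
    for _ in range(steps):
        x = np.sign(W @ x)          # update toward the nearest stored pattern
        x[x == 0] = 1               # break ties
    return x

# Store the two example patterns (0 mapped to -1), then complete a partial cue
# that "starts with 1 1 0": it converges to the stored pattern 1 1 0 1 0.
pats = np.array([[1, -1, 1, -1, -1],    # 1 0 1 0 0
                 [1, 1, -1, 1, -1]])    # 1 1 0 1 0
W = store(pats)
print(retrieve(W, np.array([1, 1, -1, -1, -1])))
```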