Columns: video_id (string, 11 chars) · text (string, 361–490 chars) · start_second (int64, 0–11.3k) · end_second (int64, 18–11.3k) · url (string, 48–52 chars) · title (string, 0–100 chars) · thumbnail (string, 0–52 chars)
SsnWM1xWDu4
to use this, substitute the labels. It could also be quite useful in some kind of final stage of the competition, when you're kind of stuck with the ideas and your score does not improve; you can try the pseudo-labels idea and probably it adds some kind of minor improvement in the final stage of the competition. So yeah, here is a kind of screenshot with the winners of the
1,300
1,325
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1300s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
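A minimal sketch of the pseudo-labeling recipe the segment above refers to, assuming scikit-learn-style estimators; the classifier, confidence threshold, and single-round structure are illustrative choices, not details from the talk.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pseudo_label_round(X_train, y_train, X_test, threshold=0.9):
    """One round of pseudo-labeling: train, label confident test rows, retrain."""
    model = RandomForestClassifier(n_estimators=300, random_state=0)
    model.fit(X_train, y_train)

    proba = model.predict_proba(X_test)
    confident = proba.max(axis=1) >= threshold          # keep only confident predictions
    pseudo_y = proba.argmax(axis=1)[confident]

    # Retrain on the union of real labels and pseudo-labels.
    X_aug = np.vstack([X_train, X_test[confident]])
    y_aug = np.concatenate([y_train, pseudo_y])
    model.fit(X_aug, y_aug)
    return model
```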
A7AnCvYDQrU
You know, basically my aspiration is to kind of get to the next step in AI, machine learning, etc. What we see today is a huge amount of success in machine learning, but the sample efficiency of all of the techniques that we use today is much, much worse than everything we observe in humans and animals; in other words it takes many more samples or many more
0
37
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=0s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
trials in the case of reinforcement learning for a machine to learn anything, compared to humans and animals. So a lot of people are very quick to draw conclusions from this: you know, humans and animals draw on evolution and innate behavior, but I think it's just more efficient learning. And another kind of reaction to this is that we draw on our background knowledge about
37
63
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=37s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
the world, and that's true. The big question I'm asking here is where does that come from, how do we acquire all the background knowledge we have about the world that allows us to learn a new task very quickly. So all the success that you see in practical machine learning today, almost all of it, is due to supervised learning, and we all know what that means, right: you give,
63
84
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=63s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
let's say you want to do image recognition, you give an image to the machine, and if the machine doesn't give you the right answer you tell it what the right answer is, and you adjust its internal parameters using stochastic gradient descent or something like that, a gradient-based method, to get the output closer to the one you want. The amount of information you give to the
84
103
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=84s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
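A minimal sketch of the supervised-learning loop described above (produce an answer, compare with the correct label, adjust parameters with a gradient step); the model, data shapes, and learning rate are placeholder assumptions.

```python
import torch, torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 1000))  # 1000 classes, as in ImageNet
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def supervised_step(images, labels):
    logits = model(images)            # the machine's answer
    loss = loss_fn(logits, labels)    # compare with the correct label
    opt.zero_grad()
    loss.backward()                   # gradient-based adjustment of the parameters
    opt.step()
    return loss.item()
```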
A7AnCvYDQrU
machine at every trial is relatively small; even in the case of something like ImageNet, which has 1,000 categories, you tell it the correct category and that's less than 10 bits of information. So you're asking the machine to predict a very small amount of information every time; as a result you need a lot of samples to train it to do anything. Reinforcement learning is even worse, so
103
122
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=103s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
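The "less than 10 bits" figure follows directly from the number of categories; a quick check:

```latex
% one correct label out of 1{,}000 equally likely categories carries
\log_2 1000 \approx 9.97 \ \text{bits} \;<\; 10 \ \text{bits per example}
```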
A7AnCvYDQrU
reinforcement learning is a situation where you don't tell the machine the correct answer, you only tell it whether the answer it produced was good or bad. Okay, now there is a harder form of reinforcement learning where what the machine sees next depends on the answer it gave, and then there is a problem of exploration versus exploitation, etc., but even without talking about this, if
122
139
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=122s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
you look at how long it takes for a machine that learns just by reinforcement to learn to play an Atari game, a very simple Atari game from the 1980s, it takes the equivalent on average of 80 hours of training to reach the performance that any human can reach in about 15 minutes. Those machines actually get to superhuman performance, but it takes them
139
160
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=139s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
a long time. The Go system that was produced by DeepMind, and the one that was produced by Facebook a little bit later, and I know the numbers for Facebook because they published them and also they're my friends, this takes about 20 million self-play games to reach superhuman performance, running on 2,000 GPUs for two weeks. This is a lot of games, more
160
186
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=160s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
games than any human can ever play. Go, yes, Go is complicated. And StarCraft, so this is recent, the paper actually just appeared last week but the results have been known for a while, from DeepMind: the AlphaStar system takes about 200 years of equivalent real time to learn to play on a single map, for a single player, a single type of player if you want, and that's an enormous amount of computation.
186
217
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=186s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
There are rumors that just to train this for a week or two, that team took more computational resources at Google than all the rest of research, okay. And, you know, similarly there is a recent demo by OpenAI, and of course an accompanying paper, using reinforcement learning for in-hand manipulation in simulation, and then you can sort of transfer this to a real robot, and it takes the equivalent
217
249
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=217s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
of ten thousand years of real time in simulation. So you can run the simulation fast, or you can run it in parallel, it just costs money or power or CO2 emissions, but it doesn't work in the real world. So if you want to train a car to drive itself and you don't have an accurate enough simulation to train it in simulation, it's not gonna work; you'll need the car to drive itself for, you
249
278
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=249s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
know, millions of hours, cause thousands of accidents, destroy itself multiple times; it will have to run off a cliff multiple times before it realizes it's a bad idea to run off a cliff, because when it starts it doesn't know anything about gravity or anything like that. And so it's not practical for the real world, although it may be practical in simulation if you can do an accurate enough simulation, but
278
298
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=278s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
it's gonna cost you a lot in terms of computation. So how is it that humans can learn to drive a car in about 20 hours of training, for most of us, without causing any accidents, also for most of us, right. That's a big question: how do humans and animals learn so quickly? And the answer is that it's not supervised learning, it's not reinforcement learning, it's something else. And so when
298
323
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=298s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
you look at babies, you talk to a cognitive scientist, a developmental cognitive scientist, and you ask them, you know, when do babies learn basic things like gravity, when do they learn that objects are supposed to fall, they'll tell you around nine months. So before nine months old, you show a baby the scenario here: there's a little car on a platform, you push it off, and the
323
344
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=323s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
car appears to float in the air; it's a trick of course. Babies barely pay attention, it's just another thing they see, all kinds of stuff happening every day, they learn from every single one of them, but it doesn't surprise them. After nine months old, they've learned about gravity, and they look at this, like the little girl here, and they're really, really surprised, because in the meantime
344
368
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=344s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
they've learned that objects are not supposed to, you know, kind of float in the air; they're supposed to fall if they are not supported. And that's actually a trick, a method that psychologists use to identify when babies learn new concepts. So, you know, babies learn face tracking very quickly, and, you know, there are computational models that learn kind of
368
392
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=368s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
face detection based on motion, you know, self-supervised learning on motion, and that learns in minutes, so that could be learned really quickly. The notion of object permanence, the fact that when an object is hidden behind another one it is still there, we don't seem to be born with this but we learn it quite quickly as well; some animals are born with it. The distinction between animate and
392
413
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=392s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
inanimate objects, that's learned around three months: there are objects whose trajectories are completely predictable and others that are not, animate objects. And then gravity, inertia, conservation of momentum, basically what we call intuitive physics, that comes much later, around nine months. And it looks as if, or maybe that's our hypothesis, but, you know, babies kind of
413
439
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=413s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
learn sort of basic concepts kind of in stages, sort of building more abstract concepts on top of simpler ones. So for example, you know, are we born with the concept that the world is three-dimensional, or do we learn this? I think it's a good hypothesis to think that we learn this; a lot of psychologists will tell you we're born with it, but I don't see how the
439
460
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=439s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
cortex could be wired to, sort of, you know, tell us how to compute depth, right. Although there is certainly some bias in the wiring that makes this favorable, in the sense that, you know, connections from the left eye and the right eye actually go to the same place in the cortex, so if the cortex wants to compute disparity it's easy for it, the wires are
460
485
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=460s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
there, okay, but the function, not really. And so here is how you could learn that the world is three-dimensional: if you train your visual system to predict what the world is going to look like when you move your head, the best explanation for how the world changes is the fact that every pixel, every location in the world, has a depth, right, because then you get
485
507
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=485s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
parallax motion. So implicitly, if you want to predict what the world is going to look like when you move your head, you're gonna have to learn that, implicitly, even if you have no idea that the world is three-dimensional; that's the best way to explain how the world changes. Okay, so that's an idea that suggests how we can learn very simple concepts just by learning to
507
525
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=507s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
predict, essentially, and that's going to be the general theme of this talk, which is learning to predict; prediction is the essence of intelligence, in my opinion. And so we build models of the world that allow us to learn to drive in 20 hours, all kinds of stuff. But animals do that too, so I really love this video of this little baby orangutan here who is being
525
550
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=525s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
shown a magic trick where you put a cherry in a cup, and then the cherry is removed but he doesn't see that, and then the cup is empty and he rolls on the floor laughing. Okay, so his model of the world is obviously being violated and he finds that funny. I mean, there are these two things that happen when your model of the world is violated: either you find it funny, or
550
576
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=550s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
you find it scary, because here is something you didn't predict, it could kill you; in both cases you pay attention. Okay, so that brings us to this idea of self-supervised learning, this idea of learning by prediction. So not learning a task, not learning to classify objects into categories that, you know, come to you from a deus ex machina, but learning the
576
603
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=576s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
structure of the world by just observing the world, essentially. So the basic hypothesis of this, or the principle that you can base this on, is: predict everything from everything else. What do I mean by this? So let's say you have a piece of data; for the sake of concreteness here let's think about a video clip, for example. There's going to be a piece of that data that
603
625
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=603s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
you're gonna tell the machine it can look at, and there's another piece that the machine pretends it doesn't know, it doesn't see; here it's the future frames of the video. Okay, so it looks at the video up to a point and then it tries to predict the rest of the video, but it pretends it doesn't know it yet, and then it trains itself to predict it; of course it can just wait and observe
625
647
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=625s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
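A minimal sketch of the future-frame prediction setup described above, where the target is just the data itself; the tiny ConvNet and the 4-observed / 2-predicted split are illustrative placeholders, not the architecture from the talk.

```python
import torch, torch.nn as nn

predictor = nn.Sequential(
    nn.Conv2d(4 * 3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 2 * 3, 3, padding=1),
)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def self_supervised_step(clip):                 # clip: (batch, 6, 3, H, W)
    past = clip[:, :4].flatten(1, 2)            # frames the machine is allowed to see
    future = clip[:, 4:].flatten(1, 2)          # frames it pretends not to know
    pred = predictor(past)
    loss = ((pred - future) ** 2).mean()        # the "label" is just the data itself
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```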
A7AnCvYDQrU
what's going to happen in the world, and it trains itself to predict it by just observing what happened. Another form of this is what's called masked self-supervised learning: you give it a piece of data, it's very popular in the context of natural language processing these days, take a window of text, a bunch of words, you remove some of the words, and you ask the machine to predict the words that
647
669
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=647s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
are missing. In the process of doing so, the machine has to basically develop a representation of language that allows it to make those predictions, and basically in the process of doing this it kind of understands language, not completely, not deeply, but still. More generally it's the idea of taking a piece of data and asking the machine to predict a piece of it from
669
691
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=669s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
the piece that it sees. So as I just mentioned, this type of learning has become extremely popular in natural language processing in the last year; it has actually brought about a huge improvement in performance of all natural language processing systems, including translation and search at Google. There's been a series of ideas, you know, going back to the 90s on this,
691
716
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=691s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
but really sort of the paper that convinced everyone that this is the thing came out on arXiv in October last year, from Google, Google AI or Google Brain actually, and they use a particular type of neural net, a gigantic one, called a transformer architecture. So the transformer architecture is kind of a funny kind of neural net where groups of
716
742
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=716s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
neurons basically implement some sort of memory module, a differentiable memory module. So they don't just compute weighted sums; they compute weighted sums, but then they compare those weighted sums with vectors that are called keys, and that gives them scores that you normalize to sum to one, and then you compute a linear combination of other vectors, and it's sort of
742
760
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=742s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
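A small numpy sketch of the operation described above: compare a query with stored keys, normalize the scores to sum to one, and take the corresponding linear combination of values (i.e. soft attention); the dimensions are arbitrary.

```python
import numpy as np

def attention(query, keys, values):
    """Soft attention / differentiable associative-memory lookup.

    query:  (d,)      vector produced by the layer
    keys:   (n, d)    stored keys to compare against
    values: (n, d_v)  stored values to combine
    """
    scores = keys @ query                    # compare the query with every key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # normalize the scores to sum to one
    return weights @ values                  # linear combination of the values

q = np.random.randn(8)
K, V = np.random.randn(16, 8), np.random.randn(16, 8)
out = attention(q, K, V)                     # one "memory module" read
```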
A7AnCvYDQrU
complicated, but it's kind of an associative memory; every module in there is an associative memory. And you put 40 layers of those, with hundreds of millions of parameters, you train this on billions of words of text, and you train it in the following way: you take a window of a few hundred words, you take out 15% of the words, and you train the machine to just predict the missing
760
780
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=760s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
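A minimal sketch of the masking procedure described above (hide roughly 15% of the tokens, predict the dictionary distribution at the hidden positions); the vocabulary size, mask id, and tiny encoder are placeholder assumptions, not the model from the talk.

```python
import torch, torch.nn as nn

VOCAB, MASK_ID, MASK_RATE = 30000, 0, 0.15   # illustrative sizes

embed = nn.Embedding(VOCAB, 256)
encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(256, 4, batch_first=True), 2)
to_vocab = nn.Linear(256, VOCAB)             # produces a score per dictionary word

def masked_lm_loss(tokens):                  # tokens: (batch, seq_len) word ids
    masked = torch.rand(tokens.shape) < MASK_RATE
    corrupted = tokens.masked_fill(masked, MASK_ID)
    hidden = encoder(embed(corrupted))
    logits = to_vocab(hidden)                # one distribution per position
    # the loss only looks at the positions that were hidden
    return nn.functional.cross_entropy(logits[masked], tokens[masked])
```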
A7AnCvYDQrU
words. Now the machine cannot do a perfect job at this, so what it outputs, for each word that is missing, is a probability vector whose size is the size of the dictionary, and it gives you a probability for every word, right. So that's the way it handles uncertainty in the prediction: it produces a large probability vector. This has completely
780
802
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=780s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
revolutionized NLP; everybody does this now. It works so well that Google deployed this within, like, the last few weeks: they basically deployed this so that, for example, if you ask a question to Google it will produce an answer, and the answer is computed by something like this. Facebook has deployed things like this for translation and content filtering, hate speech detection, all
802
825
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=802s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
kinds of stuff. Yes? Right, well, yeah, here you know what those words are, right. It's not... right, it's supervised learning with two differences: one is that you don't have an extraneous piece of data you ask the machine to predict, so basically you're not asking it to perform any task other than understanding the input data, the internal structure of the input data. The second
825
860
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=825s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
thing is that the prediction cannot be known exactly, because, you know, you can't predict exactly which word is going to go here, and so you need to deal with uncertainty, and those are the crucial key points that distinguish this from sort of regular supervised learning. Okay, it doesn't work so well for... yes, yes, it produces a probability vector over all words, yeah, a separate one for every
860
888
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=860s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
word, by the way, so yeah, there's no consistency between them: if you pick one word you can pick another word independently from that distribution vector, yeah. So of course people tried to do this for images; the equivalent of this would be: take an image, you know, blank out some of the areas of this image, and then train a neural net, a convolutional net or something, to predict the missing parts.
888
913
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=888s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
The problem with this is that now the distribution of outputs is over a high-dimensional continuous space, and we don't know how to parameterize good distributions over those, so those so far have not been very successful, not to the extent that the NLP ones have been successful. So the way you use those things is: you train this network, and then you take the internal representation of language that
913
935
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=913s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
those things learn and use it as input to a supervised task, hate speech detection, answering questions, you know, whatever. There's a group of students at Facebook in Paris who have used this for training a translator: you give it a sentence in English and a sentence in French, you remove different words randomly from the two sentences, and then you ask the system to translate,
935
960
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=935s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
and the magic thing is that because some of the words that are removed from the French version are present in the English version, it learns to produce a representation that is independent of the language. So what you get in the end is a meaning representation that works for English and French, and you have two encoders, one for English and one for French. Google has a version of this with, you
960
981
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=960s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
know, a hundred languages; Facebook now has a version of this that handles many languages. Those are massive networks; the latest, biggest ones are tens of billions of parameters, it's just ridiculously large. Yeah, it's embedding on steroids, exactly, yeah. Yeah, so because you can't pick pixels independently of each other... so this is a trick that DeepMind has
981
1,016
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=981s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
proposed, by a former postdoc of mine who is at DeepMind, Karol Gregor, which is: you make the prediction of pixels sequential and you turn it into a classification problem over the gray-scale values, where it's, you know, one among 256. It just strikes me as wrong, you know; it kind of works surprisingly well, but, you know, it can't be the ultimate answer, no, I think we'll find something better. So there is
1,016
1,048
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1016s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
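A minimal sketch of turning pixel prediction into a 256-way classification problem as described above, ignoring the sequential ordering of pixels for brevity; the tiny ConvNet is only illustrative.

```python
import torch, torch.nn as nn

# Each pixel's intensity (0-255) is predicted as one class out of 256,
# rather than as a real number.
net = nn.Conv2d(1, 256, kernel_size=3, padding=1)   # 256 logits per pixel

def pixel_classification_loss(images):              # images: (batch, 1, H, W), floats in [0, 1]
    targets = (images * 255).long().squeeze(1)      # (batch, H, W) integer gray levels
    logits = net(images)                             # (batch, 256, H, W)
    return nn.functional.cross_entropy(logits, targets)
```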
A7AnCvYDQrU
actually, yeah, there have been studies about this, people who have tried to study what the representations inside are. So Chris Manning at Stanford has done some work on this, and various other groups, and it seems that those things actually represent meaning to some extent, right; it's not a deep understanding of text, you know, it's shallow, because those are words that
1,048
1,070
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1048s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
are not connected with the real world, right, I mean the thing only sees text, that's a big question. So it has the linguistics community up in arms, because it basically, you know, breaks their entire universe, okay, of, like, you know, what about grammar, what about semantics, what about all those things, it's all statistics, you know, what about symbol manipulation, right. Those things
1,070
1,091
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1070s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
basically just represent everything by vectors, they embed everything in vector spaces, and so the Chomskyan linguists say oh my god, they write books against this. Okay, so in self-supervised learning you train a system with sort of a pretext task, which is not really a task, it's just reconstruction or prediction, and as I said it works really well for text
1,091
1,120
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1091s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
and symbols, and people use this now for, like, you know, DNA sequences, all kinds of stuff, it's very new. Images, yeah, not so much; video, not so much either; signals, audio, not so much either. There are some results, in other words it improves the state of the art a little bit, but they're not as successful as in NLP; in NLP they're incredibly successful. Okay, there's another reason why we might want to use
1,120
1,142
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1120s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
self-supervised learning, and it goes back to this idea of training a car to drive itself. The reason why we can learn to drive a car in 20 hours without crashing, most of the time, is that we have this model of the world that allows us to predict the consequences of our actions. So we know that if we drive next to a cliff and we turn the wheel to the right,
1,142
1,163
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1142s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
you know the car will veer off to the right, it will run off the cliff and crash at the bottom, because we know about gravity, and nothing good is going to come out of it, so we don't even try, right, because we have this predictive model, we can predict the consequences of some of our actions at least. So the way this works is actually a very standard thing from optimal control theory, which
1,163
1,182
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1163s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
is: if you have a predictive forward model of the world that gives you the state of the world at time t+1 as a function of the state of the world at time t, your action, and perhaps some latent variable that represents all the stuff you don't know about the world, then you can sort of run this in your head, you can, you know, run your world model in your head with a proposed sequence of
1,182
1,201
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1182s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
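The forward model just described, written out with the state s, action a, and latent variable z from the wording above:

```latex
s_{t+1} \;=\; f\bigl(s_t,\; a_t,\; z_t\bigr)
```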
A7AnCvYDQrU
actions and see what the result will be, and you can measure the cost of it; you know, you can have an internal cost for how good things are, you know, "I don't want to crash", right. And so you can sort of run this model forward and perhaps infer an action sequence that will minimize your cost, right. That model will have to be learned with self-supervised learning:
1,201
1,222
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1201s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
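A minimal sketch of "run the world model forward in your head and infer the action sequence that minimizes the cost"; the toy dynamics, cost, horizon, and optimizer are illustrative assumptions, not anything from the talk.

```python
import torch

def plan(forward_model, cost_fn, s0, horizon=10, steps=50):
    actions = torch.zeros(horizon, requires_grad=True)      # proposed action sequence
    opt = torch.optim.Adam([actions], lr=0.1)
    for _ in range(steps):
        s, total_cost = s0, 0.0
        for t in range(horizon):
            s = forward_model(s, actions[t])                 # imagined next state
            total_cost = total_cost + cost_fn(s)             # accumulate imagined cost
        opt.zero_grad()
        total_cost.backward()                                # gradients flow through the model
        opt.step()
    return actions.detach()

# toy usage: drive a 1-D "car" state toward zero
plan(lambda s, a: s + a, lambda s: s ** 2, torch.tensor(5.0))
```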
A7AnCvYDQrU
basically, here is a state of the world, let me take an action and see what the result is, or take no action, just because the world is being the world; and that's the same problem we need to solve here, the self-supervised learning problem, and the main issue is that the world is not deterministic. Okay, so that leads to this picture of the three paradigms of learning, if you
1,222
1,246
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1222s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
want: reinforcement learning, supervised learning, and self-supervised learning. The difference is how much feedback information you give to the system at every trial or every sample: here it's just one scalar, here it's just a few bits for example, and here it's basically a whole video, right, it's a huge amount of information you give to the machine. So the hope is that you can train
1,246
1,266
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1246s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
gigantic networks without them being, you know, ridiculously overfitted, and they will learn a lot of the structure of the world just by observation, without actually taking any risk and without you spending money collecting labels. That's probably how humans and animals learn so much; that might be how common sense emerges, right, the accumulation of all the background knowledge we have about
1,266
1,286
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1266s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
the world, that we accumulate by observation; that's the basis of common sense, essentially. So we need to get machines to do this, and, I mean, I've been sort of advocating for this for a while, and, you know, made these somewhat obnoxious slides with reinforcement learning being the cherry on the cake of machine learning and self-supervised learning being the dark matter of AI; we don't
1,286
1,314
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1286s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
know what it is; it's actually more like dark energy, you know, it's most of the energy. Okay, so "the next revolution in AI will not be supervised, it will not be reinforced either", I stole this from Alyosha Efros of course; my colleague Jitendra Malik, who's from Berkeley, also at Facebook, says "labels are the opium of the machine learning researcher", so, you know,
1,314
1,332
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1314s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
it's all, like, revolutionary statements, and some dude actually produces a t-shirt that you can buy. Okay, so that brings me to energy-based models, which really is a proposal for how we approach this problem. So the main problem is: how do we predict with uncertainty? So if I do an experiment, and I'll come back to this, if I do an experiment which is: I put a pen
1,332
1,363
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1332s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
on the table and I let it go, and if I repeat the experiment multiple times, it's not the same video clip every time, the pen will fall in a different direction. And if I ask you to predict what is going to be the state of the world in two seconds, you can tell that the pen is going to fall, but you can't really tell in which direction, right, most of the
1,363
1,381
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1363s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
time. So if you train a deterministic function to make one prediction, the best thing it can do is predict the average of all the possible futures, which would be a transparent pen in all possible configurations. And if you actually do this, you train a system to predict video, to predict upcoming video frames, where the first four frames of the video are observed and the
1,381
1,401
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1381s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
last two are predicted, you get blurry predictions; they're basically the average of all the stuff that could happen, and the machine can't decide which one, it has to make one prediction. Yes... no, okay, so adversarial networks try to solve the same problem in a different way from the one I'm going to explain, but I'll come back to that analogy later. Okay, so the point is, if you have an
1,401
1,438
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1401s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
input, a deterministic function that produces one output, and some cost function here that measures the discrepancy, and if this cost function is only zero when the prediction and the observation are the same, then this guy can only predict the average. Now, to come back to your point, if you make this cost function complicated in such a way that it doesn't compare
1,438
1,458
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1438s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
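A tiny numerical illustration of the point above: a single deterministic prediction trained with a squared-error cost ends up at the average of the possible outcomes.

```python
import numpy as np

# The same input is followed by y = +1 half the time and y = -1 half the time.
ys = np.array([1.0, -1.0] * 500)

# The single prediction p that minimizes the mean squared error:
candidates = np.linspace(-2, 2, 401)
mse = [np.mean((ys - p) ** 2) for p in candidates]
best = candidates[int(np.argmin(mse))]
print(best)   # ~0.0: the blurry "average" future, not either of the real ones
```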
A7AnCvYDQrU
points but compares, you know, distributions, for example, then yes, but then that becomes complicated; that's what adversarial training is about, you have to train this thing to compare whole distributions, which is hard. Okay, so we're not going to use a deterministic function. So here is the crux of energy-based models, and it's very connected to things like factor
1,458
1,484
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1458s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
graphs that people were talking about in the context of graphical models and Bayesian networks and stuff like that. So basically you have an input, an observation, you have a proposed answer or hypothesis, a prediction, and you have an energy function here that measures the compatibility between the two: if the two are compatible, if y is a good prediction for x, then the energy
1,484
1,504
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1484s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
produced by this is low; if they are not compatible, it's high. Okay, and the way you do inference is: I give you an x, you find a y, there might be multiple, that produces a low energy. Okay, so for example here, this is x and y; if I give you this x there are two possible values of y that have low energy, right. This is the manifold of data, that's where the data samples come from, and you
1,504
1,529
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1504s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
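A minimal sketch of the inference procedure described above: given x, search for a y that gives the energy a low value, here with a toy quadratic energy that has two valid answers per x.

```python
import torch

def energy(x, y):
    return (y ** 2 - x) ** 2            # low when y = +sqrt(x) or y = -sqrt(x)

def infer(x, y0, steps=200, lr=0.05):
    y = torch.tensor(y0, requires_grad=True)
    for _ in range(steps):
        e = energy(x, y)
        e.backward()
        with torch.no_grad():
            y -= lr * y.grad            # move y toward lower energy
            y.grad.zero_()
    return y.item()

print(infer(torch.tensor(4.0), 1.0))    # -> about +2.0
print(infer(torch.tensor(4.0), -1.0))   # -> about -2.0, another compatible answer
```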
A7AnCvYDQrU
know, you'll have two answers. So I'm not telling you how to do this minimization and how to produce multiple answers, but that's the inference mechanism; so the energy here is not used for training, it's used for inference, it's very different. So you could say, well, alright, you know, that's an energy function, but, you know, you can take the exponential and normalize, which gives you a
1,529
1,548
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1529s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
distribution, and it gives you a probability. Yes, except that I don't actually want the energy to be the log of a probability. Here the probability is on a set of measure zero, it's peaked, if you know anything about math, right, you know, it's a thin plate, and so the distribution here would be, you know, infinity at that point, on
1,548
1,578
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1548s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
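The normalization being alluded to is the usual Gibbs construction that turns an energy into a probability (with β an inverse temperature):

```latex
P(y \mid x) \;=\; \frac{e^{-\beta E(x, y)}}{\int e^{-\beta E(x, y')} \, dy'}
```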
A7AnCvYDQrU
that manifold, and 0 just an epsilon to the side of it, which means this energy function, however you parameterize it, will have to have infinite parameters, infinite weights, something you can't express with a neural net, and it's not very useful because you can't do inference with this, it becomes a golf course, you can't do inference. What you want is a function that is smooth, so that at any point here
1,578
1,598
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1578s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
the gradient of that function might tell you where to go to get to a point on the manifold. So I'm very emphatic about this: you do not want to learn distributions, they're bad for you, right; maximum likelihood sucks, it just doesn't do the right thing, it's a big mistake, because the probabilistic formulation insists that, you know, this should be one here and zero outside,
1,598
1,621
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1598s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
and that's just a bad idea. So that's an example where, you know, applying probability theory blindly actually is bad for you, and I've been trying to snap people out of it for twenty years without success so far. Okay, so what I just showed you is the conditional version, and there's an unconditional version where you don't actually have an observation; the only
1,621
1,648
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1621s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
thing you want to do is, like, model the internal consistency of y, and really those two problems are not that different from each other: in the first case you know a priori which set of variables is observed, in the second case you don't know which part of y is going to be observed. And so here what the model gives you is kind of a dependency, you know, a function that
1,648
1,666
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1648s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
gives you the dependency between y1 and y2 in this case. So things like autoencoders are of this type. Yes... okay, it's akin to a negative log-likelihood, but you don't want to train it with maximum likelihood, or at least not without heavily regularizing it, and you don't need the normalization because it's not like you're gonna sample from it anyway, so in the end it's just an
1,666
1,692
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1666s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
energy; that's the more elementary concept that you can derive things from, and, you know, the physicists in the audience know that energy is more fundamental than probabilities, probabilities are derived from it, or the Hamiltonian if you're a quantum physicist, but then it's probability amplitudes, well, whatever. Okay, so how do we train an energy-based model? So of course we're gonna parameterize
1,692
1,715
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1692s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
F of x and y in some way, right; it's going to be some sort of neural net with a particular architecture, and it's gonna have parameters in it, and we need to train it in such a way that it takes a shape so that the training data we observe take low energy and everything else has higher energy, and we're not specifying that, you know, the difference of energy
1,715
1,734
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1715s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
should be akin to, you know, a difference of log probabilities, we don't insist on that. And there are two classes of methods for doing this: contrastive methods and architectural methods. So contrastive methods basically consist in pushing down on the energy of data points, right, so give a pair x, y to the model and twist the parameters so that the energy coming
1,734
1,759
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1734s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
out of it goes down, easy; and then there is the contrastive term, which prevents this thing from just collapsing to zero everywhere: it picks points intelligently outside and pushes their energy up. Okay, and the problem is how intelligent you have to be to pick those points, and by the way GANs are an example of this, a GAN for example, where the discriminator is the
1,759
1,785
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1759s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
energy function and the generator is the smart system that picks out the points whose energy is going to be pushed up; that's called energy-based GANs, there's a paper on this from a few years ago. And then there are architectural methods, and those architectural methods consist in building the energy function in such a way that the volume of stuff that can take low energy is limited or
1,785
1,805
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1785s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
minimized, okay, so either by construction or through some regularization term, and I'll come to how you do this in a minute. But let's start with... okay, so there are all kinds of, you know, traditional unsupervised learning methods that you can cast in that language, and as I said, you know, the basic idea of contrastive methods is: push down the energy of data
1,805
1,831
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1805s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
points, push up everywhere else, which is what maximum likelihood does if you have a tractable partition function or an approximation; or push down the energy of data points and push up on chosen locations, and maximum likelihood with Monte Carlo, Markov chain Monte Carlo, contrastive divergence, and all that stuff
1,831
1,853
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1831s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
basically are different versions of this, including GANs. And the third one is training a function that maps points off the data manifold to points on the data manifold; that's called a denoising autoencoder, and that's what those NLP models, the large NLP models I was telling you about, do: it's called a masked autoencoder, a particular case of denoising autoencoder. I'm gonna mention a little bit of metric
1,853
1,874
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1853s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
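A minimal sketch of the denoising-autoencoder idea above: corrupt a data point so it falls off the data manifold, then train a network to map it back; the sizes and noise level are toy assumptions.

```python
import torch, torch.nn as nn

ae = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

def denoising_step(clean):                          # clean: (batch, 32) data points
    noisy = clean + 0.3 * torch.randn_like(clean)   # a point off the data manifold
    loss = ((ae(noisy) - clean) ** 2).mean()        # map it back onto the manifold
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```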
A7AnCvYDQrU
learning, because it's one of the few cases where it works, and it's in fact the only case we know that works in the context of images, so it's kind of important, and those results are recent, like last week. And then there are architectural methods, some of which some of you I'm sure know, things like PCA; so in PCA you make sure the whole space is not reconstructed because the
1,874
1,899
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1874s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
representation is constrained to be low-dimensional; k-means, Gaussian mixture models, square ICA, etcetera. These are the ones I'm going to talk about because that's where my money is right now, things like sparse coding, sparse autoencoders, which some of you of course have heard of, and then the other ones I'm not gonna mention. Okay, so how does it work in the context of PCA and k-means? So
1,899
1,922
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1899s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
in PCA, the region of the space that is perfectly reconstructed is the principal subspace, right; in this case here, if the data points are sampled from this spiral, the principal subspace of dimension 1 is this, so this is where the reconstruction error is 0, and everywhere else it grows quadratically, right, because you take a point, you project it onto this, and so if
1,922
1,945
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1922s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
it's already there the reconstruction error is zero, and if it's here the reconstruction error is the square of the Euclidean distance; not a good model of a spiral, as you can tell. K-means now: k-means is interesting because it has a latent variable in it, so the energy function is not computed directly, it is the minimum of some other more elementary energy function, okay, so it's
1,945
1,968
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1945s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
the min over a vector z of the squared distance between the data point and this z vector multiplied by a matrix whose columns are the prototypes of the k-means model, and you constrain this vector to be a one-hot vector, okay, so only one component can be one, the other ones have to be 0, and so you have to do this search exhaustively, which is
1,968
1,995
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1968s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
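The k-means energy just described, written out; W is the matrix whose columns w_j are the prototypes and z ranges over one-hot vectors, so the minimization is an exhaustive nearest-prototype search:

```latex
F(y) \;=\; \min_{z \,\in\, \{e_1, \dots, e_k\}} \;\lVert y - W z \rVert^2
\;=\; \min_{j} \;\lVert y - w_j \rVert^2
```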
A7AnCvYDQrU
akin to a nearest neighbor. And what you see here, those black areas, are the minima, centered on the prototypes, okay; so k-means just puts, you know, prototypes more or less equally spaced over the manifold. It looks great in two dimensions; in high dimension k-means really doesn't work that well. But what's interesting about both of those cases is that they work because the
1,995
2,022
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=1995s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
capacity, the volume of the y space that can take low energy, is limited, okay, that's a key concept. We already talked about maximum likelihood, so I'm going to skip that. Okay, so that leads us to this idea of latent variable models, right: so energy models F of x and y that are actually defined by minimizing a more elementary energy function E of x, y, z with respect to z, or by marginalizing
2,022
2,052
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2022s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
over z, which is equivalent to defining F of x, y as minus 1 over beta, which you can think of as an inverse temperature, times the log of the integral over z of the exponential of minus beta times the energy, okay; this is a log partition function, for those of you who know, and this is a free energy, which is like, oh, F, for the physicists in the room. So there's the conditional version and the unconditional version, which only
2,052
2,075
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2052s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
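The marginalization just described, written as a formula; the minimization over z is recovered in the zero-temperature limit:

```latex
F_\beta(x, y) \;=\; -\frac{1}{\beta}\,\log \int_z e^{-\beta\, E(x, y, z)}\, dz,
\qquad
\lim_{\beta \to \infty} F_\beta(x, y) \;=\; \min_z E(x, y, z)
```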
A7AnCvYDQrU
consists in taking x out. Okay, so that's what the model looks like: you have an observed variable, a variable you need to predict, and some latent variable you have to minimize over. Now why is that interesting? Latent variables are interesting because they are an essential tool for making a system able to predict multiple outputs instead of just one. So if I build a system out
2,075
2,104
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2075s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
of deterministic functions, here I have, you know, a neural net with a few layers, it produces some representation of the observed variables, and then I feed this to another neural net that I call the decoder, together with a latent variable; by varying this latent variable over a set I can make the output vary over a set, maybe some complicated manifold, if this network is
2,104
2,125
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2104s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
complicated. And basically that allows me to solve this problem of not merely predicting the average, right; I can just, you know, predict the actual thing I'm observing by finding the latent variable that will make my model predict the best thing. So here is how you train an energy-based model like this: you show it an x and a y, you find the z that minimizes the reconstruction error, okay, and if that's
2,125
2,153
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2125s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
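A minimal sketch of the training procedure described above: for each (x, y), first minimize the reconstruction error over the latent z, then take one gradient step on the parameters; the toy encoder/decoder sizes and inner-loop settings are assumptions, not the model from the talk.

```python
import torch, torch.nn as nn

encoder = nn.Linear(16, 8)
decoder = nn.Linear(8 + 4, 16)           # takes the representation plus a 4-d latent z
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def train_step(x, y, z_steps=20, z_lr=0.1):
    h = encoder(x)
    z = torch.zeros(x.shape[0], 4, requires_grad=True)
    for _ in range(z_steps):                              # inner loop: minimize over z
        err = ((decoder(torch.cat([h.detach(), z], dim=1)) - y) ** 2).mean()
        g, = torch.autograd.grad(err, z)
        z = (z - z_lr * g).detach().requires_grad_()
    # outer step: update the parameters with the best z found
    loss = ((decoder(torch.cat([h, z.detach()], dim=1)) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```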
A7AnCvYDQrU
not perfect, then with one step of stochastic gradient descent you update the parameters of whatever functions you're using to make this small. Okay, this works great, except there's a slight problem with this, which is: imagine that z has the same dimension as y, okay, so z has the same dimension as y, and the decoder is not, you know, a degenerate function, it's kind of a powerful
2,153
2,184
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2153s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
parameterized function, then for any y you show the machine there's always going to be a z that's going to reconstruct it perfectly, which means your energy surface can be completely flat; it's not a good model of the dependency of y on x, because your energy function doesn't tell you, you know, which y is good. So again, what we're going to have to do is limit the information capacity of z, like
2,184
2,208
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2184s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
we did with k-means; basically we're going to have to limit the volume of y space that can take low energy, and that volume will have to be commensurate with, you know, the volume of our data manifold, essentially. Okay, but let's start with contrastive embedding. So contrastive embedding is the following idea: to handle the fact that multiple y's are compatible with x, you feed both x and y
2,208
2,235
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2208s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
to neural nets, and those neural nets will have invariances, and so you're going to be able to modify y for a given x without changing the output, okay, because of the invariances built into the system, and that's a way of handling the fact that there are multiple y's compatible with an x. But now the way you need to train this is that you need to tell it: okay, here are two images,
2,235
2,261
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2235s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
they are actually the same, you know, conceptually, so whatever representation you extract from this image should be similar to the representation you extract from that image; so basically I want h and h-prime here to be as close to each other as possible, because really they represent the same thing. But if you only do this, you get a collapse: you see, basically those networks end up
2,261
2,282
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2261s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
completely ignoring the inputs and producing constant vectors, right. So you need a contrastive term, the contrastive term being the part where you push up the energy: what you do is you show pairs of examples that are dissimilar, and then you train those networks to produce outputs that are different from each other, and there are various loss functions
2,282
2,302
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2282s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
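A minimal sketch of the Siamese-network training described above: pull the representations of "same" pairs together and push dissimilar pairs apart up to a margin; the encoder and the margin value are illustrative placeholders.

```python
import torch, torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128))

def contrastive_loss(x1, x2, same, margin=1.0):
    """same: 1.0 if (x1, x2) show the same thing, 0.0 if they are dissimilar."""
    h1, h2 = encoder(x1), encoder(x2)          # the two branches share weights
    d = (h1 - h2).pow(2).sum(dim=1).sqrt()     # distance between representations
    pull = same * d.pow(2)                     # similar pairs: bring h1, h2 together
    push = (1 - same) * torch.clamp(margin - d, min=0).pow(2)   # dissimilar: push apart
    return (pull + push).mean()
```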
A7AnCvYDQrU
to do this. So in the business this is called a Siamese neural net, and it's an old idea, but it's been kind of revived more recently; it's been successful for training face recognition systems. And there's a paper that just came out that actually uses self-supervised learning in vision to beat the performance of purely supervised
2,302
2,320
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2302s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
learning; it's this one, MoCo, and it's using a trick to kind of slow down the update of one of those networks. The main issue here is the difficulty of finding hard negatives: you have to mine your entire data set for images that the system thinks are similar to this one but really aren't, and that's really where things become complicated. But that idea, this paper I just mentioned, actually improves
2,320
2,347
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2320s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
the performance of image recognition systems over purely supervised ones. Yes... you know, whatever your favorite network is, but in this case large convolutional nets; in fact the name of the architecture is right here: this means a ResNet-50 with four times the size of the feature maps, and ResNet-50 is kind of a standard architecture for image recognition.
2,347
2,377
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2347s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg