Self-Supervised Learning
https://www.youtube.com/watch?v=SaJL4SLfrcY
form for a parameterized function, preferably differentiable at least almost everywhere, in such a way that by using a gradient-descent-type algorithm you can tune the parameters to optimize the performance of the system. So everything is differentiable, or almost differentiable, you can optimize using gradients or subgradients, and it all works; we all know about that, and there are guarantees of generalization if the capacity of the machine is limited.
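As a concrete illustration of that loop — a minimal sketch in PyTorch, assuming a toy linear model and squared-error loss, not anything specific from the talk:

```python
import torch

x = torch.randn(100, 3)                 # inputs
y = x @ torch.tensor([1.0, -2.0, 0.5])  # targets from a "true" function (toy assumption)
w = torch.zeros(3, requires_grad=True)  # tunable parameters

for _ in range(200):
    loss = ((x @ w - y) ** 2).mean()    # differentiable performance measure
    loss.backward()                     # gradients via backprop
    with torch.no_grad():
        w -= 0.1 * w.grad               # gradient-descent parameter update
        w.grad.zero_()
```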
There is of course another form of learning, which I'm not going to talk about very much, called reinforcement learning. And while reinforcement learning has seen a lot of success over the last several years, those successes are almost all restricted to games or virtual environments. There are also applications of reinforcement learning in situations where you can collect lots of data really quickly and adapt really fast. So if you want to show people content, and you want to figure out how to rank that content, there is no differentiable objective function, because you don't know what people are going to do; so you can use whether they click on a piece of content or not as a kind of reinforcement, and then optimize the policy of what you show to people so as to maximize that. But those are situations where you get lots and lots of feedback. Otherwise, it works for games, because you can get machines to play games really quickly: you can play millions of games, run them in parallel on lots of computers, and so you can train machines to play Atari games, to play Go, to play StarCraft, whatever kind of game. And it's only due to the fact that you can run those games faster than real time on many machines. If the machine had to run at the same speed as we do, which means playing the games in real time, it basically wouldn't be practical: it would take about 80 hours for the best current algorithms to learn to play a single Atari game to the level of performance that a human can reach in about 15 minutes. So the sample efficiency of this type of reinforcement learning is horrible compared to humans. The same for Go. You've probably heard of AlphaGo and AlphaGo Zero from DeepMind, for which the details are not fully released and there is no open-source code.
There is a system called ELF OpenGo, which was released by Facebook, and this one you can just download and run, or train yourself; it's used by a lot of different people who are interested in this. This one required about 20 million self-play games, running on 2,000 GPUs for two weeks, to reach superhuman performance. So this is not cheap in terms of computation, as you can tell: if you were to buy this on a cloud computing service, it would cost you a couple million bucks. And it's more games than a single person can play in a lifetime, probably more than all of humanity has played in a number of years.

There's a very interesting recent paper from DeepMind, by Oriol Vinyals' group: AlphaStar, which plays a single map in StarCraft with a single type of player, and the training for this took the equivalent of two hundred years of real-time play, which is definitely more than any single StarCraft player has been able to do. There's no paper on this yet, as far as I can tell. So they all use deep architectures, they all use convolutional nets, actually in combination with other things — transformers, in StarCraft in particular. But as you can tell, in terms of sample efficiency it's very bad, and that's a huge problem, because it means we can't really use reinforcement learning, other than in simulation, to train real-world systems — a car to drive itself, or a robot that grabs objects — unless you have a room full of robots training all day.
If you were to use reinforcement learning, as it exists at the moment, to train a car to drive itself, it would have to drive for millions of hours and cause tons and tons of accidents; it's just not practical. So people do it in simulation, and it kind of works, but simulators are not very accurate, so there is a problem of transferring from simulation to the real world. A lot of people work on this. But there is a big mystery there, which I'll come back to in the second half of the talk, and the mystery is: how is it that humans can learn to drive a car in about 20 hours of training without causing any accidents? A preview of the answer is that we have internal predictive models of the world that allow us to predict that if we drive near a cliff and turn the wheel to the right, the car is going to run off the cliff — gravity is going to make it fall — and nothing good is going to come of it. We don't need to actually try it to predict this. So perhaps the answer is for machines to eventually learn to have those predictive models of the world that will allow them to predict the consequences of their actions before they occur, and to plan ahead. To some extent, we can say that the essence of intelligence is the ability to predict. But for now, let's stick with supervised learning.

So, who doesn't know what a convolutional net is? Don't be shy. Okay, that's great, I can skip a lot of stuff.
A convolutional net, of course, is an architecture that is designed to recognize images, but in fact it's designed for array data, where the property is that there are strong local correlations in the features and some sort of translation invariance in the statistics of the signal. That's true for images, it's true for audio signals, it's true for basically anything that comes to you in the form of an array where locality in the array has some meaning.
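A minimal sketch of such an architecture in PyTorch — stacked local filters with shared weights, plus pooling; the input size and layer widths here are illustrative assumptions, not anything specific from the talk:

```python
import torch
import torch.nn as nn

convnet = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, padding=2),  # local filters, shared weights
    nn.ReLU(),
    nn.MaxPool2d(2),                             # subsampling / pooling
    nn.Conv2d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # classifier head (assumes 32x32 input)
)

scores = convnet(torch.randn(1, 3, 32, 32))      # one 32x32 RGB "image"
```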
The first applications of this were in character recognition, but we quickly realized that we could recognize multiple objects with these things, not just single objects, by scanning — or doing the equivalent of scanning — a convolutional net over a big image. Of course, you don't have to do this stupidly: because all the layers are convolutional, you don't actually need to explicitly recompute the convnet at every location; you just make each layer bigger and make every layer convolutional. People renamed these "fully convolutional nets" afterwards, but it's just convolutional nets. When you apply this to natural images, you can train systems like this to detect objects in natural images, and you can apply a convolutional net locally to an image to have it label every pixel with, for example, the category of the object it belongs to.
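To make the trick concrete — a sketch with made-up layer sizes, showing that once the classifier head is itself a convolution, a bigger input simply yields a spatial map of predictions, with no explicit window scanning:

```python
import torch
import torch.nn as nn

fcn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 10, kernel_size=1),  # the "classifier" expressed as a convolution
)

print(fcn(torch.randn(1, 3, 32, 32)).shape)    # torch.Size([1, 10, 16, 16])
print(fcn(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 10, 128, 128]) -- a map
```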
Each output of the network has some sort of window of influence on the input, which in this case is actually quite large: to decide the category of a single pixel, the network looks at a wide contextual window around that pixel and then gives you an output for that particular pixel. And this is done convolutionally, so it's very cheap. A system like this was built about ten years ago, and it could run at about thirty frames per second on specialized hardware, an FPGA actually.

What you probably all know is that around 2012-2013, those networks started beating other methods for object recognition by a large margin, largely due to the fact that the datasets became bigger. Those systems are pretty hungry in terms of data, more than whatever methods people were using before, and so the appearance of datasets like ImageNet made it possible to really exploit the capacity of those networks. The second thing was the availability of GPUs, which made it possible to run those systems really quickly. But you all know this, and what you also all know is that there's been an inflation in the number of layers used in those networks over the years, to the point where the workhorse of image recognition nowadays is some sort of backbone convolutional net, similar to ResNet, for example.
ResNet is a convolutional net where every pair of layers forms a block, a residual block. I'm sure many of you have heard of this: you basically have pairs of layers — convolution, non-linearity, convolution; sometimes you have subsampling or pooling as well, but this one doesn't — and then you have a connection that skips each pair of layers. So essentially you can think of one of those blocks as computing the identity function, and the layers compute the deviation of the function of that layer from the identity. That sounds like kind of a waste — to have a layer that just computes the identity function — and it is: in fact, many of the layers in those systems don't do much, and you can get rid of them after training. But what it does is make the system fault-tolerant, if you want. If the learning algorithm somehow gets into a situation where some layers die, which can happen, it's not catastrophic, because you always have the information going through the bypass connection; a pair of layers just checks itself out of the network, so it's not used, but it doesn't kill the entire effort. That's one of the advantages of ResNet.
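A minimal sketch of the residual block just described — the layers learn the deviation from the identity, and the skip connection carries the signal through even if the block contributes nothing (channel counts and kernel sizes are illustrative):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        # output = identity + learned deviation from the identity
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

y = ResidualBlock(16)(torch.randn(1, 16, 8, 8))  # same shape in and out
```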
You can think of the long succession of layers as progressively refining the answer, cleaning up the output or the representation. There are variations of this where you have skip connections that skip multiple layers, and so on; that's called DenseNet.

I'm sure a lot of people will talk to you about progress in computer vision, and there's been a huge amount of progress over the last few years, with things like Mask R-CNN, which is a two-pass image recognition system that can pick out every instance of every object in an image, with really good performance. There are a first few layers that identify regions of interest, and then you apply a second neural net — a convolutional net — to the regions of interest identified by the first one. There are also one-pass systems, which my colleagues in Menlo Park at Facebook use, called RetinaNet, or the Feature Pyramid Network. You can think of it this way: say you want to produce a dense map of everything that's in the image — for every pixel in the input, you want to give the category of an instance, or a category of whether it's an object or a kind of background region. You have a bunch of layers of a convolutional net where the spatial resolution goes down as you go up, because of subsampling, and then you have a similarly architected network that goes the other way, from low resolution to high resolution, with skip connections that go from each map in the abstraction pyramid, if you want, to the corresponding one in the part of the network that produces the output. You can train this end to end with weakly supervised architectures; you can plug in classifiers taking inputs from various levels in the network, and this works amazingly well.

This is a result from Mask R-CNN, actually — not from RetinaNet or the Feature Pyramid Network, but the results are quite similar: you get every instance of every object outlined, together with a box. The colors are actually produced by the network and correspond to categories.
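For reference — not part of the talk — a pretrained Mask R-CNN with a Feature Pyramid Network backbone is available off the shelf in torchvision; a minimal sketch of running instance segmentation on one image (the model weights download on first use):

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)          # stand-in for a real RGB image in [0, 1]
with torch.no_grad():
    out = model([image])[0]              # boxes, labels, scores, masks per instance
print(out["boxes"].shape, out["masks"].shape)
```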
It's pretty amazing how well this works — you need data. These are results from the single-pass Feature Pyramid Network; again the colors indicate the individual objects, but this system actually labels not just the objects but also the background regions, so they call this panoptic segmentation.

In this type of architecture, you have a convolutional net with decreasing resolution followed by another one with increasing resolution, which some people call a deconvolutional net; there's a paper on this idea of deconvolutional nets from, I guess, 2011 or 2012, by my colleague Rob Fergus. This architecture is used quite a lot in image segmentation, particularly for applications in medical image analysis, and some people call this kind of architecture a U-Net, because of its shape when you represent it this way. It's very much the same idea I showed before, except that now the layers of the feed-forward part of the network are drawn on one side, the resolution-increasing half is drawn on the other side, and the skip connections go directly across, so it looks like a U.
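A minimal U-Net-shaped sketch of that idea — one down path, one up path, one skip connection straight across, so the output keeps full resolution (all sizes are illustrative):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=3):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.mid  = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up   = nn.ConvTranspose2d(32, 16, 2, stride=2)  # resolution back up
        self.out  = nn.Conv2d(32, n_classes, 1)              # after skip concat

    def forward(self, x):
        d = self.down(x)                       # high-resolution features
        m = self.up(self.mid(self.pool(d)))    # low-resolution path, upsampled
        return self.out(torch.cat([d, m], 1))  # skip connection across the U

logits = TinyUNet()(torch.randn(1, 1, 64, 64))  # per-pixel class scores
```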
There are variations of this. This is work from my colleagues at NYU who are working on medical image analysis. These are 3D MRI scans, so the convnet here is three-dimensional: the convolutions take place over the three spatial dimensions, and every voxel is labeled as one of a number of categories. So you can do things like segment hip bones, for preparing hip replacement surgery and things like that. And it works much better if you use 3D rather than 2D, because you get consistency across all the slices: as you can see at the top here, there are artifacts in the recognition if you use 2D segmentation, perhaps with a little bit of cleanup, and with 3D you really get rid of those artifacts. The same team, with a different subset of people, has applied this to things like mammograms; this is 2D data, but you have multiple images from multiple angles of view.

And here's a surprising thing — the kind of application that some of you may not have heard of, which is the application of convnets in physics.
This is an example in astrophysics, a paper published in PNAS a few months ago, I think, from the Flatiron Institute, which is a private research institute in New York. What they did here was use a convolutional net to accelerate the solution of partial differential equations. These are cosmologists, and what they were interested in is: what initial conditions of the baby universe would produce the kind of universe we observe today? What you have to do for that is basically simulate the entire universe at its birth — the first expansion phase of the universe. You can do this in principle, because you have the density of matter — ordinary matter, dark matter, photons, whatever — at every location, and you can solve a partial differential equation, which is basically just physics at every location, and compute the evolution of the universe that way. The problem with this is that you can't do it at the scale of the universe: given the size of the grid you would have to use to solve this equation, it would take too long.

So what they did was use one of the known PDE solvers to solve those equations on small four-dimensional domains — three dimensions of space and one of time — and they trained a convolutional net to produce the same result, but with a bigger grid. A PDE solver basically takes one value per voxel — a four-dimensional voxel if you want, or a three-dimensional one — looks at the neighbors, and passes them to some function that computes the new value of the central grid cell at the next time step. So it's a convolution-like operation, except that it's maybe nonlinear. What they did was train a convolutional net with a few layers — it uses the U-Net architecture, so it can take a fairly large context into account, not just the neighboring cells but a rather big neighborhood of grid cells — and it's trained to produce the result that the PDE solver would produce. They can easily generate data by running the PDE solver, but they run it only on small 3D domains; then, once they have this convolutional net, they can run it at the big scale — universe-size scale, if you want. And what they get are those maps here, which are displacement maps of densities.
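A toy sketch of that recipe — definitely not the paper's code; the "solver" here is a hypothetical stand-in diffusion step, just to show the shape of the training loop (generate pairs from a slow local solver on small domains, regress a convnet onto them, then apply it to a big grid):

```python
import torch
import torch.nn as nn

def slow_solver_step(u):
    # stand-in for an expensive PDE solver step: one explicit diffusion update
    lap = (torch.roll(u, 1, -1) + torch.roll(u, -1, -1) +
           torch.roll(u, 1, -2) + torch.roll(u, -1, -2) - 4 * u)
    return u + 0.1 * lap

surrogate = nn.Sequential(                    # local, convolution-like update rule
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for _ in range(500):                          # training on small domains
    u = torch.randn(16, 1, 32, 32)
    loss = ((surrogate(u) - slow_solver_step(u)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# once trained, the same convnet can be applied to a much bigger grid
big = surrogate(torch.randn(1, 1, 512, 512))
```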
There are also different methods shown, and the colors indicate error — blue is low error, red is high error. Those are various other ways of doing this, and this is their proposed method, compared with what the PDE solver would do for a relatively small domain. So that's an interesting idea: using neural nets, or deep learning in general, as a phenomenological model of something where we might know the underlying physics, but where it's computationally too expensive. People are doing this also for predicting the properties of materials, and for solving problems in molecular dynamics — for example the conformation of proteins, or whether two proteins are going to stick to each other, things like that — which is of course super important for topics like drug design.

I was at Harvard a couple of weeks ago, and I talked to people who are trying to use neural nets to predict the properties of certain solids. If you take graphene, which is a two-dimensional mesh of carbon atoms — a layer just a single atom thick — and you take two layers of graphene and twist the one on top just a little bit relative to the one at the bottom, there is a particular angle at which this material becomes a superconductor, and nobody has any idea why. So there's the idea of using neural nets to build phenomenological models of all those properties, so that perhaps we could predict other properties.
There's interesting work along those lines also by Pascal Fua — who is actually originally a vision guy — at EPFL. What he's been doing is predicting the aerodynamic or hydrodynamic properties of a solid by training a convolutional net: you feed the shape of the solid to the system, and again you use fluid dynamics computations to generate data, and you train it to produce the properties of that shape — for example its drag or lift, if you're interested in designing airfoils for the blades of propellers or airplanes, or hydrofoils, or whatever. What you have then is a neural net that predicts those properties, and because it's a neural net, it is differentiable. So now you can optimize the shape by doing gradient descent in input space: you can optimize the shape so as to get the properties you want at the output, which you can't really do with a regular computational fluid dynamics piece of code. So it's really interesting; he actually has a startup that works on this.
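A minimal sketch of gradient descent in input space — `predictor` here is a hypothetical stand-in for a trained shape-to-drag surrogate; the point is just that the optimizer updates the input, not the weights:

```python
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
for p in predictor.parameters():
    p.requires_grad_(False)                  # weights frozen: we're not training the net

shape = torch.randn(64, requires_grad=True)  # some parameterization of the shape
opt = torch.optim.Adam([shape], lr=0.01)     # note: optimizing the *input*

for _ in range(200):
    drag = predictor(shape)                  # predicted property, e.g. drag
    opt.zero_grad(); drag.backward(); opt.step()  # descend to reduce predicted drag
```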
[In response to a question:] Well, I guess you have to really know the underlying physics to be able to make that generalization. You can test on a relatively small spatial domain, because there you can run the PDE solver, so you know how accurate your convnet is; the question is whether it's still accurate when you extend the size, and there is a leap of faith there, no question. Now, as to your comment that this has nothing to do with convolutional nets — no, it has very much to do with convolutional nets, because all of those PDEs are local operations that basically look like convolutions; they're essentially the same, plus some nonlinear thing, because if you solve the Navier-Stokes equations for fluid dynamics you have to do some projection afterwards that's nonlinear. But it's a local operation, and it's the same operation you do everywhere in the image, or everywhere in the volume, so it is a convolutional net. It's probably one of the most appropriate uses of convolutional nets you can imagine.

Of course, there's been quite a bit of progress in things like self-driving cars — it's work in progress. These are actually videos that are quite a few years old, I think about five years: this one is from Mobileye, which is now Intel, and NVIDIA. There's a huge amount of work on self-driving cars, as you know; a lot of engineering goes into this, but all the perception systems use some sort of convnet to process either images from cameras or signals from various other types of sensors, like lidar.

Okay, so all this is great, but it's all supervised and reinforcement learning.
And one big question we can ask ourselves is: is this going to take us to the possibility of building truly intelligent machines? I'm not talking about human-level intelligence — maybe the intelligence of a house cat, or something like that. A house cat has more common sense than any AI system that we can build today. And the answer is no: we need significant conceptual progress if we really want to make machines that are more intelligent than what we have today. We can do all the stuff on the left, assuming we put enough engineering effort into it: self-driving cars, semi-autonomous cars, better medical image analysis systems, all kinds of stuff, stupid chatbots that are entertaining. But the technology we have is not enough to get machines that have common sense; to build things like intelligent personal assistants that really help us in our daily lives, that can answer any question we have and be a bit more like human assistants; to have really smart chatbots; to have household robots that take care of all the chores in the house. We don't really have agile and dexterous robots — they are agile and dexterous in very specific situations, but it's very brittle. And we can't have artificial general intelligence, because that concept does not exist. There is no such thing as general intelligence, and I hate this term, AGI. There are a lot of people who claim they are going to get to AGI by scaling up reinforcement learning, just having more computation. This is completely false. Those people are after investment, so they're ready to either be self-deluded or stretch the truth a little bit; but in my opinion, we're not going to get there with the current type of learning that we're using.

So why is there no such thing as artificial general intelligence? Because there is no general intelligence. Human intelligence is incredibly specialized — I'm sorry to say that, and it applies to everyone in this room, but our intelligence is super specialized.
We were built by evolution to survive in our environment, and we have this impression that our intelligence is general, but we just suck at a lot of tasks. In fact, a lot of the tasks that computers can do quite well, we totally suck at. There was this idea, before AlphaGo and AlphaGo Zero and so on, that the best Go players in the world were very, very close to the ideal player — that with just a few stones of handicap, two or three stones, you could beat the ideal player, or something like this. Turns out: no. Turns out the best human players are horrible; current machines are much, much better than they are, by a huge margin. We just suck at it, we're really bad, which means that's not part of the stuff that evolution built into our brains to do well. Now, the reason people thought they were very close to the ideal player is that they could not imagine considerably smarter entities. We cannot imagine all the stuff that we're not able to do, and therefore we think of ourselves as having general intelligence; it's just that our imagination for what functions we would need to be able to compute is very limited.

Let me give you another, more specific example — I wouldn't say mathematical, but more quantitative. Your optic nerve has one million fibers.
Imagine we just take the one million fibers coming out of your optic nerve that go to your brain, and imagine they're just binary, so that what you see is a binary image: one million bits. A particular recognition function — recognizing your grandmother, or whatever — is then a Boolean function: one million bits in the input, one bit in the output. And the question is: how many such functions are there? Does anyone have any idea how many Boolean functions of one million bits there are? Any suggestion? Two to the one million? You're off by a huge factor, but it's a good start. Two to the two to the one million — yes, that's the correct answer. You have 2^1,000,000 input configurations of one million bits, and for each of those 2^1,000,000 configurations you have one output bit; that's the truth table of a particular Boolean function. So the number of such functions is 2^(2^1,000,000). It's an unimaginably large number — just a ridiculously large number.

Now, among all of those functions, how many — what proportion — do you think your brain can actually compute? Your visual cortex has on the order of 10 to 100 billion neurons, on the order of 10^14 synapses. Let's say, to be generous, that each synapse can store 10 bits. So there are about 10^15 bits in your entire visual cortex; that's what determines the function of your visual cortex. That means the number of functions your visual cortex can possibly implement is 2^(10^15). That's a lot less than 2^(2^1,000,000) — not just a lot less, there's no comparison. The number of functions your visual cortex can implement, compared to all possible functions, is just a tiny, tiny sliver. We're super specialized.
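The counting argument, written out with the talk's numbers:

```latex
\text{Boolean functions of } n = 10^6 \text{ bits:}\qquad
  2^{\,2^{n}} = 2^{\,2^{10^6}}
  \quad\text{(one output bit for each of the } 2^{n} \text{ inputs)}

\text{Functions implementable with } \sim 10^{15} \text{ synaptic bits:}\qquad
  \le 2^{\,10^{15}}

\text{Fraction reachable:}\qquad
  \frac{2^{\,10^{15}}}{2^{\,2^{10^6}}} = 2^{\,10^{15} - 2^{10^6}} \approx 0
  \quad\text{since } 2^{10^6} \gg 10^{15}.
```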
In particular, if I play a trick on you — I cut your optic nerve (I'm going to do it, okay?) and I put a device between your retina and your brain that permutes all the pixels in your optic nerve, with a random but fixed permutation — then there is no spatial consistency in the signal that gets to your visual cortex. I don't think you could see, because your cortex has local connections, and those local connections are there to exploit local correlations; you break those local correlations by doing the permutation. You might see at very low resolution, because the higher layers have big context. [Audience remark.] What is true? Yes, okay — so it is retinotopic: the connection between the optic nerve and the visual cortex is retinotopic, which means the topology is preserved. The connections are largely local; there are long-range connections, but only a small number of them, so you don't have a huge amount of communication bandwidth for the long range. You have big bundles of connections from low layers to high layers, if you want — from V1 to V2, and V2 to V4 — and once you get to the higher layers, the spatial distribution is not represented anymore; it's like a convnet where you do pooling. So in the high layers you don't need that organization, but by the time you get there, the spatial resolution is lost. We could actually do the experiment — it would be fun.

Okay, so the next question is: how do humans and animals learn?
I don't know how many of you were here yesterday at the inauguration — probably not many — but there is this idea that humans learn in a very different way from either reinforcement or supervised learning; I'll call it self-supervised learning. This is just a hypothesis, but babies learn concepts — basic facts, basic knowledge about the world — basically just by observation, in the first few weeks and months of life. Emmanuel Dupoux, who is a colleague of mine at Facebook, put together this chart that shows at what age babies learn different concepts. Things like being able to tell the difference between animate and inanimate objects pop up around three months; then the notion of object permanence — the fact that an object hidden behind another one is still there, still exists — and the notions of solidity, rigidity, and stability; and then intuitive physics, like gravity and inertia, pops up around eight months. So if you show a six-month-old baby the scenario on the top left, where you put a little car on a platform and push the car off the platform and it doesn't fall — it's held from the back, but the baby can't see that it's a trick — at six months they're not surprised: that's just how the world works, one more thing to learn. After nine months, they've learned that objects are not supposed to float in the air — they're supposed to fall — and they go wide-eyed. You can measure how long they stare at it, and with how much attention, and that's how you know that a concept has been acquired: if a concept is violated by a particular scene you show the baby, the baby is going to be really surprised, and you can measure the degree of surprise if you want.

So how is it that babies learn? Basically by observation. Young babies, before a few months old, are completely helpless; they just observe and don't really have any way of affecting the physical world around them. So how does that happen? It's a different type of learning than either reinforcement or supervised learning. And it's not just babies — almost all animals learn this kind of stuff. This is a baby orangutan being shown a magic trick: there's an object in the cup, the object is removed, but he doesn't see that, and now the cup is empty — and he's rolling on the floor laughing. So obviously his model of the world includes object permanence, and objects are not supposed to disappear like this.
When we see something that surprises us, we laugh, or we get scared, because here is something we didn't predict — and it could kill us. So there are all kinds of concepts like this. The reason for this animation at the top is that these very basic concepts — like the fact that the world is three-dimensional — are perhaps things we can learn by training ourselves to predict very simple things. If I train myself — train my brain, or train a learning machine — to predict what the world is going to look like when I move my head, or the camera, a few centimeters to the left, the view of the world changes: objects move with parallax depending on their depth, their distance to my eyes. So if I train myself to predict what the world is going to look like when I move the camera, perhaps I can automatically infer that every object in the world has a depth, because that's the best — the simplest — explanation for how things change. So the notion of depth, the fact that the world is three-dimensional, might simply emerge from training ourselves to predict what the world looks like when we move our heads. Once you have that, you have occlusion edges: objects that are nearby don't move the same way as objects that are far away, and so you see them as objects. There are a bunch of weakly supervised vision systems that exploit this kind of property. Once you have objects — things that can move independently of the background — you have the notion of obstacle, things like that, localization. So you could think of concepts like this being built up hierarchically, just by training yourself to predict and coming up with good representations that allow you to do a good job at predicting. This is not supervised; it would be an unsupervised form of learning.
And that led some of us to — this is the joke from earlier — a play on a song from the 1960s or 70s: the revolution will not be supervised. So the future is in a new form of learning that will allow a machine to accumulate all that background knowledge about how the world works, mostly by observation — a little bit by interaction, but mostly without supervision, mostly without reinforcement. Maybe that's the salvation.

So, self-supervised learning: what is it? The basic concept is this: I give you a piece of data — let's say a piece of video, for the sake of being concrete — and I mask a piece of that video, perhaps the second half, which is in blue at the top, and I train a machine to predict the future of the video from the past and the present. But the general concept of self-supervised learning is: you have a piece of data, you mask a piece of it, and you ask the machine to predict the masked piece from the piece that is not masked. If the piece that is masked is always the same — say, the future — you can use some sort of prediction architecture for that. But more often than not, you don't actually know which piece is going to be masked. For example, in this scene right now, you don't see my back, but you might have some good idea of what it looks like, and maybe your brain unconsciously tries to predict what I look like from the back; once I turn around, your belief about this is updated, and you can train your model. The same goes for all kinds of parts of the scene that are currently occluded from your view. So this principle of learning to predict things that you will eventually see is, I think, a good one.
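A toy sketch of that masking recipe — random data and made-up sizes, just to show the loop of predicting a masked half from the visible half:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8 * 16, 256), nn.ReLU(), nn.Linear(256, 8 * 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(300):
    clip = torch.randn(32, 16, 16)            # batch of toy 16-frame "clips"
    past, future = clip[:, :8], clip[:, 8:]   # the mask = the second half
    pred = model(past.flatten(1)).view(32, 8, 16)
    loss = ((pred - future) ** 2).mean()      # predict the masked part from the visible
    opt.zero_grad(); loss.backward(); opt.step()
```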
Again, you could train yourself to predict the past from the present, or the top of the image from the bottom — it doesn't matter exactly what it is. The advantage of this — something Geoff Hinton has claimed for a long time — is that the amount of information you're giving to the machine at every time step, at every sample, is enormous: you're asking it to predict every pixel in a bunch of frames of a video, which is a lot of information, much more than the label of an image, for example. That means you're putting a lot more constraints on the parameters of the machine, which means you can train the machine to learn a lot of knowledge from a relatively small number of samples. And furthermore, those samples are free, because we have more video data than we can deal with.

So essentially, if you think about a hierarchy of the learning paradigms we've been talking about here: in self-supervised learning, there's a huge amount of feedback you're giving to the machine — you give it a piece of video and then you ask it to predict all those pixels, which is an enormous amount of information (there is a technical issue with this, which I'll come to in a minute). In supervised learning, you give a relatively small amount of feedback: you tell the machine "this is class number three out of a thousand," which is not a huge amount of information. And I should say right now that the reason neural nets work so well on ImageNet is not just that it has one million training samples; it's that it has a thousand categories — having a problem with lots of categories helps a lot in constructing good representations. And then reinforcement learning is a very, very weak feedback: you're only telling the machine once in a while that it got it right or got it wrong; you're giving it just a scalar value. There is absolutely no way a machine can learn anything complex without lots and lots of interactions using basic reinforcement learning — you're just not giving it a lot of information. In learning theory this is called sample complexity, and it's just completely obvious that there is no way you can learn complex stuff without tons and tons of interactions when you're given just one scalar value once in a while as feedback. So the path to human-level intelligence may go through reinforcement learning, but it's not going to be sufficient, that's for sure.

That led me to this obnoxious analogy of intelligence as a cake, where self-supervised learning is the bulk of the cake. Machine learning is in the same embarrassing situation as physics, in the sense that physicists have no idea what 95% of the mass in the universe is — it's dark matter and dark energy, and they have no idea what it is; we only know the 5% that is actually ordinary matter. It's the same thing here: we can make the cherry, we can make the icing on the cake, but we can't actually bake the cake. So I gave you a preview earlier of what's missing, and what's missing is the ability to learn predictive models of the world. In the example of the car: if you want to train your system to drive a car, it has to have some sort of predictive model of what's going to happen, so as not to try