Dataset columns: video_id (string, 11 chars), text (string, 361-490 chars), start_second (int64, 0-11.3k), end_second (int64, 18-11.3k), url (string, 48-52 chars), title (string, 0-100 chars), thumbnail (string, 0-52 chars).
video_id: a-VQfQqIMrE
title: mixup: Beyond Empirical Risk Minimization (Paper Explained)
thumbnail: https://i.ytimg.com/vi/a…axresdefault.jpg

[95-124 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=95s
…is you have a dataset with a finite amount of data that you can sample x and y from, and so instead of minimizing your true risk you minimize your empirical risk, the empirical risk minimization right here. Now what's the problem with that? The problem is that you can get overly confident about your data points and nothing else, and that will hurt your generalization.
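The true risk versus empirical risk distinction mentioned here, written out (these are the standard textbook definitions, not formulas quoted from the video):

```latex
R(f) = \mathbb{E}_{(x,y)\sim P}\big[\ell(f(x), y)\big]
\qquad \text{vs.} \qquad
\hat{R}(f) = \frac{1}{n}\sum_{i=1}^{n} \ell(f(x_i), y_i)
```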
[124-153 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=124s
So if you have a data point, let's say right here, and another one right here, where this is class 1 and this is class 2, your network is going to maybe make decision boundaries like this and like this, where it says OK, here is class 1 and here is class 2. But it's very conceivable that here it says, ah, here is class 4, and

[153-191 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=153s
over here is class 7, and right here is class 9, and by the way, here class 4 again. So empirical risk minimization leaves everything in between the data points open. Now what this paper proposes is that we should not only train our classifier on these data points, but on all the data points sort of in between the two, and this is the mixup of data points. So this

[191-225 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=191s
data point here might be constructed, if this is A and this is B, from 0.1 times B plus 0.9 times A, because it's mostly A and a little bit B. And now you think, what are the labels here? If A belongs to class 1 and B belongs to class 2, then of course the label of this data point is 0.1 times the class of B, which is 2, plus 0.9 times the class of A,

[225-253 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=225s
which is 1. Ultimately, if you want to input a class like class number 2 into a machine learning model, you don't just say "it's class number 2"; what you input is a distribution that has zeros everywhere, so these small things are zero, zero, zero, one, zero, and this here is at class number 2. So this would be class number 1,

[253-284 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=253s
class number 2, class number 3: you input a distribution like this if you want to express class number 2. Now for our sample right here, what we would input as a label is simply a mix between the classes, so 0.9 of class 1, 0.1 of class 2, and then zero everywhere else. So this would be our label for the data point that we construct right here; this will be our,

[284-310 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=284s
sorry, the top one would be our data point. Formally, you take two data points and you mix them using this lambda mixing factor; that gives you a new data point that's in between the other data points. And you take the two corresponding labels and mix them accordingly as well, and that gives you the label for that data point. And now your model will learn to basically smoothly interpolate.
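A minimal sketch of that mixing step (not the paper's reference implementation; the helper name and the one-hot labels are assumptions for illustration):

```python
import numpy as np

def mixup_pair(x_a, y_a, x_b, y_b, lam=0.9):
    """Mix two examples and their one-hot labels with mixing factor lam."""
    x_mix = lam * x_a + (1.0 - lam) * x_b   # e.g. 0.9 * A + 0.1 * B
    y_mix = lam * y_a + (1.0 - lam) * y_b   # soft label, e.g. [0.9, 0.1, 0, ...]
    return x_mix, y_mix

# class 1 and class 2 expressed as one-hot distributions over, say, 10 classes
y_a = np.eye(10)[0]
y_b = np.eye(10)[1]
x_mix, y_mix = mixup_pair(np.random.rand(32, 32), y_a, np.random.rand(32, 32), y_b)
```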
[310-338 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=310s
So you will teach your model: the thing on the left here is class number 1, the thing on the right is class number 2, and this here is half of class 1 and half of class 2. So the model basically learns a smooth interpolation, where the situation here on top is probably not going to happen anymore; what it would do instead is

[338-363 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=338s
sort of create these iso-lines around class 2 and then around class 1, where it smoothly gets less and less sure about the class of the data points, but on the way it is always either class 1 or class 2. And they say that can help the generalization performance, and it's sort of visible why. The only thing that's not clear from

[363-384 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=363s
the beginning is whether this kind of interpolation actually makes sense, because it means we linearly interpolate between two images: if we have two images, we just take half of one and half of the other, and that will not be a natural image, it will be kind of a blurry thing; otherwise all our problems would be solved and we could just linearly classify

[384-413 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=384s
things. But in any case, in practice it actually seems to help, probably because linear interpolations of two images are still much more like a natural image than any random noise you could come up with. So they show it in code right here, and the code is pretty simple: you simply want to mix the two things, and the mixing factor, this lambda here, comes from a beta

[413-441 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=413s
distribution, and they use a beta of, I believe, 0.4 or something. I just want to quickly show you: this is the red line here. The red line, as you can see, means that most of the time they're going to sample either the thing on the very left or the thing on the very right, that is, either the first or the second data point, but some of the time they actually sample something in the

[441-469 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=441s
middle, and it's fairly uniform in the middle. So it appears to be a good distribution to sample these mixing coefficients from, and by adjusting the actual values of alpha and beta here you can determine how often you sample the original data points versus how often you sample something in the middle. OK, on this toy dataset right here they showcase what mixup can do.
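A sketch of how the mixing coefficient could be drawn; alpha = 0.4 is the value mentioned above, the rest is illustrative:

```python
import numpy as np

alpha = 0.4
lam = np.random.beta(alpha, alpha, size=100_000)  # mixing coefficients in [0, 1]

# With alpha < 1 most of the mass sits near 0 or 1 (close to one of the two
# original data points), with a fairly flat plateau in the middle.
print((lam < 0.05).mean() + (lam > 0.95).mean())   # large fraction near the endpoints
print(((lam > 0.4) & (lam < 0.6)).mean())          # smaller but non-negligible middle share
```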
[469-494 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=469s
So in a classic model you have the orange and the green data points, and blue is basically where the classifier believes it's class 1. You see this very hard border here; it's quite a hard border. Now you only have two classes here, and the hard border is sort of a problem in itself, because if you think of, for example, adversarial examples, all they

[494-525 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=494s
have to do is basically get over that one inch, and the classifier is already super duper sure it's the orange class. Whereas if you use mixup, your border is much, much more fuzzy: it's only really sure here and out here everywhere, but in the middle it's sort of "meh, I don't know". And so that's kind of a more desirable situation. And of course this here works

[525-556 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=525s
particularly well in this linear 2D setting, but as we can see, the same reasoning applies to higher layers and higher-dimensional data points. Right, I seem to have lost the ability to zoom; oh no, it's back, OK. And that's basically it for this paper; this is all they do: they propose this method and then they test it. They say something interesting here:

[556-586 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=556s
mixup converges to the classical method as alpha approaches zero. That would push your beta distribution in the middle all the way down, and you would only sample from the very left or the very right, so you can smoothly interpolate between this mixing and the classic method. Their main results are: they apply this to classifiers, and, what I like, since

[586-612 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=586s
a GAN is also in part a classifier, the discriminator being a classifier, they also apply it to GANs, and they outperform and stabilize the classic training of GANs. They show that it's more robust towards adversarial attacks, because it's not so sure about intermediate things, and they generally outperform other methods. But they also do this nice investigation here, where they measure

[612-640 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=612s
the prediction error on in-between data. What it means is, they say, a prediction is counted as a miss if it does not belong to y_i or y_j. So you have a sample right here, x_i, and a sample right here, x_j, and you look at what the classifier says in between the two data points: you just interpolate the two data points and measure what the classifier says, and whenever

[640-666 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=640s
the classifier says either y_i or y_j, either label of those two data points, you count it as correct, and you only count it as incorrect if it says something else. And you can see here: if you train with the classic method, ERM, these errors happen much more often. That's exactly the situation I pointed out at the beginning, where in high dimensions it can occur that all sorts of decision boundaries sneak in between the two data points.
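A rough sketch of how such an in-between miss rate could be computed (the model interface and step count are assumptions, not the paper's evaluation code):

```python
import numpy as np

def in_between_miss_rate(predict, x_i, y_i, x_j, y_j, steps=9):
    """Interpolate between x_i and x_j; count predictions that are neither y_i nor y_j."""
    misses = 0
    for lam in np.linspace(0.1, 0.9, steps):
        x_mid = lam * x_i + (1.0 - lam) * x_j
        pred = predict(x_mid)               # assumed to return a class index
        if pred not in (y_i, y_j):
            misses += 1
    return misses / steps
```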
[666-700 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=666s
By interpolating between them during training, you reduce that effect a lot. They also say the same thing happens with the norm of the gradients of the model with respect to inputs in between training data: the norm of the gradients in the middle is also

[700-726 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=700s
much, much lower. And this investigation I find pretty cool, I have to say. I have seen mixup in practice, so it might be useful. I've read a paper, it was the Big Transfer paper, where they basically say it is useful if you have, for example, little data and a big model, so you can sort of regularize the model. It is also useful to know that they

[726-751 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=726s
tested this against dropout. So they compared it with dropout, and the conclusion is basically that this is something else than dropout; it's not doing the same thing. Dropout, of course, means you drop out some of the intermediate activations, and that sort of gives you a noisy version of the data point. This here can actually be combined with dropout, which means

[751-780 s] https://www.youtube.com/watch?v=a-VQfQqIMrE&t=751s
that it gives you an additional benefit. You see right here, most of the best numbers happen when you use mixup plus dropout, so it seems to be an additional regularization on top of dropout. Pretty cool investigation. Alright, so if you liked this, I invite you to read the paper, and if you liked the video, please subscribe, like and comment, and yeah, have a…
video_id: dMUes74-nYY
title: Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning
thumbnail: https://i.ytimg.com/vi/d…axresdefault.jpg

[0-34 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=0s
Lecture seven of Deep Unsupervised Learning. Today we'll be talking about self-supervised learning, and it is going to be pretty different from the previous lectures you've heard so far. So far you've been looking at a lot of generative models: how to use various classes of generative models to generate high-dimensional data like images, audio, text, and so forth. However, unsupervised learning is a much

[34-63 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=34s
broader goal than just being able to generate data, and one of the goals of unsupervised learning is to learn rich features from raw unlabeled data such that they can be useful for a lot of downstream tasks; this lecture is going to get at that. Recently people have started calling this self-supervised learning, where the data creates its own supervision, and so we

[63-94 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=63s
are going to look at the various classes of techniques that allow us to do self-supervised learning. So far we've seen density modeling, where we covered autoregressive models and flow models, and we also talked about variational inference; and we've looked at implicit generative models, implicit density models like GANs, and energy-based models. Both these

[94-123 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=94s
classes of techniques allow you to learn generative models, meaning you are going to be able to generate images and report likelihood scores and so on. But other than that, we mainly looked at applications of generative models to various modalities of data; we haven't actually seen how to use unsupervised learning to learn features. So that's the motivation for today's lecture: how do we

[123-152 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=123s
learn rich and useful features from raw unlabeled data such that they can be useful for a wide variety of downstream tasks? We're also going to ask ourselves what the various pretext or proxy tasks are that can be used to learn representations from raw unlabeled data, and, if we are able to learn good representations, how we can leverage that to improve the data efficiency and

[152-185 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=152s
performance of downstream tasks with a good pre-trained model. So here is a figure from Ian Goodfellow's deep learning textbook. The focus here is how we learn good representations, and here's a simple case study of why representations matter. You have a bunch of two-dimensional points, and if you visualize them in x-y coordinates, the Cartesian coordinates, there are clearly two

[185-214 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=185s
separate clusters, but it's hard to see how to linearly separate them. The moment you visualize them in polar coordinates, you can clearly see that there are two different radii and a lot of different angles, and so you can draw a linearly separating hyperplane between them. So it's clear that representation matters: once you move from the Cartesian representation to

[214-241 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=214s
the polar coordinate representation, things become a lot easier to handle; you can actually use a linear SVM or logistic regression to learn a classifier on this polar coordinate representation. So what is deep learning doing? Deep learning is basically using depth and repeated computation to iteratively refine the features as you move through the layers.
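A small illustration of that coordinate-change argument (the two-ring data here is made up for the sketch; it is not the textbook's exact figure):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two rings: same range of angles, different radii; not linearly separable in Cartesian coordinates.
theta = rng.uniform(0, 2 * np.pi, 400)
r = np.concatenate([np.full(200, 1.0), np.full(200, 3.0)]) + rng.normal(0, 0.1, 400)
X_cart = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
y = np.concatenate([np.zeros(200), np.ones(200)])

# Same points, re-expressed as (radius, angle).
X_polar = np.stack([np.hypot(X_cart[:, 0], X_cart[:, 1]),
                    np.arctan2(X_cart[:, 1], X_cart[:, 0])], axis=1)

print(LogisticRegression().fit(X_cart, y).score(X_cart, y))    # poor: no separating line exists
print(LogisticRegression().fit(X_polar, y).score(X_polar, y))  # near 1.0: radius alone separates the rings
```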
[241-271 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=241s
So the bottom layer is using the raw pixels as input, and here it's trying to make sense of the fact that there is a person in the photograph, looking at all the pixels, including the background, and understanding that there is a face. The way it works is that it starts with the highest-frequency information at the bottom, refines that at the next level into edges, refines the next

[271-297 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=271s
level into corners and contours, then the next level into actual object parts, and finally it's able to figure out the identity of the objects present in the actual image. So, just as we saw that representations matter: the representations at the higher levels are more semantic, and the representations at the lower levels are more fine-grained, detailed, and high-frequency. So a

[297-322 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=297s
deep neural net can be thought of as writing its own representations, where every successive layer is written on top of the previous layer and is more abstract than the raw input, and that allows you to do downstream tasks if you take the topmost layers. So here's the Venn diagram that Ian Goodfellow suggests for how to think about deep learning: deep

[322-348 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=322s
learning is a subset of representation learning, which is a subset of machine learning, which can be considered as sort of AI in general. And so the goal of deep learning itself, nothing to do with unsupervised learning, is to learn good representations of raw data. So what is deep unsupervised learning? Unsupervised learning is concerned with learning these representations

[348-373 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=348s
without labels, so it can be considered as another subset of deep learning, one that is doing deep learning without labels, basically. So we are going to get at the goal of representation learning without labels, and that's deep unsupervised learning; it sort of gets at the core goal of the class. Recently it's been called self-supervised learning, and it's used interchangeably with
[373-403 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=373s
unsupervised learning. The exact terminology, what is "self-" and what is "un-", does not matter; it's basically concerned with learning representations without labels, where "self" usually refers to the scenario where you can create your own supervision based on the data. But at the end of the day, it can be considered as another way to phrase unsupervised learning.

[403-429 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=403s
So why self-supervised learning? The expense of producing a new dataset for each task is really high. There are actually billion-dollar startups just doing data annotation: people can just upload their images, say what kind of labels they want, and overnight, or within an hour or a fortnight, get high-quality labels created by humans who annotate

[429-458 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=429s
this data on the client side, sorry, on the server side. So basically you need to prepare labeling manuals, you need to figure out what categories of objects you want, you need to hire humans to annotate the data, and, for whoever is doing that job, you need to create good graphical user interfaces so that the process of annotation is

[458-484 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=458s
really fast. You also need to create good storage pipelines: when people are annotating, it's usually a lot of mouse clicks per minute or second, and every mouse click has to be automatically recorded, converted to appropriate data storage formats, and stored efficiently in the cloud. So there is a lot of back-end engineering you need to do. Not that this is a bad

[484-510 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=484s
thing to do; it is good, and we really do need to work on better and better pipelines for data creation. However, good supervision may not be cheap. For example, annotating which objects are contained in an image is probably something you can take for granted now, because people have created a lot of datasets, but if you move to another domain like medicine or legal,
[510-539 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=510s
creating another dataset may actually be pretty hard. So taking advantage of the vast amount of unlabeled data on the Internet is something supervised learning cannot do. No matter how much you appreciate the success of supervised learning, there is still a lot more unlabeled data than there is labeled data, and it would be nice if we could leverage the vast amount

[539-562 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=539s
of unlabeled data to further improve the performance of systems that work on labeled data. So it doesn't have to be a dichotomy between "we just want to do unsupervised learning" and "we just want to do supervised learning for everything"; rather, we want to figure out how to take advantage of large amounts of unlabeled data, on the order of billions of images, or lots of text or audio samples or YouTube videos, to

[562-590 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=562s
learn amazing features, and then make the process of doing supervised learning much more cost-, compute-, and time-efficient. And finally, there's the cognitive motivation, which is how babies or animals learn: they mostly learn by experimenting, without actually having labels. A child can just look at other people doing things, or learn from its own experience of

[590-614 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=590s
moving its own hands, or from looking at other people around the house, or, since modern-day children grow up with gadgets, from looking at videos, and already start learning good features without being told "this is a cat, this is a cat, this is a cat" hundreds of times, which is how an ImageNet classifier is learned. So there was a really nice quote
[614-641 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=614s
by Pierre Sermanet, who's one of the leading researchers in the field: give a robot a label and you feed it for a second, but teach a robot to label and you feed it for a lifetime. What he means by this is that if you taught the robot the underlying aspects of various objects in a completely self-supervised fashion, it knows what, say, a cat or a dog is without actually being

[641-670 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=641s
taught, and that means it can actually generalize much better. So labels are cheap, but they may not be the most optimal way to learn representations. So what exactly is self-supervised learning? It is a version of unsupervised learning where the data provides its own supervision. In general, the way it works is that you withhold some part of the data and you

[670-692 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=670s
task a neural net to predict the withheld portion from the remaining parts. So this could be: you occlude some part of an image, look at the remaining pixels, and try to predict the occluded part; or you have a video, you hide some frames, you keep the other frames, and you try to fill in the blanks of the missing frames; or you have a

[692-719 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=692s
sentence, you mask out some words, and you ask the neural network to fill in those words; or you predict the future from the past, or the past from the future, or the present from the past. There are various different versions depending on the mask. This way the data creates its own supervision, and you can expect a neural network to learn a lot more than just predicting labels.
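A toy version of that withhold-and-predict setup for a token sequence (the mask id, ratio, and function name are illustrative assumptions, not the lecture's code):

```python
import numpy as np

MASK_ID = 0  # hypothetical id reserved for the mask token

def mask_tokens(tokens, mask_ratio=0.15, seed=0):
    """Withhold a random subset of tokens; the pretext task is to predict them back."""
    rng = np.random.default_rng(seed)
    tokens = np.asarray(tokens)
    withheld = rng.random(tokens.shape) < mask_ratio
    corrupted = np.where(withheld, MASK_ID, tokens)  # what the network sees
    targets = np.where(withheld, tokens, -1)         # -1 marks positions that are not scored
    return corrupted, targets

corrupted, targets = mask_tokens([5, 17, 42, 9, 3, 28, 11, 7])
```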
[719-745 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=719s
The details obviously decide what the proxy loss or the pretext task is; you can think of all these withhold-and-predict setups as some kind of pretext task, and depending on what details you use, what loss functions, what tasks you create, the quality of the task can differ, and therefore the quality of the

[745-773 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=745s
underlying representations you uncover can also differ. And that is basically this whole topic: how can we create really good tasks which make the neural network learn a lot of useful things and therefore be very useful in downstream tasks? Another motivation for why we want to learn good features: one of the biggest reasons for supervised

[773-800 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=773s
learning really taking off, not just as a research topic but also as an industry practice, is that you can use a pre-trained classifier for a lot of commercial downstream tasks. A state-of-the-art pre-trained ImageNet classifier like a ResNet-50 can just be taken, and the same backbone can be put into a Faster R-CNN or a Mask R-CNN or a RetinaNet

[800-830 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=800s
and used for object detection or instance segmentation, or it can be used in a fully convolutional neural net with the ResNet-50 as the backbone for semantic segmentation. This way you're able to solve a lot of harder computer vision problems where collecting labeled data is much harder: you can pre-train a good classifier, take those features, and start the

[830-858 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=830s
downstream tasks with a much better prior, so you don't need as much labeled data, and you can also converge much faster on these harder problems. So the recipe is very clear: you collect a large labeled dataset, you train a model, you deploy, and as long as you have a lot of good, sufficient data, that is basically all you need in terms of
[858-884 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=858s
getting some automation into production. Most industry usage of computer vision, like video surveillance, or robotics where you have to detect objects, or automated shopping where you want to detect which objects people pick, is basically just object detection, and to get a very good object detector all you need is a lot of

[884-913 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=884s
labels and a lot of good pre-trained features. So what is the goal of self-supervised learning? The goal is to learn equally good, if not better, features without supervision, and to be able to deploy systems of similar quality to what is currently in production without relying on too many labels. What if, instead of collecting 10,000 labels, you could just collect a thousand labels, or a hundred

[913-940 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=913s
labels? That makes the process of production much faster and much more efficient, you don't have to spend as much, and it's also much simpler to maintain; you can keep bootstrapping more and more data, you don't have to rely on high-quality expert labeling, and you can still uncover the same level of features as you currently get with all the effort that goes into collecting labels. It could also

[940-969 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=940s
potentially generalize much better, because by doing pretext tasks that are harder than just predicting labels, you are expected to learn more about the world, and therefore generalization in the long-tail scenario is likely to be better. So that is the hope, and that's why people want to make self-supervised learning really work. This has been very nicely put together in a rather

[969-995 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=969s
inspiring slide by Yann LeCun, often referred to as the cake slide, which we saw in the introduction to the class: if intelligence is a cake, you can think of self-supervised learning as the bulk of the cake, supervised learning as the icing on the cake, and reinforcement learning as the cherry
[995-1021 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=995s
on the cake. The argument is that most of the useful bits can come from doing really hard pretext tasks on the data itself, where the machine predicts missing parts, and you get millions of bits that way. Whereas in supervised learning, consider ImageNet: you have a thousand classes, so that's about ten bits per image, and if you have a million images, you have basically a million times ten bits, and that's your whole dataset.
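The back-of-the-envelope numbers behind that comparison, spelled out (just the arithmetic implied by the slide):

```latex
\log_2(1000) \approx 10 \ \text{bits per label}, \qquad
10^{6} \ \text{images} \times 10 \ \text{bits} \approx 10^{7} \ \text{bits of supervision in total.}
```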
[1021-1047 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1021s
Whereas if you're doing generative modeling, you're modeling all possible bits in your dataset, and that's too huge. So self-supervised learning is trying to find a middle ground between these two, and it's possible that the bits you get from self-supervised learning are more useful. That

[1047-1070 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1047s
said, there is a caveat: in self-supervised learning, the bits you get are not as high-quality as the bits you get from supervised tasks, where a human is telling you that there is a cat here, or a dog here, or that there is a cat exactly at this coordinate, there's a dog, there's a bounding box around a human. That's a much higher-quality bit than

[1070-1097 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1070s
saying these two pixels are of the same color, or this is a flipped version of that image, or this image is a 90-degree rotated version of the other image, things like that. So it's not just the number of bits that matters; the quality of the bits is equally important. So you should not take this slide too seriously, it's just for inspiration; making the bits argument a reason to

[1097-1123 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1097s
work on unsupervised learning alone is fundamentally flawed, because the labeled-data bits are much, much more useful, much more grounded in the real world. So here is LeCun's suggestion for how to do unsupervised or self-supervised learning, which is creating your own proxy tasks, and you can think of various different versions of that. Say there's a video:

[1123-1146 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1123s
you could predict the top from the bottom, the bottom from the top, the left from the right, the right from the left. It's basically: mask some part of your input and predict the masked part from the unmasked part, and obviously, depending on the mask, the model is going to learn something trivial or non-trivial. For instance, in a video, if you're just masking a couple of
[1146-1166 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1146s
frames in between, it may be very easy to fill them in by just using the optical-flow information, the reference frames, and interpolation between pixels; the model doesn't necessarily have to capture what an object is. Whereas if you have a sentence and you are predicting missing words or subwords, it's possible that the model learns a

[1166-1188 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1166s
lot more about the grammar and the syntax and semantics of the language, because it's not possible to just copy and paste nearby words to fill in the sentence; language is already syntactic and grammatical, and every word conveys something new, whereas pixels are high-frequency natural signals, and so

[1188-1219 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1188s
there is a spatial-temporal correlation that is naturally available, so the model may not really learn the high-level information you wanted it to learn unless you carefully engineer which masks you want to use. For the actual technical content, we are going to separate it into three parts. In the first part, we're

[1219-1241 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1219s
going to go through the various principles. The first principle is: you corrupt your data and you try to predict the actual data from the corrupted version. The corruption can be that you add some noise to your input, or that you hide some part of your input and predict the missing part, or that you take your data and do some kind of signal

[1241-1262 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1241s
separation. For example, an image is basically grayscale plus color, so you could predict the color from the grayscale; or you have a depth image and a color image, say you're recording everything with a Kinect, and you could try to predict the depth image from the color image. So it could be source separation, and then you try to predict
[1262-1288 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1262s
the separated signal. So that is the first principle. The second principle is something like visual common-sense tasks, where it's more ad hoc and you're just trying to create tasks from the data in a very creative way and see what kind of features the model can learn; there we are going to look at three different techniques: relative patch prediction, jigsaw puzzles, and

[1288-1314 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1288s
rotation prediction. And finally we are going to look at contrastive learning, which is really the version of self-supervised learning that's been taking off very recently. We're going to look at a foundational work, word2vec, which explains a lot of the foundational ideas like the noise-contrastive loss, and then we're going to look at a version that's been used on images,

[1314-1344 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1314s
called CPC, or contrastive predictive coding, and we're also going to look at follow-ups that made the CPC pipeline much simpler, like instance discrimination, and at state-of-the-art instantiations of that. Note that in this lecture we are not going to cover anything to do with the latest language pre-training pipelines; arguably

[1344-1367 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1344s
self-supervised learning has taken off way more in language than in computer vision, but the focus of this lecture is going to be on computer vision, because language pre-training will be covered separately in a guest lecture by Alec Radford, and we're also not going to look at how unsupervised learning helps reinforcement learning; that will also be covered separately by Pieter in another
[1367-1398 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1367s
lecture. OK, so now let's go to denoising autoencoders. The basic idea of the denoising autoencoder is: add some noise to your input, then try to remove the noise and decode the actual image. So here you see an MNIST digit; you see the noisy input on the left and the denoised image on the right. The encoder takes in the noisy image and puts it into a smaller latent representation,

[1398-1422 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1398s
which is the features that we care about, and the decoder tries to use these features to get back the original input. So you hope that the encoder captures the high-level details and removes the noise, and the decoder can upsample that and get you back to the actual image. Depending on the kind of noise you add, you learn more or less non-trivial things: if you don't add any noise, you're

[1422-1447 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1422s
just going to learn an identity function, that's just an autoencoder; if you add some level of noise, it's possible that you learn more useful features, because you're learning to separate the noise from the actual signal; and if you add too much noise, it may actually become a really hard task, because the signal-to-noise ratio will be really low. So this is the general computation

[1447-1475 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1447s
graph of the denoising autoencoder, where x tilde refers to the noisy version of the ground-truth x, and you're trying to figure out how to reconstruct x back and obtain latents that are useful. There are various different kinds of noise that you can add to the input of a denoising autoencoder.
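A minimal PyTorch sketch of that computation graph (the layer sizes, Gaussian corruption, and MSE objective are illustrative choices, not the lecture's exact setup):

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, dim=784, latent=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x, noise_std=0.3):
        x_tilde = x + noise_std * torch.randn_like(x)  # corrupt the input
        z = self.encoder(x_tilde)                      # latent features we care about
        return self.decoder(z)                         # reconstruction of the clean input

model = DenoisingAutoencoder()
x = torch.rand(32, 784)                                # e.g. flattened 28x28 digits
loss = nn.functional.mse_loss(model(x), x)             # target is the clean x, not x_tilde
```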
[1475-1501 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1475s
In the original denoising autoencoder paper, working on MNIST, they considered three different noises: additive isotropic Gaussian noise, where you just add Gaussian noise to the pixels; masking noise, where some fraction of the input pixels is chosen at random and forced to zero, which means you mask them out and they become black;

[1501-1526 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1501s
and finally salt-and-pepper noise, where some fraction of the elements is chosen at random and set either to the minimum possible value or to the maximum possible value. So it's basically a version of masking where, instead of always setting the masked values to 0, you randomly assign them to 0 or 1. These are the three different noises considered in the paper; note that this is pixel-level noise.
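The three corruptions described above, sketched for images with pixel values in [0, 1] (the function names and fractions are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(x, std=0.3):
    """Additive isotropic Gaussian noise on the pixels."""
    return x + rng.normal(0.0, std, size=x.shape)

def masking_noise(x, frac=0.25):
    """Force a random fraction of pixels to zero (black)."""
    mask = rng.random(x.shape) < frac
    return np.where(mask, 0.0, x)

def salt_and_pepper(x, frac=0.25):
    """Set a random fraction of pixels to the min (0) or max (1) value."""
    mask = rng.random(x.shape) < frac
    values = rng.integers(0, 2, size=x.shape).astype(float)  # 0 or 1 at random
    return np.where(mask, values, x)
```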
[1526-1553 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1526s
You can think of denoising autoencoders as learning a tangent hyperplane to your data manifold: around every input x there is a distortion radius created by the noise that you're adding. It's very easy to understand in the case of additive Gaussian noise, because you can think of the

[1553-1580 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1553s
Gaussian as a spherical distortion around every input, and you can think of the decoder as trying to put the distorted version back onto the tangent hyperplane. So you can think of the whole denoising autoencoder pipeline as trying to learn this tangent hyperplane that describes the data manifold, so that it's able to put the distortions around the hyperplane

[1580-1615 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1580s
back onto the data manifold, and that way it uncovers the shape of the data manifold by operating on these local hyperplanes at every individual point. So here is the loss function of the denoising autoencoder. You can see that there is a version where you use the reconstruction error for the available pixels, which don't have any noise,

[1615-1639 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1615s
and the reconstruction error for the pixels that have been corrupted, and you can also weight them: you can prioritize the reconstruction of the corrupted pixels over the ones that have not been corrupted. So if you have an MNIST image and you're adding noise to, say, 10% of the pixels, you could weight the reconstruction error of those pixels

[1639-1664 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1639s
more than the other pixels, so that the model is not incentivized to learn an identity function around what is already available without noise, and is instead striving hard to get the details right at the corrupted pixels. You can also imagine optimizing two different versions of the loss: one version uses the mean squared error, and

[1664-1692 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1664s
the other uses a binary sigmoid cross-entropy loss. Both are equally good; on MNIST it makes more sense to use the cross-entropy loss, but the mean squared error loss is also very likely to work well, as long as you train it with the right set of hyperparameters.
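One way the weighted reconstruction objective described above might look, combining the per-pixel weighting with the squared-error variant (the weight value beta is an assumption in the spirit of the paper, not its exact formula):

```python
import torch

def weighted_dae_loss(x_hat, x_clean, corrupted_mask, beta=3.0):
    """Per-pixel squared error; corrupted pixels (bool mask) are weighted more than clean ones."""
    per_pixel = (x_hat - x_clean) ** 2
    weights = torch.where(corrupted_mask,
                          torch.full_like(per_pixel, beta),
                          torch.ones_like(per_pixel))
    return (weights * per_pixel).mean()
```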
[1692-1717 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1692s
The stacked denoising autoencoder is the version of the denoising autoencoder where you do this layer by layer. You take your original MNIST image, you have one hidden layer, you train a denoising autoencoder and you get that hidden layer; now you can take that hidden layer as the version of the image you want to use instead of the actual pixels, add noise at the hidden-feature level, and learn a denoising autoencoder for that

[1717-1743 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1717s
feature. If you do this iteratively, the denoising autoencoder is now operating on more and more abstract inputs instead of raw pixels. That's basically the idea of the stacked denoising autoencoder, and the hope is that as you keep stacking more layers, the higher layers get more and more semantic. But you should also be careful in thinking about what kind

[1743-1772 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1743s
of noise you can add to the features. Back in those days, people used neural networks with sigmoid nonlinearities, and in that case it's easy to add noise like masking, because a sigmoid can be interpreted as the neuron firing or not firing; but the way neural nets are designed now is much different, the kinds of nonlinearities used are

[1772-1806 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1772s
very different, so this may not be a particularly appealing idea in the current infrastructure. Finally, one utility of the denoising autoencoder is that once you've learned sufficient layers with the stacked denoising autoencoder, you can take a target like a class label, freeze all the features that you've learned from the autoencoder, put a supervised layer on top, a single linear layer that predicts the class logits, and use that to perform a classification task.
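That frozen-features-plus-linear-layer setup (a linear probe) might look like this in PyTorch; the encoder here stands in for the pretrained stack, and all names and sizes are illustrative:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))  # assume pretrained
for p in encoder.parameters():
    p.requires_grad = False            # freeze the self-supervised features

probe = nn.Linear(64, 10)              # single linear layer predicting the class logits
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
logits = probe(encoder(x))
loss = nn.functional.cross_entropy(logits, y)
loss.backward()                        # gradients flow only into the probe
optimizer.step()
```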
[1806-1837 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1806s
This was particularly appealing back then, because it was really hard to train deep neural networks to just do supervised learning even if you had a lot of data: directly training deep neural networks

[1837-1863 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1837s
was not something that particularly worked, and innovations were needed in terms of momentum optimizers, bigger batch sizes, convolutional neural networks, and so on. So as far as plain feed-forward neural networks go, this was a standard recipe even for doing supervised learning back then, because you needed some reasonably high-level

[1863-1893 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1863s
features to train a good supervised model on. So here are the filters that you learn with a denoising autoencoder for various levels of noise, and you can clearly see that the ones where you actually add more noise learn more of these digit edges, whereas with no noise you hardly learn anything, because it's mostly going to do an identity

[1893-1928 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1893s
map. This is also visualized for a particular neuron, magnified, and you can see that the filters are more visible for the higher masking ratios; you can also see something like a six towards the right, and it's picking up notions of digit edges or strokes at the hidden level. So these are various classifiers
[1928-1955 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1928s
that you can train on top of these denoising autoencoders, on the features that you get from stacked denoising autoencoders, and these are the error rates on MNIST classification. It's not particularly relevant now, because MNIST is considered solved, but you can see that it's getting something like 97% accuracy, in that range, by putting SVMs on top. So this

[1955-1991 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1955s
was a cool result at the time. So here is another version of corrupting your image and trying to predict the corruption, or rather, you hide some portion of your image and try to predict the hidden portion. This is the Context Encoders paper from Deepak Pathak, work from Alyosha Efros's group here at Berkeley. The way it works is, it basically masks out a

[1991-2022 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=1991s
rectangular region, encodes the image with the mask applied, and has a decoder that tries to reconstruct the actual image. That way the model fills in the details of what's missing in the mask, and the supervision can basically be constructed from your data itself, because you knew what part you masked, because you had the complete image. So the model is able

[2022-2047 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=2022s
to learn without any labels, by creating its own supervision, and that's why it's called self-supervised learning. You can have various instantiations of this: you could mask out only the central region and try to fill up the central region, or you can mask out various square blocks spread across the image, much smaller but many masks, and try to fill them all in.
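A tiny sketch of the central-region masking used as input to such a context encoder (the mask size and names are assumptions, not the paper's code):

```python
import numpy as np

def mask_center(image, frac=0.5):
    """Zero out a central rectangular region; the encoder-decoder must inpaint it."""
    h, w = image.shape[:2]
    mh, mw = int(h * frac), int(w * frac)
    top, left = (h - mh) // 2, (w - mw) // 2
    masked = image.copy()
    masked[top:top + mh, left:left + mw] = 0.0
    target = image[top:top + mh, left:left + mw]  # the region the decoder should reconstruct
    return masked, target

masked, target = mask_center(np.random.rand(128, 128, 3))
```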
[2047-2082 s] https://www.youtube.com/watch?v=dMUes74-nYY&t=2047s
Or, if you have access to the segmentation masks of actual objects in your image, you could segment out the pixels belonging to one particular object, like in this case the baseball player, and just try to fill in those pixels; but that assumes access to labeled segmentation masks, so that's not something that is completely self-supervised…