Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning
https://www.youtube.com/watch?v=dMUes74-nYY
That's not really the objective here. The goal is to make sure that the encoder learns a representation that is maximally predictive of the actual future when contrasted with some fake futures. That's really what's going on in CPC: you present your neural network a bunch of targets, among which the actual target is present, but you also have some fake targets. Imagine you could sample random audio chunks from totally different waveforms, or sample audio chunks that don't correspond exactly to that future time step, and present the neural network various alternatives for what the true audio chunk should be. Given the past context, the real future, and all these other possible alternatives, the neural network is supposed to pick, or classify, which is the right future.

Similar to word2vec, this can be done with a nonparametric softmax, where instead of actually decoding the audio chunk you just make sure that the embedding of your context and the embedding of your true future correlate the most when contrasted with the other fake audio chunks. That way it becomes a softmax over your negatives. You can use any kind of score function to assign a score between two embeddings. It could be a simple dot product; in contrastive predictive coding they use a bilinear product, which has also been used in past work, so it's a little more expressive than a regular dot product and it doesn't require you to normalize the vectors. You can think of the W matrix in the bilinear product as learning a kind of association matrix that figures out the properties which help you correlate two different things.
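To make that concrete, here is a minimal sketch of the InfoNCE-style objective with a bilinear score f(c, z) = c^T W z. This is my own illustration, not the paper's code; the shapes and the convention that the true future sits at index 0 among the candidates are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearInfoNCE(nn.Module):
    def __init__(self, c_dim, z_dim):
        super().__init__()
        # W is the learned "association matrix" between context and target spaces.
        self.W = nn.Linear(z_dim, c_dim, bias=False)

    def forward(self, c, z):
        # c: (B, c_dim) contexts; z: (B, N, z_dim) candidate latents,
        # with the true future at index 0 and fake futures elsewhere.
        scores = torch.einsum('bc,bnc->bn', c, self.W(z))  # f(c, z) = c^T W z
        labels = torch.zeros(c.size(0), dtype=torch.long, device=c.device)
        return F.cross_entropy(scores, labels)  # softmax over the negatives
```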
So here's what happens in CPC for audio: you take a long audio signal, split it into small chunks, and encode each of these small chunks with a shared encoder, a strided convolutional neural network in this case. Then you take a bunch of past audio chunks, pass them through a GRU (any autoregressive model would do), and use the final hidden state of the GRU to predict the future latents of the true audio chunks. By predict, I just mean that you contrastively try to maximize the similarity between those embeddings and the true futures when contrasted with the fake futures. You can sample negatives from other time steps within the same audio waveform, or you can use other audio waveforms.
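Here is a minimal sketch of that encoder-plus-GRU pipeline. The layer sizes and kernel shapes are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class CPCAudio(nn.Module):
    def __init__(self, z_dim=512, c_dim=256):
        super().__init__()
        # Shared encoder applied to every chunk of the raw waveform.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, z_dim, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(z_dim, z_dim, kernel_size=8, stride=4), nn.ReLU())
        # Any autoregressive model would do; a GRU summarizes the past.
        self.gru = nn.GRU(z_dim, c_dim, batch_first=True)

    def forward(self, wav):                     # wav: (B, 1, num_samples)
        z = self.encoder(wav).transpose(1, 2)   # (B, T, z_dim) latents
        c, _ = self.gru(z)                      # (B, T, c_dim) contexts
        return z, c                             # c[:, t] is used to predict z[:, t+k]
```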
Depending on these negatives, you learn different things. For instance, if you collect audio from different speakers, negatives that come from other speakers let you learn representations that allow you to identify the speaker, while negatives drawn from within the same waveform, from the same speaker, let you learn more fine-grained phoneme features.
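As a toy illustration of that design choice, here is one way you might sample negatives from a pool of encoded chunks. The function and its arguments are hypothetical, just to make the same-speaker versus other-speaker distinction concrete:

```python
import torch

def sample_negatives(latents, speaker_ids, anchor_idx, k, same_speaker=True):
    # latents: (T, Z) encoded chunks pooled from many waveforms
    # speaker_ids: (T,) speaker label of each chunk
    # same_speaker=True  -> negatives from the anchor's own speaker
    #                       (pushes toward fine-grained phoneme features)
    # same_speaker=False -> negatives from other speakers
    #                       (pushes toward speaker-identity features)
    if same_speaker:
        mask = speaker_ids == speaker_ids[anchor_idx]
    else:
        mask = speaker_ids != speaker_ids[anchor_idx]
    mask[anchor_idx] = False                  # never use the anchor itself
    pool = mask.nonzero(as_tuple=True)[0]
    pick = pool[torch.randint(len(pool), (k,))]
    return latents[pick]
```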
Those features are useful for tasks like phoneme classification, so depending on the downstream task, the kind of negatives you pick is going to be crucial.

So here's CPC at a high level; the diagram is very clear. You're going to do this across various audio waveforms and various numbers of time steps, and you should be very careful in picking the gap between the future and the context. The audio chunks overlap, and just like in the Doersch et al. work, where overlapping patches made the jigsaw-style tasks very easy to cheat on, CPC suffers from the same problem. So you should make sure that your prediction tasks are not so trivial that you could just look at whatever is overlapping and predict that. That's one really important thing about CPC.

The more interesting thing is that you can frame these tasks in any fashion you like; the order of time is not so important. You can pick anything as the context and anything as the target: you can predict from the future and go back to the past, or you can even mask something in the middle and use everything else as the context. It's totally up to you how to frame what is the context and what is the target in CPC, but based on how you frame it, you should make sure that the negatives and the targets are chosen in a non-trivial fashion. You can clearly see that CPC is trying to generalize older ideas like the puzzle tasks and word2vec: it's basically a framework in which you can perform all of these different tasks within one particular architecture, and different hyperparameters correspond to different versions of these tasks.
You can think of CPC as doing something like a multi-step prediction task. If you look at the multiple predictions emerging out of the c_t vector in the diagram, you can think of a different W matrix being used for each prediction time step, each corresponding to: predict one step ahead, predict two steps ahead, predict three steps ahead, and so forth. You can think of each of them as making the representations learn different things, and because you optimize all of them at once, you learn a really rich representation that can do lots of different self-supervised tasks at once. So it constructs a whole variety of pretext tasks within a single loss, a single framework, and it's really elegant and appealing that way.
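A minimal sketch of those per-offset heads, with illustrative sizes (the maximum offset of 12 matches the ablation discussed later, but is still an assumption here):

```python
import torch.nn as nn

class MultiStepHeads(nn.Module):
    def __init__(self, c_dim, z_dim, max_offset=12):
        super().__init__()
        # One prediction matrix W_k per offset, all trained jointly.
        self.heads = nn.ModuleList(
            [nn.Linear(c_dim, z_dim, bias=False) for _ in range(max_offset)])

    def forward(self, c_t):
        # preds[k] is the predicted latent for z_{t+k+1}; each prediction is
        # scored against the true latent plus negatives with the InfoNCE loss.
        return [head(c_t) for head in self.heads]
```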
One figure that's really nice for understanding what CPC is trying to do shows how it performs something like slow feature analysis. What I mean by that is: your audio waveform is really high-frequency and fast-varying, but the information you actually care about for downstream tasks is the slow-varying, high-level signal content, like phonemes, because that's what really allows you to use CPC features for something like speech recognition. So CPC is trying to make predictions in the latent space at the level of phonemes instead of raw audio waveforms. As you keep processing these audio signals, the information becomes more and more semantic. If you're trying to predict a target that doesn't overlap with the context, a target a few time steps ahead, you move toward these slow-varying phonemes, which only change once the time steps are sufficiently far apart. Therefore, if you do predictions at an appropriate offset, and by offset I just mean that the gap between c_t and z_{t+k}, where k is the number of time steps in between, is sufficiently large that the phonemes actually change, then you end up learning really rich features.

Here is a really nice visualization of the representations learned on the CPC audio task: you collect a dataset of audio waveforms from various speakers,
perform the CPC optimization, and then take the embeddings out and do a t-SNE visualization in 2D. You can clearly see that different speakers cluster into separate blobs, so the representation is clearly capturing speaker identity.

You can also see that the accuracy of predicting the positive sample in the contrastive loss is high in the beginning but keeps going down with the prediction offset. By that I mean it's much easier to perform the contrastive task of identifying the right future when the prediction offset is small, when you're predicting the near future. As you keep moving further and further away, the mutual information between what you already have and what you're trying to predict is much lower; there's much more entropy. So you can't optimize those future time steps as well, because the context is not sufficient, and the accuracy drops exponentially as you increase the number of time steps into the future you're predicting. Here are the CPC audio results on the downstream tasks.
On the left you see the phoneme classification and speaker classification results; for phoneme classification there are 41 possible classes. The way it works is you take the CPC features, freeze them, put a linear classifier on top, and try to perform the task, which is to say you try to identify the phonemes in the audio chunk, or you try to identify who spoke it. You have labels for that, but you don't change the features; you just keep them frozen. For that protocol, CPC gets 97.4% accuracy on speaker classification, very close to what you get by just doing supervised learning.
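In code, the frozen-feature (linear probe) protocol looks roughly like this. It's a sketch: the encoder, data loader, and dimensions are placeholders supplied by the caller, not names from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe(encoder, loader, feat_dim, num_classes, epochs=1):
    """Train only a linear classifier on top of frozen pretrained features."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad = False            # features stay frozen
    probe = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(probe.parameters(), lr=0.1)
    for _ in range(epochs):
        for x, y in loader:                # labeled downstream data
            with torch.no_grad():
                feats = encoder(x)         # e.g. speaker or phoneme features
            loss = F.cross_entropy(probe(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return probe
```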
If instead you use something like MFCC features, which are heavily engineered, you're not able to do that well: you only get 17.6%. So these features, learned in a completely unsupervised fashion, are way better and way more semantic than something engineered with domain knowledge. For phoneme classification, which is actually even harder than speaker classification, CPC features without any fine-tuning, with just a linear classifier, get 64.6%, way better than MFCC features at about 40% and a random initialization at about 30%. Supervised learning gets 74.6%, which is 10% better than the linear classifier on top of CPC features. However, the
authors found that if you put an MLP instead of a linear classifier, you can get pretty close to 74% with just CPC features, no fine-tuning. This means the information may not be linearly separable, but all the useful information for performing phoneme classification is there in the features CPC uncovered.

On the right you see ablations for the phoneme classification experiments, and the point I mentioned earlier is illustrated really well: depending on where you take the negative samples from, you get different results. If your negatives all come from the same speaker, the accuracy is 65.5%, which means the model is learning only phoneme-relevant features; it's not trying to do speaker identification. Whereas if your negatives come from mixed speakers, you get 64.6%, which is the result in the left table. So if you are more clever about sampling negatives, and you already know what your downstream task is, you can
prioritize sampling negatives in a fashion that incentivizes CPC to learn the features most relevant to your downstream task. If you don't care about speaker identification, you can just make sure that all the negatives consistently come from the same speaker. That's a really interesting way to illustrate this point.

The second ablation they did is on the number of steps you predict into the future. You would imagine that predicting all the way into the future would be really helpful, but that turns out not to be the case: if you predict only up to twelve steps instead of all the way up to sixteen, the downstream accuracy is better. This means the right way to set up the prediction is such that the targets you're trying to predict share some amount of information with the context you already have. As you go further and further into the future, the entropy is higher, and the number of bits actually shared between the two is small.
So making a neural network focus on those targets may end up encoding not very useful features. The hard part about CPC is picking the right number of time steps to predict into the future, and how you sample the negatives; but if you get those details right, the features learned are really useful and on par with supervised learning.

One motivation for CPC was that the authors wanted a single framework that works on any modality: self-supervised learning on any modality should be able to use the same framework. That's a lofty goal, so let's see how they instantiated it for ImageNet. Here is how the framework was executed for images. You take an image and grid it into a bunch of overlapping patches. In this case the image is 256 by 256 from ImageNet, and you lay 64 by 64 patches across the image with 50% overlap, so the stride is 32 by 32, which gives you a seven by seven grid of patches. You encode each patch with the same ResNet, think of it as a ResNet-101 or ResNet-50, and you get a mean-pooled embedding at the end. Now that you have an embedding for every single patch, they form a grid of embeddings, and you can perform predictive tasks, 2D prediction tasks, on top of this grid. You can treat this grid as your sequence: just like the audio sequence was a bunch of overlapping chunks from your raw audio signal, here it's a bunch of overlapping patches in a 2D grid.
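A quick sketch of that gridding step, as pure tensor slicing (assuming the 256/64/32 numbers above):

```python
import torch

img = torch.randn(3, 256, 256)                       # (C, H, W) image
patches = img.unfold(1, 64, 32).unfold(2, 64, 32)    # (3, 7, 7, 64, 64)
patches = patches.permute(1, 2, 0, 3, 4)             # (7, 7, 3, 64, 64)
print(patches.shape)  # each of the 49 patches is encoded by a shared ResNet
```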
The task the authors construct is to predict the patches below from the rows of patches above. So you use the first three rows, say, and try to predict the bottom three rows: you try to predict every single patch in the bottom rows using the rows of patches at the top. In order to aggregate the context of the top patches, you want some kind of model that can take a bunch of embeddings in a two-dimensional layout and summarize, at every spatial location, what has been seen so far. And you want to do it in such a way that information from the bottom doesn't leak into the summary of the top, because you're trying to predict the bottom from the top. We already know one family of models that does this very efficiently: masked convolutions, the PixelRNN and PixelCNN style of models. It also makes sense because PixelCNNs were invented by the same first author, so he just used PixelCNNs to aggregate the context of the top few rows of patches. Once you lay that out on top of the grid of patches, you can predict the bottom patches in a very parallel fashion.
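Here is a toy version of that no-leakage constraint, masking only in the vertical direction; the real model uses proper masked PixelCNN layers, so treat this simplified aggregator as my own illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpwardConv(nn.Module):
    def __init__(self, dim, k=3):
        super().__init__()
        self.k = k
        # Pad only on top, so the kernel never sees rows below position (i, j).
        self.conv = nn.Conv2d(dim, dim, kernel_size=(k, k), padding=(0, k // 2))

    def forward(self, grid):                          # grid: (B, dim, 7, 7)
        grid = F.pad(grid, (0, 0, self.k - 1, 0))     # pad k-1 rows on top
        return self.conv(grid)                        # each output cell only
                                                      # sees its row and above
```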
Here is an example of how this looks for an actual image gridded into patches. You take this dog, construct all these overlapping patches, and then, say, take the first three rows of embeddings and try to predict the last two rows: is this the patch belonging to the last row, second column, or not? Is this the patch belonging to the last row, third column, or not? You perform all these predictive tasks once you have the embeddings of the individual patches, with the PixelCNN on top.

So how does the accuracy work out? You do something similar to the audio experiment: train all of this on a lot of images, then take just the ResNet encoder out, put a linear classifier on top of it, and see how well it performs on ImageNet classification, which is the standard test used for previous self-supervision methods. Initially those were all attempted with AlexNet, but ResNet baselines exist in this table. We've already seen relative position, BiGAN, colorization, and jigsaw puzzles; all these methods, with a ResNet encoder, a mean pool, and a linear classifier at the end, get
a top-1 accuracy of not more than about 38%; jigsaw works the best among them. The RotNet numbers are not in this table, but they are not higher than the CPC numbers. If you look at the AlexNet results, they're all around 38% or less, and if you use a ResNet v2 instead, every baseline's number goes up: relative position gets a 6% gain just from using a ResNet instead of an AlexNet, and colorization goes from 35 to 39. The ResNet baseline for jigsaw doesn't exist here, but I would imagine it getting somewhere into the early 40s. CPC gets 48.7. This is really an old result now, and we will see in the next few slides how the state of the art has been pushed way further, but at the time this was a pretty big jump over the existing state of the art. And if you look at competitive approaches of a similar nature, relative position and jigsaw are also performing spatial association tasks, but doing it in a contrastive fashion, as a family of tasks within one parametrized model, gets you much better numbers.

There's a standard way to visualize what kind of features these models learn: you take a particular neuron and see what kind of input maximally activates it, you do this for a bunch of neurons, and you see that the maximally activating inputs correspond to different classes. In
this case the first row corresponds to leaf-like patterns and textures, then calculators and computer keypads, and then skies, baby faces, and dogs. So the features are clearly capturing high-level ImageNet content.

Another version of CPC tried it on language. We won't really go into the details here, but at the time it was competitive with skip-thought vectors, which had similar ideas, like predicting the future sentence given the past. CPC was able to be somewhat competitive with or match those numbers, but not really beat them. Finally, it was also applied in reinforcement learning, where you can think of improving data efficiency by letting your model or agent learn faster by performing unsupervised auxiliary tasks in parallel with reward optimization. The authors used contrastive losses as the auxiliary losses and saw some gains in sample efficiency; we won't go into the details here either.

So that's basically it for CPC version 1, or CPC as it is called, because the fundamental ideas were put forth in this paper. But like I said, the numbers are not that great yet: the linear classifier with the ResNet gets 48.7%, whereas a supervised learner with the same ResNet architecture typically gets something like 76% top-1. So the gap was really high, around twenty-eight to twenty-nine percent, and that needs to be addressed. Until it is, self-supervision is not really worth doing for practical image classification, or for the lofty goal we started off with, which is: we just want to learn features from data without labels such that you get features of similar quality. That was clearly far away.
Addressing that gap was the goal of CPC version 2. This is work I did during my internship at DeepMind with Aäron van den Oord, where we took CPC version 1 and kept hacking on the architecture, the hyperparameters for how you extract the patches, and a lot of details in terms of data augmentations, to see how far we could push the numbers. What we ended up with actually matched, and sometimes beat, supervised learning on various downstream tasks, and I want to go through the details here.

You've already looked at the setup where you grid an image into a bunch of patches and encode every single patch with a really deep ResNet. Earlier you saw that a ResNet-101, up to the first three stacks, was used in the first CPC version; this ResNet is much deeper, a ResNet-161 with 2x width in the third stack, so it has about 4,000 features at the end. Once you do that, you get the embeddings for every single patch and process them with a PixelCNN; this PixelCNN is 2x wider than the one used in the original work. Now you try to predict the bottom rows just like in the original work, but here we use only one offset: we predict two rows below and nothing else. As you already saw in the audio experiment, doing lots of predictions can hurt you if the amount of shared information is too low, and doing predictions where the information overlap is too high will also hurt you, so you need to pick the prediction offset very carefully, depending on your crop sizes and so forth. So we only predict two rows below and nothing else. These are your context and latent vectors, you have the same kind of scoring function, the bilinear product, and you optimize with a nonparametric softmax over the negatives.
And like I said, how you sample the negatives is really crucial. You can sample negatives from other patches within the same image, or you can take patches from other images. We have a version called all-neg, which takes all the possible negatives you can construct, namely all the patches in your whole mini-batch: your mini-batch is a bunch of images, each image is a bunch of patches, and you take all of them as negatives in this loss. That way you get a lot of negatives.

This whole stack of optimization, regardless of how you construct the negatives and positives, whether you use patches or not, is in general referred to as the InfoNCE loss: you construct contexts and targets, and you use contrastive objectives to optimize for the associations. The implementation is really parallel, because you can just use a PixelCNN with masked convolutions and make the predictions at every single spatial position with a 1x1 convolution.
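Here is a rough sketch of that fully parallel prediction step over the grids of embeddings; the function signature and the 1x1 prediction head are illustrative assumptions, not the released implementation:

```python
import torch
import torch.nn.functional as F

def spatial_infonce(pred_head, context, latents, offset=2):
    # pred_head: e.g. nn.Conv2d(D, D, kernel_size=1, bias=False), one per offset
    # context, latents: (B, D, H, W) grids of patch embeddings
    B, D, H, W = latents.shape
    pred = pred_head(context)[:, :, :H - offset]   # context at row i ...
    target = latents[:, :, offset:]                # ... predicts row i + offset
    pred = pred.permute(0, 2, 3, 1).reshape(-1, D)
    target = target.permute(0, 2, 3, 1).reshape(-1, D)
    scores = pred @ target.t()       # every other patch acts as a negative
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)   # matching pairs are the positives
```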
So the recipe for CPC v2 is this: train on unlabeled ImageNet, and train as long as possible. We trained for 500 epochs, which takes approximately a week. You augment every single local patch with a lot of spatial and color augmentations; like I already mentioned for Doersch's work on relative position prediction, a lot of spatial jitter is really useful, so we take that to the extreme in this work and use all possible augmentations. The effective number of negatives you have is the number of instances in your mini-batch times the number of patches per instance. Unlike the earlier work, which gridded the image into a 7x7 grid of 64 by 64 patches, we used much bigger patches and much bigger images: 280 by 280 images with 80 by 80 patches, which gives a 6 by 6 grid with a stride of around 36. That way the number of negatives is approximately 600. So we don't have a lot of negatives, but all these negatives are really hard, because they come from other patches within the same image; it's a mix of instance negatives and spatial negatives.
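As a rough sanity check on that count (the per-worker batch size here is my own assumption, purely for illustration):

```python
# A 6x6 grid gives 36 patches per image; with ~16 images in a batch,
# every prediction is contrasted against roughly 16 * 36 = 576 patches,
# i.e. the "approximately 600" negatives mentioned above.
batch_size, patches_per_image = 16, 6 * 6
num_negatives = batch_size * patches_per_image   # 576
```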
And it learns both kinds of discriminative features. This diagram illustrates the whole pipeline: you have the feature extractor, the ResNet-161 running on patches of images, and you train the self-supervised objective, which is CPC, for a long time. Once the model is trained, there are various ways to evaluate the features you learn. One is to put a linear classifier on top, as was done in the past. Another is to take the ResNet you uncovered and, instead of freezing the features, fine-tune it on a classification task. Which is to say, instead of training a linear classifier on all the available labels, what if you're allowed to put a small model on top and fine-tune the entire stack, in a situation where you're not given all the labels, just a small percentage of them? You're allowed to train on all the unlabeled data you have, but when you begin to perform supervised learning, you're given labeled data in varying fractions, not all of it, though we also have benchmarks where you get all of it. In general, imagine a scenario where you have to perform classification with as little as 1% of the labels, which is about 10 images per class on ImageNet. You can also do transfer learning, where you're given a completely different dataset: you take this ResNet, throw it at that dataset, and perform new tasks. That could be something like PASCAL, where you take the ResNet you got from CPC and throw it at object detection, just like regular computer vision benchmarking. That's basically the goal, and we'll see how all these things work out if you do a lot of engineering.
On the linear classifier score: remember that CPC v1 got 48.7%. CPC v2 gets 71.5%, which is significantly higher. Around that time, a lot of competitive approaches with really good linear classifier scores were also published: BigBiGAN, a large-scale BiGAN, pushed it up to 61%; AMDIM, another technique using something very similar to CPC, pushed it up to 68%. All these methods go for the same things: make your models as wide or as deep as possible, or both, optimize for a really long time, and use a lot of augmentations and careful engineering. CPC was the first method to improve all the way up to 70-plus. In the top rows of the table different methods use different encoders, so it's hard to see what is helping; on the bottom, everything uses the same ResNet-50 encoder so you can compare across methods, and CPC is better than all the existing methods. That was the story when CPC version 2 was published, but it's no longer the case. There's also a baseline in this table called momentum contrast, which we'll cover as the next topic; note that momentum contrast has also improved a lot beyond the numbers presented in this table.

On data-efficient image recognition, where you take the CPC features and fine-tune them for supervised learning while controlling the amount of labeled
data you have, CPC version 2 performs significantly better than just doing supervised learning. The red line means: for the corresponding percentage of labeled data, you just do supervised learning, taking a ResNet and training it on whatever labels you have. You can clearly see that works really well if you have all the labels, but as you reduce the amount of labeled data, the ResNet's performance keeps going down. Supervised learning is really data-hungry as far as the number of labels is concerned. Whereas if you do unsupervised learning on all the unlabeled data you have, and only collect labels for the corresponding percentage, you can see how much gain it gives you, especially in the low-data regime: you basically need eighty percent fewer labels to match the accuracy that supervised learning gets. With just ten images per class you're already close to 80% top-5 accuracy on image classification (top-5 being the standard set by AlexNet), and the supervised state of the art is matched with only around twenty to thirty percent of the labels.

So self-supervised learning lets you be very data-efficient for supervised learning, and it also means you need to hire far fewer data annotators: instead of collecting 10,000 labels you collect something like 2,000 or fewer, so your data annotation is much faster, because you already have a great set of features. And the most interesting thing is that even when you have all the labels, that is, 100% of them, your performance from pre-training and then fine-tuning is better than just doing supervised learning. So there is no argument for not using unsupervised learning: even if you have all the labels, the performance
you get by doing unsupervised learning and then performing supervised learning as a fine-tuning step is higher than what you get by supervised learning alone, and this is uniformly consistent across all the labeled-data regimes.

Here's a good graph for understanding how CPC version 1 was iterated into version 2; you can see how much each improvement on the x-axis added in terms of linear classification accuracy. LN refers to layer norm, which helps a lot. BU refers to bottom-up predictions in addition to top-down predictions, which is to say: instead of only predicting the bottom rows from the top, why not also do it the other way around? That also helps a lot. Augmenting every single patch, referred to as PA, helps a lot. And there are various other improvement axes you can refer to the paper for; for instance, using bigger patches helps a lot, and so on. So from 48-49 percent you're now able to get close to 72 percent just by focusing on the engineering details and doing large-scale optimization really well. That's really the success story of self-supervised learning: just do the simple things right and you get really good numbers.

This table shows the graph you just saw in numbers, and you can see that in every single data regime, even if you train the deepest possible architecture for the supervised baseline, a ResNet-
200, you're able to improve top-5 accuracy by 1% or more (1.3 percent) with self-supervised pre-training. You can also see that in the low-data regime the top-5 accuracies are so good that they're even better than methods that use very engineered semi-supervised pipelines, which we'll cover in a future lecture.

Using unlabeled data to improve the data efficiency of low-data-regime supervised learning is not specific to self-supervised learning; it can also be done with other methods like semi-supervised learning, and those numbers represent methods that use label propagation, pseudo-labeling, unsupervised data augmentation, and so forth. We won't cover those today, they'll be covered in a future lecture; they are also very interesting, but they involve more handcrafted details about how you do simultaneous loss optimization, whereas with self-supervised pre-training it's simple: you just train the model once and then use it
everywhere, so it's much more elegant that way.

Here are the final PASCAL VOC numbers, a benchmark people have cared about for transfer learning. For self-supervised learning there has always been this notion that it only counts as working if the features transfer to something other than classification, like object detection, on a dataset where you don't have a lot of labels, like PASCAL. For a long time people believed you could never beat the supervised baseline. If you look at the supervised baseline's mean average precision with a ResNet-152 backbone, you get 74.7 on PASCAL VOC 2007. You can look at all the self-supervised methods in the next rows, including momentum contrast, which was the first method to get above supervised, at 74.9, and a Faster R-CNN trained on the CPC version 2 features gets 76.6 mean average precision, which is even better. That goes to show that self-supervised learning can actually work even better than supervised learning for downstream tasks: even if you collect a lot of labeled data, you may not be able to reach these numbers, because these models are learning a lot more about the data.

So now that you've looked at the principle of contrastive predictions
and contrastive learning, and seen the benefits of it actually working at scale, more people interested in the image domain alone started looking at contrastive learning, and they asked: this contrastive learning is cool, but do we actually need all these patches? Patches are inherently hard to deal with, because when you grid an image into patches you increase your effective batch size by a lot, and even if you reduce your image size, it's not particularly good to pre-train with much smaller images and fine-tune with larger ones. Secondly, you want to be able to use batch norm during pre-training, and when you do something like CPC, using batch norm is much harder, because you don't want information to mix across positions in your PixelCNNs.

So people wanted a version of contrastive learning that works purely at the instance level, where the context is one version of your image, the target is another version of the same image, and the negatives are just any other images. In this case it could be: you take a picture of a dog, you perform one data augmentation on it, say converting it to grayscale, and you perform another data augmentation on it, say flipping it and taking a particular random crop, and any other image is a negative for this anchor-positive pair. So what does this actually learn? Say the legs are absent in one of the versions; you still have to learn that there's a dog there. Depending on the amount of random cropping and data augmentation you use, the level of cheating you can afford in identifying the two views as the same gets lower and lower, and therefore you're forced to learn good features to make sure you identify that the two different views presented to you are fundamentally the same thing, compared to any other image.
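A sketch of that two-view data pipeline; the exact augmentation list below is illustrative, not a particular paper's recipe:

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(224),       # random crop, resized back
    T.RandomHorizontalFlip(),       # e.g. the flipped dog view
    T.RandomGrayscale(p=0.2),       # e.g. the grayscale dog view
    T.ToTensor(),
])

def two_views(pil_image):
    # Same underlying image, two different "versions": the anchor-positive pair.
    # Any other image in the batch serves as a negative.
    return augment(pil_image), augment(pil_image)
```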
That way you're able to learn really rich features. It's very similar to the CPC idea done at a patch level, except that you're not doing any spatial prediction; all you're trying to do is identify another version of the same image. This is generally referred to as the principle of instance discrimination, and two recent papers have really taken it far: one is called MoCo, or momentum contrast, and the other is called SimCLR, a simple framework for contrastive learning of visual representations. We're just going to look at these two papers; a lot of other papers have competed with them in the recent past, but these two are the simplest and cleanest, and also the best performing in terms of state-of-the-art metrics.

First let's look at Momentum Contrast for Unsupervised Visual Representation Learning, a paper by Kaiming He, who also invented ResNets and worked on Faster R-CNN, Mask R-CNN, and so forth. The way it works is as follows: you characterize contrastive learning as a dictionary lookup task. You're saying: I want to identify whether two things are the same, so I present one thing treated as a query, and whatever I want to pair it with is present among a bunch of keys, along with lots of other keys that serve as negatives, and I want to identify the right positive among this bunch of keys. You encode your query, you encode all your keys, and you compute the pairwise similarities. You know the true target, because you know the ground truth for which key is the other augmentation of the same image, and then you just build the contrastive loss and backprop. That's basically the idea of instance discrimination. So where does the momentum come in? The
idea of momentum is that you can use a slowly varying version of the same encoder: you have an encoder used to encode your queries, but your keys are encoded with a Polyak-averaged, historically averaged, version of that encoder. This gives you several benefits. One benefit is that it lets you use a lot of negatives without drawing them all from your current mini-batch.
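The momentum update itself is just an exponential moving average of the query encoder's weights; here is a minimal sketch of that mechanism (the momentum value is illustrative):

```python
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    # Key encoder trails the query encoder slowly: k <- m*k + (1-m)*q.
    # Only the query encoder receives gradients; the key encoder is updated
    # by this averaging step instead of backprop.
    for q, k in zip(query_encoder.parameters(), key_encoder.parameters()):
        k.data.mul_(m).add_(q.data, alpha=1 - m)
```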
If all your negatives come from your own mini-batch, then the number of negatives is limited by your batch size, which means you need a large batch size to be really effective, because you need a lot of negatives now that you're doing things at the instance level.