Columns: video_id (string, 11 chars) · text (string, 361–490 chars) · start_second (int64, 0–11.3k) · end_second (int64, 18–11.3k) · url (string, 48–52 chars) · title (string, 0–100 chars) · thumbnail (string, 0–52 chars)
MpdbFLXOOIw
them on the same side of the decision boundary while having everything else on a different side right so the task here is implicitly making the task for the later classifier harder by pushing apart samples that should be of the same class and this is not happening if you introduce labels to the pre-training objective that's what they do the supervised contrastive
838
865
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=838s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
objective now still all you want to do is here we're going to draw the same embedding space and we're going to draw this original dog image and we're going to draw the augmented version of the original dog image but now we also have the following we also have these images which are images of the same class so we're going to put them in black here and let's say the augmented versions
865
893
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=865s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
around them in smaller black dots the augmented versions of those right you can augment them as well and then you have the negative samples and the negative samples are not just any images but images of different classes so you just go over your mini batch and everything that's of the same class becomes positives including their augmentations and everything that is not
893
920
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=893s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
in the same class becomes negatives and you can augment them as well so now we have a bunch of things in our embedding space and our objective is simply going to be again we want to push away all the images that are not of the same class as our red original image which is called the anchor so all of this needs to be pushed away but now we want to pull
920
946
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=920s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
together all the augmented versions of the original image but also we want to pull together all of the other images of the same class including their augmented versions so all of this is going to be pulled together so not only does the network learn about these augmentations which again for this idea the augmentations aren't even necessary the network learns a
946
970
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=946s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
representation space where images of the same class are close together which again is going to make the task of later linear classifiers that need to separate this class from other classes very very easy and again the other images aren't just going to be pushed away but if they're from the same class let's say this and this image are from the same class all of those are going to
970
992
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=970s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
be pushed apart from the red dot but by themselves be pushed together into their own cluster here of their own class I hope this makes sense and I hope the difference to the cross-entropy objective is sort of clear the cross-entropy objective from the beginning just cares about which side of the decision boundary you're on while this pre-training objective first cares to
992
1,021
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=992s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
put things close together that are in the same class and then the decision classifier will have a much easier time the reason why this works better, well, it's not entirely clear from the beginning why this should work better because it's working with the same information, it's just that people have generally found that these contrastive
1,021
1,047
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1021s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
pre-training objectives just are somewhat better at exploiting the information in the data set than if you just hammer on it with the cross-entropy loss from the beginning but it is not fully explained yet why this works better as it's working with the same data again the difference here is that the previous methods of contrastive pre-training the
1,047
1,076
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1047s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
self supervised ones did not have access to the labels and the advantage of that is you can have a giant database of unlabeled additional data that you do the pre-training on whereas here we do the pre-training including the labels so here the label dog is an intrinsic part because we need to know which of these samples we need to pull together but that also means we cannot leverage the
1,076
1,105
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1076s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
fact that we might have more unlabeled data and unlabeled data is pretty cheap to obtain so those are the advantages and disadvantages here so this new loss so they do compare this here and usually in these contrastive objectives you have something like two encoders one to encode the anchor and one to encode the augmented versions and this one is like a momentum encoder with shared
1,105
1,135
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1105s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
weights and so on all of this isn't really important if you want to look into that look into papers like Momentum Contrast or I did one on CURL for reinforcement learning I think the general gist of it is clear so they compare the formulation of their loss to the self supervised one usually it takes the form of things like this so the z_i here is the anchor and then the z_j(i)
1,135
1,167
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1135s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
would be the positive example and you see here that the inner product between the anchor and the positive example sorry about that the inner product should be high because here the loss is the negative of whatever is here so if you minimize the loss you say I want the inner product between my anchor and whatever is the positive sample to be high and for everything else here which includes the
1,167
1,198
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1167s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
thing on the top but also everything else I want the inner product to be low which is exactly the thing where you pull together the positives and you push apart everything else that is the standard objective that you had before they extend this but it looks almost the same so compared to the unsupervised objective now first of all they extend
1,198
1,228
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1198s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
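As an aside to the loss being described here: written out, the standard self-supervised contrastive objective has roughly this form, with z_i the anchor embedding, z_j(i) its augmented positive, A(i) the other samples in the batch, and tau a temperature (this is my sketch of the notation, not a verbatim reproduction from the paper):

```latex
\mathcal{L}^{\text{self}} \;=\; \sum_{i} \; -\log
\frac{\exp\!\big(z_i \cdot z_{j(i)} / \tau\big)}
     {\sum_{a \in A(i)} \exp\!\big(z_i \cdot z_a / \tau\big)}
```

Minimizing this pushes the anchor-positive inner product up (the numerator) and every other inner product down (the denominator), which is exactly the pulling together and pushing apart described in the transcript.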
MpdbFLXOOIw
this such that you can have more than one positive sample now this is also possible in the unsupervised way so they just augment it by this and they also now this is the crucial part they include the labels into the pre-training objective so they say everywhere where i and j have the same label the inner product should be maximized so they should be pulled together while everything else
1,228
1,257
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1228s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
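To make the label-aware version concrete, here is a minimal sketch of a supervised contrastive loss over one batch in the spirit of what is described above (the function name, the temperature value, and the choice to average over positives outside the log are my own illustrative assumptions, not the authors' reference code):

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Sketch: z is an (N, D) batch of embeddings, labels an (N,) tensor of
    integer classes. Positives for each anchor are all other samples with the
    same label (augmented views would simply appear as extra rows)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                       # pairwise scaled inner products
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)

    # softmax-style denominator over everything except the anchor itself
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))

    # positives: same label, excluding the anchor itself
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    n_pos = pos_mask.sum(dim=1).clamp(min=1)

    # average log-probability over each anchor's positives, then over anchors
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / n_pos
    return loss.mean()
```

With all labels distinct and exactly one augmented copy per image, this reduces to the self-supervised form sketched earlier.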
MpdbFLXOOIw
is being pushed apart yes so they say we generalize to an arbitrary number of positives and they also say contrastive power increases with more negatives I think that's just a finding that they have that when they add more negatives so when they increase the batch size the contrastive power increases they do analyze their gradient which I find pretty neat you can already see that if
1,257
1,293
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1257s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
you formulate a loss of course the gradient is going to go in the negative direction but they make it clear that if you look at the gradient for the positive cases what appears is this 1 - P_ij quantity and the P_ij quantity is exactly the inner product between i and j normalized of course so the gradient is going to point into the negative direction of that for the
1,293
1,320
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1293s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
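For reference, the quantity referred to as P_ij is roughly the following (again a sketch of the derivation as I read it, not a verbatim reproduction):

```latex
P_{ij} \;=\; \frac{\exp\!\big(z_i \cdot z_j / \tau\big)}{\sum_{a \in A(i)} \exp\!\big(z_i \cdot z_a / \tau\big)},
\qquad
\text{gradient contribution of a positive pair } (i,j) \;\propto\; \big(1 - P_{ij}\big)
```

So an easy positive (z_i · z_j close to 1, P_ij large) contributes almost nothing, while a hard positive (z_i · z_j close to 0, P_ij small) contributes a larger gradient, which is the hardness argument made next.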
MpdbFLXOOIw
positives which means you're going to pull them together and it's going to push into this direction for the negative classes which means you push them apart and they also analyze what happens in relation to hardness so they say if you just look at the positive samples there are two kinds there are easy positives where the
1,320
1,347
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1320s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
network has already learned to match them closely where the inner product is almost one if you look at them that means the P_ij quantity is large right because that is basically the inner product and you look at this term this term is exactly what we saw in the gradient then you see that this here since this is one this entire thing is zero this is also high this is close
1,347
1,375
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1347s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
to one so this entire thing is zero this is almost zero but if you have a hard positive where the network hasn't learned yet to align the inner product properly or align the representation properly then the angle between the things again these are normalized is such that they are approximately orthogonal so the gradient magnitude is going to be this here is going to be
1,375
1,407
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1375s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
approximately 0 so this is close to 1 and this here since this is also 0 is also close to 1 so this is going to be larger than 0 which means that their loss focuses on the examples that the network cannot yet represent well according to their objective which makes sense first of all but second of all that is exactly the same thing as in the
1,407
1,436
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1407s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
cross entropy loss if you look at the cross entropy loss and you have a situation where the network is really good already for a given sample so it already puts a dog into the dog class then the gradient will not be pulling much for that sample it mainly focuses on where you're still wrong so I appreciate the analysis but it is not a notable
1,436
1,463
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1436s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
difference I think what they want to show is that their loss if you do gradient descent really does what it is supposed to do namely first of all it does this pulling together and pushing apart of inner products for the positive and negative samples and it mainly focuses on samples where you have not yet found a good representation to align them with others it focuses on
1,463
1,490
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1463s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
pairs that are not yet correctly close together or far apart they also connect this to the triplet loss where they can show after some approximation that if their loss only has one positive and one negative sample it is going to be proportional to the triplet loss the triplet loss is basically where you have an image and you find one positive which I think is going to be of the same
1,490
1,519
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1490s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
class right here and you find one negative of a different class and you try to push those apart while pulling those together the problem here they say is the problem of hard negative sampling in order for this to make sense you need the negative sample to be what's called a hard negative sample so this is called hard negative mining because you only have one negative sample you better make
1,519
1,547
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1519s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
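For comparison, the triplet loss being referred to is the standard one, with anchor x_a, a positive x_p of the same class, a negative x_n of a different class, and a margin alpha:

```latex
\mathcal{L}_{\text{triplet}} \;=\; \max\!\Big(0,\; \big\lVert f(x_a) - f(x_p) \big\rVert^2 \;-\; \big\lVert f(x_a) - f(x_n) \big\rVert^2 \;+\; \alpha\Big)
```

With only one positive and one negative per anchor, this is where the hard negative mining issue discussed here comes from.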
MpdbFLXOOIw
this something the network can learn from right and if it's too easy the network can't learn anything and thereby you have the problem of hard negative mining where you often have to filter through your mini batch or even through your data set to find a good negative sample to go along with this pair of positive samples but I don't really see how their
1,547
1,570
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1547s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
method solves that except that it has a bunch of positive and negative samples which I guess you could also apply to the triplet loss so there's not really a difference here again if your method is a contrastive method you do have the problem that if you simply sample at random your negative samples are going to become easier and easier over the
1,570
1,597
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1570s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
course of training and you get the problem that at some point you're going to have to actively sample hard negatives I think this paper just gets around it by having huge batch sizes so yeah but again they do get state-of-the-art on ImageNet for these types of networks and augmentation strategies and they do look at how their loss appears to be more hyperparameter stable so if they change
1,597
1,628
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1597s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
out the augmentation if they change the optimizer or the learning rate you can see here that the spread in accuracy is much smaller than for the cross entropy loss except here but it is hard to compare variances of things that don't have the same means in terms of accuracy so take this on the right here with a grain of salt they also evaluate this on corrupted
1,628
1,656
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1628s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
ImageNet so there's an ImageNet data set which has several levels of corruption of the data and you can see the accuracy goes down but the accuracy for the cross entropy loss goes down faster than for the supervised contrastive loss you see they start together like this and they go further apart now it is not clear to me whether that's just an effect like if you just
1,656
1,684
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1656s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
trained the supervised contrastive loss also to this level whether it would fall off at the same speed or whether because it is the supervised contrastive loss it would kind of match that curve it's not clear whether that's really an effect of the difference of the losses or just an effect of the fact that they aren't at the same accuracy to begin with again with this kind of shifting you can't really
1,684
1,712
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1684s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
compare things that have different means in the first place but it is an interesting finding that their method is more stable to these corruptions I just want to point out at the end their training details and just highlight they train for up to seven hundred epochs during the pre-training stage which is I think standard but mad and they trained models with batch
1,712
1,738
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1712s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
sizes up to 8192 so you need like a super TPU cluster to run these kinds of things and I am never entirely trusting of numbers like this even though it's kind of a good improvement it is still like a 1% improvement and with these small numbers I just feel there might be a big effect of things like batch sizes and how much
1,738
1,776
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1738s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
compute you put into it and what else you're doing there might be so much influence of that that I first want to see this replicated multiple times across the entire field before I'm going to really trust that this is a good thing to do alright so I hope you like this if you're still here thank you consider subscribing if you have a comment please leave it I usually
1,776
1,803
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=1776s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
1L83tM8nwHU
hi there today we're looking at manifold mixup better representations by interpolating hidden states by Vikas Verma et al a number of big names on this paper as you can see and I also saw this at ICML and I was intrigued by it they propose manifold mixup which is sort of a regularizer of neural networks specifically of supervised learning and it's actually a pretty simple concept
0
32
https://www.youtube.com/watch?v=1L83tM8nwHU&t=0s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
and they kind of show that it has some nice properties and outperforms other regularizers so what's the problem the problem is that if you look at this spiral problem here which is often used to show properties of neural networks what you have are blue points and the blue points are one class and the red points are another class you see the two classes here are in this kind of
32
62
https://www.youtube.com/watch?v=1L83tM8nwHU&t=32s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
spiral pattern and the data space is just two-dimensional so you see here this is one class this is the other class this is pretty difficult for a model to learn because of course the easy models would be like linear classifiers but there's no way to put a line through this such that one class is mostly on one side so neural networks if you train them they will
62
88
https://www.youtube.com/watch?v=1L83tM8nwHU&t=62s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
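If you want to reproduce this kind of two-class spiral toy problem, a minimal sketch looks like the following (the parameters and the point-reflection construction are my own illustrative choices, not the authors' exact data):

```python
import numpy as np

def two_spirals(n_per_class=500, noise=0.2, seed=0):
    """Generate a toy two-class spiral dataset: one spiral per class,
    the second being the first rotated by 180 degrees."""
    rng = np.random.default_rng(seed)
    t = np.sqrt(rng.uniform(0.0, 1.0, n_per_class)) * 3 * np.pi  # angle along the spiral
    spiral = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)    # radius grows with the angle
    x = np.concatenate([spiral, -spiral])
    x += rng.normal(0.0, noise, x.shape)                         # jitter the points
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)]).astype(int)
    return x, y
```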
1L83tM8nwHU
give you something like you see here they will try to kind of bound the regions with the red points from the blue points but then you get you know some weird things like here is a weird thing here is a weird thing so you'd imagine a correct model would actually classify this area as blue but the neural network has no concept of let's say the idea that the spiral should
88
116
https://www.youtube.com/watch?v=1L83tM8nwHU&t=88s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
continue thus it simply sees here's blue here's blue here's a bit of a gap in the training data so in this case it assigns a red class to it this is one problem that the decision boundaries are rather let's say squiggly and irregular and the second one if you look at the actual colors full blue means very confident blue class full red means
116
142
https://www.youtube.com/watch?v=1L83tM8nwHU&t=116s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
very confident red class and in between you kind of see it going into the white so if you look very closely I can actually zoom in more here if you look very closely you'll see that the blue gets lighter and lighter until it reaches white and from here the red goes lighter and lighter until it reaches white and white means not confident white means like 50/50 you see that the
142
167
https://www.youtube.com/watch?v=1L83tM8nwHU&t=142s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
area of non-confidence is actually very small right if you consider a point here it is actually still very confident that it's a blue point and the area of non-confidence is very small even though maybe as humans we would judge a relatively large band in the middle to be not confident like if we get a point like this and the third problem is that you can see in multiple locations like
167
199
https://www.youtube.com/watch?v=1L83tM8nwHU&t=167s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
here or here or here that the decision boundary is very close to the data points unnecessarily close so especially if you look here the decision boundary could be much more optimally placed probably something like this right given the training data but the neural networks because they only see training data basically have no incentive to do this all right one might
199
232
https://www.youtube.com/watch?v=1L83tM8nwHU&t=199s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
think of you know something like a support vector machine that actually has an incentive to put the decision boundary away from the training data but neural networks currently are not SVMs they're basically logistic regressions and as such have no incentive to do this so these are the problems now this is the input space but if you look at
232
261
https://www.youtube.com/watch?v=1L83tM8nwHU&t=232s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
the hidden space so they build neural networks specifically they have the 2d input and then that goes through a bunch of layers and then at one point there's a bottleneck layer with just two hidden nodes and then I guess that goes on again and then it goes into a classifier so in this bottleneck layer they analyze the hidden representations of the data points and
261
284
https://www.youtube.com/watch?v=1L83tM8nwHU&t=261s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
in this case for this spiral dataset what happens is so in red you see again the red class in blue the blue class it's 2d so you can plot it what it does is it bunches up the hidden representations quite a bit so it bunches them kind of up it spreads them out in directions here here and here most are bunched up here and it does these kind of weird arrangements here with the
284
311
https://www.youtube.com/watch?v=1L83tM8nwHU&t=284s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
pockets of those and of course the neural network is powerful enough such that it can actually you know separate all of this from each other but it's not ideal and the black dots they represent kind of points in between or points from the input space that are not part of the training data so they say they sample uniformly in the range of the input space you see that the black dots are
311
337
https://www.youtube.com/watch?v=1L83tM8nwHU&t=311s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
all over the place right some are confident blue some are confident red some are somewhere in between right what you would expect from a good model is that if you input something that's kind of in between or not really sure not even part of the input distribution that it assigns a low confidence to it that it says well I'm not sure about this this must be somewhere in the middle so
337
363
https://www.youtube.com/watch?v=1L83tM8nwHU&t=337s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
just to jump forward to the results what does manifold mixup do without knowing what it is on the same data set it gives you a picture like this you see the decision boundaries are much more smooth right the region of no confidence or of low confidence indicated by the light color is much larger and also the decision boundary here we had specifically this data point here you
363
392
https://www.youtube.com/watch?v=1L83tM8nwHU&t=363s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
see the decision boundary is pushed away well you could argue about that particular point but the decision boundary is generally pushed away from the data points you also see no more of these kind of squiggles here it doesn't happen here also if you look at the hidden representations the hidden representations now are spread out the classes are bunched up so not all the
392
422
https://www.youtube.com/watch?v=1L83tM8nwHU&t=392s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
points are bunched up but the points of individual classes are bunched up together and the randomly sampled points are in the middle as they should be so only confident red is down here confident blue is up here and everything in between is unconfident and third if you look at the singular value decomposition of the hidden layer and that's kind of a measure of how spread
422
454
https://www.youtube.com/watch?v=1L83tM8nwHU&t=422s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
out in the different dimensions a dataset is you see that manifold mixup here in green concentrates or rather it lowers the lower singular values so the first singular value is large which means that there is like a dominant direction in the data and this is done for each class separately as I understand it it puts a lot of weight on
454
488
https://www.youtube.com/watch?v=1L83tM8nwHU&t=454s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
the first singular vector and then it pushes down the contributions of the other singular vectors which means that the data set that is analyzed is concentrated into fewer directions of variance this is layer one and here is layer three so you see it happens in both manifold mixup does this compared to the baseline model so now you might ask what is manifold
488
520
https://www.youtube.com/watch?v=1L83tM8nwHU&t=488s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
mixup it's actually a pretty simple concept right here they compare it to other kinds of regularization techniques and show that none of them really does this so manifold mixup is this basically what you do is when you train a neural network you have input data and you take mini batches of input data specifically you take two mini batches x and y and x
520
554
https://www.youtube.com/watch?v=1L83tM8nwHU&t=520s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
prime y prime all right and then what you do is if I draw the neural network here so here is the input like a picture of a cat it goes through layers right and then what you do is you say at some particular layer you say stop right you take the representation out and you do this with two different mini batches so here this is cat one and down
554
586
https://www.youtube.com/watch?v=1L83tM8nwHU&t=554s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
back here is cat two say a dog you pass it in right here you take it out here you pass it through the network and you take it out so you now have two different forward paths of two different mini batches and then you define a lambda and I guess they randomly sample a lambda in the range of 0 to 1 so this is a mixing coefficient and then you mix you
586
622
https://www.youtube.com/watch?v=1L83tM8nwHU&t=586s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
say lambda times the hidden representation of batch 1 plus 1 minus lambda times the hidden representation of batch 2 and that is what you pass through the rest of the network right so basically you forward propagate two different batches until a certain layer here then you mix them with a random coefficient and then you pass it through the rest and then the only thing you also have to do is then
622
658
https://www.youtube.com/watch?v=1L83tM8nwHU&t=622s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
at the end if you think of the labels of these two things you want to mix the labels in the same fashion so you want to mix lambda times y of batch 1 plus 1 minus lambda times y of batch 2 and then this is your training signal for whatever comes out here right so these are one hot labels so if it's class three it's 0 0 1 0 0
658
692
https://www.youtube.com/watch?v=1L83tM8nwHU&t=658s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
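Written as equations, the mixing described here is, for a chosen layer k with hidden representation h^(k)(·), a mixing coefficient lambda, and one-hot labels y and y' for the two mini-batches:

```latex
\tilde{h} \;=\; \lambda\, h^{(k)}(x) \;+\; (1-\lambda)\, h^{(k)}(x'),
\qquad
\tilde{y} \;=\; \lambda\, y \;+\; (1-\lambda)\, y'
```

The mixed hidden representation is passed through the remaining layers and the mixed label is used as the training target.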
1L83tM8nwHU
and if y two is class five it's 0 0 0 0 1 and then you simply mix the two alright and that becomes your training signal so in a practical example let's just have a mini batch size of one so just one sample if this is cat and this is dog you would pass them forward right you would mix so in the hidden representation it would kind of become a cat dog maybe you do it
692
721
https://www.youtube.com/watch?v=1L83tM8nwHU&t=692s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
50/50 but then you would also mix the labels of cat and dog 50/50 and tell the net well this is a mixture of 50% cat 50% dog and then you would train the network to predict that 50/50 coefficient so they do this the question is at which layer do you do this and they simply I think for each mini batch sample one hidden layer at random they might have some weighting or something
721
750
https://www.youtube.com/watch?v=1L83tM8nwHU&t=721s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
but the way they describe it is they simply sample one layer at random per mini batch and then do the mixing there and then you can actually backprop through everything everything is differentiable this mixing is differentiable so you can backprop through everything and there's even you know a kind of an engineering trick to only use a single mini batch by mixing it with itself so
750
773
https://www.youtube.com/watch?v=1L83tM8nwHU&t=750s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
that's pretty neat so this is manifold mixup as you can see here that's kind of the description you mix the hidden representations with lambda and you mix the labels with the same lambda and that will become your actual training signal all right so they give some theory to it that it flattens representations and specifically they say under some
773
803
https://www.youtube.com/watch?v=1L83tM8nwHU&t=773s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
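A minimal sketch of one manifold mixup training step, assuming a network split into a lower and an upper part at a single fixed mixing layer (the paper samples the layer at random per mini-batch and draws lambda from a Beta distribution; the module names, layer sizes, and the single-batch shuffling trick below are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoPartNet(nn.Module):
    """Toy network split at the layer where mixing happens."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.lower = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
        self.upper = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                   nn.Linear(128, num_classes))

def manifold_mixup_loss(model, x, y, num_classes):
    y_onehot = F.one_hot(y, num_classes).float()

    # second "mini-batch": a shuffled copy of the first (single-batch trick)
    perm = torch.randperm(x.size(0))

    # mixing coefficient; the video describes it as uniform in [0, 1],
    # the paper samples it from Beta(alpha, alpha)
    lam = torch.rand(1).item()

    h = model.lower(x)                                      # forward up to the mixing layer
    h_mix = lam * h + (1 - lam) * h[perm]                   # mix hidden representations
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]     # mix labels with the same lambda

    logits = model.upper(h_mix)                             # forward through the rest
    return -(y_mix * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# usage sketch: loss = manifold_mixup_loss(TwoPartNet(), images, labels, num_classes=10)
# followed by loss.backward(); everything, including the mixing, is differentiable.
```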
1L83tM8nwHU
conditions namely if the network is large enough so if the dimension of the hidden representation is of a certain size then if you optimize this manifold mixup objective like if you optimize over every lambda over the entire training data set what you will end up with is actually a linear function of the input this is not too surprising because what you do is you mix linearly
803
837
https://www.youtube.com/watch?v=1L83tM8nwHU&t=803s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
this mixture happens in a linear fashion so if you not only optimize for the training set but you optimize for every possible mixture of the training set a linear mixture your minimizer function will actually become a linear function it's not surprising but they have a formal proof of this and they also have a proof that if certain assumptions are
837
870
https://www.youtube.com/watch?v=1L83tM8nwHU&t=837s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
given then if you apply the minimizers the hidden representations will actually fall on a low dimensional subspace which is also not surprising but it's kind of the theoretical analogue to what they show with the singular value distribution that it basically suppresses low singular values that means the data set is concentrated much more in a single direction the hidden
870
898
https://www.youtube.com/watch?v=1L83tM8nwHU&t=870s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
representations sorry all right so the theory part you can read it if you want to the results are to be expected I would say from what they do and the last thing they give is a pictorial example of why manifold mixup flattens representations so both of these things the fact that the minimizers will become linear functions
898
931
https://www.youtube.com/watch?v=1L83tM8nwHU&t=898s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
and the fact that the singular value spectrum is more concentrated on the first singular values mean that the representations are flattened and here is a pictorial representation so in this case what happens if you basically have these four data points a1 a2 b1 and b2 where a1 and a2 are the blue class and b1 and b2 are the red class and if you now look at an interpolation point between
931
973
https://www.youtube.com/watch?v=1L83tM8nwHU&t=931s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
the two so if you look at this interpolation point between a1 and b2 what happens is that in this case this should be 50/50 blue and red but if you now look at the points where it's not interpolated on this is very close to a2 in this case it probably should be more like 95 blue and 5 red so they say here well if you use manifold mixup to learn the network what you'll
973
1,009
https://www.youtube.com/watch?v=1L83tM8nwHU&t=973s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
actually do is you say ok actually this hidden representation needs to be pushed outward and you will achieve something over here where any mixture of two points of the opposite class will actually give you a 50/50 so all the midpoints here will give you a 50/50 mixture between the labels which basically means what you end up with is a line between this data and this data
1,009
1,046
https://www.youtube.com/watch?v=1L83tM8nwHU&t=1009s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
and it means that basically the network becomes more linear and the representations become more flat because flat is optimal if the points are distributed flat all the distances to the line are the same and this objective is optimized and this is basically my biggest problem with the method it kind of mixes the input with a linear function where we know
1,046
1,082
https://www.youtube.com/watch?v=1L83tM8nwHU&t=1046s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
that that is kind of not the shape of the true data manifold the input manifold as you can see here isn't linear or flat it's actually very very tangled and we know that neural networks as you continue in the layers will flatten those representations because ultimately at the end it needs to classify the dataset linearly because the last layer is a
1,082
1,114
https://www.youtube.com/watch?v=1L83tM8nwHU&t=1082s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
softmax layer but the idea that you could apply this to any layer seems a bit shady to me of course it works and they show it works and it's really nice that it works but applying this to low layers of neural networks seems a bit unprincipled to me so I think this is not the end of the story of this line of work and there is kind of more that can be done in a more principled fashion
1,114
1,146
https://www.youtube.com/watch?v=1L83tM8nwHU&t=1114s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
but in any case they show that this actually works in terms of performance on generalization on kind of standard data sets so they have results on CIFAR-10 and CIFAR-100 which are famous image data sets and they show that their regularizer outperforms others and they also show that they can withstand one-step single step adversarial attacks better so they have
1,146
1,184
https://www.youtube.com/watch?v=1L83tM8nwHU&t=1146s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
better performance against single step adversarial attacks after regularizing mostly again giving kind of the idea that if you have two points this is x1 and this is x2 and they're of different classes if you put the decision boundary really close to x2 then an adversarial attack can simply move the point across the decision
1,184
1,216
https://www.youtube.com/watch?v=1L83tM8nwHU&t=1184s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
1L83tM8nwHU
boundary with a very small step but if you actually have the decision boundary pushed away from both data points then an adversarial attack must go a very long way to the decision boundary and thus if you limit the size of adversarial attacks which is what you usually do you can maybe not reach this decision boundary and thus you mitigate some of the problem so it's pretty cool
1,216
1,247
https://www.youtube.com/watch?v=1L83tM8nwHU&t=1216s
Manifold Mixup: Better Representations by Interpolating Hidden States
https://i.ytimg.com/vi/1…axresdefault.jpg
mPFq5KMxKVw
you've used a pre-trained model to make predictions that gives you great results if you want to classify images into the categories used by the original models but what if you have a new use case and you don't categorize images in exactly the same way as the categories for the pre trained model for example I might want a model that can tell if a photo was taken in an urban area or a rural
0
23
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=0s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
area my pre-trained model doesn't classify images into those two specific categories we could build a new model from scratch for this specific purpose but to get good results we'd need thousands of photos with labels for which are urban and which are rural something called transfer learning will give us good results with far less data transfer learning takes what a model learns while
23
46
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=23s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
solving one problem and applies it to a new application remember that early layers of a deep learning model identify simple shapes later layers identify more complex visual patterns and the very last layer makes predictions so most layers from a pre trained model are useful in new applications because most computer vision problems involve similar low-level visual patterns so we'll reuse
46
73
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=46s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
most of the pre-trained ResNet model and just replace that final layer that was used to make predictions some layers before that in the pre-trained model may identify features like roads buildings windows open fields etc we'll drop in a replacement for the last layer of the ResNet model this new last layer will predict whether an image is rural or urban based on the results of that
73
96
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=73s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
previous layer let's look at this a little closer here we see that the ResNet model has many layers we cut off the last layer the last layer of what's left has information about our photo content stored as a series of numbers in a tensor it should be a one-dimensional tensor which is also called a vector the vector can be shown as a series of dots each dot is called a
96
121
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=96s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
node the first node represents the first number in the vector the second node represents the second number and so on practical models have far more nodes than we've shown here we want to classify the image into two categories urban and rural so after the last layer we keep of the pre-trained model we add a new layer with two nodes one node to capture how urban the photo is and another to
121
149
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=121s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
capture how rural it is in theory any node in the last layer before prediction might inform how urban it is so the urban measure can depend on all the nodes in this layer we draw connections to show that possible relationship for the same reason the information at each node might affect our measure of how rural the photo is so our structure looks like this we have a lot of
149
173
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=149s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
connections here and we'll use training data to determine which nodes suggest an image is urban which suggest it is rural and which don't matter that is we'll use data to train the last layer of the model in practice that training data will be photos that are labeled as either being urban or rural we'll cover more mathematical detail on this training step in a later video
173
197
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=173s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
notice that we allow all features from one layer to influence or be connected with a prediction layer when this happens we describe the last layer as being a dense layer one other note when classifying something into only two categories we could get by with only one node at the output in this case a prediction of how urban a photo is would also be a measure of how rural it is if
197
221
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=197s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
a photo is 80 percent likely to be urban its twenty percent likely to be rural but we've kept two separate nodes at the output layer using a separate node for each possible category in the output layer will help us transition into cases when we want to predict with more than two categories in both the current case and the case with more categories we'll get a score for each category and then
221
245
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=221s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
apply a function called softmax the softmax function will transform the scores to probabilities so they'll all be positive and they'll sum to one we could then work with those probabilities however we want let's see it in code we'll introduce two new classes from Keras first is Sequential this is just saying we're going to have a model that's a sequence of layers one after
245
272
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=245s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
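As a quick illustration of what softmax does to a pair of scores (the numbers here are made up):

```python
import numpy as np

def softmax(scores):
    # subtract the max for numerical stability; the result is positive and sums to 1
    exps = np.exp(scores - np.max(scores))
    return exps / exps.sum()

print(softmax(np.array([2.0, 0.5])))  # -> roughly [0.82, 0.18], e.g. "82% urban, 18% rural"
```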
mPFq5KMxKVw
the other there are some exotic models that don't fit into this structure and we'll get to other types of models later for now all models you would want to build are sequential we'll also want to add a dense layer so we import that in this application we classify photos into two categories or classes urban and rural we'll save that as num classes now we build the model we set up a sequential
272
300
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=272s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
model that we can add layers to first we add all of a pre-trained ResNet 50 model we've written include top equals false this is how we specify that we want to exclude the layer that makes predictions into the thousands of categories used in the imagenet competition we'll also use a file that doesn't include the weights for that last layer we hand this argument pooling equals average that
300
327
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=300s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
says that if we had extra channels in our tensor at the end of this step we want to collapse them to a 1d tensor by taking an average across channels we'll come back to the intricacies of pooling in a later lesson but now we have a pre-trained model that creates the layer you saw in the graphic we'll add a dense layer to make predictions we specify the number of nodes in this
327
352
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=327s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
layer which in this case is the number of classes like we talked about earlier then we say we want to apply the softmax function to turn it into probabilities we'll tell TensorFlow not to train the first layer which is the ResNet 50 model because that's the model that was already pre-trained with the ImageNet data now we'll get to a more complex line of code the compile command I'll
352
378
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=352s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
describe the broad concept here and we'll give a more complete explanation of the underlying theory in a couple videos the compile command tells TensorFlow how to update the relationships in the dense connections when we're doing the training with our data we have a measure of loss or inaccuracy we want to minimize we specify it as categorical cross entropy in case you are familiar with log
378
401
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=378s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
loss this is another term for the same thing we use an algorithm called stochastic gradient descent to minimize the categorical cross entropy loss function again we'll cover this in our theory video we ask it to report the accuracy metric that is what fraction of predictions were correct this is easier to interpret than categorical cross entropy scores so it's nice to print it
401
426
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=401s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
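Pulling the steps just described into one place, here is a sketch of the model-building and compile code in the older Keras/TensorFlow style the video uses (the variable names are illustrative, and weights='imagenet' downloads the standard weights rather than the local weights file mentioned in the video):

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

num_classes = 2  # urban and rural

my_new_model = Sequential()
# pre-trained ResNet50 base without its final prediction layer;
# pooling='avg' collapses the remaining channels to a 1-D vector
my_new_model.add(ResNet50(include_top=False, pooling='avg', weights='imagenet'))
# new dense prediction layer with one node per class and a softmax
my_new_model.add(Dense(num_classes, activation='softmax'))

# the ResNet50 base is already trained on ImageNet, so freeze it
my_new_model.layers[0].trainable = False

my_new_model.compile(optimizer='sgd',
                     loss='categorical_crossentropy',
                     metrics=['accuracy'])
```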
mPFq5KMxKVw
out and see how the model is doing our raw data is broken into a directory of training data and a directory of validation data within each of those we have one subdirectory for the urban pictures and another for the rural pictures Keras provides a great tool for working with images grouped into directories by their label this is the ImageDataGenerator there are two steps
426
452
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=426s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
to using ImageDataGenerator first we'll create a generator object in the abstract we'll tell it that we want to apply the ResNet preprocessing function every time it reads an image we used this function before to be consistent with how the ResNet model was created then we use the flow from directory command we tell it what directory the data is in what size image we want how
452
479
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=452s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
many images to read in at a time and we tell it we're classifying data into different categories we'll add batch size to our list of information covered in the upcoming theory video for now assume you want categorical class mode almost every time we do the same thing to set up a way to read the validation data that creates a validation generator the ImageDataGenerator is especially valuable
479
505
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=479s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
when working with large data sets because we don't need to hold the whole data set in memory at once but it's nice here even with a small data set now we fit the model we tell it the training data comes from the train generator we said to read 12 images at a time and we have 72 images so we'll go through 6 steps of 12 images then we say that validation data comes from the validation
505
532
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=505s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
generator the validation generator reads 20 images at a time and we have 20 images of validation data so we can use just one step as the model training is running we'll see progress updates showing our loss function and the accuracy it updates the connections in the dense layer that is the model's impression of what makes an urban photo and what makes a rural photo and it makes those updates
532
557
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=532s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
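And a sketch of the data-loading and training steps just described, again in the older Keras API (fit_generator was later deprecated in favour of model.fit; the directory paths and image size are illustrative, and my_new_model refers to the model sketched earlier):

```python
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_size = 224
data_generator = ImageDataGenerator(preprocessing_function=preprocess_input)

# one subdirectory per class (urban/, rural/) inside each directory
train_generator = data_generator.flow_from_directory(
    'data/train',
    target_size=(image_size, image_size),
    batch_size=12,
    class_mode='categorical')

validation_generator = data_generator.flow_from_directory(
    'data/val',
    target_size=(image_size, image_size),
    batch_size=20,
    class_mode='categorical')

my_new_model.fit_generator(
    train_generator,
    steps_per_epoch=6,              # 72 training images / 12 per batch
    validation_data=validation_generator,
    validation_steps=1)             # all 20 validation images in one step
```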
mPFq5KMxKVw
in six steps when it's done it got 79 percent of the training data right then it examines the validation data it gets 95 percent of those right 19 out of 20 we trained on 72 photos you could easily take that many photos on your phone upload them to Kaggle and build a very accurate model to distinguish almost anything you care about I think that's incredibly cool this may feel like a lot
557
583
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=557s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
mPFq5KMxKVw
of new ideas for you to take in here's our plan we have one exercise for you to build a model yourself using transfer learning after you've done transfer learning hands-on I'll show you a simple but powerful trick called data augmentation data augmentation really improves your computer vision models when working with small and medium sized datasets that topic will be very quick
583
607
https://www.youtube.com/watch?v=mPFq5KMxKVw&t=583s
Transfer Learning | Kaggle
https://i.ytimg.com/vi/m…axresdefault.jpg
5cySIwg49RI
thanks for watching Henry AI Labs this video will present the semi-weak supervised learning framework presented by research at Facebook's AI research lab this framework is a really interesting extension to their previous work on weak supervision such as using the hashtags on Instagram images as a weak supervision signal to pre-train ImageNet classification models this
0
21
https://www.youtube.com/watch?v=5cySIwg49RI&t=0s
Semi-Weak Supervised Learning
https://i.ytimg.com/vi/5…axresdefault.jpg
5cySIwg49RI
research is going to extend this idea to integrate semi-supervised learning as well as weakly supervised learning and then introduce a lot of other interesting ideas like incorporating model distillation into this framework and looking at the class imbalance evident in these unlabeled data sets this video will present the research paper billion scale semi-supervised learning for image
21
38
https://www.youtube.com/watch?v=5cySIwg49RI&t=21s
Semi-Weak Supervised Learning
https://i.ytimg.com/vi/5…axresdefault.jpg