Dataset schema: video_id (string, 11 chars), text (string, 361-490 chars), start_second (int64, 0-11.3k), end_second (int64, 18-11.3k), url (string, 48-52 chars), title (string, 0-100 chars), thumbnail (string, 0-52 chars)
l0im8AJAMco
that's what these papers are doing. And so what's happening here is that if you're able to change the prediction f(x) by an order-one quantity while the Jacobian, the gradient, stays constant in a relative sense, then the second-order term is essentially vanishing; the second-order term is how fast this changes. So if I can move f(x) by order one, which is all I need to move it, because I need
667
693
https://www.youtube.com/watch?v=l0im8AJAMco&t=667s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
to move it to match y, and this stays constant, then I'm done: I've constructed a global minimizer that is very close to my initialization, close in the sense that there's no potential negative curvature. And that's essentially what that proof was saying: it was saying this H matrix, the Gram matrix, is simply outer products of these gradients, and it was
693
718
https://www.youtube.com/watch?v=l0im8AJAMco&t=693s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
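The two transcript segments above argue that the prediction can move by an order-one amount while the gradient (the feature map) stays essentially fixed, so the second-order Taylor term is negligible and the Gram matrix of gradient outer products barely moves. A minimal sketch of that expansion in generic notation (f(x; θ) is the network, θ₀ the initialization, H the Gram matrix; the symbols are mine, not the speaker's slides):

```latex
f(x;\theta) = f(x;\theta_0) + \nabla_\theta f(x;\theta_0)^\top (\theta-\theta_0)
  + \underbrace{\tfrac{1}{2}(\theta-\theta_0)^\top \nabla_\theta^2 f(x;\tilde\theta)\,(\theta-\theta_0)}_{\text{vanishes in the kernel regime}},
\qquad
H_{ij} = \nabla_\theta f(x_i;\theta_0)^\top \nabla_\theta f(x_j;\theta_0).
```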
l0im8AJAMco
not moving a lot under these initialization schemes, widths, and so forth. Okay, so Chizat and Bach came up with a nice sufficient condition for when you should expect this, what they call this kernel regime, to happen. Essentially the sufficient condition is: think of y minus f_0 as order one; we initialize the network so it outputs an order-one quantity that's random, so
718
746
https://www.youtube.com/watch?v=l0im8AJAMco&t=718s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
this is like 1, because y is order 1. And they basically pointed out that if your Hessian divided by the squared norm of the gradient is smaller than one, then the second-order term doesn't matter, and your gradient dynamics track very closely, for some constant amount of time, the dynamics of gradient descent on a kernel machine. This is very intuitive, because it's essentially saying exactly
746
774
https://www.youtube.com/watch?v=l0im8AJAMco&t=746s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
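A sketch of the Chizat-Bach-style sufficient condition described above, in my own notation (y is the label vector, f₀ the network output at initialization; the norms are the ones gestured at on the slide): when the Hessian is small relative to the squared gradient norm, the second-order term is negligible and gradient descent tracks a kernel machine for a constant amount of time.

```latex
\|y - f_0\| = O(1),
\qquad
\frac{\|y - f_0\|\;\|\nabla_\theta^2 f\|}{\|\nabla_\theta f\|^2} \ll 1
\;\Longrightarrow\;
\text{GD on } f_\theta \text{ tracks GD on its linearization } \bar f_\theta \text{ for } O(1) \text{ time.}
```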
l0im8AJAMco
that the amount the Hessian changes is very small relative to the gradient, and when the gradient is big, you only need to move the parameters a little bit to move your predictions by a lot. So roughly, under all of the ways we initialize, the Hessian is something like going to zero with the width and the gradient is order one, and so we have this sort of linear behavior in some
774
798
https://www.youtube.com/watch?v=l0im8AJAMco&t=774s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
region around the initialization. Okay, so any questions here before I move on? [Q:] You're saying the final state is a linear approximation around the initialization, and that this linear approximation is good. Is this an idealized model motivated by something that works in practice, and how do you think about whether this analysis is appropriate? [A:] So, this result I
798
837
https://www.youtube.com/watch?v=l0im8AJAMco&t=798s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
would think of like this: if this sufficient condition holds and you use a very small learning rate, then the gradient dynamics on f_theta, f of theta_t, and on a new function that is linear, call it f-bar of theta_t, are very close for a constant amount of time. So the gradient dynamics track those of gradient dynamics on a linear model for some amount of time, right. [Q:] But
837
867
https://www.youtube.com/watch?v=l0im8AJAMco&t=837s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
that's a statement of the result, yes; but the motivating slide you had up front was a set of empirical results on state-of-the-art models, with these sorts of things. [A:] Oh yeah, there was the ICLR paper, I think, from two years ago; I know that was on state-of-the-art models, right, but
867
893
https://www.youtube.com/watch?v=l0im8AJAMco&t=867s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
that paper was about generalization, and I'm only talking about optimization right now. That paper is about generalization; this statement has nothing to do with it. I mean, you can extract a statement about statistical error here, but it's not a strong statement; this is really an optimization result, I would say. [Q:] What I'm
893
924
https://www.youtube.com/watch?v=l0im8AJAMco&t=893s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
asking is: in your mind, what does this say about the state-of-the-art motivation? You could ask whether the initial configuration and the final configuration are close like this. [A:] They're not close, really not close. [Q:] So then, in your mind, what does this tell you? [A:] I think it tells you that initially you can get
924
951
https://www.youtube.com/watch?v=l0im8AJAMco&t=924s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
the training error small very quickly, in the first few steps. [Q:] So it seems that one could come up with empirical protocols and somehow see how relevant this is to what's going on in practice. Also, you've said that they're not close; does that mean that we run experiments and this does not really capture what's going on in standard models? [A:] Yes,
951
977
https://www.youtube.com/watch?v=l0im8AJAMco&t=951s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
there are some careful papers, not by me, careful papers, that say that this is not the case, yes. [Q:] And is there any evidence that indicates otherwise, or does it depend on your parameter settings? [A:] If you set the learning rate small, it agrees well; if you set the learning rate big, then no, it disagrees; if you set it small, it agrees, yeah. [Q:] But there are certain values that we usually
977
1,006
https://www.youtube.com/watch?v=l0im8AJAMco&t=977s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
use in practice. [A:] So, and the ones that give good test error disagree. [Q:] A pretty simple question, I hope, but how should we think about Hessians for models that don't actually have Hessians? [A:] You should think about it as, okay, one quantitative way to think about it is the change of the activation pattern: sigma prime is the activation pattern, and sigma prime prime is how much those
1,006
1,031
https://www.youtube.com/watch?v=l0im8AJAMco&t=1006s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
change, in some measure. The Hessian itself you don't really care about, and you don't care about the Hessian's movement; what you are worried about is the gradient moving a lot, because that means your feature scheme is moving a lot. You want to ensure the feature scheme does not move, so you care about the size of the movement of the feature
1,031
1,073
https://www.youtube.com/watch?v=l0im8AJAMco&t=1031s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
scheme, which is the Hessian, relative to the size of the gradient, because the gradient multiplied by your movement is your change in f; that's why it's a relative measure. [Q:] So in which norm do you measure that? Because the size of everything... [A:] The only thing moving is f, so then the norm doesn't matter, but the size of the parameters
1,073
1,097
https://www.youtube.com/watch?v=l0im8AJAMco&t=1073s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
depends on that. Sorry, it should be ℓ2-to-spectral, but everything in that paper pretends the only thing changing is... sure, but even the size... yeah, if you're careful it's polynomial; this paper doesn't really talk about that. Okay, so this at least tells us that, at least locally, optimization is not a big deal, so perhaps you can
1,097
1,139
https://www.youtube.com/watch?v=l0im8AJAMco&t=1097s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
believe that if your learning rate initially is not very big, you can get your optimization error kind of small. If you try to use this kind of method to get an analysis of generalization error, then, unsurprisingly, since your feature scheme is not changing, what you'll find is simply that it has exactly the same predictions as a kernel method; the kernel is called the neural tangent
1,139
1,165
https://www.youtube.com/watch?v=l0im8AJAMco&t=1139s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
kernel, and your generalization bound will look exactly like ridge regression: it will be something like the label vector y transpose, kernel inverse, y, divided by n, the whole thing under a square root. So that's some generalization bound you can extract from this style of analysis. I don't think it's very tight; in fact, I'll talk about that later. Okay, so let's look a little bit more at generalization
1,165
1,194
https://www.youtube.com/watch?v=l0im8AJAMco&t=1165s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
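Written out, the kernel-style generalization bound mentioned in the segment above (K is the neural tangent kernel Gram matrix on the n training points, y the label vector; constants omitted):

```latex
\text{generalization error} \;\lesssim\; \sqrt{\frac{y^\top K^{-1} y}{n}}.
```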
l0im8AJAMco
error, what's happening there. A very standard bound in generalization is some complexity over n. For example, in kernel methods it would be y transpose K inverse y, that's sort of the RKHS norm in ridge regression; it could be the VC dimension, and the VC dimension is roughly the number of parameters times the depth in feed-forward networks. These are all kind of big, bigger than the sample size, which is exactly
1,194
1,218
https://www.youtube.com/watch?v=l0im8AJAMco&t=1194s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
what the ICLR paper was pointing out: essentially that the number of parameters is roughly 20 to 30 times the number of samples in many models. And of course, this slide is stolen from Nati, or I think from Tengyu actually, who stole it from Nati, and I copied it from him, so we've all seen this many times by now, I'm sure. So what they did in their
1,218
1,248
https://www.youtube.com/watch?v=l0im8AJAMco&t=1218s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
ICLR paper is: the black line is the training curve, and if you took statistical learning you would kind of expect the red line to go up. Well, it doesn't, it goes down. So overparametrization does not seem to hurt generalization in this case; in fact, in this one example it kind of helps a little, you can improve your generalization even after interpolation. And there's
1,248
1,274
https://www.youtube.com/watch?v=l0im8AJAMco&t=1248s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
even more evidence that the number of parameters isn't hurting: if you plot the ImageNet top-1 error over time, you see that in fact for these very, very big networks the top-1 error is quite small, and it keeps decreasing here. I mean, this is cherry-picked from this paper, but there are clearly networks with 600 million parameters whose test error is very low. So throwing in
1,274
1,300
https://www.youtube.com/watch?v=l0im8AJAMco&t=1274s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
parameters in a non-stupid way will not hurt your generalization. Okay, so what's going on here? Let's turn to margin theory and see what it tells us. This margin theory is very classical; it basically says that if you're very far from the decision boundary, you should generalize very well. Why is that? Bartlett and Mendelson formalized this almost 20 years ago now: basically,
1,300
1,326
https://www.youtube.com/watch?v=l0im8AJAMco&t=1300s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
your generalization gap is upper bounded by some complexity measure divided by the margin. So if you can ensure some sort of Rademacher complexity that's size-free, independent of the explicit width, then you can get good generalization error. Here are some papers that talk about how to upper bound the numerator; I'm only going to talk about how to lower bound
1,326
1,354
https://www.youtube.com/watch?v=l0im8AJAMco&t=1326s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
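A rough statement of the classical margin bound being referred to above, Bartlett-Mendelson style (my paraphrase, not the exact slide; R_n(F) is the Rademacher complexity of the function class and gamma the margin):

```latex
\text{test error} \;\lesssim\; \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{\,y_i f(x_i) \le \gamma\,\}
  \;+\; \frac{\mathcal{R}_n(\mathcal{F})}{\gamma} \;+\; \tilde O\!\left(\frac{1}{\sqrt{n}}\right).
```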
l0im8AJAMco
the denominator. Okay, so how do you get solutions with good margin? Let's look at the simplest loss function with the simplest regularization scheme: logistic loss with some norm regularizer, for example weight decay, the ℓ2 norm. You would kind of hope... so this is the global max margin, the best you could do if you searched over all models in your parametric family and
1,354
1,381
https://www.youtube.com/watch?v=l0im8AJAMco&t=1354s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
kept the one with the largest margin, gamma star. So you would hope you can get gamma star, and in fact, if you do a very good job of minimizing this regularized functional for small lambda, then you do get gamma star. In other words, assuming optimization works, weak ℓ2 regularization will get you very high margin, or the best possible margin.
1,381
1,411
https://www.youtube.com/watch?v=l0im8AJAMco&t=1381s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
Okay, this proof is simple; unlike a lot of allegedly simple proofs, I think this one is genuinely simple. The proof is essentially: you write down the logistic loss, realize that the logistic loss, when this argument is very small, can be approximated via Taylor's theorem by an exponential; and for an exponential of a lot of terms added together, you only need to keep the
1,411
1,440
https://www.youtube.com/watch?v=l0im8AJAMco&t=1411s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
smallest one, because the rest are exponentially smaller. Okay, so this line is Taylor's theorem, and then this one says: if you add a bunch of terms like e^{-10} plus e^{-100}, you only need to care about e^{-10}, so only the smallest of these exponents matters; that's why there's a min here. And from this you can kind of read off the result: if lambda is very small,
1,440
1,465
https://www.youtube.com/watch?v=l0im8AJAMco&t=1440s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
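The Taylor-plus-min step of the proof sketched in the segments above, written out for the regularized logistic objective (my notation; f(x_i; θ) is the network output on example i and λ the weight-decay coefficient):

```latex
\sum_{i=1}^{n}\log\!\big(1+e^{-y_i f(x_i;\theta)}\big) + \lambda\|\theta\|^2
\;\approx\; \sum_{i=1}^{n} e^{-y_i f(x_i;\theta)} + \lambda\|\theta\|^2
\;\approx\; \exp\!\Big(-\min_i\, y_i f(x_i;\theta)\Big) + \lambda\|\theta\|^2 .
```

So for small λ the minimizer spends as much norm as the regularizer allows, but among solutions of the same norm it prefers the one with the largest worst-case margin min_i y_i f(x_i; θ), which is the tie-breaking described next.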
l0im8AJAMco
it's saying: let me use as much norm as possible, but among those I need to make the worst-case margin good. So among solutions of the same norm, we prefer the one with the largest margin. Okay, so how does overparametrization improve the margin? This is completely obvious: if I have a network, and a subnetwork of this network, the margin can only improve going to the bigger network. So
1,465
1,491
https://www.youtube.com/watch?v=l0im8AJAMco&t=1465s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
if you have a Rademacher bound that's independent of the explicit number of parameters, the margin is only improving, so this denominator can only get bigger. Okay, so let's talk a little bit about how to optimize when you have a regularizer. The first thing to realize is that none of these results using NTK and local convexity or whatever can ever handle this: the regularizer induces
1,491
1,519
https://www.youtube.com/watch?v=l0im8AJAMco&t=1491s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
a sort of tie-breaking among the global minimizers, and the global minimizers are not equivalent: you can have two global minimizers of the unregularized training loss and they will have drastically different regularization values. So the regularization induces a tie-breaking, and now you really do need to find a global minimizer of this regularized objective. So how do you do this?
1,519
1,542
https://www.youtube.com/watch?v=l0im8AJAMco&t=1519s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
The answer is not fully satisfying, but we can say something. Take a very, very, very wide network; think of this as wider than anything you've seen, exponential in d or whatever, infinite. Then run gradient descent with a particular noise scheme, and you converge to a global minimizer in polynomial time, polynomial in the dimension and in one over epsilon,
1,542
1,569
https://www.youtube.com/watch?v=l0im8AJAMco&t=1542s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
something like poly(d) over epsilon to the fourth, or something. Okay, so gradient descent on very, very overparametrized networks converges to global minimizers. So overparametrization does help even when you have regularization, but the mechanism by which it helps is very different from this local-convexity intuition. The intuition here is that when you're very, very overparametrized,
1,569
1,596
https://www.youtube.com/watch?v=l0im8AJAMco&t=1569s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
there's a descent direction in function space, but if you have a finite, small-width network you might miss it, because you might miss that descent direction in function space; that's essentially what the Frank-Wolfe algorithm would find, but computing the Frank-Wolfe step is exponential time. But instead, if you have a ton of neurons and you add noise,
1,596
1,617
https://www.youtube.com/watch?v=l0im8AJAMco&t=1596s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
then there might be an exponentially small fraction of your neurons that see this Frank-Wolfe direction, but then, because ReLU is homogeneous, although it's an exponentially small signal, it grows exponentially too, and these things carefully balance, and at the end you do get a polynomial-time result. [Q:] So, because the number of neurons is very large, what's the relationship to the NTK regime?
1,617
1,642
https://www.youtube.com/watch?v=l0im8AJAMco&t=1617s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
[A:] This is no longer NTK; if you change anything, NTK is very fragile. [Q:] I remember some of these results need lambda to be extremely small; how do you set lambda? [A:] You set lambda in whatever way; let's say you want gamma equal to 0.1, and you set lambda depending on that. It's polynomial time to get within a constant, polynomially many iterations. It's not polynomial in the sense that one
1,642
1,673
https://www.youtube.com/watch?v=l0im8AJAMco&t=1642s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
iteration is exponential time, because you have exponentially many parameters; so it's polynomially many iterations, or polynomial time in the PDE sense. [Q:] One other question: in the generalization bound you're talking about a size-free Rademacher bound, right? When obtaining the margin with a very light regularization parameter, the Rademacher bound might depend on, say, norms, which then have to be balanced
1,673
1,708
https://www.youtube.com/watch?v=l0im8AJAMco&t=1673s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
against this resulting margin. [A:] Yes, you should think of whatever norm this is; let's concretely take the simplest one, which is just the Frobenius norm of everything, and then that would be weight decay. Okay, all right. So you might ask: okay, so why add this regularizer? I can already get the global min of the logistic loss without this regularization term; what am I getting
1,708
1,741
https://www.youtube.com/watch?v=l0im8AJAMco&t=1708s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
here? Is my statistical sample complexity much better? And it turns out it is. Let's look at a very simple data set: on the first two coordinates the data looks like this, it's essentially a two-XOR, and every other coordinate is just standard normal. So there are two coordinates that have signal, and those are the ones you want to pay attention to; the rest of the coordinates are completely uncorrelated
1,741
1,762
https://www.youtube.com/watch?v=l0im8AJAMco&t=1741s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
with the label, so they're noise coordinates that are there to fool you, and they do exactly fool a kernel method. A kernel method is unable to localize onto the first two dimensions, so it has to look over all dimensions, and it pays sample complexity at least d squared to get small error: if you want error less than some absolute constant, you need to pay at least d squared samples, because a
1,762
1,785
https://www.youtube.com/watch?v=l0im8AJAMco&t=1762s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
kernel method to solve this problem needs to at least form pairwise degree-two polynomials, and that has complexity d squared. However, for that sort of two-XOR construction there's a neural net with four neurons that fits it, and the sample complexity of learning a neural net with four neurons is something like 4d over n, so roughly d over n. So there's a clear sample complexity separation:
1,785
1,813
https://www.youtube.com/watch?v=l0im8AJAMco&t=1785s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
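A minimal Python sketch of a synthetic dataset matching the description above: two signal coordinates arranged as an XOR, and the remaining d-2 coordinates pure Gaussian noise uncorrelated with the label. The function name and exact scalings are mine, just to make the construction concrete; the talk's d-squared lower bound is about this kind of distribution, not this specific code.

```python
import numpy as np

def two_xor_dataset(n, d, seed=0):
    """First two coordinates: random signs, label = product (XOR) of those signs.
    Remaining d-2 coordinates: standard normal noise, uncorrelated with the label."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(n, 2))   # the two signal coordinates
    noise = rng.standard_normal((n, d - 2))        # the nuisance coordinates
    X = np.concatenate([signs, noise], axis=1)
    y = signs[:, 0] * signs[:, 1]                  # +/-1 label, an XOR of the two signs
    return X, y

X, y = two_xor_dataset(n=1000, d=50)
print(X.shape, y[:10])
```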
l0im8AJAMco
with a regularizer you can learn with d samples; without a regularizer you need at least d squared samples. So explicit regularization helps. [Q:] But might it take exponential time? [A:] Yes, it's not clear; they might take exponential time, we don't know. Well, we do know it takes exponential time depending on how you scale things, but I can scale this so that it takes
1,813
1,856
https://www.youtube.com/watch?v=l0im8AJAMco&t=1813s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
exponential time, but then you can increase your learning rate; anyway, the analysis is specific. Yeah, I guess I should be more precise: to be more precise, this one is better than the NTK one, let's put it that way, so then it covers the unregularized and the regularized case. Okay, so do you need regularization? It turns out you can do pretty well: if you look at these numbers, without regularization,
1,856
1,895
https://www.youtube.com/watch?v=l0im8AJAMco&t=1856s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
the gain here is not from regularizing; you gain 5%, but if you just change the architecture you can gain around 10%. So SGD without regularization does already have very good generalization; perhaps it's not state of the art, but certainly the bulk is not from these regularization methods. Okay, so let's turn to a simple example, logistic regression. Even if you have
1,895
1,921
https://www.youtube.com/watch?v=l0im8AJAMco&t=1895s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
separable data, in logistic regression the problem is convex but not strongly or strictly convex, so there are many, many global minimizers, and you may be wondering which one it converges to. This is exactly what Soudry, Hoffer, and Srebro asked a year or two ago, and they showed that it converges to the SVM solution in a very precise sense: if you run gradient descent for a long, long
1,921
1,943
https://www.youtube.com/watch?v=l0im8AJAMco&t=1921s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
long time, and then you normalize, because the norm blows up, you get exactly the ℓ2 SVM solution after doing that normalization. This is quite amazing if you haven't seen it before: there are all these directions, so why the heck should gradient descent pick the one of minimum ℓ2 norm, the one that maximizes the ℓ2 margin? I mean, it seems like it should maybe depend on the learning rate, on how
1,943
1,964
https://www.youtube.com/watch?v=l0im8AJAMco&t=1943s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
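The implicit-bias result described above, stated informally in my own notation (w_t is the gradient descent iterate on the logistic loss over linearly separable data):

```latex
\lim_{t\to\infty}\frac{w_t}{\|w_t\|_2} \;=\; \frac{\hat w}{\|\hat w\|_2},
\qquad
\hat w \;=\; \arg\min_{w}\;\|w\|_2
\quad\text{s.t.}\quad y_i\, w^\top x_i \ge 1 \;\;\forall i,
```

i.e. the direction of the iterates converges to the hard-margin ℓ2 SVM solution, regardless of (stable) learning rate or initialization.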
l0im8AJAMco
you initialize; it could depend on anything. And this result is quite robust: for any learning rate that's stable, it converges to the ℓ2 SVM solution. Okay, so then you might be thinking: what is special about gradient descent? Let me write down gradient descent in a very suggestive way; I'll just write it down. So it's trying to maximize the correlation... yeah,
1,964
1,998
https://www.youtube.com/watch?v=l0im8AJAMco&t=1964s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
because the logistic loss gradient is never zero, so it's impossible to have a vanishing gradient; this is in contrast to least squares, I think, which is what you're thinking of. Yes. Okay, what was I saying? Yes, this norm. Okay, the important thing is that you write gradient descent in this suggestive way, and what it's saying is that gradient descent is trying to
1,998
2,031
https://www.youtube.com/watch?v=l0im8AJAMco&t=1998s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
maximally decrease the loss value of the function while paying an infinitesimally small amount of ℓ2 norm; the key thing is that it's trying to use the least amount of ℓ2 norm to achieve this goal. So now you might ask: why not change the norm? And this gives you a family of steepest descent algorithms, and in fact you can prove that for this entire family of steepest descent algorithms you converge to
2,031
2,055
https://www.youtube.com/watch?v=l0im8AJAMco&t=2031s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
the SVM in that given norm. Okay, so for example, with the ℓ1 norm this is exactly a form of boosting; it was proven a long, long time ago, in a PhD thesis, that this does in fact maximize the ℓ1 margin, and the same proof works for all norms. Okay, so some examples: coordinate descent, which is steepest descent with respect to the ℓ1 norm, will maximize the ℓ1 margin; it's not quite AdaBoost, it's AdaBoost with
2,055
2,085
https://www.youtube.com/watch?v=l0im8AJAMco&t=2055s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
a damped step size; so AdaBoost with a dampened step size maximizes the ℓ1 margin, AdaBoost itself does not. The sign gradient method, commonly used now to save on communication, gets you some ℓ-infinity bias. And of course, gradient descent is steepest descent with respect to ℓ2. Okay, so that's great, we understand logistic regression very well; that's sort of always the
2,085
2,111
https://www.youtube.com/watch?v=l0im8AJAMco&t=2085s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
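A sketch of the steepest-descent family referred to in the segments above, in my notation: each step moves in the direction that most decreases the loss per unit of a chosen norm, and the implicit bias is then the max-margin solution in that same norm (ℓ1 gives coordinate descent / damped AdaBoost, ℓ∞ gives the sign-gradient method, ℓ2 gives plain gradient descent).

```latex
w_{t+1} = w_t + \eta_t\,\Delta_t,
\qquad
\Delta_t \in \arg\min_{\|v\|\le 1}\ \langle \nabla L(w_t),\, v\rangle,
\qquad
\frac{w_t}{\|w_t\|} \;\longrightarrow\; \arg\max_{\|w\|\le 1}\ \min_i\, y_i\, w^\top x_i .
```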
l0im8AJAMco
starting point. So now the question is: what does it do on deep networks? If you're very, very optimistic, you may hope that it solves max margin even if you do no regularization, because then you would have a generalization bound that only depends on the numerator up here, and that seems good because now you can get the optimal margin. Of course, notice that this is an
2,111
2,132
https://www.youtube.com/watch?v=l0im8AJAMco&t=2111s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
SVM problem, but it's a non-convex SVM problem. Now this SVM problem, unlike the linear case, has many, many first-order stationary points that are not global maxima, so of course you cannot prove that it converges to the global maximizer. What you are able to prove is that it converges to a first-order optimal point of the following SVM: this
2,132
2,157
https://www.youtube.com/watch?v=l0im8AJAMco&t=2132s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
is some nonlinear program, it has first-order KKT conditions, and assuming homogeneity (the only assumption here is homogeneity), then if you do gradient descent on the exponential loss or logistic loss, you get a first-order optimal point, a first-order stationary point of this. Okay, so you would want to prove it gets max margin, but that's probably not possible because you're running a local search
2,157
2,185
https://www.youtube.com/watch?v=l0im8AJAMco&t=2157s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
algorithm; but at least you can characterize that whatever you converge to is very special, in the sense that it's precisely a critical point of a fairly intuitive optimization problem. Okay, so I didn't really talk about the implicit regularization of NTK, but if you think about what a kernel method is, you can write down the sort of implicit bias of NTK, or the
2,185
2,211
https://www.youtube.com/watch?v=l0im8AJAMco&t=2185s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
inductive bias of it, as: you're trying to find the thing that maximizes the margin, the worst-case margin, but you stay infinitesimally close to your initialization. So this top one, theta-hat-K, is exactly what would happen if you did NTK on logistic loss and terminated after some amount of time, a very long time: you would approximately get maximum margin, but maximum with respect to a
2,211
2,237
https://www.youtube.com/watch?v=l0im8AJAMco&t=2211s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
different sense, in the sense that you have to stay infinitesimally close to this ball. The previous result, by us and coauthors, shows that you actually get a stationary point of the following program, and in this program you're letting the parameters move infinitely far from the initialization, so it's completely forgetting where it was initialized; it's running forever and
2,237
2,263
https://www.youtube.com/watch?v=l0im8AJAMco&t=2237s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
ever, which you'll always do when you have a logistic loss, and when you run forever and ever you try to maximize the margin. Okay, these are of course two extreme endpoints: in one of them you're staying infinitesimally close to your initialization, in the other you're moving infinitely far, so they're very different endpoints. Probably, in my opinion,
2,263
2,287
https://www.youtube.com/watch?v=l0im8AJAMco&t=2263s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
what's interesting is when this is not quite zero: you're going slightly further than the linear regime, but you're clearly not going into this super-asymptotic regime; you've deviated slightly from the NTK, and what happens there we don't really know. But I think that's probably likely to correspond more closely to practice than the other endpoint, because things like large learning rate and
2,287
2,309
https://www.youtube.com/watch?v=l0im8AJAMco&t=2287s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
finite width will cause you to deviate from staying infinitesimally close. Yeah. Okay, so the final thing I want to talk about is: how does architecture matter? I've told you that, asymptotically, the bias gradient descent gets you is an ℓ2 regularization on all of the parameters, but that's sort of uninteresting: why should you ever care about the parameters? Again, parameters
2,309
2,333
https://www.youtube.com/watch?v=l0im8AJAMco&t=2309s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
have no meaning in neural nets; the only thing that matters at the end is your function f, which is all you see at test time; the parameters are just a way to encode f. So what you should really ask is: how does this bias translate over here, what is the bias on the prediction function? And this is clearly where architecture matters. For example, here in rows 1, 3, and 4 there was no regularization; the
2,333
2,356
https://www.youtube.com/watch?v=l0im8AJAMco&t=2333s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
algorithm was just SGD, and the only thing that changes is the architecture, yet you get a huge improvement in the test error. I don't know the answer for neural nets, but in linear networks there's a very crisp answer: you can take a linear function and write it as a product of matrices; then if you have ℓ2 regularization on the W's, which comes from gradient descent, it translates to a Schatten
2,356
2,386
https://www.youtube.com/watch?v=l0im8AJAMco&t=2356s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
quasi-norm on theta, on the linear function. Okay, so regardless of the depth and the width, the prediction function class stays linear: you can write a linear map as one matrix or as a product of five matrices, it's still a matrix, so the function class never changes. Yet simply varying the parameterization, combined with the algorithm, changes the inductive bias
2,386
2,409
https://www.youtube.com/watch?v=l0im8AJAMco&t=2386s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
here, and here you can characterize it in a very precise sense: whatever gradient descent converges to is a stationary point of some problem with this Schatten quasi-norm. Okay, there's a similar phenomenon in convolutional networks: same thing, you can take a linear convolutional net and write it as a composition of predictors; you'll also get a quasi-norm, except now it's a sparsity-inducing one instead of one on the singular
2,409
2,441
https://www.youtube.com/watch?v=l0im8AJAMco&t=2409s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
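A sketch of the deep-linear-network correspondence described above, as I understand that line of work (not the exact statement on the slide). Writing the end-to-end linear map as a product θ = W_L ⋯ W_1, the smallest total weight decay over all factorizations is, up to a depth-dependent constant, a Schatten quasi-norm of θ:

```latex
\min_{\,W_L\cdots W_1=\theta\,}\ \sum_{\ell=1}^{L}\|W_\ell\|_F^{2}
\;=\; L\sum_{j}\sigma_j(\theta)^{2/L}
\;=\; L\,\|\theta\|_{S_{2/L}}^{2/L},
```

where the sigma_j(θ) are singular values. For L = 2 this is twice the nuclear norm, and as L grows the exponent 2/L shrinks, so the penalty increasingly favors low-rank linear maps, which is the "getting closer to low rank" bias mentioned later in the Q&A.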
l0im8AJAMco
values. [Q:] Does it stop learning? [A:] No, it never stops learning; it's always changing... the function is infinitely flat there, so in fact you can even grow the learning rate and still get convergence. Yeah, you need sufficient... okay. So, some random thoughts, let me finish. Of course, we've seen now that by training neural nets
2,441
2,507
https://www.youtube.com/watch?v=l0im8AJAMco&t=2441s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
to stay close to initialization, we're using a kernel class; so how do we go beyond kernels? I mentioned this yesterday. And of course one thing missing here is: how will distributional assumptions help? Most of these results have not used distributional assumptions in any meaningful way, I would say. I mean, there are a lot of results about learning a single ReLU or single convolutional filters;
2,507
2,529
https://www.youtube.com/watch?v=l0im8AJAMco&t=2507s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
they use Gaussian assumptions, of course, and even then you can't really see the Gaussians helping you that much; there were other ways to learn it, just maybe not quite gradient descent. So what are some reasonable distributional assumptions that can help us learn things? Kernels don't really depend on the distribution, their optimization at least. So, what happens right after the
2,529
2,555
https://www.youtube.com/watch?v=l0im8AJAMco&t=2529s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
NTK regime? We've figured out the endpoints, in a sense: we figured out what happens locally, and we've figured out what happens if you move infinitely far, but the whole middle, I don't know; it's kind of interesting. And then architecture design and inductive bias: people come out with new architectures every week, and how does this change the inductive bias of SGD? If I do
2,555
2,575
https://www.youtube.com/watch?v=l0im8AJAMco&t=2555s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
SGD on a ResNet versus an AmoebaNet block, then what is changing here? Let's say I just use SGD on both, but the functions I learn are different: how are they different, and why does AmoebaNet, or whatever, generalize better? [Applause] [Q:] Why gradient descent and not SGD? [A:] Nothing special, GD is just easier to analyze; most of the results hold for SGD. [Q:] Was that for a single...? [A:] Oh no, that's a matrix; this one's a matrix, this one's a
2,575
2,637
https://www.youtube.com/watch?v=l0im8AJAMco&t=2575s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
matrix. You minimize... there are two results: one is you minimize cross-entropy with explicit weight decay and you get the quasi-norm; the second one is you minimize cross-entropy with GD. No, there's no optimization here; these are statements about global minima. Okay, the first statement is a statement about global minima; the second statement is invoking the following theorem, which
2,637
2,679
https://www.youtube.com/watch?v=l0im8AJAMco&t=2637s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
l0im8AJAMco
says that you converge to a first-order optimal point of a problem, an SVM where the norm is the Schatten quasi-norm. So the inductive bias of having depth in a linear network, using the logistic loss, is getting closer to low rank. This is about first-order points, okay; and gradient descent, the algorithm, is involved here, it's about gradient descent: you run gradient descent
2,679
2,715
https://www.youtube.com/watch?v=l0im8AJAMco&t=2679s
On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization
https://i.ytimg.com/vi/l…axresdefault.jpg
MpdbFLXOOIw
Hi there. Today we're looking at Supervised Contrastive Learning, by people from Google Research and MIT. Now, this paper proposes a new loss for supervised learning, and you might recognize that this is a big claim: forever now we've basically used this cross-entropy loss in order to do supervised training of neural networks, and this paper proposes to replace that with
0
30
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=0s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
the supervised contrastive loss. Let's jump straight into the results. They say: our supervised contrastive loss outperforms the cross-entropy loss with standard data augmentations such as AutoAugment and RandAugment. These are some of the previous state-of-the-art data augmentation techniques used together with the cross-entropy loss, and they say their
30
56
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=30s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
supervised contrastive loss outperforms them. You can see here, on ImageNet, which is the biggest vision benchmark, or the most famous one, that this new loss, the supervised contrastive loss, outperforms these other methods by something like a percent, and one percent is a big improvement on ImageNet right now. So they claim, and it is a big claim, right, you recognize that if this is true, this could be a
56
85
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=56s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
game-changer basically for all of supervised learning, and supervised learning is really the only thing right now in deep learning that works, so it could revolutionize the field. But, and here's the but: it is actually not a new loss to replace the cross-entropy loss. And they do come around to this pretty quickly; I don't think they're being dishonest or lying
85
113
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=85s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
or anything here, but if you start reading, you're like, what, this is a new loss? It is not; it is a new way of pre-training the network for a classification task. So let's look into this. If you look at what it means to build a classifier, this is what you usually do, this is supervised cross-entropy training: you have an image, and the image here is of a dog; you put
113
143
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=113s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
it through your network and you obtain a representation. The representation here, R, is this last layer, or the second-to-last layer, and you put that through a classification layer and then a softmax, and what you get as an output is basically a probability distribution. Let's say you have three classes here: there's dog, there's cat, and there's horse. And let's say the network isn't
143
173
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=143s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
yet trained very well, so the probability for dog here is fairly low. So this is basically what the network thinks of that image: which class does it belong to, and with what probability. You also have the label right here, the label "dog" for that image; what you do with that is you make a one-hot vector, so that would look like this: the one is at the position where the correct class
173
201
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=173s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
is. And then the cross-entropy loss takes all of this and does the following: there's a sum over all your classes, in this case three classes, and let's call the labels l; you always take the label of the class times the log probability that the network assigns to this class. So you can quickly see that if the label is 0, so for all the incorrect
201
232
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=201s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
classes, this entire term drops away, and only if the label is 1, so only for the correct class, do you get the log probability of the class where the label is the correct label. So in order to make this a loss, you actually have to put a negative sign in front of it, because this entire thing reduces to the log probability of the correct class,
232
265
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=232s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
which is what you want to maximize; therefore, if you want to minimize something, you minimize the negative log probability of the correct class, which means you maximize the probability. If you've never looked at the cross-entropy loss like this, it is important to notice: you're going to say, hey, all this does is pull this here up, right, and it doesn't
265
294
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=265s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
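The segments above walk through the cross-entropy loss term by term; here is a minimal numpy sketch of exactly that computation. The three-class dog/cat/horse setup matches the video, but the specific logit values are made up for illustration.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# made-up logits for the classes [dog, cat, horse]; the network is not trained well yet,
# so "dog" does not get the highest score
logits = np.array([0.5, 1.2, 0.8])
probs = softmax(logits)                # normalized, so pushing one class up pushes the others down

one_hot = np.array([1.0, 0.0, 0.0])    # label "dog" as a one-hot vector

# cross-entropy: minus the sum over classes of label * log-probability;
# only the correct class survives, so this is just -log p(dog)
loss = -np.sum(one_hot * np.log(probs))
print(probs, loss)
```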
MpdbFLXOOIw
do anything to the other ones. But you have to realize that with this softmax operation, since this is a probability distribution, all of this is normalized to sum up to one, so implicitly you will push these down through the normalization. So what this does is it pushes the correct class up and it pushes the other classes down. This way of looking at it is going to be important
294
318
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=294s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
later, because look at what this representation here does. So again, the network produces a representation here, this is 2,000-dimensional, and then it adds on top this classification layer; this classification layer is simply a linear layer, and then a softmax on top. So the way you have to imagine this is that there is a representation space, this 2,000-
318
346
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=318s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
dimensional space, and the representations are made in such a way that, sorry, let's have three classes here, the representations are made in such a way that a linear classifier can separate them correctly. So here this would be like a boundary, and then this would be another boundary, and this maybe would be another decision boundary. So you can see that the linear
346
378
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=346s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
classifier can separate the classes well; that is the goal. If you use this softmax cross-entropy loss, that is implicitly what will happen in the representation space: all it cares about is that the classes are on one side of the decision boundary and everything else is on the other side of the decision boundary. So if the network isn't trained very well at the beginning, and
378
405
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=378s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
you maybe have a sample of the green class here, it will push the network such that the representation of that sample goes onto the other side of this decision boundary, and it will push the decision boundary at the same time to make that happen more easily. So it will optimize all of this at the same time; that's what you do, that's how you optimize representations. So this work
405
431
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=405s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
here, and other work, has said: wouldn't it be great if the representations and decision boundaries weren't just trained at the same time, but we learned good representations first, such that classifying them becomes very simple? And in essence what this paper says is: if we have a representation space, shouldn't we just make images of the same class close together, you know,
431
464
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=431s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
without caring about decision boundaries? We just want them to be close to each other, and we want them to be far apart from other classes. If that happens, you can see that a linear classifier is going to have a very easy time separating these classes later. So that's exactly what this paper does: it has a pre-training stage and a training stage. In the pre-training stage, this is
464
493
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=464s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
over here, supervised contrastive: in the pre-training stage it simply tries to learn these representations, like down here, such that, without the decision boundaries, images of the same class are close together and images of different classes are far apart. Notice the subtle difference from the cross-entropy loss, where you just care about them
493
523
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=493s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
being on one or the other side of a decision boundary. So this is stage one, and then in stage two, and this is where it comes in, you basically freeze the network: you freeze these weights down here, these are frozen, you don't train them anymore; all you train is this one classification layer. So you actually also freeze the representation layer here; you
523
554
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=523s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
only train the classifier on top in stage two, but you train it using softmax and the cross-entropy loss. So you train the classifier in the old cross-entropy way, using just normal supervised learning. The difference here is that the stage-one pre-training is what's training the network, and the cross-entropy loss only trains the classifier. So let's look at how
554
584
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=554s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
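A compact PyTorch-style sketch of the two-stage procedure described above. The encoder, the supervised contrastive loss function, and the data loader are stand-ins (encoder, sup_con_loss, loader are hypothetical names, not the paper's reference code); the point is only the structure: stage 1 trains the representation with the contrastive objective, stage 2 freezes it and trains a linear classifier with ordinary cross-entropy.

```python
import torch
import torch.nn as nn

def train_two_stage(encoder, sup_con_loss, loader, num_classes, feat_dim, epochs=1):
    # ---- Stage 1: contrastive pre-training of the representation ----
    opt = torch.optim.SGD(encoder.parameters(), lr=0.1)
    for _ in range(epochs):
        for images, labels in loader:
            feats = nn.functional.normalize(encoder(images), dim=1)  # normalized embeddings
            loss = sup_con_loss(feats, labels)  # pull same-class embeddings together, push others apart
            opt.zero_grad(); loss.backward(); opt.step()

    # ---- Stage 2: freeze the encoder, train only a linear classifier with cross-entropy ----
    for p in encoder.parameters():
        p.requires_grad = False
    classifier = nn.Linear(feat_dim, num_classes)
    ce = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(classifier.parameters(), lr=0.1)
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = encoder(images)              # frozen representation
            loss = ce(classifier(feats), labels)     # plain supervised cross-entropy on top
            opt.zero_grad(); loss.backward(); opt.step()
    return classifier
```

The split mirrors the description above: the pre-training is what actually trains the network, and the cross-entropy in stage 2 only ever sees the frozen features.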
MpdbFLXOOIw
this pre-training actually works. What it's using is a method called contrastive pre-training. Now, for contrastive pre-training, and they have a little diagram up here, if you look at the classic way of doing contrastive pre-training, you have to go to the unsupervised pre-training literature. People have kind of discovered that they can improve a neural network by
584
612
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=584s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
pre-training it first in an unsupervised way; some of these methods are also called self-supervised. The advantage of self-supervised or unsupervised pre-training is that you don't need labels. What you want to do is simply make the representation space somewhat meaningful: you simply want the network to learn representations of images that are
612
640
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=612s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
somehow meaningful. And here's how you do it: you take an image, like this dog here, and then you randomly augment this image, which just means you produce different versions of the same image. In this case, down here, this is a random crop, it's cropped about here; it's still the same image, but it's a different version of it. In the case
640
669
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=640s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
here, you can see that it's flipped left-right and the brightness is slightly increased. So these are just different versions of the same image. What you also want are what are called negatives; negatives are simply different images from your data set, for example this, or this, or this. You don't care, as long as they're different; you just sample a bunch. And what you
669
694
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=669s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
want... so here is your embedding space, and they make a big deal here that the embeddings are normalized, and that seems to work better, but it is not necessary for the idea to work. The big idea is that if you have an image right here, let's say this is the dog, then the blue dots here are the augmented versions of the same dog, and the green dots are all the other images in the
694
725
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=694s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
data set. What you want is that all the images that come from the same original image are pulled close together, and everything else is pushed apart. That's why these are called positives and these are called negatives. So contrastive training basically means that you always have a set that you pull together in representation space, and a set, called the negatives,
725
755
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=725s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
that you push apart. So the network basically learns about these random transformations that you have here; the network kind of learns what it means to come from the same image, it learns to be robust to these kinds of transformations, it learns about the data in general and how to spread the data in embedding space under these transformations. So this usually ends up
755
780
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=755s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
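A minimal sketch of the pull-together / push-apart idea described above, written as a generic NT-Xent-style self-supervised contrastive loss over normalized embeddings. This is the unsupervised form the video is explaining at this point, not the paper's supervised variant, and the batch size and temperature below are arbitrary.

```python
import torch
import torch.nn.functional as F

def simple_contrastive_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two random augmentations of the same N images.
    Each row's positive is its other augmentation; every other row in the batch
    acts as a negative that gets pushed apart."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                  # (2N, D) normalized embeddings
    sim = z @ z.t() / temperature                   # pairwise cosine similarities as logits
    n = z1.shape[0]
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-similarity
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])  # positive index for each row
    return F.cross_entropy(sim, targets)

# usage with random embeddings, just to show the shapes
loss = simple_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```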
MpdbFLXOOIw
in a pretty good representation space, and people have been using this in recent years in order to gain significant improvements. Now, the problem here, if you specifically do this to pre-train a classifier, is the thing they show on the right. On the left here you have a picture of a dog, but if you just do this self-supervised, you do it without the labels, so it can happen
780
810
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=780s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg
MpdbFLXOOIw
that this image here shows up in the negatives, but it is also of a dog. And now this image here is going to end up maybe being this image here, and you see what happens to it: it's a green one, so it's going to get pushed apart. And this is going to make the entire task for the later classifier much harder, because if they are pushed apart from each other, how is a linear classifier going to have
810
838
https://www.youtube.com/watch?v=MpdbFLXOOIw&t=810s
Supervised Contrastive Learning
https://i.ytimg.com/vi/M…Iw/hqdefault.jpg