video_id | text | start_second | end_second | url | title | thumbnail |
---|---|---|---|---|---|---|
rk7fIhCH8Gc | gaussian noises that i considered just rescaled a little differently the way we usually do in physics and then i take this loss function that was sum of two squares and developed the squares and realized that some terms just don't depend on the x over which i'm optimizing and some terms depend on it in a trivial way if the x lives on the sphere they're just equal | 485 | 506 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=485s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | to some constant so the only non-trivial term that matters is the term here i called h of x that if i look back in statistical physics is exactly the hamiltonian of something that is called the spherical mixed p-spin glass so those in the audience that know about spin glasses have seen this model because that's one of those that is most often studied in the field of | 506 | 533 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=506s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | statistical physics of disordered systems and so we will be using that but what we'll be interested in is on one hand you know when i say gradient-based algorithms i will be speaking in this part about mainly two one of them will be the langevin algorithm with the aim of actually estimating the ground truth x star in a bayes optimal way which would | 533 | 563 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=533s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | be done by writing the posterior measure and computing its marginals and this corresponds to writing the boltzmann measure of the corresponding statistical physics problem and sampling it at temperature one and that's exactly what the langevin algorithm aims at and the second estimator i will be looking at is the kind of more common one the maximum likelihood estimator that is | 563 | 588 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=563s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | computing the minimizer of that loss function of the or the ground state of the statistical physics model and that's what the gradient descent or flow aims at so just to get a little bit more familiar with this model so if you listened to the bootcamp lecture you would you know we told you about set of tools that you can use to actually describe what's happening | 588 | 610 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=588s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | in a model such as this one from the point of view of information theory what is possible statistically and from the point of view of approximate message passing algorithm which ends up to be the best we know for this type of problem and this phase diagram summarizes of what's going on so i will just explain it and then in this talk we are interested what the | 610 | 632 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=610s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | gradient descent is doing so that will be the new part so on the axis here we have the variances of the noises delta 2 is the noise added to the matrix and delta p is the noise added to the tensor so the bigger the noise the harder this inference problem will be and for instance if the delta p was infinity that would be effectively as if the tensor was not there | 632 | 658 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=632s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | it's not giving you any information so in that case you are in the case of spiked matrix factorization that is problem widely studied in statistics and you know it has the bbp phase transition and that's precisely what the value lambda 2 equal to 1 corresponds to so that's what distinguishes the phase where the spike is impossible to recover from the easy phase where if you | 658 | 681 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=658s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | only had the matrix you recover the spike simply by spectral methods looking at the spectrum of the matrix then if the matrix was not there that is that delta 2 would be infinity 1 over delta 2 would be 0 then you only have the spiked tensor model which you know information theoretically is solvable at at some point uh highlighted with the red line here but even if the noise is smaller than | 681 | 708 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=681s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | that it's algorithmically hard and it's also a problem that have been studied so in order to make the the kind of computational question more interesting i mix them and if i mix them then you see what's going on there is this algorithmically hard phase appearing that we believe cannot be entered by any polynomial algorithm that's a conjecture and now all i want to be telling you | 708 | 730 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=708s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | about is how gradient descent and langevin dynamics fit in this diagram you know does it do as good as the approximate message passing does it do worse why so to define what i mean more precisely by the langevin algorithm and the gradient flow it's simply the derivative so i will be working with the continuous time version here because that's the one that i know | 730 | 756 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=730s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | how to analyze so it's the time derivative of the x that's the variable over which i'm optimizing is simply equal to minus the gradient of the hamiltonian or the loss function plus a term that corresponds to weight decay or spherical constraint it would be called in physics plus noise that either is there and has a covariance proportional to a constant that is called temperature in physics and if | 756 | 784 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=756s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | that constant t is equal to 1 then this is the langevin algorithm that is guaranteed at exponentially large times to sample the boltzmann measure and to solve the problem optimally but we will not be looking at exponentially large times because that's intractable our question will be what happens at tractable times so that will be constant or constant times logarithm of | 784 | 807 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=784s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | the dimension or something of that kind so that you know we can wait for such a long time and then if we simply don't put this additional noise there so this constant t is zero then this is the gradient flow so so going you know how how that model is solvable so in statistical physics of disordered systems this this works cited here is very well known it's basically the reference work | 807 | 834 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=807s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | that we have in physics to understand what's going on in materials such as structural glasses and it so happens that this work actually looked at a model very much related to the one we are studying here it's exactly the same one except that it didn't have this ground truth vector x star so it's exactly the same loss function but the tensor and the matrix are created | 834 | 859 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=834s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | without this ground truth planted in but that's you know that's a complication of the model that can be worked out and this theory from this paper can be generalized and this is what we did i will not be going into details of the derivation that would be very lengthy but if you are interested in the details actually just two months ago there | 859 | 884 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=859s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | has been a wonderful lecture by my co-author francesco urbani that you can watch at your leisure on the website so this dynamical mean field theory that describes in a closed form what the gradient flow or the langevin algorithm i have the two versions here is doing is a set of equations that close on three parameters this function c of two times that is a | 884 | 909 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=884s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | correlation function this function c bar of one time that is a correlation of where the gradient flow is at the given time and the ground truth vector x star and a so called response function r of again two parameters and in the limit when the size of the system goes to infinity these functions in the algorithm evolve following this set of pretty ugly looking equations but the | 909 | 937 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=909s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | kind of important thing here is that we started with a high dimensional problem the n corresponds to the dimension was very large and the closed equations that we wrote they are just on scalar variables these functions corresponding to two times but the dimension is not there anymore so we described the complicated high dimensional dynamics with the effective | 937 | 960 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=937s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | set of equations that are just scalar equations and so since they are you know scalar they're also simple to solve so we can plug them into a computer program and solve them and yeah i will be going through several open problems during the talk so the first one of them is you know to prove that the dynamics gradient flow and langevin dynamics in this model | 960 | 984 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=960s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | indeed follow these equations and there has been a related work in the past where you know this proof has been done but again for the version where there is not the spike so the equations are not exactly the same so this is something that is quite probably not so complicated to generalize these proofs to include the spike but it hasn't been done yet so i will not be | 984 | 1,010 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=984s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | talking about that instead i will be talking about what happens if we solve these equations what insight can we get about the behavior of this optimization problem so this is depicted here so as a function of the iteration time i am plotting the correlation with the ground truth and i start randomly so at the beginning it's zero and then it is growing eventually or not but | 1,010 | 1,035 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1010s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | here it is growing and depending on the value of the noise so there's the delta p here so darker line here is larger delta p so larger noise is harder and indeed you are seeing that when it goes up the value at which it saturates is lower for the larger noise so that's intuitive should be lower correlation because it's higher noise so it's a harder problem but what's not intuitive is that it | 1,035 | 1,062 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1035s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | actually for larger noise the correlation the good correlation with the ground truth is attained earlier whereas for smaller values of the noise it takes longer to find it so this is non-intuitive nevertheless this is what is happening here that's the property of the langevin algorithm in this problem and in the inset i'm just comparing to the very same lines for the approximate | 1,062 | 1,086 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1062s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | message passing algorithm which is another iterative but not gradient based algorithm that one behaves in the intuitive way the easier ones get there earlier but not the langevin so if i collect this information i can actually extrapolate the value of the noise at which the time to get a good correlation would diverge and if i plot in the phase diagram i showed before | 1,086 | 1,113 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1086s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | where this happens i actually get that the easy regime that is easy for the other algorithms say approximate message passing has actually a part the one that is colored orange green here that is hard for the langevin algorithm where the langevin algorithm you run it for a time that is proportional to the dimension maybe with some polylog factors and it's still | 1,113 | 1,138 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1113s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | stuck at completely zero correlation with the ground truth and then if you are above this line where it is really only green then it reaches the optimal correlation so you can do exactly the same thing for the gradient flow and you will get you know another curve in this phase diagram which is a bit higher so the fact that it is higher is expected because this is a high | 1,138 | 1,164 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1138s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | dimensional problem with a lot of noise the maximum likelihood estimator here is not optimal the optimal one is the one that samples the posterior so in a sense by running the gradient flow we are aiming to solve the wrong problem so no wonder that we do a bit worse so that's not surprising but you know it's it's a non-trivial curve in this diagram so can we explain it can we kind | 1,164 | 1,187 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1164s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | of understand intuitively where it comes from so kind of the popular explanation of why for some parameters the gradient flow would be working and why for others it will not be working will be kind of this this cartoon with spurious local minima that either are there or are not there since there are no spurious local minima then the gradient flow has basically no | 1,187 | 1,211 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1187s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | choice than to good for the good one then to go for the good one and if they are spurious local minima then then it's a high dimensional problem there will typically be exponentially many of them so the intuition is that it will just fall into one of the exponentially many and not the good one so in this model this is actually the model is so kind of basic that we | 1,211 | 1,233 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1211s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | can we have access to actually counting exactly how many minima there are at a given value of the energy of the loss and this is done by the so-called kac-rice approach and so here again i'm not giving the derivation here just the resulting formula that is telling us you know that the entropy is as always when counting a number of something that is exponentially numerous the | 1,233 | 1,260 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1233s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | logarithm divided by the size of the system and its number of what it's number of the local minima that have a given correlation with the ground truth as the parameter m at the given value of the loss corresponding to the matrix e2 and corresponding to the tensor e and again this is a result you know resulting from a series of works where these kind of methods were developed | 1,260 | 1,286 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1260s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | so here what i'm showing you is the annealed entropy that is the expectation of the number of those minima but actually at zero correlation with the ground truth this is also the quenched one so it is also the expectation of the logarithm so we actually know when there are and when there are not spurious local minima and if we collect it from this formula | 1,286 | 1,308 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1286s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | back to my phase diagram we are getting the purple line here so the purple line means that above it with high probability the only minimum that is there is the one that correlates with the signal and below it there are exponentially many spurious local minima not correlating with the signal and yet you see that these are not the same lines as the one starting from which the | 1,308 | 1,335 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1308s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | gradient flow is working so there is a region between the purple and the green line where there are exponentially many spurious local minima with no correlation to the signal yet the gradient flow happily manages to ignore them and finds the good one so how is this possible so to understand how this is possible in this model we need to dig a little bit more into what is | 1,335 | 1,358 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1335s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | happening with the algorithm and we actually can look at the at the following plot that is showing us how does the loss function the e on the y axis change as we iterate as a function of the iteration time t and we find out that either for a high value of the noises the dynamics is stuck at some value of the loss that seems to be you know pretty flat starting from sometime about | 1,358 | 1,389 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1358s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | 100 to 100 here or it's actually stuck at that value but then somehow escapes from it and reaches good correlation with the signal that is the dashed line that's the magnetization and when we actually investigate whether that value at which it is stuck corresponds to something we find that yes that it interestingly corresponds to the value of the loss that it would reach if the signal was | 1,389 | 1,416 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1389s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | not at all there if the x star was not in the model so just the non-planted model and this is a value of energy that was studied a lot in physics that has a name that's the threshold energy and studying the non-planted system we actually can compute that value so here we make a hypothesis we say okay let's assume that the dynamics goes to | 1,416 | 1,443 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1416s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | this threshold energy and then what matters is whether the minima that lie at the typical ones that lie at that energy not lower one not higher one that one whether those are stable or not towards the signal and that stability decides whether you get you stay there or whether you go to gain some correlation with the signal and what i'm saying in words here we can | 1,443 | 1,468 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1443s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | actually put into equations so here the first equation is whereabouts the threshold states are and the second equation is telling us you know derived both from the kac-rice approach and directly also from the dynamical mean field theory but again the details are not shown here is the condition for the lowest eigenvalue of the corresponding hessian of the minima having an | 1,468 | 1,496 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1468s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | eigenvector that points towards the signal or does not so if i put these two together i actually get the third expression here a conjecture for where is the line above which the gradient descent or langevin dynamics depending on what the parameter t is here will work and so this leads me to the following conjecture you know the conjecture is that gradient | 1,496 | 1,520 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1496s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | flow with random initialization finds you know in time that is it finds the optimal correlation with the signal in time that is proportional to the to the input size which is n to the power p times some polynomial of log n finds the optimal solution and if the noise is bigger than that then it does not and if i plot this expression into so so again open problem prove this conjecture | 1,520 | 1,550 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1520s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | and if i plot this expression into my phase diagram that is the blue line you see that that one is perfectly agreeing with the points that i got previously by numerically solving this integro-differential dynamical mean field equation so this seems to be explaining whether or not the gradient flow works and i can do exactly the same thing just plug t equal one instead of zero in | 1,550 | 1,575 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1550s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | the same expression and get the same result for the langevin algorithm that has an interesting point that actually the line that corresponds to this threshold reaches the line lambda two equal one sorry delta two equal one at delta p equal to two so there is a tricritical point so if delta p is bigger than two there is no langevin hard phase anymore | 1,575 | 1,604 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1575s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | but maybe let's go back to this popular explanation what was wrong with that you know absence or presence of the spurious local minima at least in this particular model the correct explanation based on the landscape and kind of intuition about how the landscape looks like is the following one it's not the presence or absence of spurious local minima nor their number | 1,604 | 1,627 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1604s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | what it really is is the fact that the dynamics goes to the highest lying minima that happen to be the threshold states and what decides whether it finds the solution or not is whether these high-lying states have a negative direction in the hessian towards the solution or not and if they do then the algorithm goes to the solution even though there still may be | 1,627 | 1,656 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1627s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | exponentially many spurious local minima at lower energy so they're not really spurious because the gradient flow just never ever sees them with probability that is one up to some exponentially small factor so here i should be about a middle of the talk and i want to conclude about the spiked matrix-tensor model so i showed you you know this is i think the first time that we have a closed form | 1,656 | 1,683 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1656s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | conjecture for the threshold of what a gradient-based algorithm is able to do including the constant in a high-dimensional non-convex inference problem and the question would be you know can we apply the same methodology to something that looks more like a supervised neural network a simple one we also show that the gradient flow is worse than the langevin algorithm that itself is expected but | 1,683 | 1,708 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1683s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | they are both worse than the approximate message passing there is quite a considerable gap so is there some generic kind of a strategy that we that we can make them work as well as the approximate message passing or at least closer so that would be question related to the second point and question really and the third point is i showed you that gradient flow | 1,708 | 1,728 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1708s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | sometimes works even when the spurious local minima are present we showed that using the kac-rice approach but what about stochastic gradient descent so far i was only talking about gradient not stochastic so let's say a year ago i would have stopped here and said that we don't know the green questions would be open but today actually i do have an answer to | 1,728 | 1,749 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1728s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | each of them so i will start with the first one so is the same methodology applicable to some simple neural networks and in statistical physics when we kind of set up a model for a data so that you can keep track of constants and and not only rates and finite sample complexity the kind of popular model in which something like that can be done at least in simple neural | 1,749 | 1,778 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1749s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | networks is this teacher student setting so now i'm switching the model no more spiked matrix-tensor model in the talk now i'm going towards these teacher student neural networks where at the input i put iid data not only iid from sample to sample but also the components of every sample are iid so that's of course you know not what real data look like but that's part of the simplifying | 1,778 | 1,804 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1778s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | assumptions here then i take a neural network like for instance the one here i generate the weights of the neural network in some again random way i let this teacher neural network generate the labels y using those ground truth weights w star and then i hide the w stars i never show to the student network the w stars i just show to the student network as traditionally | 1,804 | 1,829 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1804s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | the set of samples x and y and i will have n samples each sample will live in dimension p so before p was the order of the tensor now p will be the dimension till the end of the talk and then the student may or may not not know the architecture of the teacher network i will actually be telling you about both cases in this talk and the question is what is the | 1,829 | 1,854 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1829s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | generalization error that the gradient descent is reaching depending on the number of samples that it got from the teacher so this is a setting of a neural network that have been studied in physics for 30 years kind of the most common example would be this teacher student perceptron where the nonlinearity that the teacher is using is just a sign but with just a sign and no | 1,854 | 1,878 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1854s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | constraints on the weights w this becomes a convex optimization problem so today we are interested in intrinsically non-convex optimization problems so in order to make it more interesting and intrinsically non-convex we will actually be looking today at the phase retrieval where the teacher instead of using a sign uses an absolute value on the scalar product of the samples and the ground | 1,878 | 1,903 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1878s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | truth or the teacher weights w star so the labels here will not be just binary we will be looking at this regression problem the data will be generated as is written here the input is gaussian the labels are obtained as the absolute value of the scalar product and the neural network then sees the set of samples and tries to regress the y on the x | 1,903 | 1,932 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1903s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | and so what do we know again again without yet talking about gradient descent what do we know about this problem information theoretically and in terms of the approximate message passing that also here is conjecture to be made the best of the of the polynomial algorithms so here i am showing you the the mean square error of recovering the w star which is you know very related to the | 1,932 | 1,958 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1932s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | generalization error just like this plot as a function of the alpha which is the ratio between the number of samples and the dimension and both number of samples and dimension are large i'm in the high dimensional limit and the ratio is some small constant here you know between 0.3 and 1.2 so information theoretically the generalization error can be zero start as long as soon as you have | 1,958 | 1,984 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1958s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | more samples than is the dimension in this problem that corresponds to the orange line now algorithmically you need slightly more samples than the dimension about 13 percent more for the approximate message passing to work and be able to generalize perfectly in this problem so now we will be looking at what gradient descent does and how it compares | 1,984 | 2,009 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=1984s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | to this so gradient descent on which loss which loss function so corresponding to the phase retrieval the natural loss function is the one i write here that would correspond in a sense to a neural network with no hidden variable or one hidden variable with quadratic activation function so you know i just square the labels and instead of putting absolute | 2,009 | 2,033 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2009s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | value i put a square here and then i'm looking at the the performance of the gradient flow so just to you know set up the stage of what is known and what kind of we can expect so as i said if ignoring gradient descent we know that starting from one the problem is solvable information theoretically and starting from 1.13 by some very adapted algorithm to this problem for the | 2,033 | 2,057 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2033s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | gradient flow what we know is this work that popped up here that rigorously shows that randomly initialized gradient descent will need the dimension times some polynomial of the log of the dimension samples in order to be able to solve the problem so there is quite a big gap between 1.13 and an alpha that is some polynomial of the logarithm of | 2,057 | 2,086 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2057s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | the dimension so as a physicist we always try to look numerically at what's actually going on so numerically if we are looking at what's the fraction of success of gradient descent in terms of solving this problem as the dimension is growing so here the capital n is actually what i call the p it is the dimension so we are seeing that at alpha that is around six or seven it's already solving | 2,086 | 2,110 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2086s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | the problem almost always so can we understand that a bit more theoretically not just running the gradient descent that's of course nice but that's not quite satisfactory so we take lessons from the spiked 2+p spin model that i showed you and we kind of ask ourselves okay could it be happening similarly as there could it be that the gradient flow first | 2,110 | 2,134 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2110s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | goes to the threshold states and then what matters is a kind of bbp-like transition of the hessian of these threshold states that drives the success versus failure and we just test numerically whether this looks true and it actually does in the sense that if we look at the non-planted phase retrieval that would be the right hand side here that defines the value of the | 2,134 | 2,157 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2134s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | loss function that i call the threshold value and then if you look at the dynamics of the gradient flow in the planted version we see that it's quite possible that it's again going to the threshold and then away from it or not or staying stuck there so we again hypothesize that this is actually the mechanism and put it into equations this time the equations are slightly more complicated | 2,157 | 2,180 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2157s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | but we can still do it actually using recent random matrix theory results from the works of yue lu and his collaborator li and also the fact that the threshold states are marginal meaning that the lowest eigenvalue corresponding to them is stuck to zero and if we combine this it gives us an expression of what should be the threshold above which the gradient descent | 2,180 | 2,206 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2180s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | works as a function of this probability distribution of the true labels y and the labels y hat that the gradient descent is currently estimating so that probability distribution is still something pretty not non-intuitive to capture but in the within the within the theory of one-step replica symmetry breaking that is again one of the methods coming from statistical physics | 2,206 | 2,233 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2206s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | we actually can estimate this joint probability distribution between the true label and the label that is currently estimated by the gradient descent and this is shown in this picture so on the left hand side i actually show the value of the threshold energy of the threshold loss as it comes from simulations that is the purple line and as it comes | 2,233 | 2,260 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2233s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | from this one-rsb theory they are not exactly equal here the conjecture is not that this is exact but they are close so we use this as an approximation on the right hand side i am showing again numerically obtained moments of the distribution on which the formula depends and the moment as computed from the one-rsb theory and the agreement is pretty good so when we put | 2,260 | 2,288 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2260s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | these things together you know ignoring the little differences this actually leads us to an estimation of the gradient decent threshold that is about 13.8 so if i put it back onto this axis i showed you that the numerics this constant starting from which gradient descent is working looks like 7 from approximated theory we get something like 13.8 so we are not sure where the discrepancy | 2,288 | 2,317 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2288s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | comes from whether it is finite size effects and the numerics would actually converge to the 13.8 or whether it is the small difference between the exact result and the one-rsb approximation so both is possible but what is nevertheless clear is that it seems that it's a constant the polylog of p is not needed so here is another open problem prove | 2,317 | 2,341 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2317s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | that you know any constant times p is actually a sufficient number of samples for randomly initialized gradient descent to solve phase retrieval in time that is p times times some polylog p so in the time the polylog p is not avoidable because otherwise you're just stuck at kind of zero correlation but in the number of samples the conjecture from from our work is that it should be | 2,341 | 2,368 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2341s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | avoidable and that the true constant is somewhere around 7 or 13. but what about the gap between the performance of the approximate message passing and of the gradient descent there is you know still a big difference between say one and ten so can we somehow close that gap can we do something generic that would diminish that gap so that's the question for the next | 2,368 | 2,396 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2368s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | few slides and that that corresponds to to this you know when i was concluding about the spike matrix model that was the second input so that's the second point to which we are doing and surprisingly or not we will do that by over parametrization so let's still look at the phase retrieval so the problem the regression problem we are trying to solve here is still the same | 2,396 | 2,419 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2396s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | so phase retrieval with random gaussian data and the teacher coming from a gaussian and generating the labels this didn't change but what changes now is the loss function so now the loss function that i will be considering doesn't correspond anymore to the simplest neural network with no hidden unit or one hidden unit that's the same now the neural network will have m | 2,419 | 2,444 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2419s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | hidden units and i will be working in a regime where the number of hidden units is bigger than the dimension p so this is the over parametrized two-layer neural network i will be optimizing over the weights of the first layer this matrix w and the second layer will be fixed the weights of the second layer will be fixed to one over m or to one and i use the scaling one over | 2,444 | 2,469 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2444s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | m so i'm not really learning here the second layer but the conjecture kind of is that that wouldn't change much the overall message of this so again i'm just running gradient flow on this loss function with a random initialization so how does this behave so this is a wide over parameterized two-layer neural network does this solve the phase | 2,469 | 2,492 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2469s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | retrieval or not and this is from a paper that came out in june with a colleague from nyu eric vanden-eijnden and the same student stefano sarao mannelli where in two theorems we kind of provide some answer to this question so the first theorem is purely geometric it is telling us that if you are looking at the loss function as i just defined | 2,492 | 2,523 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2492s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | it then if alpha is so alpha again was the ratio between the number of samples and the dimension so if alpha is smaller than 2 then this loss function has many spurious minima and if alpha is bigger than two then the probability that the only local minimum that is there corresponds to the ground truth that would be this a star that is just the teacher vector times its transpose is | 2,523 | 2,551 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2523s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | actually the only one so we believe that this is actually with probability one but what we could prove is this is only with positive probability but there is something clearly happening about the threshold alpha equal to and when we and this is purely geometric no gradient descent yet but when we put this together with our second theorem about the gradient descent | 2,551 | 2,573 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2551s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | that tells us that in terms of this parameter a that is the weight matrix times its transpose the gradient descent always goes to a global minimum then putting these two together actually if there is only one global minimum corresponding to the ground truth with finite probability well then the gradient descent also goes there so this means that the gradient descent | 2,573 | 2,598 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2573s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | solves this problem by optimizing this loss function corresponding to the over parameterized neural network starting from alpha equal to two and here is just a little plot that shows that just running gradient descent numerically on relatively small systems is pretty consistent with that result so if i put that back onto the axis of alpha i obtain that by using over | 2,598 | 2,628 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2598s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | parameterized neural network i can push down the threshold starting from which the gradient flow is working down to two so not yet to the 1.13 of amp but much lower than if i was not over parameterizing so the conclusion here is that over parameterized neural networks need fewer samples and this is a quantification of how much fewer samples in this particular model | 2,628 | 2,655 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2628s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | and the open problem would be and i really don't know the answer like is there a neural network architecture maybe if you make it deeper or over parametrize it differently for which the plain randomly initialized gradient descent would just need less than alpha equal 2 so less than 2p samples so i think that's an interesting kind of concrete question for this particular model | 2,655 | 2,679 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2655s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | and i might have time for the third point stop me if i don't but i wanted to mention the third point about what can we say about the stochastic gradient descent so far i was only talking about gradient descent or more precisely gradient flow because i was always considering the continuous time version because that's the one that is easier to analyze so what about stochastic gradient descent so | 2,679 | 2,707 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2679s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | first of all a reminder of what's stochastic it's you know the same thing but we are taking the samples one by one so when we say stochastic gradient descent in the literature we mean usually one of the two following things so either we mean the online stochastic gradient descent where each iteration uses a fresh sample and never uses a sample that was ever seen before | 2,707 | 2,731 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2707s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | and i like to call it online stochastic gradient descent that one is simpler to analyze but it's also less interesting because it minimizes directly the population loss and there is no notion of the generalization gap the training and test are the same so a lot of the mysteries about how come the train error can be so much smaller than the test error that we are kind of asking in deep | 2,731 | 2,752 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2731s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | learning cannot really be answered by looking at the online stochastic gradient descent it's also not used in practice what's used in practice is multipass stochastic gradient descent where we use one or a few samples at a time but we reuse the samples many times and this is much harder to analyze this has much less kind of existing theory but that's the one we want to look at | 2,752 | 2,774 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2752s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | because it's used in practice and it can access you know a non-trivial generalization gap so can we do that so first of all the first step about which i didn't even talk before because it was obvious how gradient descent in the limit of small learning rate becomes gradient flow for the stochastic gradient descent it is not so clear what is the limit of the infinitely | 2,774 | 2,796 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2774s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | small learning rate that actually is well defined so so i'm just explaining it on this slide so if i define stochastic gradient descent using this variable s of t that would be you know one for some samples and zero for some other samples so if i do what is usually done in stochastic gradient descent is that every at every time step i randomly choose who is in the batch and | 2,796 | 2,821 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2796s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | who is not in the batch well then this doesn't have a well-defined limit of the learning rate going to zero it doesn't really have a gradient flow limit so this is not so nice for the dynamical mean field theory so we instead define a slightly different version of stochastic gradient descent we call persistent stochastic gradient descent where we as before have some fraction of | 2,821 | 2,846 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2821s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | samples that are in the batch but instead of reshuffling the batch at every time step randomly we actually decide at each time step whether we keep or not the the sample in the batch and we we keep the we keep that sample with some typical time that we call here the persistence time following the rule that is written here so if we do it this way then we can take | 2,846 | 2,869 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2846s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | the limit of the learning rate to zero and it has actually a well-defined stochastic gradient flow limit so that is the dynamics that i will be analyzing on a model that here will be slightly different so it's not the phase retrieval the model on which we will be analyzing this it's just a gaussian mixture a supervised | 2,869 | 2,896 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2869s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | learning of a gaussian mixture so in the two cluster case that is on the left here i have two clusters one cluster is plus labels plus one cluster is labels minus and i'm trying to separate them so that's very simple i can just imagine there is some hyperplane in the middle and separating them but the clusters are really noisy so i'm in a regime where i will not be | 2,896 | 2,918 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2896s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | able to separate perfectly and yet so so this will even lead to a convex problem so that's more for kind of a comparison but the one that will be interestingly non-convex is the three cluster case where i have three gaussian clusters two on the periphery and one in the middle but the two on the periphery they have the same label so this is a data set that is no longer | 2,918 | 2,941 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2918s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | linearly separable so actually to be able to to to to have some meaningful learning the loss function that i will be using for these three clusters is is actually is actually kind of you know using the the structure of the data set and i will be doing logistic regression but not directly on the data points but on the you know as specified here on this c c my | 2,941 | 2,967 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2941s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | mu okay but whether it is the two cluster case or the three cluster case how do we describe the full trajectory of the gradient descent the complexity is really not so much helping us to describe the full trajectory so this will be again done with the dynamical mean field theory but this time with a little bit more advanced version of it because for the perceptron | 2,967 | 2,991 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2967s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | case the simple one that we used before is not quite working the equations do not close so simply but the spirit is the same here we start with this high dimensional markovian dynamics of a strongly correlated system and the dynamical mean field theory maps it into a non-markovian dynamics so dynamics with memory but of one single degree of freedom | 2,991 | 3,019 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=2991s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
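
The early segments of the transcript describe the dynamics dx/dt = -grad H(x) - mu(t) x + noise, where the temperature of the noise is T = 1 for the langevin algorithm and T = 0 for the gradient flow, and mu(t) enforces the spherical constraint. The following is a minimal sketch of how one could simulate that equation; it keeps only the spiked-matrix part of the loss, and the normalization, step size, and noise levels are illustrative assumptions rather than the exact conventions used in the talk.

```python
# Minimal sketch (not the speaker's code): Euler-Maruyama integration of
#   dx/dt = -grad H(x) - mu(t) x + sqrt(2 T) eta(t),   |x|^2 = N,
# with T = 1 -> Langevin algorithm, T = 0 -> gradient flow.
import numpy as np

rng = np.random.default_rng(0)

N = 500          # dimension (illustrative)
delta2 = 0.5     # matrix noise variance (illustrative)
T = 1.0          # temperature: 1.0 -> Langevin, 0.0 -> gradient flow
dt = 0.01        # integration step
steps = 5_000

# ground truth on the sphere |x*|^2 = N and a noisy rank-one matrix observation
x_star = rng.standard_normal(N)
x_star *= np.sqrt(N) / np.linalg.norm(x_star)
G = rng.standard_normal((N, N))
Y = np.outer(x_star, x_star) / np.sqrt(N) + np.sqrt(delta2) * (G + G.T) / np.sqrt(2)

def grad_H(x):
    # gradient of H(x) = -(1 / (2 * delta2 * sqrt(N))) * x^T Y x  (matrix term only)
    return -Y @ x / (delta2 * np.sqrt(N))

x = rng.standard_normal(N)
x *= np.sqrt(N) / np.linalg.norm(x)            # random initialization on the sphere

for _ in range(steps):
    g = grad_H(x)
    mu = -np.dot(g, x) / N                     # Lagrange multiplier for the spherical constraint
    x += dt * (-g - mu * x)
    if T > 0:
        x += np.sqrt(2 * T * dt) * rng.standard_normal(N)
    x *= np.sqrt(N) / np.linalg.norm(x)        # project back onto the sphere

print(f"overlap with x*: {abs(np.dot(x, x_star)) / N:.3f}")
```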
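
The phase retrieval part of the talk runs gradient descent on a squared loss for teacher-student data y = |<w*, x>| with iid gaussian inputs. The sketch below uses an assumed standard form of that "squared labels" loss with plain full-batch gradient descent; the input scaling, learning rate, and the choice alpha = 7 (echoing the numerics mentioned in the transcript) are illustrative, not the speaker's exact setup.

```python
# Minimal sketch of teacher-student phase retrieval with gradient descent
# on L(w) = 1/(4n) * sum_mu ((<w, x_mu>)^2 - y_mu^2)^2  (assumed loss form).
import numpy as np

rng = np.random.default_rng(1)

p = 200                                   # dimension (p in the talk)
alpha = 7.0                               # samples per dimension (illustrative)
n = int(alpha * p)

w_star = rng.standard_normal(p) / np.sqrt(p)   # teacher weights, illustrative normalization
X = rng.standard_normal((n, p))                # iid gaussian inputs
y = np.abs(X @ w_star)                         # labels y_mu = |<w*, x_mu>|

def loss_and_grad(w):
    z = X @ w
    r = z ** 2 - y ** 2
    return 0.25 * np.mean(r ** 2), X.T @ (r * z) / n

w = rng.standard_normal(p) / np.sqrt(p)        # random initialization
lr = 0.01                                      # illustrative step size; may need tuning
for _ in range(5_000):
    loss, grad = loss_and_grad(w)
    w -= lr * grad

# the global sign of w is not identifiable since the labels only see |<w*, x>|
overlap = abs(w @ w_star) / (np.linalg.norm(w) * np.linalg.norm(w_star))
print(f"final loss {loss:.3e}, |cos(w, w*)| = {overlap:.3f}")
```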
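
The persistent-SGD idea at the end of the transcript replaces the batch variable s_mu(t), normally reshuffled at every step, with one that is refreshed only on a persistence time tau, so that a small-learning-rate (gradient flow) limit exists. The snippet below is one simple way to implement such a membership rule; the exact refresh rule used by the authors may differ, and all names and parameter values are assumptions for illustration.

```python
# Minimal sketch of a persistent batch-membership process s_mu(t) in {0, 1}:
# each sample's status is refreshed only with probability dt / tau per step,
# so it stays in (or out of) the batch for a time ~ tau on average.
import numpy as np

rng = np.random.default_rng(2)

n = 1000       # number of samples
b = 0.2        # average fraction of samples in the batch
tau = 5.0      # persistence time
dt = 0.1       # time step of the discretized dynamics

s = (rng.random(n) < b).astype(float)     # initial batch membership

def persistent_batch_step(s):
    # refreshed samples are put in the batch with probability b,
    # so the mean fraction in the batch stays close to b
    refresh = rng.random(n) < dt / tau
    s = s.copy()
    s[refresh] = (rng.random(refresh.sum()) < b).astype(float)
    return s

# inside an SGD loop one would mask the per-sample gradients with s, e.g.
#   grad = X.T @ (s * per_sample_residual) / (b * n)
for _ in range(1000):
    s = persistent_batch_step(s)
print(f"fraction in batch after 1000 steps: {s.mean():.3f} (target {b})")
```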