Columns: video_id (string, length 11) · text (string, 361–490 chars) · start_second (int64, 0–11.3k) · end_second (int64, 18–11.3k) · url (string, 48–52 chars) · title (string, 0–100 chars) · thumbnail (string, 0–52 chars)
H5vpBCLo74U
lot of these same NLP tasks. They achieve state-of-the-art results on 18 of the 20 tasks they test, I believe, and they outperform BERT on all 20, setting new state of the art on 18, including tasks like question answering, natural language inference, sentiment analysis, and so on. Those are remarkable results, and even more remarkable is that the architecture of the network is actually
27
55
https://www.youtube.com/watch?v=H5vpBCLo74U&t=27s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
very, very similar to BERT; the new contribution is a different pre-training procedure, and we'll look into that. So let's jump into their main points straight away. What they point out is that there are two kinds of currently used pre-training methods for these NLP tasks, and both can be understood as a kind of language modeling. Language modeling, for those of you who don't
55
87
https://www.youtube.com/watch?v=H5vpBCLo74U&t=55s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
know, is predicting the next word in a sequence. So if I give you the sequence here, "unsupervised representation learning has been", and then ask you what's next, you're supposed to say "highly". That's language modeling in a nutshell. What they differentiate are two kinds of language modeling: the first one, they say, is autoregressive language modeling,
87
115
https://www.youtube.com/watch?v=H5vpBCLo74U&t=87s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
and what autoregressive language modeling does is exactly what we've just looked at: I give you "unsupervised representation learning has been", you're supposed to predict "highly", and in the next step I give you "unsupervised representation learning has been highly" and you're supposed to predict "successful", and so on. In each next step I'm going to give you the entire sentence up until
115
137
https://www.youtube.com/watch?v=H5vpBCLo74U&t=115s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
here, and you're supposed to predict the next word. It's autoregressive because each token can look at the previous ones in the sequence: when you predict, you can always look back autoregressively at what the previous tokens were, including what you've previously predicted. Of course, during training this is teacher-forced, as I said: you put the actual words in.
137
168
https://www.youtube.com/watch?v=H5vpBCLo74U&t=137s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
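As an aside, here is a minimal sketch of teacher-forced autoregressive training on the video's example sentence. The bigram counter is a toy stand-in for a neural language model, purely an assumption for illustration; nothing here is from the paper.

from collections import defaultdict

tokens = "unsupervised representation learning has been highly successful".split()

# Toy "model": bigram counts standing in for a neural language model.
counts = defaultdict(lambda: defaultdict(int))

# Teacher forcing: at every step the *ground-truth* prefix is the input,
# regardless of what the model would have predicted at earlier steps.
for t in range(1, len(tokens)):
    prefix, target = tokens[:t], tokens[t]
    counts[prefix[-1]][target] += 1  # "train" on (prefix -> target)

# At inference time, predictions feed back in autoregressively instead.
context = ["has"]
next_token = max(counts[context[-1]], key=counts[context[-1]].get)
print(next_token)  # -> "been"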
H5vpBCLo74U
This is autoregressive modeling, in contrast to what they call autoencoding, and autoencoding is what BERT does. It works as follows: let's say I have the same sequence, "unsupervised representation learning has been highly successful in the domain of" something, and I say: okay, I give you the sequence, but I am going to
168
201
https://www.youtube.com/watch?v=H5vpBCLo74U&t=168s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
delete this word and this word, and now I ask you to predict those two. You can see the task is slightly different, as you now have access to basically all of the sequence except the tokens you are asked to predict; but you're asked to predict them not in any order, you're asked to predict them at the same time: at the same time, you're asked to predict this word and this word.
201
229
https://www.youtube.com/watch?v=H5vpBCLo74U&t=201s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
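A minimal sketch of this BERT-style input corruption, assuming a 15% masking rate; the token strings and the "[MASK]" symbol are toy stand-ins, not BERT's actual vocabulary or implementation.

import random

MASK = "[MASK]"
tokens = "unsupervised representation learning has been highly successful".split()

random.seed(0)
n_mask = max(1, int(0.15 * len(tokens)))
positions = random.sample(range(len(tokens)), n_mask)

corrupted = [MASK if i in positions else tok for i, tok in enumerate(tokens)]
targets = {i: tokens[i] for i in positions}

print(corrupted)  # model input: all masked positions predicted simultaneously
print(targets)    # training targets, predicted independently of each other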
H5vpBCLo74U
The first kind, autoregressive language modeling, was used by transformer models until BERT, and then BERT really pushed this autoencoding language-model pre-training, which made it so successful. Now this paper, XLNet, wants to combine the best of both of them. In order to understand what the
229
266
https://www.youtube.com/watch?v=H5vpBCLo74U&t=229s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
best of both of them is: what's good about BERT we've already seen, it can draw information from all of the context around the words it's trying to predict. But what is the pitfall of BERT? They actually put this really nicely in an example they give way further down, where they compare with BERT (I don't know why that isn't also in the introduction). Here they
266
294
https://www.youtube.com/watch?v=H5vpBCLo74U&t=266s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
have the sentence "New York is a city", and you're asked to predict the first two words. You can now compare BERT to what XLNet does. The context is "is a city" and you're asked to predict "New York". What BERT does is simply mask out the two words and say: here, please fill in these two words. This translates to the objective being separated in
294
329
https://www.youtube.com/watch?v=H5vpBCLo74U&t=294s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
the two words, such that the prediction of "York" is completely independent of the prediction of "New". So if you know of any other city that is made of two words, for example San Francisco or Los Angeles, then those would be just as valid, and any mixture would be just as valid. You might end up with "Los York is a city", and that would be perfectly fine for BERT, because while it's predicting, "Los" is a perfectly fine prediction for the first word of a two-word city.
329
358
https://www.youtube.com/watch?v=H5vpBCLo74U&t=329s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
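In symbols, with context c = "is a city", the contrast between the two objectives looks like this:

% BERT's masked objective treats the two targets as independent:
\log p_{\text{BERT}}(\text{New}, \text{York} \mid c) \approx \log p(\text{New} \mid c) + \log p(\text{York} \mid c)
% whereas an autoregressive factorization keeps the dependence:
\log p_{\text{AR}}(\text{New}, \text{York} \mid c) = \log p(\text{New} \mid c) + \log p(\text{York} \mid \text{New}, c)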
H5vpBCLo74U
And "York" is a perfectly fine prediction for the last word of a two-word city. These are the kinds of mistakes BERT can make by not being autoregressive, by predicting all of these tokens at the same time, independently of each other. Whereas in XLNet, what they will do
358
383
https://www.youtube.com/watch?v=H5vpBCLo74U&t=358s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
is specify an order. Let's say: first I will predict the word "New" for the first position, "New something is a city", and then when I predict "York" I will actually take into account that I have previously predicted the word "New". That's the main advantage that autoregressive training has over autoencoding. Now what are the pitfalls? The pitfalls are: if you have this
383
412
https://www.youtube.com/watch?v=H5vpBCLo74U&t=383s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
sentence, let's look at it, I'll write it down: "New York is a city". If you have this sentence, let's say you're not asked to predict "New York" but rather the word "a" here, or "a city" is a better example, those two words, in autoregressive style. If you predict the word
412
447
https://www.youtube.com/watch?v=H5vpBCLo74U&t=412s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
"a", you can only ever look at what comes beforehand, whereas if BERT were to predict just the word "a", it would be able to look at everything that's not being predicted, including "city". So you see, the autoregressive model is bound to the order of the factorization of the sentence; it's bound to the order in which it has to predict the tokens. Here, if it's predicting "a",
447
477
https://www.youtube.com/watch?v=H5vpBCLo74U&t=447s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
it can only look at what comes before it, because it needs to go in order. Once it gets to "city" it can actually look at the entire sentence, but before that it only ever has partial information about the context. It's an even clearer example if we're trying to predict the two words "is" and "a": BERT would
477
507
https://www.youtube.com/watch?v=H5vpBCLo74U&t=477s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
actually have access to the word "city" here, whereas the autoregressive model only has access to the tokens before it. I hope that makes it clearer. So, the main idea in XLNet: where does this order dependence come from in the autoregressive model? The order dependence actually comes from the factorization of the sentence in the language model. In a language
507
536
https://www.youtube.com/watch?v=H5vpBCLo74U&t=507s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
model, we're trying to model the probability distribution of sentences; here x is a sentence, and this can be naturally factorized into a product over the words, where the probability of each word depends only on the words before it. This is an equality, not an approximation: the probability of a sequence can be decomposed exactly into a product of probabilities like this.
536
565
https://www.youtube.com/watch?v=H5vpBCLo74U&t=536s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
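The factorization being referenced is the exact chain rule; in symbols, with T the sequence length:

p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t \mid x_{<t}), \qquad \log p(\mathbf{x}) = \sum_{t=1}^{T} \log p(x_t \mid x_{<t})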
H5vpBCLo74U
And this is exactly what the autoregressive models implement: each word is predicted from the words before it. There are other kinds of autoregressive models that go in the other direction, where they say: okay, the probability of a sentence is a product in which each word is predicted from the words after it. But
565
595
https://www.youtube.com/watch?v=H5vpBCLo74U&t=565s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
it's the same problem: you only ever have access in one direction. However you define the order of decoding, from a given word you only ever have access to what came before it in that order. So the main idea of XLNet is to say: hey, why don't we consider all possible orderings? So let's go back to our example
595
631
https://www.youtube.com/watch?v=H5vpBCLo74U&t=595s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
here. Again, they say: why don't we consider all possible orderings? Basically, what we will do is this: if the sample "New York is a city" comes up, I can define an ordering. Let's say I always want to predict two words (BERT typically masks out about 15% of its input to be predicted; here, let's say we'll mask out 20%, which for this five-word sequence is two words). So we'll mask
631
661
https://www.youtube.com/watch?v=H5vpBCLo74U&t=631s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
out two words and ask the model to predict them; that will be our pre-training objective. The first time this sample comes up from the dataset, I might specify the order just classically: one, two, three, four, five. I'll predict the last two words, so I mask them out: I give the model "New York is" and let it predict "a", and
661
688
https://www.youtube.com/watch?v=H5vpBCLo74U&t=661s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
then in the next step I'll give it "New York is a" and let it predict "city". Cool. Now the pitfall is that the word "a" here only has access to the things before it, and not to "city"; "city" itself has access to everything. But then I continue training, and the next time this sample, "New York is a city", comes up from my dataset, I simply go for a different
688
717
https://www.youtube.com/watch?v=H5vpBCLo74U&t=688s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
order, a different permutation of the five positions, one in which the last two tokens to be predicted are "city" and "York". In the first step I give it "is a" and "New" and ask it to predict "city", and in the second step I also give it that and ask: okay, now what goes here, given all of that, given "New _ is a city"? You're asked to predict the missing word.
717
754
https://www.youtube.com/watch?v=H5vpBCLo74U&t=717s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
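A minimal sketch of these permutation-dependent prediction contexts, in the spirit of XLNet's objective but not its actual implementation: for a sampled ordering, each of the last K tokens is predicted from everything earlier in the ordering.

import random

tokens = ["New", "York", "is", "a", "city"]
K = 2  # predict the last K positions of the sampled ordering

random.seed(1)
order = list(range(len(tokens)))
random.shuffle(order)  # e.g. a permutation like [2, 3, 0, 4, 1]

for step in range(len(tokens) - K, len(tokens)):
    target = order[step]
    visible = sorted(order[:step])  # everything earlier in the ordering
    context = [tokens[i] if i in visible else "_" for i in range(len(tokens))]
    print(f"predict {tokens[target]!r} from context {' '.join(context)}")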
H5vpBCLo74U
So the first step is "New _ is a _", predict the second blank, and the second step is "New _ is a city", predict the first. Now, as you can see, while predicting "city", in this ordering we no longer have access to the word "York", so we'll have to learn to predict
754
786
https://www.youtube.com/watch?v=H5vpBCLo74U&t=754s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
"city" from the rest of the context. Even more: if we now decide on yet another ordering, the first step might be to ask: "New York _ _ city", please predict this blank here; you might train the model to predict "is". Then in the second step you say: "New York is _ city", please predict the remaining blank. Now we see that before, when we were asked to
786
828
https://www.youtube.com/watch?v=H5vpBCLo74U&t=786s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
predict the word "a" in the very first example, it only had access to the things to the left of it, but now it actually has access to the entire context. So the idea is: as we sample this data point multiple times, each time deciding on a different ordering to decode in, the prediction of each token will actually have seen many different variants of the
828
859
https://www.youtube.com/watch?v=H5vpBCLo74U&t=828s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
context, and in expectation will have seen all of the context, just like BERT, but will always have done it in an autoregressive way. So you get all the advantages of being autoregressive, namely that you are able to decode step by step while always conditioning on everything before you in the ordering, so the predictions are not independent; but you also get the
859
888
https://www.youtube.com/watch?v=H5vpBCLo74U&t=859s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
benefit of BERT, namely being able, in expectation, to look at all of the rest of the context in order to make each prediction. This is the main idea of XLNet. They formalize it (jumping up again): the autoregressive model factorizes the log probability of a sentence into this sum, the product becoming, in the log, a sum
888
919
https://www.youtube.com/watch?v=H5vpBCLo74U&t=888s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
of log probabilities of the words, each conditioned on everything in front of it. What BERT does, in contrast, is approximately factorize the log probability into a sum over each masked word conditioned on its context, i.e., everything that's not masked, and this is only an approximate factorization because you're
919
954
https://www.youtube.com/watch?v=H5vpBCLo74U&t=919s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
basically dropping all the masked tokens. What XLNet does now is the same as the autoregressive models: they decompose the log probability into a sum of log probabilities over each of the words given all the words before it, but now not before it in the sequence; before it in a chosen permutation z, where z is sampled uniformly from the set of all possible permutations.
954
989
https://www.youtube.com/watch?v=H5vpBCLo74U&t=954s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
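In symbols, with \mathcal{Z}_T the set of all permutations of length T, the permutation language modeling objective reads:

\max_{\theta}\; \mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T} \left[ \sum_{t=1}^{T} \log p_{\theta}\!\left( x_{z_t} \mid \mathbf{x}_{z_{<t}} \right) \right]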
H5vpBCLo74U
So in expectation they'll see all of the context. This is the main idea. They show it in a picture: here is the neural network, this is the input layer, these are the hidden layers as the attention layers go up, and at the top you're asked to predict the token, in this case always x3. So there is never
989
1,022
https://www.youtube.com/watch?v=H5vpBCLo74U&t=989s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
going to be any weight here from x3 itself, since if you knew x3 you would trivially be able to predict x3. In the first example, the factorization order chosen at random is 3, 2, 4, 1. You're asked to predict x3, and we know we should only do this using things that come before it in the permutation order. Since x3 is first in the permutation order,
1,022
1,053
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1022s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
we actually don't have anything to go on; we're basically asked to predict x3 from scratch, as if it were the start of a sentence. We basically tell the model: I have a sentence, please predict the third word. It's a hard task. (By the way, you're always allowed to look at this memory block here; don't worry about it for now, it's an augmentation they do
1,053
1,084
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1053s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
on top of their core idea.) Now the second time this sample comes up from the training set, we decide on a different order, here 2, 4, 3, 1. Again we're asked to predict x3, and we're allowed to look at everything before it in the order, so x2 and x4. As you see, there are weights from x2 and x4 into the column that is finally asked to
1,084
1,112
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1084s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
predict x3. This is now an easier task: you're allowed to look at the word to the left and the word to the right. If you have the permutation order 1, 4, 2, 3, you're actually allowed to look at all of the other words in order to produce x3, because x3 is at the end of the permutation order. And the fourth case is similar. So all of
1,112
1,140
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1112s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
these four orderings will appear during training and the model will learn from them, so in expectation it will have seen many different versions of the context, which apparently helps a lot. In order to achieve this, they had to make some architectural changes to the model. Namely, in a single pass through the model,
1,140
1,173
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1140s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
you not only want to predict one token, you want to make many predictions; this helps training a lot. BERT naturally does this: it masks about 15% of the tokens, which was something like 40 or 50 tokens, and predicts them all at the same time. You would like to do this here as well, predicting all the tokens you're asked
1,173
1,198
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1173s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
to predict at once. But of course the problem is: in the factorization order 2, 4, 3, 1, if you're asked to predict x3 you're allowed to look at x2 and x4, while if you're asked to predict x1 you're allowed to look at x2, x4, and x3. So if you only have a single pass through the model, the question is: do you input x3 or not? Because the prediction of x3 is
1,198
1,228
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1198s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
not allowed to look at x3, while the prediction of x1 is allowed to look at x3. So they make an architectural change to achieve both: you can do a single pass through the model, but the prediction of each token only depends on the things before it in the permutation order. They do this with what
1,228
1,255
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1228s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
is called masked two-stream attention, where they have not one hidden representation per position, like in classic transformers, but two hidden representations at each step: one they call h and one they call g. The h's are initialized with the embeddings of the tokens and the g's are initialized randomly, and then both get transformed up the layers. The point is that the h of the next layer
1,255
1,285
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1255s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
is always able to look at everything before it, including its own h, i.e., its own position one layer down, while the g is only allowed to look at the h's from before the current position. So all the g's are only ever able to look at the h's from strictly before the current position in the permutation order, whereas the h's are always allowed to look at
1,285
1,320
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1285s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
the same, but also at the h at the current position. At the last layer you simply ask the model to predict the token from just the g, and you can easily see that this results in the prediction only attending to things before it. (The g, by the way, can also look at the g at the current position one layer down; that leaks nothing. But it cannot look
1,320
1,353
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1320s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
at the h.) So there's never any information flowing from the current word embedding of the token you're trying to predict to the prediction layer, which means you're not telling the model the answer, yet you're still able to predict multiple things in a single pass through the model. Formally, this is described
1,353
1,381
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1353s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
in the attention layer: they separate how they produce the queries from how they produce the keys and values. Usually, queries, keys, and values are all produced from the same hidden representation, but here they produce the keys and values from the h's in both cases, whereas to update the g's they produce the queries from the last layer's g's, and to update the h's they produce the queries
1,381
1,412
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1381s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
from the last layer's h's. Most importantly, when producing the keys and values from the h's: to update the g you're only allowed to look at h's strictly before you in the permutation order, but to update the h you're allowed to look at everything before, including the position you're currently at. It's an engineering solution to the problem introduced by their objective.
1,412
1,438
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1412s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
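A minimal sketch of the two attention masks behind this, assuming a toy 5-position permutation: the content stream (h) may attend up to and including its own position in the permutation order, the query stream (g) only strictly before it. Illustrative only; XLNet's real implementation adds heads, scaling, relative encodings, and so on.

import numpy as np

order = [2, 3, 0, 4, 1]            # a permutation of 5 positions
rank = np.empty(len(order), int)   # rank[i] = place of position i in the order
rank[order] = np.arange(len(order))

# mask[i, j] == True  <=>  position i may attend to position j
h_mask = rank[None, :] <= rank[:, None]  # content stream: rank(j) <= rank(i)
g_mask = rank[None, :] <  rank[:, None]  # query stream:   rank(j) <  rank(i)

print(h_mask.astype(int))
print(g_mask.astype(int))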
H5vpBCLo74U
I think it's a pretty neat solution. The rest of the paper incorporates ideas from Transformer-XL. Transformer-XL is an autoregressive-style transformer with a few improvements over the classic vanilla
1,438
1,470
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1438s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
transformer, and they incorporate a number of things here. First of all, they incorporate the memory mechanism. The memory allows you to handle longer sequences: let's say our transformer's input length is a maximum of five tokens. What Transformer-XL allows you to do is input five tokens, do your transformer thing to encode them, and
1,470
1,500
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1470s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
save something into this memory; then, when you input the next five tokens, your transformer is allowed to look at the memory of the last sequence, and also to update it. So you're always allowed to look at the memory blocks from the last sequence, and the hidden representations of the current sequence will in turn be stored in
1,500
1,528
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1500s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
the memory block for the next sequence. This is a trick to carry over information: the memory update isn't learned with the objective of making the next prediction better; it's gradient-free information provided to the next step, and it apparently helps: you can incorporate longer sequences into this Transformer-XL.
1,528
1,558
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1528s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
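A minimal sketch of this segment recurrence: hidden states of the previous segment are cached without gradients and used as extra context for the next segment. The `encode` function is a hypothetical stand-in for a transformer layer stack, not Transformer-XL's actual code.

import numpy as np

def encode(segment_states, memory):
    # stand-in for attention over [memory; segment] -- here just a mean mix
    context = np.concatenate([memory, segment_states], axis=0)
    return segment_states + context.mean(axis=0)

d, seg_len = 8, 5
memory = np.zeros((0, d))  # empty memory before the first segment

for segment in np.random.randn(3, seg_len, d):  # three consecutive segments
    hidden = encode(segment, memory)
    # cache for the next segment; conceptually detached, since no gradient
    # flows back through the memory, as discussed above
    memory = hidden.copy()

print(memory.shape)  # (5, 8): last segment's hidden states, ready for reuse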
H5vpBCLo74U
So they take this over and implement it in XLNet. They also do relative positional encodings and relative segment encodings; I won't go into these much here because they're not the main idea. Then they do experiments, comparing against a BERT model with basically the same architecture, the same number of parameters and layers, and they beat BERT on all of these
1,558
1,592
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1558s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
NLP tasks, or most of them; I think they said they reach new state of the art on 18 of 20 NLP tasks, so apparently their method works very well. The last thing I find important is an ablation study of the effects of their improvements. Because my problem is, I never know: they have this new idea, okay, we do these random permutations, but
1,592
1,627
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1592s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
then they also say: oh, and we also include the memory from Transformer-XL, and we do relative positional encodings, and so on. With these kinds of papers, of course you reach better numbers and get a new state of the art, so it's kind of a landmark paper, but to me a paper should be about a single thing: whatever your idea is, that's your idea, these orderings, and whatever you need to do to
1,627
1,653
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1627s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
make that work, okay, fine. But then why the additional Transformer-XL things? It becomes really hard to estimate how much of the improvement comes from your idea and how much simply comes from the fact that you also put in these other things that actually have nothing to do with it. So I appreciate these kinds of analyses, called ablation studies, where they try
1,653
1,680
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1653s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
to take away the memory and the other components and look at what that does to the model. You see here how performance degrades, for example this column degrades as you take things away, while still being more successful than BERT. Here it is less clear, but it also seems to degrade a bit while remaining more successful than
1,680
1,715
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1680s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
BERT. I appreciate this kind of really trying to show that your gains come from your new idea and not from some other stuff. All right, the last thing I want to mention is this: someone claimed, or calculated, that it costs two hundred and forty-five thousand dollars to train the XLNet model the way they describe it in the paper. I'm sure that's
1,715
1,747
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1715s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
H5vpBCLo74U
going to be brought down, just as the training cost was brought down with BERT as well, but this is crazy. This is just for training it. It raises large questions about the state of research and the ability of, let's say, more academic players to participate in research. On the one hand, of course, these
1,747
1,774
https://www.youtube.com/watch?v=H5vpBCLo74U&t=1747s
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://i.ytimg.com/vi/H…4U/hqdefault.jpg
_3eaVy8c-xk
Machine learning: it's a buzzword, but I would also claim it's a lot more than just a buzzword. How many of you have experience with machine learning? A small portion; great to see. When I started learning machine learning, I found it difficult to understand how the different algorithms worked and what the main differences between them were, and I found it really difficult to understand where I
0
25
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=0s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
should start learning machine learning. In this lightning talk you will hopefully learn the fundamental differences between the different categories of algorithms; this will be helpful both for beginners and for those who have a little experience with machine learning. All right, real quick about myself: my name is Joakim Lehn, I'm a consultant here in Oslo for the Nordic consulting firm
25
49
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=25s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
Knowit, and I've been fascinated with machine learning for the last couple of years, doing some projects both personally and professionally. And I must say, there are many different algorithms out there; just take a look at this, this is a small portion of really great algorithms. So where to begin? Obviously some algorithms are more suited for certain problems than others, and the results will
49
73
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=49s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
vary greatly depending on how well your algorithm is suited to the problem. Luckily, you can divide these algorithms into four different categories of machine learning. The four categories are supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning, and when you face a machine learning problem it is important to understand which category
73
94
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=73s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
it fits into. So today we're going to explore these four categories. Before we get to the really good stuff, I need to explain two keywords. In machine learning you have something called features: a feature is basically a property of your training data, and a label is the output you get from your model after training it. So you could say features are input and labels are output; that's
94
117
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=94s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
only partially true, because you can also have labels on your input data. I'm going to explain that with an example: let's say you want a machine learning algorithm to estimate the height of a person based on age and gender. Then age and gender are features, and the height you want to find is the label. And if you have a training set with a lot of people with their height
117
136
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=117s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
recorded alongside age and gender, then you have a labeled training set. So, the first category is called supervised learning. In supervised learning you have training data that consists of a set of training examples, a labeled training set, and the basic idea is to find the optimal model parameters to predict unknown labels on other objects. Let's look at a few examples:
136
157
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=136s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
let's say I want to estimate the value of a used car based on age, make, mileage, you name it. A machine learning algorithm can do this pretty well if you give it a training set with a lot of sold cars and their corresponding values. Another example could be classifying an email as spam or not spam; a machine learning algorithm can do this if it has a large training set. A great
157
184
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=157s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
algorithm within the supervised domain is called decision trees. The reason I picked this one is because it's more intuitive than most others. In decision trees you have nodes, and at every node you choose the best split among the features, and you apply this procedure recursively until you hit a stopping criterion. Again, I'm going to illustrate this with an
184
204
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=184s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
example: let's say you want to find out whether you should accept a new job offer. The first question might be: well, how much is the salary, is it above some threshold? If it's not, you're definitely not going to take the job. But if it is, then: do you have to commute for a long while? Do they offer free coffee? Do they have a foosball table? You might ask yourself questions like this, but at some point you end up at a stopping leaf and either accept or decline the job offer.
204
224
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=204s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
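A minimal sketch of this job-offer decision tree, with made-up features and data purely for illustration; scikit-learn picks the splits for us, so the learned thresholds are not the speaker's.

from sklearn.tree import DecisionTreeClassifier, export_text

# features: [salary_kUSD, commute_minutes, free_coffee (0/1)]
X = [
    [45, 20, 1], [55, 90, 0], [80, 15, 1], [75, 60, 1],
    [40, 10, 0], [95, 45, 0], [60, 30, 1], [50, 75, 0],
]
y = [0, 0, 1, 1, 0, 1, 1, 0]  # 1 = accept the offer, 0 = decline

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["salary", "commute", "coffee"]))
print(tree.predict([[70, 25, 1]]))  # e.g. -> [1]: accept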
_3eaVy8c-xk
So that was supervised learning. The next category is called unsupervised learning, and in unsupervised learning you only have input data and no corresponding output variables; you have no labels in your training set. The goal of unsupervised learning is to model the underlying structure or distribution in
224
246
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=224s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
the data in order to learn more about the data; the algorithms are left on their own to discover and present the interesting structures in the data. Let's look at a couple of examples. Let's say you want to group customers by their purchasing behavior: if people who buy item A tend to buy item B, then obviously you should recommend these items to people interested in one of them.
246
267
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=246s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
Another example could be Netflix's video recommendation system: it recommends TV series, movies, and whatnot, and they do this using a series of unsupervised learning algorithms. A great algorithm here is called k-means (not for Netflix, maybe, but for other purposes). Here we try to divide all the data into k clusters.
267
293
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=267s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
You select k random points as initial cluster centers, and the cluster of every other object is defined by its closest cluster center; you tune this algorithm by selecting k, the number of clusters. You can use this algorithm for many things. Let's say you have a hotel chain and you want to open a new hotel in a city: where do you place your hotel? Hopefully you start off by looking at potential sites and gathering a lot of data about each one.
293
312
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=293s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
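Before the hotel example continues, here is a minimal sketch of the k-means procedure just described: pick k random centers, assign each point to its closest center, recompute the centers, and repeat. The toy 2-D data is an assumption for illustration; real use would reach for sklearn.cluster.KMeans.

import numpy as np

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(c, 0.5, (50, 2)) for c in ([0, 0], [4, 4], [0, 4])])

k = 3
centers = points[rng.choice(len(points), k, replace=False)]

for _ in range(10):
    # assignment step: each point joins its closest center
    labels = np.argmin(np.linalg.norm(points[:, None] - centers, axis=2), axis=1)
    # update step: each center moves to the mean of its cluster
    centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])

print(centers.round(2))  # should sit near (0,0), (4,4), (0,4)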
_3eaVy8c-xk
Is each site close to downtown? Are there restaurants nearby? Is it easy to get to the hotel? And so on. Hopefully, from all this data, an algorithm can find clusters that show interesting spots for your hotel. So much for unsupervised learning. The third category of problems falls between unsupervised and supervised problems, and it's called semi-supervised learning.
312
335
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=312s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
Here we have partially labeled data, and many real problems fall into this category, because it can be really expensive, or at least time-consuming, to label all the data. Say you have a million pictures and you're going to label them: it takes too much time. Unlabeled data, however, is cheap and usually easy to collect and store. So here a mixture of techniques from the
335
357
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=335s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
supervised and unsupervised domains can be used. An example here could be, as I already mentioned, a photo archive: you might label some of the images, like "there's a cat in this picture", "people skiing", "a topless person on the beach", I don't know. Beyond these labeled pictures you have a lot of unlabeled pictures, and you can try to label those with an algorithm.
357
379
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=357s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
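One common semi-supervised technique that fits this description is self-training, or pseudo-labeling: fit on the few labeled examples, then label the unlabeled pool with the model's own confident predictions and retrain. The toy data and 0.9 confidence threshold are assumptions for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = np.vstack([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
y_labeled = np.array([0] * 10 + [1] * 10)
X_unlabeled = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])

model = LogisticRegression().fit(X_labeled, y_labeled)

# keep only confident pseudo-labels, then retrain on labeled + pseudo-labeled
proba = model.predict_proba(X_unlabeled).max(axis=1)
confident = proba > 0.9
X_all = np.vstack([X_labeled, X_unlabeled[confident]])
y_all = np.concatenate([y_labeled, model.predict(X_unlabeled[confident])])
model = LogisticRegression().fit(X_all, y_all)
print(f"pseudo-labeled {confident.sum()} of {len(X_unlabeled)} examples")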
_3eaVy8c-xk
The last category of problems falls under reinforcement learning, and it's not like any of the previous categories, because you don't have labeled data and you don't have unlabeled data; usually you don't have any training data at all. The idea is to create a software agent: it has some state, and it's going to perform some action in an environment. The environment is going to either punish
379
402
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=379s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
it or reward it somehow, and the agent can end up in a new state, and you do this recursively. You can imagine this by saying you're a robot and you wake up in a strange place: you can perform activities and you're going to get rewards from the environment. As you collect more rewards you get more clever, your actions get more complex, and you train to behave in the most effective way at each step.
402
424
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=402s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
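A minimal sketch of that agent-environment loop. The speaker describes the loop generically; the concrete learning rule below is Q-learning on a toy one-dimensional world (agent starts at 0, is rewarded at +5, punished at -5), which is my own choice of example, not something from the talk.

import random

random.seed(0)
Q = {(s, a): 0.0 for s in range(-5, 6) for a in (-1, 1)}

for episode in range(200):
    state = 0
    while abs(state) < 5:
        # act: mostly greedy, sometimes explore
        action = random.choice((-1, 1)) if random.random() < 0.1 else \
                 max((-1, 1), key=lambda a: Q[(state, a)])
        new_state = state + action
        reward = 1.0 if new_state == 5 else (-1.0 if new_state == -5 else 0.0)
        # learn from the (state, action, reward, new state) transition
        best_next = 0.0 if abs(new_state) >= 5 else max(Q[(new_state, -1)], Q[(new_state, 1)])
        Q[(state, action)] += 0.5 * (reward + 0.9 * best_next - Q[(state, action)])
        state = new_state

print(max((-1, 1), key=lambda a: Q[(0, a)]))  # learned best first move: 1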
_3eaVy8c-xk
This is kind of a human way to learn and a human way to think, and we've made some incredible progress within the reinforcement learning domain in recent years. As the first speaker mentioned, AlphaGo was a great example: they managed to beat the best player in the world in Go. The reason I bring this up is because it made some moves that humanity had never seen
424
446
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=424s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
before, and they now teach some of the moves it made during the game at Go schools in China (they have Go schools in China, which is surprising), and I find it really interesting that humans can now learn from machines and not just the other way around. Another really cool example, I think, is from pretty recently, from OpenAI: they managed to create an AI that could beat some of the best
446
466
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=446s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
players in the world in Dota 2, and Dota 2 is a real-time game, so the world was quite shocked to see this happen already; we thought it would be years and years until it could happen, because it's vastly more complex than traditional board games like chess. And this is my personal dream project: I'm really hoping I can beat myself, by creating an AI that can beat me in Mario Kart. Not
466
489
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=466s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
to brag, but I'm quite good at Mario Kart, so I'm not sure if my programming skills are good enough; we'll see. So hopefully in the last ten minutes you've learned the following: in supervised learning all the data is labeled and the algorithm learns to predict the output from the input data; in unsupervised learning all the data is unlabeled and the algorithm learns the inherent structure from the input data;
489
511
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=489s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
_3eaVy8c-xk
in semi-supervised learning some data is labeled and some unlabeled (mostly unlabeled), and a mixture of supervised and unsupervised techniques can be used; and reinforcement learning is an area of machine learning concerned with how a software agent ought to take actions in an environment so as to maximize some notion of cumulative reward. So, as I said in the beginning, machine learning might
511
532
https://www.youtube.com/watch?v=_3eaVy8c-xk&t=511s
Machine learning algorithms, choosing the correct algorithm for your problem - Joakim Lehn
https://i.ytimg.com/vi/_…xk/hqdefault.jpg
rk7fIhCH8Gc
Well, hello and welcome to this Richard M. Karp Distinguished Lecture. My name is Peter Bartlett; I'm the Associate Director of the Simons Institute for the Theory of Computing. Thanks for joining us. We established the Richard M. Karp series to celebrate the role of Simons Institute founding director Dick Karp in establishing the field of theoretical computer science,
0
27
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=0s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
formulating central problems, and contributing amazing results in the areas of computational complexity and algorithms. The series features visionary leaders in TCS and is geared towards a broad scientific audience. We're grateful to the many contributors to the Richard M. Karp Fund who've made this series possible. So, I'm delighted to welcome our speaker today, Lenka Zdeborová.
27
51
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=27s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
Lenka is a researcher at CNRS, working in the Institute of Theoretical Physics at CEA Paris-Saclay. She has a background in physics and is famous for the application of methods of statistical physics to problems in machine learning, signal processing, inference, and optimization. Lenka is the recipient of the CNRS Bronze Medal in 2014, the Philippe Meyer Prize in theoretical
51
74
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=51s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
physics in 2016, and the Irène Joliot-Curie Prize in 2018. The talk today is entitled "Insights on Gradient-Based Algorithms in High-Dimensional Learning", so please join me in welcoming Lenka Zdeborová. Thank you, Peter. I will share my screen so that you see the slides; I prefer it that way. I'm really honored to be giving this lecture, especially given the influence that being part
74
107
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=74s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
of one of the programs at the Simons Institute four years ago had on my career; I enjoyed it immensely, and it's amazing what the Simons Institute is doing. The first thing I should do is correct my affiliation: this is only the second seminar I'm giving, and the third week I'm spending, at my new affiliation, which is EPFL. So no longer in France, but in a neighboring country,
107
130
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=107s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
Switzerland. I will be telling you about recent work. I recently gave a lecture in the Simons Institute boot camp for this semester's program, where a lot of the work that seemed like statistical-physics voodoo maybe 20 years ago has actually been established rigorously; part of the program is about that, and it's
130
158
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=130s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
very exciting. For this very special lecture, I decided to go back to results from physics, most of which are not established rigorously and are waiting for mathematical input and work. This is something that has been going on over the past two years with the list of collaborators I give here; the main ones among them are the two students highlighted
158
183
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=158s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
in blue, Stefano Sarao Mannelli and Francesca, and Stefano is among the panelists, so if you have clarification questions he's able to answer them even during the talk without interrupting it; please don't hesitate. This is the list of six papers from the past two years on which this talk is based, and the talk will be about gradient-
183
208
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=183s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
descent-based algorithms, or stochastic-gradient-descent-based algorithms, which, pictorially, are the workhorse of machine learning, which is really everywhere these days, so they are well worth understanding and studying in more detail. In particular, in deep learning we have the empirical observation that local, or even global, minima with bad generalization error actually
208
232
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=208s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
do exist. There are many works going towards showing something like that empirically; one that I like quite a bit is this paper by Dimitris Achlioptas and his collaborators, where he starts by interpolating, fitting random labels with the neural network, then puts back the real labels little by little, and shows that gradient descent
232
256
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=232s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
actually doesn't go that far away from the point where it interpolated the random labels, and it generalizes pretty badly, much worse than it would if you just initialized it randomly. So that really tells us something notable about what this optimization landscape looks like, and we really need to understand how it comes about that gradient-based algorithms, initialized randomly,
256
278
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=256s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
are able to avoid the bad minima. The goal here: it's pretty much clear these days that this cannot be understood just by studying the landscape; what really matters is the initialization, and indeed the whole trajectory that the algorithm is taking. So we want to understand the trajectory in these non-convex, high-dimensional
278
301
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=278s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
problems. Just two points to set up the talk: in practice the number of samples is limited, so I don't want to be working in some limit where the number of samples is unreasonably large; and constants do matter, so I don't want to be working with rates up to log factors and with arbitrary constants. In order to be able to do something
301
326
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=301s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
like that, keeping in mind finite sample complexity and constants, I need to make some simplification somewhere. For the purposes of the work I'm describing in this talk, this will be on the side of the data: I will not be assuming any kind of really generic dataset; I will be working with synthetic models for data, for which we can say something.
326
352
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=326s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
The first such model, which will occupy the first, say, 20 minutes of the talk, is the spiked matrix-tensor model. You can think of it as optimizing the loss function written here, which has two parts. The variable over which you are optimizing is x, which lives on an N-dimensional sphere, and N will be large; that's the limit we will be
352
378
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=352s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
interested in, the high-dimensional limit. The loss function depends on x through the matrix Y, which is created from some ground truth x* plus a lot of noise (x* x*-transpose, more precisely), and it also depends on the tensor T of order p, which is created by taking the outer product of the same vector x* with itself p times and adding a lot of noise.
378
408
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=378s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
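For concreteness, the observations and loss have the following shape; the noise scalings and prefactors here are my assumptions for illustration (the slides fix the exact constants), with noise variances \Delta_2 and \Delta_p:

Y_{ij} = \frac{x^*_i x^*_j}{\sqrt{N}} + \xi_{ij}, \qquad T_{i_1 \dots i_p} = \frac{x^*_{i_1} \cdots x^*_{i_p}}{N^{(p-1)/2}} + \xi_{i_1 \dots i_p},

\mathcal{L}(\mathbf{x}) = -\frac{1}{\Delta_2} \sum_{i<j} Y_{ij}\, x_i x_j \;-\; \frac{1}{\Delta_p} \sum_{i_1 < \dots < i_p} T_{i_1 \dots i_p}\, x_{i_1} \cdots x_{i_p}, \qquad \|\mathbf{x}\|_2^2 = N.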
rk7fIhCH8Gc
The goal of the inference problem here is to recover the vector x* by minimizing the loss function written here. So why this model? This also sets up again what I'm aiming to achieve. First, because it's high-dimensional and non-convex, which is what makes the study of gradient descent non-trivial. Second, it's an inference problem, meaning
408
436
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=408s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
that what we are interested in is the correlation with the ground-truth signal x*; we are not really interested in the optimization problem per se. This is similar to machine learning with neural networks, where we always solve it by optimization but are really interested in the generalization error, something slightly different from the value of the
436
458
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=436s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg
rk7fIhCH8Gc
loss function itself. The third and fourth points are that this model has interesting computational properties and that the dynamics of gradient descent in it is solvable, which is something I will show you, to persuade you of that. So statistical physics must come in at some point, and this is where it does: you just rewrite the same model with the variances of the two
458
485
https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=458s
Insights on Gradient-Based Algorithms in High-Dimensional Learning
https://i.ytimg.com/vi/r…axresdefault.jpg