Dataset columns: video_id (string, length 11), text (string, 361–490 chars), start_second (int64, 0–11.3k), end_second (int64, 18–11.3k), url (string, 48–52 chars), title (string, 0–100 chars), thumbnail (string, 0–52 chars)
q7PjrmGNx5A
And they do an analysis of this where they show that yes, in fact it is. So I think we've already seen at the beginning that Noisy Student is significantly more robust to these perturbations, and they also test this against adversarial perturbations. Right here you can see that the original model drops pretty quickly as you increase the epsilon. The epsilon
1,851
1,878
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1851s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
is kind of the strength of the adversarial perturbation. The original model drops very quickly to fairly low accuracy, whereas the Noisy Student training drops much less quickly. Now this is another testament to what I think is happening: you have your data space, and you have your data points in it.
1,878
1,906
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1878s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
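As a rough illustration of what such an adversarial perturbation at a given epsilon looks like, here is a minimal PyTorch sketch of a one-step FGSM attack; `model`, `x`, and `y` are hypothetical stand-ins, not the EfficientNet checkpoints from the paper, and pixel values are assumed to live in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """One-step FGSM: nudge every pixel by +/- epsilon along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Larger epsilon = stronger perturbation; a robust model's accuracy should fall off more slowly with it.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```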
q7PjrmGNx5A
Now, when you do normal data augmentation, you not only force the model to predict those points correctly, but you sort of make a bit of a cloud around them and force the model to predict that cloud correctly. If you introduce more data and even more noise, you make these clouds larger.
1,906
1,937
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1906s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
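A toy way to picture that "cloud" is to train on slightly perturbed copies of each input. This is only an illustrative sketch with a made-up Gaussian noise scale, not the paper's actual RandAugment / dropout / stochastic-depth recipe.

```python
import torch

def noisy_copies(x, n_copies=4, sigma=0.1):
    """Surround a data point with a small cloud of Gaussian-perturbed copies."""
    # Larger sigma = larger cloud = the model must be correct on a bigger neighborhood around x.
    noise = sigma * torch.randn(n_copies, *x.shape)
    return x.unsqueeze(0) + noise
```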
q7PjrmGNx5A
That means the model is more robust to any sort of perturbation within these clouds, and it is probably also going to be more robust to adversarial perturbations. So that's how you can think of this introduction of noise making the model generalize better. How does it generalize better? If you think of this data point right here: if I'm looking to generalize, that means I have
1,937
1,961
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1937s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
this i.i.d. data set, so my test data is probably going to be related to the training data. I might get a test point that's fairly close to that training point, and generalizing means I classify it correctly. Now, if this cloud is very small, like it is here, my decision boundary could be right here, and even though the test data point is fairly close to the original
1,961
1,987
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1961s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
training data point, it will be classified incorrectly. However, if my cloud during training is larger, you can see that a trained model can maybe put the decision boundary here, and then my test data point will be included on that same side. That's the idea behind generalizing better. Of course, that's a vast simplification. Also, to be clear,
1,987
2,013
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1987s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
this here is an FGSM attack, which is kind of the weakest attack in the adversarial perturbation spectrum. They do say that under a stronger attack, PGD, which is a fairly strong attack, with 10 iterations at epsilon = 16, Noisy Student training improves EfficientNet-L2's accuracy from 1.1 percent to 4.4 percent.
2,013
2,044
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2013s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
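For reference, PGD is essentially FGSM iterated with a projection back into the epsilon-ball. Here is a hedged sketch under the assumption that "epsilon = 16" means 16/255 in [0, 1] pixel scale; `model` and the step size `alpha` are illustrative, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=16/255, alpha=2/255, steps=10):
    """Iterated FGSM, projected back into the L-infinity epsilon-ball around the clean input."""
    x, y = x.clone().detach(), y.clone().detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()        # FGSM-style step
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)    # project into the epsilon-ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()              # stay a valid image
    return x_adv
```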
q7PjrmGNx5A
Note that 1.1 percent really means the model is almost dead; that's down at basically random performance, and 4.4 percent is still only a bit above random. You could probably get there by simply using any sort of noise in that case. But still, you can see that it is more robust, especially to natural distortions, and therefore it generalizes better. As I said, they do quite a bit of
2,044
2,077
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2044s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
ablation studies to figure out where exactly the performance comes from, and the answer is that it pretty much comes from all the things they've described. Here you can see the effect of that extra data set, and pretty much all situations improve with it. Here you can see what is
2,077
2,103
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2077s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
happening when you do not augment the student: when you don't data-augment, you immediately see that the accuracy drops. When you don't augment and also don't use the model noise, the performance drops again. And lastly, when you use the teacher but you noise the teacher, you can see that here, too, the performance drops from the original
2,103
2,128
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2103s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
quite a bit. So all of these things contribute, and they do many more ablations and have listed their findings here. First, using a large teacher model with better performance leads to better results, so as the original teacher you should use the best teacher model you can find. Second, a large amount of unlabeled data is necessary for better performance:
2,128
2,158
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2128s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
if you want to do this, you'd better get a large amount of extra data, because that's one thing that makes the student perform better. Third, soft pseudo labels work better than hard pseudo labels for out-of-domain data in certain cases. Fourth, a large student model is important to enable the student to learn a more powerful model.
2,158
2,186
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2158s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
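To make the soft-versus-hard distinction concrete, here is a hedged sketch of both pseudo-label losses; the teacher and student logits are hypothetical inputs, and this is not the paper's exact loss formulation.

```python
import torch
import torch.nn.functional as F

def hard_pseudo_label_loss(student_logits, teacher_logits):
    """Hard labels: take the teacher's argmax class and train with ordinary cross-entropy."""
    hard_targets = teacher_logits.argmax(dim=-1)
    return F.cross_entropy(student_logits, hard_targets)

def soft_pseudo_label_loss(student_logits, teacher_logits):
    """Soft labels: match the teacher's full probability distribution (cross-entropy with soft targets)."""
    soft_targets = teacher_logits.softmax(dim=-1)
    return -(soft_targets * student_logits.log_softmax(dim=-1)).sum(dim=-1).mean()
```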
q7PjrmGNx5A
This, by the way, is usually called knowledge distillation: you use a teacher model to train a student model. It is often used when the student model is smaller than the teacher, because you want to become more efficient: the teacher is large, you make the student small, and you usually sacrifice some accuracy. Here they say that if you want to gain
2,186
2,209
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2186s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
some accuracy, you need a large student model; it can't be a small one. Fifth, data balancing is useful for small models. Sixth, joint training on labeled data and unlabeled data outperforms the pipeline that first pre-trains on unlabeled data and then fine-tunes on labeled data. This is in contrast to what people have done before in self-supervised learning
2,209
2,237
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2209s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
and so on, where it's always pre-training then fine-tuning, or in the transfer learning setting. Seventh, using a large ratio between unlabeled batch size and labeled batch size enables models to train longer on unlabeled data to achieve a higher accuracy; we've already seen that they use that. And eighth, training the student from scratch
2,237
2,261
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2237s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
is sometimes better than initializing the student with the teacher, and a student initialized with the teacher still requires a large number of training epochs to perform well. This is fairly interesting, because it alludes to how the minima are arranged in weight space. This is of course only applicable if the student model is the same as the teacher model, so in iteration two or three
2,261
2,287
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2261s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
or whatnot. It means that in weight space, you might want to start the student here while the minimum is right here, and you might think that if I learn the same thing, then the minima are fairly close together: the teacher's minimum might be here and the student's minimum might be fairly close, so it might be beneficial if I start not
2,287
2,315
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2287s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
over here but actually at the teacher's minimum. But this doesn't always seem to be the case, and that is a fairly interesting observation, because it means we're talking about different minima here: the student model learns different things. That's what we've discussed already, the student model learns to be robust, and that's
2,315
2,337
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2315s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
probably a minimum that's fairly far away in weight space, at least in a sort of energy-landscape sense; it might be the case that the student needs to overcome a hill here even though the minimum is close. There's lots of research on how minima are distributed in these weight spaces, which I don't want to go into right here, but it is a fairly interesting
2,337
2,362
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2337s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
observation that it's not always helpful to initialize the student at the teacher's optimum. Okay, so this was the paper. This is the type of research where I do appreciate the large labs taking it on, because they have the resources to do all of these ablations, all of these different models, cross them with these giant data sets,
2,362
2,389
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2362s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
q7PjrmGNx5A
and so on, which I guess university labs just would not have. This is a fairly thorough paper, really investigating which parts of the pipeline do something and which ones don't. Usually I'm fairly critical of pipelines that have 50 billion tricks, because you never know where exactly the improvement is coming from, but you can sort of mitigate that
2,389
2,416
https://www.youtube.com/watch?v=q7PjrmGNx5A&t=2389s
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
https://i.ytimg.com/vi/q…axresdefault.jpg
WVPE62Gk3EM
Hi there. Today we'll look at Big Bird: Transformers for Longer Sequences by Manzil Zaheer, Guru Guruganesh, et al. of Google Research. On a high level, this paper proposes to replace the quadratic attention mechanism in transformers with a mix of random attention, windowed attention, and selective global attention, thereby achieving a linear memory requirement instead of a
0
28
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=0s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
quadratic one. As a result, they can process longer sequences than traditional transformers like BERT and achieve better results on some NLP tasks, and they also evaluate on genomics tasks. We'll go through this paper a bit and look at the proof, because they give a kind of theoretical guarantee that their random attention mechanism can still
28
54
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=28s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
be Turing complete and can still achieve the same things as a full attention mechanism. But we'll also look at the drawbacks. I have sort of mixed feelings about this paper, and I'll voice my concerns as we go through. But first let's look at the paper, let's look at the architecture. I think this is actually a pretty cool paper for the empirical progression of the field
54
80
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=54s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
toward processing longer sequences with transformers. As always, if you like content like this, feel free to share it around, leave a like, and tell me in the comments what you think about the paper and about what I think; just go nuts. All right, so the basic premise right here is that transformers have been pretty impactful, especially in NLP.
80
111
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=80s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
They say transformer-based models such as BERT have been one of the most successful deep learning models for NLP; unfortunately, one of their core limitations is the quadratic dependency, mainly in terms of memory, on the sequence length, due to their full attention mechanism. So, really briefly, the full attention mechanism: I've done numerous videos about attention
111
133
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=111s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
mechanisms, BERT, Attention Is All You Need, and so on, so if you want a detailed explanation of what that is, just look up the corresponding videos. But briefly: what you have in NLP is a sequence of tokens as input, and you want to transform them, layer after layer, into a sort of higher-order representation of that same sequence. For that you build
133
160
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=133s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
these layers out of nodes, and you usually have as many nodes as you have tokens in the sequence. Each token is represented by a vector at the beginning, and each layer transforms this sequence, as I said, into a sort of higher-level representation. So you want the vector of this token right here to be a better representation than the vector
160
186
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=160s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
was right here, and you do that by incorporating information from all the other tokens into that particular vector. As I said, this is called an attention mechanism, and we don't have to go into how it works right here, but you can see pretty clearly that if you want to do this for every token, you need information routed from every token to every token: from
186
213
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=186s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
here to here, from here to here, and so on. And this is just one token; then you need to do it for this token, and for this token, and for this token. So ultimately, if n is your sequence length, you get an n-squared amount of computation and memory requirements. This is a problem, and usually it means that the sequence length in BERT
213
236
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=213s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
is limited to something like 512 tokens, which is okay for some applications, but if you want to summarize entire articles, entire books even, or do question answering with lots of context, it's not really enough. So people have been thinking about how to scale this input, and of course the main culprit is this quadratic attention mechanism.
236
264
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=236s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
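Just to make the quadratic dependency tangible, here is a minimal NumPy sketch of a full attention score matrix: doubling the sequence length n quadruples the number of entries. The dimensions are toy values, not BERT's actual configuration.

```python
import numpy as np

def full_attention_scores(n=512, d=64):
    """Every token attends to every token: the score matrix alone is n x n."""
    q = np.random.randn(n, d)
    k = np.random.randn(n, d)
    return q @ k.T / np.sqrt(d)   # shape (n, n) -> O(n^2) memory

print(full_attention_scores(512).shape)    # (512, 512)
print(full_attention_scores(1024).shape)   # (1024, 1024): 4x the entries for 2x the length
```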
WVPE62Gk3EM
If you double the 512, you need four times the amount of compute and memory. So how does this paper go about reducing that quadratic dependency? The goal, of course, is to get this down to some O(n), because then as we double the input length we simply need to double the compute requirements, and that would be fantastic. That's what this paper
264
289
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=264s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
does, and it does so without sacrificing the properties of the transformer. So here's the architecture that Big Bird proposes. By the way, Big Bird is another character from Sesame Street; I guess we'll continue the naming here after ELMo and BERT. I'm waiting for the model that's called The Count; that's going to be a fun model. So Big Bird basically has three different types
289
323
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=289s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
of attention, and these here are adjacency matrices of the attention mechanism. Here is the input layer and the output layer is right here, so that basically means that node i right here would be connected to this particular node and also to this particular node. So if we have node i right here, we're now trying to
323
351
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=323s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
not connect it to all of these nodes; instead we'll just select some at random and connect it to those. This is what we call random attention, and you can pretty clearly see that if you connect each node i to r = 2 random nodes, then you don't have an n squared anymore but rather an O(r times n), which, if r is a constant, is
351
383
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=351s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
an O(n) attention mechanism. The main idea behind the random attention mechanism is that for each query you select random tokens to attend to, and that number of random tokens is fixed, not dependent on the sequence length. The paper is a little bit unclear about whether those random ones are the same for every sequence or switched up, and whether they are the same for every layer or switched up.
383
415
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=383s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
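A hedged sketch of what such a random attention pattern could look like as a boolean mask, with r random keys per query; whether BigBird re-samples this per sequence or per layer is, as said, left somewhat open by the paper.

```python
import numpy as np

def random_attention_mask(n, r=2, seed=0):
    """Each of the n queries attends to r keys chosen uniformly at random: O(r * n) entries."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, rng.choice(n, size=r, replace=False)] = True
    return mask

print(random_attention_mask(8, r=2).sum())  # 16 = r * n non-zero entries instead of 64
```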
WVPE62Gk3EM
But they formulate all of this as a sort of random graph; they formulate the attention mechanism in the form of a graph. If we transform all of these nodes into a graph, a full attention mechanism would mean that each node is connected to each of the other nodes: a
415
440
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=415s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
fully connected graph. That would be a full attention mechanism. Then they say: well, if we just have random connections between these things, there are theorems from graph theory that say this graph is going to mix pretty quickly, so I can get from each node to each other node
440
468
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=440s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
by a random walk in logarithmic time. This random walk basically means that you go from here to here, which would be one layer of the transformer, and then if you want to go from here to here, you would have to do that in the next layer. So this formulation as a random graph leads me to believe that, layer after layer, the random attention pattern is going to be
468
495
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=468s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
the same, but also the formulation of the paper leads me to believe that this random attention differs from sequence to sequence. So I believe what's happening is that they get a new sequence, decide on this pattern right here once, and then use the same pattern layer after layer. You can see that in the traditional attention, information can
495
525
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=495s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
basically flow from each node to each other node in one single step, because each node is connected to each other node; you see this in the graph right here. However, if we only select a subset, then, as I said, if I want to go from here to here, I need to do it in two steps, and therefore I need two layers. And that's going to be the culprit of
525
553
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=525s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
this method here, and while it is mentioned in the paper, I feel, at least that's my assessment, that it's kind of swept under the rug a little bit. I mean, they do have a theorem that clearly says we can construct an example of a task that in the full attention setting can be solved with a single step, so a single layer, but that in the
553
580
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=553s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
random attention setting needs a lot of layers, so a lot of steps. But the rest of the paper is sort of shaky on this point. Nevertheless, you can see how the random attention can, if you have enough layers, do the same information routing as the full attention. However, this is not a property of just the random attention, and we'll see this in the next part
580
609
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=580s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
right here. So the next ingredient this paper uses is window attention, and you can see over here that Big Bird is ultimately going to be a combination of the three types of attention we are looking at here. Window attention basically means that each token at the i-th position is going to attend to itself, of course, so here is
609
634
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=609s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
i, but it is also going to attend to its neighbors, so here is i minus 1 and here is i plus 1. This window size w is a parameter, but it is also a constant, and therefore you again go from n squared to w times n, which is O(n) if w is a constant. This might be familiar to you.
634
665
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=634s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
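The corresponding sliding-window mask might be sketched like this, with the window size w a constant giving O(w * n) non-zero entries; a toy illustration rather than the paper's block-level implementation.

```python
import numpy as np

def window_attention_mask(n, w=1):
    """Token i attends to itself and its w neighbors on each side, like a band matrix."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= w

print(window_attention_mask(6, w=1).astype(int))  # a tridiagonal band of ones
```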
WVPE62Gk3EM
We've already seen this in the Longformer paper; I've made a video, I think even two videos, on the Longformer, which used exactly this window attention in combination with the global attention, and if you want to know more about that, go watch those videos. The new thing in Big Bird right here is the addition of the random attention. Again, the window here has exactly the same properties
665
696
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=665s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
as the random attention: instead of a fully connected graph, you have a sparsely connected graph. With random attention, the sparsely connected graph is like the one on the right, but with windowed attention it is not randomly connected; rather, each node is connected to its neighbors, like this. And you can also see that if I want to
696
723
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=696s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
go from this node to this node right here, I can't do it in one step, but I can do it in two steps: I go here and I go here. In terms of the attention layers, if I want to go from node one to node three, I have to do it in two steps, because each node is only connected to its neighbors. So the connection pattern would look like this: I have to go from one to two, and then in the next
723
754
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=723s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
layer from two to three. So the paper basically makes up for the lack of full attention by adding layers. You might also recognize this from a convolution operation, basically because it is one: in a convolution, each node only aggregates input from its neighbors for the next layer, and then we know that as we go up the layers, the de facto window that
754
785
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=754s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
each node looks at is going to widen like a cone, kind of like this. So this is very similar to how a convolutional neural network works, and the reasoning is very similar too: in a sentence, the most important words for any given word are probably going to be its neighbors, the words around it, and as you go up the layers you branch out
785
809
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=785s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
more and more, but ultimately this neighborhood principle holds in NLP as well. Again, we already saw this in the Longformer, but that's the reasoning behind the window attention, and that's the second ingredient. The third ingredient is the global attention. The global attention consists of selected tokens, fixed by the
809
836
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=809s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
developers, that are so important that they are connected to everything else. For example, in these transformers you often have this kind of CLS token: a special token that you prepend to a piece of text, and the output of this token is going to be your classification output, because if you need to classify
836
865
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=836s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
the entire sequence, you don't want to bind that decision to one particular word. Instead, you want an extra token, this CLS token, that aggregates information from all of this. So layer after layer, you have this one special node, and in each step every single other node is able to send
865
892
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=865s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
information to this node and receive information from this node. As a result, as you may be able to see, every single path has a maximum length of two, because if I want to go from any node to any other node, I can simply send information to this global node in one step.
892
922
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=892s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
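A sketch of the global part: a handful of fixed token positions attend to everything and are attended to by everything. Assuming, for illustration only, that the CLS-like global token sits at index 0:

```python
import numpy as np

def global_attention_mask(n, global_positions=(0,)):
    """Global tokens form a 'star': full rows and columns for the chosen positions."""
    mask = np.zeros((n, n), dtype=bool)
    for g in global_positions:
        mask[g, :] = True   # the global token attends to all tokens
        mask[:, g] = True   # all tokens attend to the global token
    return mask

print(global_attention_mask(6).astype(int))
```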
WVPE62Gk3EM
Then, in the next step, the global node can send information to whatever other node. That is a property they use in their proof that this attention mechanism is, in a sense, as powerful as the classic full attention mechanism, and we'll go through that in one second. But first, I hope it is clear that this combination of random attention, window attention, and global attention is what is called Big
922
950
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=922s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
Bird. They have some engineering tricks that go along with this, but conceptually you can imagine Big Bird being Longformer plus this random attention right here, and as an NLP engineer that makes kind of total sense.
950
977
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=950s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
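Putting the three pieces together, conceptually BigBird's pattern is just the union of the window, random, and global masks sketched above. This is a toy, dense-mask illustration under assumed default values for w, r, and the global positions; the actual model implements this block-wise and sparsely.

```python
import numpy as np

def big_bird_mask(n, w=1, r=2, global_positions=(0,), seed=0):
    """Union of window, random, and global attention: O(n) non-zeros for fixed w, r, and #globals."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= w        # sliding window
    for i in range(n):                                     # r random keys per query
        mask[i, rng.choice(n, size=r, replace=False)] = True
    for g in global_positions:                             # star-shaped global tokens
        mask[g, :] = True
        mask[:, g] = True
    return mask

print(big_bird_mask(16).sum(), "of", 16 * 16, "entries are attended to")
```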
WVPE62Gk3EM
I totally believe that the addition of these random attention patterns can absolutely help your classification or whatever your NLP task is, because more attention is better. And I'm also completely willing to believe that while using the full attention matrix is of course more accurate, it won't hurt too much to leave some of that attention away, because essentially all the path lengths just become two, or, even with the random attention,
977
1,005
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=977s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
are really short, logarithmic at most, for routing information from a node to some other node. So the loss that you incur is on a kind of logarithmic scale in terms of performance, while the gain that you make is on a quadratic, or rather linear, scale: you go from quadratic to linear, and that seems to me like a good empirical trade-off. However, the proofs here, the
1,005
1,035
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1005s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
proofs of how these things are constructed, are a little bit, I don't know. What they do in the proof that this function is a universal approximator: people have already shown that full attention mechanisms are universal approximators, so they show here that this sparse attention mechanism is also a universal approximator. They make big use of star
1,035
1,067
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1035s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
graphs. What they say is: okay, if we have a star graph, which is one node connected right here to every other node, then we can achieve the same thing as with a full graph, where every node is connected to every other node. But as I already said, what they need for this is multiple layers of this star graph, and
1,067
1,094
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1067s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
that has to do with the fact that if I want to route information, I basically have to go via this middle node right here. And there's an additional complication: because this middle node, in our case right here, is only one node, I can't route information at the same time; I can't have this routing right here at the same time as this routing right here, like
1,094
1,121
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1094s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
going from here to here, because I only have one middle node. This is very dumbed-down math, but maybe you have to imagine that there is one memory slot, and you can only use that one memory slot for one of these things at a time. So essentially you'll have to do the green thing first, and then in the
1,121
1,146
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1121s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
next step you'll have to do the blue thing second. So these are now pairwise routings between nodes, but ultimately what an attention mechanism does is everything-to-everything: in a single layer it routes information from all the nodes to all the other nodes. To achieve that, you need multiple rounds of this, and it turns out that in the worst case
1,146
1,170
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1146s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
you actually need n rounds of this. So you go from n squared to n memory and compute requirements in a single layer, but in the worst case you need n layers to recover the power of the full transformer. And that is the last one of their theoretical results right here: first they prove universal approximation,
1,170
1,199
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1170s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
second they prove Turing completeness (these two properties had already been proven for full attention mechanisms), and third they prove that there are tasks where you actually do need n layers to solve them with their limited attention. So, I'm not sure, but I feel you can make any sort of polynomial algorithm into a linear algorithm like this. Like, say I
1,199
1,229
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1199s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
have a cool sorting algorithm. If this is my sequence that I want to sort, what I can do is simply take a random subset of them, like this, this, and this, send them to the global memory, sort them there, and put them back. And if I do this for enough
1,229
1,259
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1229s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
rounds, in the worst case n rounds, or log n rounds if I do it smartly, then I can sort my sequence, but the single step here is just O(n), so I now have an "O(n)" sorting algorithm. I'm a bit wary of expressing things like that,
1,259
1,286
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1259s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
but from an empirical standpoint I absolutely believe that this is enough. Now, my second quarrel right here is that if you look at the proof, first of all, what it makes use of is this star graph, and the star graph corresponds to the global attention, so that doesn't have much to do with the random attention. They do use the random attention in their proof, but
1,286
1,315
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1286s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
I at least believe that it would be possible with the global attention only. And the second thing is, if you look at the parameters that they use for the experiments, and I've already said this in the Longformer video: in the Longformer it turned out that if you look at how big this window attention is, well, the original
1,315
1,345
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1315s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
BERT attended to 512 tokens, and then you look at the window, and the window was still 512 tokens; it's just that the global attention came on top of that, so ultimately they ended up using more memory than the original BERT. And here, if I look at the parameters of their setup, they have multiple experiments right here, and I believe this is the base
1,345
1,373
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1345s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
version; they also have a large version, but this here is the 12-layer version. You can see they have this block length, and we'll get into the block length in one second, but you can see that their window size is three times the block length, the number of random tokens is three times the block length, and the number of global tokens is two times the block
1,373
1,398
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1373s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
length. That results in eight times b, and 8 times 64 is 512. So this is 512 tokens: you go from BERT, which has 512 tokens and attends to 512 tokens, to a model that also attends to 512 tokens. Of course, the advantage here is that they now have a sequence length of 4,096.
1,398
1,443
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1398s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
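The attended-token count is easy to recompute, taking the block length b = 64 from the base configuration quoted above:

```python
b = 64                                                    # block length in the quoted base configuration
window_tokens, random_tokens, global_tokens = 3 * b, 3 * b, 2 * b
attended = window_tokens + random_tokens + global_tokens
print(attended)           # 512 attended tokens per query, same as BERT
print(4096 // attended)   # 8: but spread over an 8x longer input sequence
```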
WVPE62Gk3EM
So they have the freedom to not attend to as many tokens as they have in the input length. But to put it in perspective, this here uses, on its face, more memory and more compute than BERT, because BERT attends to as many tokens but has a smaller input sequence. There's sort of a thing where, in order to make these sparse attention approaches work, you have to go pretty
1,443
1,479
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1443s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
high in the number of things you attend to. You can leave some away, but it's not like you can scale up your input sequence length by orders of magnitude. So this promise of linear attention is kind of fulfilled, but not quite there yet. The second thing I would like to point out is that in a lot of cases the number of random tokens is actually set to zero,
1,479
1,504
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1479s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
really making use, I believe, of the global tokens instead. That seems a bit strange, in that they continuously refer to their random attention mechanism, but then in a lot of experiments they don't actually have a random attention mechanism. I believe they have to do that because that's kind of what makes them different from the
1,504
1,531
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1504s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
Longformer in principle. But still. So the last novelty, let's say, is an engineering novelty: they never consider, for example, single random attention connections; they always consider these in blocks. That's because our current hardware is really bad at sparse stuff, really bad at indexing and gathering single things. So if you can do everything in blocks, you
1,531
1,562
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1531s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
basically get these blocks almost for free: it takes only marginally longer to retrieve this full two-by-two block right here than it would to retrieve the single instance right here. Of course that means you still use four times more memory, but it is not four times slower than the original thing. So you can use these blocks right here;
1,562
1,589
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1562s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
you can do it for the random attention, and you can do it for the window attention, as you can see here. You break this window pattern a little bit into blocks, and that makes it a lot faster; you get the speed-up almost for free. Then they make another approximation in the way they do this windowing. Let's just go through this really briefly:
1,589
1,617
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1589s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
you can see right here that it would be very cumbersome to gather what we need; this dotted thing right here is a bit confusing. You want to attend to these things, and those you can just get out with a matrix slice, really easy. But then you want to attend to this kind of blocky thing right here from the window attention, like this thing,
1,617
1,646
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1617s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
and this is hard to get out because you'd have to index each row individually, and that's very slow. So what they do there is this matrix roll operation, where you sort of roll the axis around: you take this thing right here and put it to the left, and you take, for example, this thing right here and put it to the right, or
1,646
1,672
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1646s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
rather it's up and down, but in essence that's what you do, and you can fold all of this blue stuff into a rectangular matrix, as you can see right here. So you roll this back, roll this back, roll this forward, and you replace whatever is missing with these. Now, this again gives you some inaccuracies, because this block right here was never intended to be attended to.
1,672
1,702
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1672s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
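A rough sketch of the roll idea: by stacking rolled copies of the key matrix, the diagonal band becomes a small rectangular block that TPUs handle well, and the wrap-around rows are exactly the edge inaccuracies mentioned (a key from the far end showing up next to the first token). This is an illustrative NumPy analogue, not the paper's TPU kernel.

```python
import numpy as np

def banded_keys_via_roll(keys, w=1):
    """Stack rolled copies of the key matrix so row i holds keys[i-w : i+w+1] (with wrap-around)."""
    # keys: (n, d). Result: (n, 2w+1, d), a dense rectangular tensor instead of a sparse gather.
    shifted = [np.roll(keys, -offset, axis=0) for offset in range(-w, w + 1)]
    return np.stack(shifted, axis=1)

keys = np.arange(6)[:, None].astype(float)        # toy keys, d = 1
print(banded_keys_via_roll(keys, w=1)[0, :, 0])   # first row wraps around: [5., 0., 1.]
```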
WVPE62Gk3EM
All of a sudden you see you have the K6 in here, so it gives you a bit of inaccuracy at the edges of the sequence, but you can take that hit for the increased performance you gain by now having a rectangular matrix: TPUs are really efficient at this, not as efficient at that. Then the only thing that's really slow is gathering these
1,702
1,730
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1702s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
random blocks right here. But by having the same number of random blocks per input token, you end up with just one of these columns right here, or r of these columns, and that again gives you a rectangular matrix. So this thing right here you can process very efficiently using a TPU, and the mistakes you make are basically
1,730
1,756
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1730s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
this thing right here and this thing right here, because those weren't intended and are at the edges of the sequence. So these were the tricks of Big Bird. To quickly summarize: Big Bird is basically taking a transformer and saying, well, why do we need all of this full attention? Maybe we only need some of it and can already do a good job,
1,756
1,785
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1756s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
especially considering that the attention mechanism goes over multiple layers, so we don't need routing from each token to each token; we can make up for not having a fully connected graph by simply running multiple layers. So their sparsity is, first of all, this random attention, which I believe changes from sequence to sequence but stays the same among the layers of
1,785
1,813
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1785s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
the same sequence. Then you have the window attention. The reasoning behind the random attention is that if you have a randomly connected graph, the path lengths are on average logarithmic, so you can route information efficiently. The reasoning behind the window attention is that neighbor information is probably very important, and that has been shown
1,813
1,836
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1813s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
empirically. And then the global attention: the reasoning behind this is that some tokens, fixed by the developers, are so important that it's very beneficial that every other node is connected to them and that they are connected to every other node. The result of that is the Big Bird attention mechanism, which is basically Longformer, which already had these two,
1,836
1,860
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1836s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
plus the random attention. This achieves linear complexity in terms of memory and compute, though "linear" has to be qualified a bit, because it's modified by the window size, the number of random attention tokens, and the number of global tokens, and in practice often ends up being fairly large-ish. Also, the theoretical guarantees now come with the fact that you need
1,860
1,895
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1860s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
multiple layers; in the worst case you need on the order of sequence-length many layers, which would put you right back into a quadratic requirement for memory and compute. They do some engineering tricks right here, and their results are pretty good, so let's look at the results; we'll look at some of the tasks right here.
1,895
1,922
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1895s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
These are dev set results using base-size models, for example, where you can see they do outperform basic RoBERTa models and they outperform Longformer, which may mean that the random attention is useful, but in these things it may also just mean that you've thrown more compute at it. At least, I'm not really focused on whether they outperform these models, because, as
1,922
1,951
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1922s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
you can see right here, they compare to the state of the art, and granted, these are models that have been trained specifically for these tasks and are crafted and engineered, and Big Bird manages to hold its own against them in a lot of tasks and even gets state of the art on some. What I'm more interested in is that it can reach good numbers. It
1,951
1,976
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1951s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
doesn't necessarily have to be state of the art, but it can reach good numbers, which tells me that the empirical hit I take by not having the full attention is probably justifiable by the speed-up and memory savings I do get. Especially when you see results mixed like this, where sometimes the other model is good and sometimes
1,976
2,004
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=1976s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
Big Bird is good on different variations and so on, I would not make a big deal out of the fact that it is state of the art. I get that the authors have to do that, and I would do so as well, but don't think that this is now the best thing; it's very probable they also just threw a lot of compute at it. What is cool is that
2,004
2,029
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=2004s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
WVPE62Gk3EM
they do some genomics experiments, so not only do they have NLP state of the art, but they also go into genomics and experiment with data there. I don't want to go into that, because ultimately it's another task, and the paper is really about the architecture. All right, so that was Big Bird. I hope you enjoyed this video and learned something; I certainly learned something.
2,029
2,058
https://www.youtube.com/watch?v=WVPE62Gk3EM&t=2029s
Big Bird: Transformers for Longer Sequences (Paper Explained)
https://i.ytimg.com/vi/W…axresdefault.jpg
YBlNQK0Ao6g
Okay, I'm sure many of you have already seen this, because it was rather widely announced, but the OpenAI team has announced a new model that produces pictures instead of text. As you can see right here, on the left you'll always see half a picture, and on the right is the ground truth. So they took this picture, simply cut off the bottom half right here, and then let the model sort of
0
30
https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=0s
Image GPT: Generative Pretraining from Pixels (Paper Explained)
https://i.ytimg.com/vi/Y…axresdefault.jpg
YBlNQK0Ao6g
imagine what they cut away, and what it comes up with is pretty cool, I have to say; look at the birds, this is just awesome. But the special thing about this isn't that it simply completes pictures; the special thing is that it does it pixel by pixel. Basically it goes to this pixel right here and asks, okay, what's that pixel, and then what's that pixel, and then what's
30
59
https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=30s
Image GPT: Generative Pretraining from Pixels (Paper Explained)
https://i.ytimg.com/vi/Y…axresdefault.jpg
YBlNQK0Ao6g
that pixel, and so on. So it is basically like a language model, but for pixels, in that it goes over the images in order, always from left to right, left to right, left to right. It has no clue about the spatial relations between the pixels; it needs to learn that by itself, as opposed to a convolutional neural network, which is specifically designed such that if you
59
92
https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=59s
Image GPT: Generative Pretraining from Pixels (Paper Explained)
https://i.ytimg.com/vi/Y…axresdefault.jpg
YBlNQK0Ao6g
want to predict this pixel right here, then it's built to say, okay, the most important information is probably around that pixel, and some other important information is a bit further around that pixel. So CNNs are built with this in mind, whereas this model right here, which is also known as Image GPT, doesn't have any of that; it's simply a transformer model that goes over the pixels one by one.
92
121
https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=92s
Image GPT: Generative Pretraining from Pixels (Paper Explained)
https://i.ytimg.com/vi/Y…axresdefault.jpg
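To make "a language model over pixels" concrete, here is a hedged sketch of how an image gets flattened into a raster-order sequence for next-token prediction. The shapes are toy values, and this skips the actual iGPT preprocessing (downsampling and color clustering into a small vocabulary).

```python
import numpy as np

def image_to_sequence(image):
    """Flatten an (H, W) image into a left-to-right, top-to-bottom token sequence."""
    return image.reshape(-1)

def next_pixel_targets(seq):
    """Autoregressive setup: predict pixel t+1 from pixels 0..t, exactly like a language model."""
    return seq[:-1], seq[1:]   # (inputs, targets)

img = np.arange(9).reshape(3, 3)   # toy 3x3 'image'
inputs, targets = next_pixel_targets(image_to_sequence(img))
print(inputs, targets)
```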
YBlNQK0Ao6g
We'll see how that's done. There are some more examples right here; particularly cool is the cat, and you see that there is the beginning of this little white thing here, which is this card, and the completions of the model are, yes, very interesting. The model can, of course, like a language model, also sample by itself, just random images; you sample them once
121
155
https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=121s
Image GPT: Generative Pretraining from Pixels (Paper Explained)
https://i.ytimg.com/vi/Y…axresdefault.jpg
YBlNQK0Ao6g
through, and this is what it comes up with. These are pretty good quality images for a model that produces them just one pixel at a time. Now, this idea of going pixel by pixel isn't new; it has been around before. But the investigation here is basically: how far can we push these generative models for pre-training? Hi there, this is Yannic from post-production. I've
155
185
https://www.youtube.com/watch?v=YBlNQK0Ao6g&t=155s
Image GPT: Generative Pretraining from Pixels (Paper Explained)
https://i.ytimg.com/vi/Y…axresdefault.jpg