Dataset columns:
video_id: string (length 11)
text: string (length 361-490)
start_second: int64 (0-11.3k)
end_second: int64 (18-11.3k)
url: string (length 48-52)
title: string (length 0-100)
thumbnail: string (length 0-52)
-9evrZnBorM
and then you pre-train this model. Here's an illustration of some extra things they do. This is the input up here: the first token is the CLS token, which is a kind of start token, then comes the first sentence, then the SEP token, which separates the two sentences, then the second sentence. And again, I'll get
1,086
1,122
https://www.youtube.com/watch?v=-9evrZnBorM&t=1086s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
to these hashtags in a second, but first: we have the token embeddings, so they start from the original concept of word vectors at the very base, because you need to map the input into a vector space before you can use these models, and then these are transformed through the transformer layers. They also use segment embeddings,
1,122
1,150
https://www.youtube.com/watch?v=-9evrZnBorM&t=1122s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
and the segment embeddings, as you can see here, are simply a kind of binary label, E_A being the label for the first sentence and E_B the label for the second sentence, just so the model can differentiate which one is the first and which one is the second, because it's hard for a transformer architecture to learn that the SEP token separates the sentences, so you want
1,150
1,177
https://www.youtube.com/watch?v=-9evrZnBorM&t=1150s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
to help it. The last thing is positional embeddings, and we've already talked about these in Attention Is All You Need. Since the model is a transformer, it doesn't process the sequence step by step, so it's hard for the model to tell how far apart things are from each other, whether two tokens are neighbors
1,177
1,203
https://www.youtube.com/watch?v=-9evrZnBorM&t=1177s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
or really far apart, and these positional embeddings help the model decide whether two tokens are close to each other in the input, whether they're just neighbors or actually really far apart. All right, so this is how the first input is constructed out of these embeddings (a small sketch follows below), and then it's fed through the transformer layers, as we saw with
1,203
1,230
https://www.youtube.com/watch?v=-9evrZnBorM&t=1203s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
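As a rough illustration of the input construction just described, here is a minimal numpy sketch that sums token, segment and positional embeddings; the vocabulary size, hidden size and token ids are made up for illustration and are not BERT's real values.

    import numpy as np

    # toy sizes; real BERT uses a ~30k word-piece vocabulary and 768/1024-dim hidden states
    vocab_size, max_len, hidden = 16, 12, 8
    rng = np.random.default_rng(0)

    token_emb    = rng.normal(size=(vocab_size, hidden))  # one vector per word piece
    segment_emb  = rng.normal(size=(2, hidden))           # E_A / E_B, one per sentence
    position_emb = rng.normal(size=(max_len, hidden))     # one vector per position

    # [CLS] sentence-A tokens [SEP] sentence-B tokens [SEP], with made-up ids
    token_ids   = np.array([1, 5, 6, 2, 7, 8, 9, 2])
    segment_ids = np.array([0, 0, 0, 0, 1, 1, 1, 1])      # 0 = first sentence, 1 = second
    positions   = np.arange(len(token_ids))

    # the input to the first transformer layer is simply the element-wise sum
    x = token_emb[token_ids] + segment_emb[segment_ids] + position_emb[positions]
    print(x.shape)  # (8, 8): one summed vector per input token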
-9evrZnBorM
the masked language model task and the is-next task. I want to quickly get to these hashtags and what they mean. The input here is separated into so-called word pieces, and what that is: in language processing tasks you have a choice of how to tokenize your input. Let's look at a sentence here, "subscribe to PewDiePie". So
1,230
1,275
https://www.youtube.com/watch?v=-9evrZnBorM&t=1230s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
this is a sentence, and the sentence is, word-wise, rather complicated, so many language models will have a problem with it. First you need to tokenize this sentence. What most people do is say, okay, here are the word boundaries, so we tokenize this into three segments: "subscribe", "to", "PewDiePie". Three things, and each of these now needs a
1,275
1,302
https://www.youtube.com/watch?v=-9evrZnBorM&t=1275s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
word vector associated with it. Now, the thing with word vectors, let's assume you have them pre-trained or something, in any case you need a big table, a big, big table, where for each word, "a", "the", "to", "I", "you", you have a vector associated with it. You need to keep this table in your model, and as you know, English has a lot of words, so this table is going to be
1,302
1,340
https://www.youtube.com/watch?v=-9evrZnBorM&t=1302s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
really big. The problem is how you build this table. You could build it somewhat dynamically and so on, but in general you're going to create this table with all the words you know, and that's going to be too big, because English has so many words. So you could say, all right, we'll only take the top however many words cover 90% of the language, and word frequencies turn out to be
1,340
1,372
https://www.youtube.com/watch?v=-9evrZnBorM&t=1340s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
Pareto-distributed: something like 5 percent of the words are used in 90 percent of the language. So you just take these, but then you're going to have a problem. Here, "to" is not a problem, "to" is used super often, so it's going to be at the very top somewhere and we can look it up. "Subscribe" is already not so common, right,
1,372
1,398
https://www.youtube.com/watch?v=-9evrZnBorM&t=1372s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
so maybe you have an entry for it somewhere further down, but then "PewDiePie" is a name, so it's not even a word. What people usually do is have an out-of-vocabulary token, with a vector associated with it somewhere in the table (see the small lookup sketch below), and the out-of-vocabulary token says: whatever, I don't know what this
1,398
1,427
https://www.youtube.com/watch?v=-9evrZnBorM&t=1398s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
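A minimal sketch of the word-level lookup with an out-of-vocabulary token, as described above; the tiny vocabulary and ids here are invented for illustration.

    # a tiny made-up word-level vocabulary with an explicit out-of-vocabulary entry
    vocab = {"[UNK]": 0, "to": 1, "the": 2, "a": 3, "subscribe": 4}

    def word_ids(sentence):
        # every word the table doesn't know collapses onto the same [UNK] id
        return [vocab.get(w, vocab["[UNK]"]) for w in sentence.lower().split()]

    print(word_ids("subscribe to PewDiePie"))  # [4, 1, 0] -- the name becomes [UNK]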
-9evrZnBorM
is, I just don't have it in my vocabulary, and the model has to deal with that. That's not really ideal, especially if you want to generate language: your model tends to generate out-of-vocabulary tokens if you allow that, and if you don't allow it you have a problem during training. So it's all kind of messy. What's the alternative? The
1,427
1,449
https://www.youtube.com/watch?v=-9evrZnBorM&t=1427s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
alternative is to go character level. At the character level you say, all right, my words are obviously made of characters, so I'm just going to split at each character, and the whitespace can be a character too, and then I'm simply going to have one vector for each character, and there are
1,449
1,478
https://www.youtube.com/watch?v=-9evrZnBorM&t=1449s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
only 26 of those, so I only need to keep 26 vectors. But this tends to be rather problematic, because the idea that a character by itself has a meaning that can be encapsulated by a vector is kind of shaky; a character by itself usually doesn't have a meaning. So what's the solution? The solution is to go in
1,478
1,506
https://www.youtube.com/watch?v=-9evrZnBorM&t=1478s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
between. The solution is to say, well, let's actually go to word pieces. You can think of them roughly as syllables, but you construct them such that you have a fixed-size vocabulary. Say I have 4,000 entry slots in my big table, that's the table size I can afford. First of all, for each character, a, b, c, d, e and so on, I'm going to have a vector, but
1,506
1,542
https://www.youtube.com/watch?v=-9evrZnBorM&t=1506s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
that's only 26, so I have 3,000-some slots left. I'm also going to include the most common words; "a" is already here, but maybe I add "to" and "from" and so on, so the most common words also get an entry. Then for the other things I'm going to split the words, maybe into "sub" and "scribe", right, two pieces, and "sub" can be a prefix to many things, so I only need one entry for it. So I have
1,542
1,576
https://www.youtube.com/watch?v=-9evrZnBorM&t=1542s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
"sub" here, and I only need one vector for that, and then the rest: "scribe" is by the way also a word, so I could have that, but if "scribe" weren't in my vocabulary I could divide it up into characters and represent it with those. So basically I can mix and match here: "sub", I have that, and then "scribe", say I don't have it, I don't have
1,576
1,602
https://www.youtube.com/watch?v=-9evrZnBorM&t=1576s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
any of its pieces, so I can just use the characters, so this would be "sub" and then "s", "c", "r", "i", "b", "e". These would be the tokens I now work with as my input, with these ## tags attached. And this is what would happen to "PewDiePie": you would simply split it along each character. So this is basically an interpolation between the token model
1,602
1,637
https://www.youtube.com/watch?v=-9evrZnBorM&t=1602s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
and the character model, and it's really neat and usually works quite well. As I said, the hashtag sign here simply means that these two pieces originated from one word, and this "##ing" here is just a word-piece token. This is a really good example of where word pieces come in, because "play" by itself is a word, and it can form "playing"; instead of having its own vector for
1,637
1,668
https://www.youtube.com/watch?v=-9evrZnBorM&t=1637s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
that, I can divide it into "play", which already has a meaning, and presumably "playing" and "play" have similar meanings, so it makes sense to single out "play" as a token here, and then "##ing" as a suffix also makes sense to have as a token in my table. So we have these two tokens here (a small tokenizer sketch follows below), and that probably already gives me more information than
1,668
1,693
https://www.youtube.com/watch?v=-9evrZnBorM&t=1668s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
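Here is a rough sketch of the greedy longest-match splitting just described, in the spirit of WordPiece; the mini vocabulary is invented, whereas real word-piece vocabularies are learned from a corpus.

    # invented mini word-piece vocabulary: characters, a few whole words, a few pieces
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    pieces = {"play", "##ing", "sub", "##scribe", "to"} \
             | set(alphabet) | {"##" + c for c in alphabet}

    def wordpiece(word):
        out, start = [], 0
        while start < len(word):
            # greedily take the longest piece the table knows; continuations get "##"
            for end in range(len(word), start, -1):
                piece = word[start:end] if start == 0 else "##" + word[start:end]
                if piece in pieces:
                    out.append(piece)
                    start = end
                    break
            else:
                return ["[UNK]"]  # nothing matched at all
        return out

    print(wordpiece("playing"))    # ['play', '##ing']
    print(wordpiece("subscribe"))  # ['sub', '##scribe']
    print(wordpiece("pewdiepie"))  # falls back to single characters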
-9evrZnBorM
simply having the word "playing". By the way, you should subscribe to PewDiePie, just FYI. All right, let's go on. So we do word-piece tokenization, we do the masked language model and the next-sentence-prediction pre-training; what do we have now? We have a model that can really, really well predict masked words. Now how do we use it? They evaluate on, I believe it's 11
1,693
1,731
https://www.youtube.com/watch?v=-9evrZnBorM&t=1693s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
tasks, 11 different tasks, or however many it is, it's a lot, with the same model. This pre-trained model, they claim, can be fine-tuned to do all of these tasks, and it reaches state of the art on every one of them, which is crazy. So how do they fine-tune it? The easiest tasks are the so-called sequence-level tasks, where you basically have a sequence and you're
1,731
1,769
https://www.youtube.com/watch?v=-9evrZnBorM&t=1731s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
supposed to predict one class label for the entire sequence. Here are the sentence-pair classification tasks, for example the task we saw before, the is-next task, but there are more sophisticated tasks that you need supervised data for, where you have a class label that you train on. So what do you do? Let's look at one of them, MNLI;
1,769
1,799
https://www.youtube.com/watch?v=-9evrZnBorM&t=1769s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
they had it up here, nope, here: multi-genre natural language inference, a crowd-sourced entailment classification task. Given a pair of sentences, the goal is to predict whether the second sentence is an entailment, a contradiction, or neutral with respect to the first one. All right, two sentences, and you're supposed to predict which one of these three labels it is. So you put the
1,799
1,828
https://www.youtube.com/watch?v=-9evrZnBorM&t=1799s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
two sentences here; BERT can already take two sentences as input, as we saw. The embeddings are the A and B segment embeddings, and the positional embeddings are left out of the picture here, but they would be added as well, and these would be the embeddings for it. Then you pass this through the BERT model, and this is the final layer, and what they do is they
1,828
1,856
https://www.youtube.com/watch?v=-9evrZnBorM&t=1828s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
simply take the final embedding for this first position, the one corresponding to the CLS start token, and they put a single classification layer, basically a logistic regression, on top of it, and that's how they get a class label. So if this gives you, let's say, a hidden vector of 512 dimensions, and you have three labels to output here, 1, 2,
1,856
1,889
https://www.youtube.com/watch?v=-9evrZnBorM&t=1856s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
3, you simply need a matrix of size 512 by 3, and these are the weights that you then have to train in addition; BERT itself is pre-trained, and you now only have to learn these weights (a small sketch of this head follows below). Of course they also fine-tune the entire BERT model, but that's really just fine-tuning; the only thing you have to learn from scratch is these weights here. That's pretty
1,889
1,926
https://www.youtube.com/watch?v=-9evrZnBorM&t=1889s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
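A minimal numpy sketch of that extra classification head on the CLS output; the 512-dimensional hidden vector and the three labels follow the numbers used in the explanation, and the BERT encoder itself is replaced by a random stand-in vector.

    import numpy as np
    rng = np.random.default_rng(0)

    hidden, num_labels = 512, 3                        # e.g. entailment / contradiction / neutral
    W = rng.normal(size=(hidden, num_labels)) * 0.02   # the only weights learned from scratch
    b = np.zeros(num_labels)

    cls_vector = rng.normal(size=hidden)               # stand-in for BERT's final [CLS] embedding

    logits = cls_vector @ W + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                               # softmax over the three classes
    print(probs)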
-9evrZnBorM
neat, first of all because you can be very quick at learning new tasks: you simply start from the pre-trained BERT and then learn a single classification layer on top, and astonishingly this works extremely well for these tasks. A bit more challenging is SQuAD here, which is a question-answering task, and we're going to jump down to where they explain
1,926
1,958
https://www.youtube.com/watch?v=-9evrZnBorM&t=1926s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
the task. You have an input question, and the question is "Where do water droplets collide with ice crystals to form precipitation?", and you have an input paragraph, which is a paragraph from a Wikipedia page, and you know that the answer is somewhere in this paragraph; the data set is constructed such that the answer is in
1,958
1,986
https://www.youtube.com/watch?v=-9evrZnBorM&t=1958s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
the paragraph. The paragraph reads "Precipitation forms as smaller droplets coalesce via collision with other raindrops or ice crystals within a cloud." The question is where water droplets collide to form precipitation, and the answer here is "within a cloud", that's this thing here. So usually what SQuAD models do is predict the span:
1,986
2,017
https://www.youtube.com/watch?v=-9evrZnBorM&t=1986s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
they predict where the answer starts and where the answer ends, and that's also what BERT is trained to do here. In order to do this, you again use the ability to input two sequences; we've pre-trained with two sentences, but here you say, well, our first sequence is going to be the question and our second sequence is going to be the
2,017
2,044
https://www.youtube.com/watch?v=-9evrZnBorM&t=2017s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
entire paragraph from Wikipedia, and then for the output of each token, remember there are as many outputs as there are inputs because the transformer always maps to a sequence of the same length, we classify: is this token the start of the answer, is this token the end of the answer, or is it neither? Now what
2,044
2,080
https://www.youtube.com/watch?v=-9evrZnBorM&t=2044s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
they do, effectively, is that each output here is a vector, and, as we said at the beginning with the example of finding out which token is the subject, here we have two queries: one query that asks "is this the start?", let's call it query S, and query E, which asks "is this the end token?". These are two learned vectors, and I'm going to compute the inner product of each query with each of
2,080
2,110
https://www.youtube.com/watch?v=-9evrZnBorM&t=2080s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
these outputs. Over my sequence, this is going to give me a distribution: for the start, maybe this token gets little weight, this token gets a lot, and so on over the tokens, and for the end, not so probable, not so probable, very probable, not so probable. So what you get from these inner products is a distribution over which token is the start
2,110
2,145
https://www.youtube.com/watch?v=-9evrZnBorM&t=2110s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
and which is the end: okay, this one's probably the start and this one's probably the end. That's how you predict the span (see the sketch below), and again, what you ultimately have to learn is just these two queries, so not that much. And this is named entity recognition: in named entity recognition you have a sentence and you're supposed to recognize named entities, like up here we saw "subscribe
2,145
2,181
https://www.youtube.com/watch?v=-9evrZnBorM&t=2145s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
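A small numpy sketch of the start/end-query idea for span prediction described above; the per-token outputs are random stand-ins for BERT's outputs, and the sizes are made up.

    import numpy as np
    rng = np.random.default_rng(0)

    hidden, seq_len = 512, 20
    token_outputs = rng.normal(size=(seq_len, hidden))  # stand-in for BERT's per-token outputs

    query_start = rng.normal(size=hidden)  # "is this token the start of the answer?"
    query_end   = rng.normal(size=hidden)  # "is this token the end of the answer?"

    def softmax(z):
        z = np.exp(z - z.max())
        return z / z.sum()

    # inner product of each query with every token output gives a distribution over positions
    p_start = softmax(token_outputs @ query_start)
    p_end   = softmax(token_outputs @ query_end)

    print(p_start.argmax(), p_end.argmax())  # predicted answer span (start index, end index)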
-9evrZnBorM
to PewDiePie", and the named entity would be "PewDiePie"; this is a name and you're supposed to recognize that it's a name. They do it basically the same way they do SQuAD, or a similar way, sorry: for each of the outputs here they simply classify whether or not it's part of a named entity. You might
2,181
2,220
https://www.youtube.com/watch?v=-9evrZnBorM&t=2181s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
also have different labels for which kind of entity it is (this is a person, this is no entity, and so on), so if you have ten labels, then for each token you classify it into one of ten classes. So you need a classifier of size hidden dimension by number of classes (sketched below), and that's all you have to train in addition to fine-tuning BERT itself. All right, so they evaluate
2,220
2,253
https://www.youtube.com/watch?v=-9evrZnBorM&t=2220s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
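And a matching sketch of the per-token entity classifier; ten classes and random stand-in token outputs, purely for illustration.

    import numpy as np
    rng = np.random.default_rng(0)

    hidden, seq_len, num_classes = 512, 6, 10           # e.g. person, location, ..., no-entity
    token_outputs = rng.normal(size=(seq_len, hidden))  # stand-in for BERT's per-token outputs

    W = rng.normal(size=(hidden, num_classes)) * 0.02   # hidden-size x num-classes classifier

    logits = token_outputs @ W       # one row of class scores per token
    print(logits.argmax(axis=1))     # predicted entity class for each token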
-9evrZnBorM
on all of these tasks and get super-duper numbers on all of them; BERT-large wins on pretty much everything. And this model is big, just saying, and they trained it on TPUs, which are available in Google's cloud infrastructure, and they trained it on a lot of data, so in a way it's kind of expected that you would outperform, but it's very surprising that you outperform
2,253
2,295
https://www.youtube.com/watch?v=-9evrZnBorM&t=2253s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
everyone else by this much. And they've done a lot of ablation studies where they show that it's really due to the fact that they use this left and right context, that they take into account the left and the right context of a given token when doing the attention; that's why it's better. So here for example they compare against the BERT-base model and they say, okay, what if we
2,295
2,327
https://www.youtube.com/watch?v=-9evrZnBorM&t=2295s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
don't do the NSP, the next-sentence-prediction task: you can see the numbers already drop on these tasks. And what if we then additionally do only left-to-right training: the numbers drop really seriously again; sometimes, here for example, you see a pretty serious drop in the number, also here. So there really seems to be real value in
2,327
2,359
https://www.youtube.com/watch?v=-9evrZnBorM&t=2327s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
-9evrZnBorM
doing this kind of left-and-right-context attention, so it's not just about the model size and the amount of data; that's basically what they show here. And it's really cool that the paper actually shows this, because usually people have an idea, they throw a lot more resources at it, they're better, and you never know why; it's pretty cool that they actually demonstrate it. All right.
2,359
2,384
https://www.youtube.com/watch?v=-9evrZnBorM&t=2359s
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://i.ytimg.com/vi/-…axresdefault.jpg
a0f07M2uj_A
Hi there. Today we're looking at "Backpropagation and the brain" by Timothy Lillicrap, Adam Santoro, Luke Marris, Colin Akerman and Geoffrey Hinton. This is a bit of an unusual paper for the machine learning community, but nevertheless it's interesting, and let's be honest, at least half of our interest comes from the fact that Geoffrey Hinton is one of the authors of this paper. So
0
28
https://www.youtube.com/watch?v=a0f07M2uj_A&t=0s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
this is a paper that basically proposes a hypothesis on how an algorithm like backpropagation could work in the brain, because previously there has been a lot of evidence against there being something like backpropagation in the brain. So the question is: how do neural networks in the brain learn? And they say there can be many different ways that neural networks
28
60
https://www.youtube.com/watch?v=a0f07M2uj_A&t=28s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
learn, and they list them in this kind of diagram, where you have a network that maps from input to output via weighted connections between neurons. The input is two-dimensional, and it maps, using these weights, to a three-dimensional hidden layer, and usually there is a nonlinear function at the output of these neurons, so they compute a weighted sum of the
60
92
https://www.youtube.com/watch?v=a0f07M2uj_A&t=60s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
inputs, then apply a nonlinear function, and then propagate that signal to the next layer and finally to the output. All right, so how do these networks learn? One way of learning is called Hebbian learning, and the interesting thing here is that it requires no feedback from the outside world. Basically, what you want to do in Hebbian learning is
92
119
https://www.youtube.com/watch?v=a0f07M2uj_A&t=92s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
update the connections such that they match, or even amplify, their own previous outputs. So you propagate a signal, and maybe this neuron spikes really hard and this one spikes really low; then, if you propagate the signal again, you want to match those activations, or if you propagate similar signals, reproduce them (a small Hebbian-update sketch follows below). No
119
148
https://www.youtube.com/watch?v=a0f07M2uj_A&t=119s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
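A minimal sketch of a Hebbian-style update in the sense just described: weights grow in proportion to the product of pre- and post-synaptic activity, with no error signal anywhere; the toy network and learning rate are assumptions.

    import numpy as np
    rng = np.random.default_rng(0)

    pre = rng.random(4)                 # activities of the input neurons
    W = rng.normal(size=(3, 4)) * 0.1   # weights from 4 inputs to 3 outputs
    lr = 0.05

    for _ in range(10):
        post = np.maximum(W @ pre, 0.0)   # forward pass with a simple nonlinearity
        W += lr * np.outer(post, pre)     # Hebbian: co-active pre/post pairs get stronger

    print(W)  # connections between co-active neurons have grown; no feedback was used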
a0f07M2uj_A
feedback is required, so basically it's a self-amplifying or self-dampening process. Ultimately, though, you want to learn something about the world, and that means you have to have some feedback from outside. By feedback we usually mean that the output here goes into the world; let's say this is a motor neuron, and you do
148
180
https://www.youtube.com/watch?v=a0f07M2uj_A&t=148s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
something with your arm, like hammer on a nail, and then you either hit the nail or you don't. Let's say you don't hit it, so afterwards it looks crooked; there you have feedback, usually in the form of some sort of error signal. The feedback can be "this was good" or "this was bad", or it can be "this was a bit too far to the left", and so on. The important part
180
213
https://www.youtube.com/watch?v=a0f07M2uj_A&t=180s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
is that you get one number of feedback, how bad you were, and now your goal is to adjust all of the individual neurons, or the weights between neurons, such that the error gets lower. So in Hebbian learning there is no feedback; it's simply a self-reinforcing pattern-activation machine. In these first instances of perturbation learning, what
213
245
https://www.youtube.com/watch?v=a0f07M2uj_A&t=213s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
you'll have is one single feedback signal, and you can see this as a diffuse cloud here: what you're basically saying is that every single neuron is punished, let's say the feedback here was negative one, so every single neuron is punished for that. You can imagine it like this: you have your input x and you map it through your function
245
277
https://www.youtube.com/watch?v=a0f07M2uj_A&t=245s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
f, and the function f has weights w1 and so on. You map x through it and you get a feedback of negative 1, and then you map x with a little bit of noise added, x plus n, and you get a feedback of negative 2. That means the direction of this noise was probably a bad direction, so ultimately you want to update in
277
313
https://www.youtube.com/watch?v=a0f07M2uj_A&t=277s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
the direction of the negative of that noise, modulated of course by some factor that tells you how much worse it got, which here would be the difference between the two feedbacks, negative 2 versus negative 1 (a small sketch of this update follows below). So basically with a scalar feedback you simply tell each neuron whether the entire network
313
353
https://www.youtube.com/watch?v=a0f07M2uj_A&t=313s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
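A rough sketch of that perturbation-style update: evaluate once, evaluate again with a little noise, and move with the noise scaled by the change in the scalar feedback, so a worse outcome pushes against the noise direction. The feedback function, step sizes and iteration count are all made up.

    import numpy as np
    rng = np.random.default_rng(0)

    def feedback(x):
        # made-up environment: higher (less negative) feedback is better
        return -np.sum((x - 3.0) ** 2)

    x = np.zeros(2)
    lr, noise_scale = 1.0, 0.1

    for _ in range(2000):
        r_clean = feedback(x)                       # e.g. the "-1" in the example
        noise = noise_scale * rng.normal(size=x.shape)
        r_noisy = feedback(x + noise)               # e.g. the "-2" in the example
        # one scalar difference modulates everything: if the noise made things worse,
        # r_noisy - r_clean is negative and x moves against the noise direction
        x += lr * (r_noisy - r_clean) * noise

    print(x)  # drifts, noisily and slowly, toward the optimum near [3, 3]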
a0f07M2uj_A
did right or wrong; the entire network leads to this one feedback, so you don't have accountability for the individual neurons. All you can say is: whatever I'm doing here is wrong, and whatever I'm doing here is right, so I'm going to do more of the right things. Now in backpropagation it is very different:
353
378
https://www.youtube.com/watch?v=a0f07M2uj_A&t=353s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
you'll have your feedback here, let's say that's negative 1, and then you do a reverse computation. The forward computation in this case was this weighted sum in each layer, and now you do a layer-wise reverse computation, which means you know how this output came to be out of the inputs, and that means you can do an
378
408
https://www.youtube.com/watch?v=a0f07M2uj_A&t=378s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
inverse propagation of the error signal, which is of course the gradient: you differentiate your error with respect to the inputs of the layer. So in the backpropagation algorithm you can exactly determine, if you are this node, how do I have to adjust my input weights in order to make this number here go down. And
408
444
https://www.youtube.com/watch?v=a0f07M2uj_A&t=408s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
then, because you always propagate the error according to that, what you'll have in each layer is basically a vector target: it's no longer just one number, but each layer now has a vector of targets that says, okay, these are the outputs that would be beneficial; this layer, please change your outputs in the direction of negative two, negative three, plus four. So you see, this
444
473
https://www.youtube.com/watch?v=a0f07M2uj_A&t=444s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
negative two would be for this unit, the negative three for this unit, and the plus four for this unit, so each unit is instructed individually: this is the direction you should change in in order to make this number go lower. You see how this is much more information than in perturbation learning; in perturbation learning
473
496
https://www.youtube.com/watch?v=a0f07M2uj_A&t=473s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
all the units simply know "well, that was bad, now it's better, so let's change a bit", whereas here you have detailed instructions for each unit because of the backpropagation algorithm. Ultimately, people have thought that, since backpropagation didn't seem really possible with biological neurons, the brain might be doing something like perturbation learning, but
496
526
https://www.youtube.com/watch?v=a0f07M2uj_A&t=496s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
this paper argues that something like backpropagation is not only possible but likely in the brain, and they propose this kind of backprop-like learning with a feedback network. They differentiate sharply between these two regimes: on this hand you have the scalar feedback, which means that the entire network gets one number as feedback and each neuron
526
558
https://www.youtube.com/watch?v=a0f07M2uj_A&t=526s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
just gets that number, and here you have vector feedback, where each neuron gets an individual instruction of how to update. And they achieve this not by backpropagation, because the original formulation of backprop as we use it in neural networks is still not biologically plausible, but with this backprop-like learning with a feedback network, and we'll see how this
558
586
https://www.youtube.com/watch?v=a0f07M2uj_A&t=558s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
works, but in essence this feedback network is constructed such that it can give each neuron in the forward pass detailed instructions on how to update itself. They have a little diagram here: if this is an error landscape and you do Hebbian learning, you basically don't care about the error, you're just reinforcing yourself;
586
617
https://www.youtube.com/watch?v=a0f07M2uj_A&t=586s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
if you do perturbation learning, it's very slow because you don't have a detailed signal, you just rely on this one number; it's as if you were to update every single neuron in your neural network with reinforcement learning, treating the output of the neural network, or the error, as the reward, and not using backprop. And then with backprop you
617
644
https://www.youtube.com/watch?v=a0f07M2uj_A&t=617s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
have a much smoother, much faster optimization trajectory. So they look at this and come to some conclusions. First of all, here's backprop: as we said, you have the forward pass, where you simply compute these weighted sums and usually also pass them through some sort of nonlinear activation. And the cool
644
680
https://www.youtube.com/watch?v=a0f07M2uj_A&t=644s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
thing about this in artificial neural networks is that once the error comes in, you can exactly reverse that: you can do a backward pass of errors where you propagate these errors through; the function doesn't have to be invertible, but the gradients will flow backwards if you know how the forward pass was computed (a small sketch follows below). So first of
680
710
https://www.youtube.com/watch?v=a0f07M2uj_A&t=680s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
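A minimal numpy sketch of this: a tiny two-layer forward pass and the exact backward pass of errors through it, which is possible precisely because we know how the forward pass was computed. Sizes, data and the loss are made up.

    import numpy as np
    rng = np.random.default_rng(0)

    x = rng.normal(size=2)                  # 2-d input, as in the paper's little diagram
    W1 = rng.normal(size=(3, 2)) * 0.5      # 2 -> 3 hidden
    W2 = rng.normal(size=(1, 3)) * 0.5      # 3 -> 1 output
    target = np.array([1.0])

    # forward pass: weighted sums plus a nonlinearity
    h = np.tanh(W1 @ x)
    y = W2 @ h
    loss = 0.5 * np.sum((y - target) ** 2)

    # backward pass: the error is propagated back exactly (these are the gradients)
    dy = y - target                          # error at the output
    dW2 = np.outer(dy, h)
    dh = W2.T @ dy                           # per-unit error signal for the hidden layer
    dW1 = np.outer(dh * (1 - h ** 2), x)     # chain rule through the tanh

    print(loss, dW1.shape, dW2.shape)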
a0f07M2uj_A
all, they go into a discussion of backprop in the brain: how can we even expect that? And one cool piece of evidence, I find, is that they cite several examples where artificial neural networks are used to learn the same tasks as humans or as animal brains, and then, I have no clue how they measure any of this, but they compare the hidden representations of
710
746
https://www.youtube.com/watch?v=a0f07M2uj_A&t=710s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
the living neural networks and the artificial neural networks, and it turns out that the networks that were trained with backpropagation match better than networks that were not trained with backprop. So basically, if you train a network with backprop, it matches the biological networks much more closely in how they form their hidden representations, and they cite a
746
782
https://www.youtube.com/watch?v=a0f07M2uj_A&t=746s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
number of experiments here that show this. This gives you pretty good evidence that the hidden representations look as if they had been computed by backprop and not by any of these scalar-update algorithms, so it is conceivable that we would find backprop in the brain. Next they go into problems with backprop: basically, why
782
814
https://www.youtube.com/watch?v=a0f07M2uj_A&t=782s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
have we so far believed that backprop isn't happening in the brain? I want to highlight two factors here that I find suffice to state; they have more. First of all, backprop demands synaptic symmetry in the forward and backward paths. Basically, if you have a neuron and it has an output to another neuron, what you need to be able to do is
814
845
https://www.youtube.com/watch?v=a0f07M2uj_A&t=814s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
pass information back along that connection, so it has to be a symmetric connection between the forward and the backward pass, and these need to be exact. And this is just not how neurons are structured: they have input dendrites, and then there's this axon with its action potential, and the signal travels along the axon, and back-travel of the signal
845
875
https://www.youtube.com/watch?v=a0f07M2uj_A&t=845s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
is, I think, very, very slow, if even possible, so the connection is generally not capable of an inverse computation. This is one reason why backprop seems unlikely. The second reason here is that error signals are signed and potentially extreme-valued, and I want to add to that, they also talk about this somewhere, that error signals are of a different type.
875
906
https://www.youtube.com/watch?v=a0f07M2uj_A&t=875s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
So first, let's see: error signals are signed, yes, we need to be able to adjust neurons in specific directions. If you look again at what we've drawn before, we said this is how these neurons must update: the first neuron must decrease by two, this one must decrease by three, and this one must increase by four. Now in
906
941
https://www.youtube.com/watch?v=a0f07M2uj_A&t=906s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
backprop we need this, but if we assume that there is something like a reverse computation or signaling happening here, we still have the problem that these output signals are usually in the form of spiking rates over time: if a neuron has zero activation, there's just no signal, but if a neuron has a high activation it spikes a lot, and if
941
977
https://www.youtube.com/watch?v=a0f07M2uj_A&t=941s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
it has a low activation it spikes sometimes. What it can't do is spike negatively; zero is as low as it goes. So the thought that there is signed information in the backward pass is inconceivable, even if you have something like a second network: you can imagine that instead of this backward connection, because of the symmetry problem, we have some kind of
977
1,003
https://www.youtube.com/watch?v=a0f07M2uj_A&t=977s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
second neural network that goes in this direction; still, you'd have the problem that here you can only have a positive signal or a zero. And the errors might be extreme-valued, which can't really be encoded with spiking, because spiking rates are limited in the range they can assume. But error signals are also of a different type, and what I mean by that is basically: if you think of this
1,003
1,030
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1003s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
as a programming problem, then the forward passes here are activations, whereas the backward passes are deltas: in the backward pass you either propagate deltas or you propagate directions. The activations are sort of impulses, whereas the backward signals say "this is how you need to change", they're gradients ultimately, so it's fundamentally a different type
1,030
1,067
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1030s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
of data that would be propagated along these backward directions, and that makes it very unlikely, because we are not aware, as this paper says, that neurons can switch the data type they're transmitting. All right, so then the paper goes into their NGRAD hypothesis, and the hypothesis basically states that the
1,067
1,100
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1067s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
brain could implement something like neural network learning by using an approximate backprop-like algorithm based on autoencoders. I want to jump straight into the algorithm; no, actually, first they do talk about autoencoders, which I find very interesting. So if you think of autoencoders, what is an autoencoder? An autoencoder is a network that basically starts out with an input
1,100
1,132
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1100s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
layer, then has a bunch of hidden layers, and at the end it tries to reconstruct its own input: you feed data in here, you get data out here, and then your error signal is the difference to your original input. Now, usually when we train autoencoders in deep learning we also train them by backprop: we take this error here and it goes
1,132
1,167
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1132s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
back. But just think of single-layer autoencoders; let's go over here: a single-layer autoencoder with, let's say, the same number of units in this layer. What you'll have is: this is the input, this is the output, and this is the hidden layer. You'll have a weight matrix here, and you'll probably have some sort of
1,167
1,201
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1167s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
nonlinear function, and then you have another weight matrix here, and they call them W and B. Another way to draw this is: I have the weight matrix W going up, then a nonlinear function transforming this into this signal, and then I have B going back. So I'm drawing it in two different ways, up here or over here, and with the second way you can see that it
1,201
1,232
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1201s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
is kind of a forward-backward algorithm, where now, if you look at what the error is here, the error is the difference between this and this, and the difference between this and this, and the difference between this and this. And you can train an autoencoder simply by saying: W, please make sure that the input here gets mapped closer to the output, and for B,
1,232
1,270
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1232s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
the same thing; this will become clear in a second. Basically, sorry, I mean the hidden representations, you'll see. The idea is that you can train an autoencoder using only local update rules; you don't have to do backprop, and that's what this algorithm is proposing. Namely, if you think of a stack of autoencoders, transforming one hidden
1,270
1,306
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1270s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
representation into the next, this is the feed-forward function F. What you can do is, first of all, assume that for each of these functions you have a perfect inverse: you can perfectly compute the inverse function, that's this G here. Of course this doesn't exist, but assume you have it. What you could then do is:
1,306
1,337
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1306s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
if you knew in one layer, and in the top layer of course you know, that okay, I got this from my forward pass but I would like to have this, this is my desired output, then in the output layer you can compute an error right here; this is what you do at the output. Now, in backprop we would backpropagate
1,337
1,368
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1337s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
this error along the layers, but now we don't do this. Instead, what we do is use this G function to invert the F function, and by that we ask: what should the hidden representation in layer two have been in order for us to obtain this desired thing? The claim here is: if in layer two we had had this target h2 as the hidden representation,
1,368
1,406
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1368s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
then we would have landed exactly where we want; that's what this G function gives us, because G is the inverse of F: had we had this target h2 and applied F to it, we would be exactly where we want, but instead we had our actual h2 here and applied F to it, and we landed here, where we don't want to be. So this is where we would want to be in layer two, and this is where we were, and again we can compute an error
1,406
1,440
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1406s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
here. Again, instead of backpropagating that error, what we'll do is use the inverse of the forward function in order to back-propagate our desired hidden representation (see the sketch below). You can see there is of course a relationship to true backprop here, but the important distinction is that we are not trying to back-propagate the error signal; we're trying to invert the desired hidden states of the
1,440
1,469
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1440s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
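Here is a schematic sketch of that idealized version with a perfect inverse: instead of propagating gradients, each layer's target is obtained by inverting the forward function applied to the target above. To make a perfect inverse actually exist, the layers here are plain invertible linear maps, which is an assumption for illustration only.

    import numpy as np
    rng = np.random.default_rng(0)

    # made-up invertible linear "layers" so that a perfect inverse g really exists
    W1 = rng.normal(size=(3, 3)) + 3 * np.eye(3)
    W2 = rng.normal(size=(3, 3)) + 3 * np.eye(3)
    f1, g1 = (lambda h: W1 @ h), (lambda h: np.linalg.solve(W1, h))
    f2, g2 = (lambda h: W2 @ h), (lambda h: np.linalg.solve(W2, h))

    h0 = rng.normal(size=3)       # input
    h1 = f1(h0)                   # forward pass
    h2 = f2(h1)

    h2_target = h2 - 0.1 * (h2 - np.ones(3))   # desired output, nudged toward some label

    # instead of back-propagating an error, invert the desired hidden state:
    # "what should h1 have been so that f2 would have produced h2_target?"
    h1_target = g2(h2_target)

    # each layer now has a purely local error it can reduce on its own
    print(h2_target - h2, h1_target - h1)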
a0f07M2uj_A
network, and then in each layer we can compute, from the forward pass, the difference to the desired hidden state and thereby compute an error signal. And now we have achieved what we wanted: an algorithm that doesn't do backprop, that only uses local information in order to compute the error signal it needs to adjust itself, and by local I mean information in the
1,469
1,499
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1469s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
same layer. Also, the data type that is propagated by F is activations, hidden representations, and what G propagates is also activations of hidden representations; both of them are always positive, can be encoded by spiking neurons, and so on. So this algorithm achieves what we want. They go a bit into detail on how the actual error update can be achieved, and apparently neurons
1,499
1,530
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1499s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
can manage, within the same layer, to adjust themselves toward a given desired activation. So this algorithm achieves it, but of course we don't have this perfect G, and therefore we need to get a bit more complicated. What they introduce is the following algorithm: the goals are the same, but now we assume we do not have a perfect inverse, only something that is a bit like an
1,530
1,563
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1530s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
inverse, an approximate inverse, and they basically suggest that if we have an approximate inverse we can do the following. G is now an approximate inverse of F. This is our input signal; we use F to map it forward to this and so on, all the way up until we get our true error right here, the error from the environment, this is the nail being
1,563
1,590
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1563s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
hammered in crooked. And then we do two applications of G: this is an application of F, and we apply G once to what we got in the forward pass. This now gives us a measure of how bad our inverse is: since G is only an approximate inverse, we see here, oh okay, we had h2 in the forward pass, and we basically forward-passed and then went through our inverse,
1,590
1,627
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1590s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
and we didn't land quite exactly where we started, but we know that, okay, this is basically the difference between our forward-then-inverted h and our true h. And then we also back-project, using G again, the desired outcome, so we invert the desired outcome here. Now, before, we would have directly compared these two, because we said this is what we got and this is what we want,
1,627
1,663
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1627s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
but now we account for the fact that G isn't a perfect inverse, and our assumption is that G here probably makes about the same mistakes as G there. So what we'll do is take this correction vector right here and apply it here in order to obtain this thing, and this thing is now the corrected, desired hidden representation (the formula is sketched below), corrected for the fact that we don't have a
1,663
1,693
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1663s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
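A small sketch of that correction step with an approximate inverse, in the spirit of difference target propagation: the mismatch between h and g(f(h)) is added back to the inverted target, assuming g makes roughly the same mistake on both. The toy functions W, B and the noise are invented.

    import numpy as np
    rng = np.random.default_rng(0)

    W = rng.normal(size=(3, 3))
    B = rng.normal(size=(3, 3))            # B gives only an *approximate* inverse of f

    f = lambda h: np.tanh(W @ h)           # forward function of this layer
    g = lambda h: np.tanh(B @ h)           # learned, imperfect inverse

    h1 = rng.normal(size=3)                # hidden state below
    h2 = f(h1)                             # forward pass through this layer
    h2_target = h2 + 0.1 * rng.normal(size=3)   # desired activation handed down from above

    # naive target: just invert the desired outcome
    naive_target = g(h2_target)

    # corrected target: add back the reconstruction error (h1 - g(f(h1))),
    # assuming g is off by about the same amount on h2_target as on h2
    h1_target = g(h2_target) + (h1 - g(h2))

    print(h1_target - h1)                  # the purely local error for this layer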
a0f07M2uj_A
perfect inverse. And now, again, we have our error here that we can adjust locally; again, all the signals propagated here, here, and here are just neural activations, and all the information required to update a layer of neurons is now contained within that layer of neurons, and this goes back through the network. So this is how they achieve this. Here is a
1,693
1,726
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1693s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
bit of a close-up look, and here are the computations to do this. Basically, for the forward updates you want to adjust W in the direction of h minus h-tilde, and h-tilde in this case would be the hidden representation that you would like to have. So you update your forward weights in a direction such that your hidden representations move
1,726
1,756
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1726s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
closer, sorry, such that your forward hidden representation is closer to your backward hidden representation. For the backward updates, your goal is to make G better; W here are the weights of F, and B are the weights of G, so in the backward updates your goal is to make G a better inverse. What you'll do is again take a difference, but now, you see,
1,756
1,792
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1756s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
the difference is between these here, not the same error: in the W update you use what we labeled the error here, and in the G update you use this error here, the reconstruction error of G. So when you update the function G, you want to make these two closer together, such that G becomes a better inverse (both update rules are sketched below), because you're dealing with an approximate inverse and you still need to obtain that
1,792
1,825
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1792s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
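A sketch of the two local update rules as I understand them: the forward weights move this layer's activation toward its target, and the backward weights are trained on the reconstruction error so that g becomes a better inverse of f. Learning rates, shapes and the squared-error form are assumptions.

    import numpy as np
    rng = np.random.default_rng(0)

    W = rng.normal(size=(3, 3)) * 0.5      # weights of the forward function f
    B = rng.normal(size=(3, 3)) * 0.5      # weights of the backward function g
    lr = 0.1

    h1 = rng.normal(size=3)
    h2 = np.tanh(W @ h1)                   # forward pass through this layer
    h2_target = h2 + 0.1 * rng.normal(size=3)   # target handed down from above
    # the corrected target that would be passed further down (as in the sketch above)
    h1_target = np.tanh(B @ h2_target) + (h1 - np.tanh(B @ h2))

    # forward update: push f(h1) toward the target for this layer's output
    forward_err = h2_target - h2           # mismatch between target and forward activation
    W += lr * np.outer(forward_err * (1 - h2 ** 2), h1)

    # backward update: push g(f(h1)) back toward h1, so g becomes a better inverse of f
    recon = np.tanh(B @ h2)
    recon_err = h1 - recon                 # the reconstruction error of g
    B += lr * np.outer(recon_err * (1 - recon ** 2), h2)

    print(np.abs(forward_err).mean(), np.abs(recon_err).mean())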
a0f07M2uj_A
approximate inverse, and this here is how you learn it. This algorithm now achieves what we wanted: local updates, check; data types, check; signed, check; and so on. I hope this was clear enough; in essence it's pretty simple, but it's pretty cool how they work around this. They call this difference target propagation, and these kinds of papers, I don't think they
1,825
1,859
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1825s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg
a0f07M2uj_A
invented this, maybe, I'm not sure, maybe they did, maybe they didn't, and this paper just frames it within this hypothesis; it is unclear to me, I am not familiar with this line of papers, so sorry if I misattribute something here. All right, then they go into how these things could be implemented biologically, and they go over some evidence, and they also state that we used to look at neurons
1,859
1,894
https://www.youtube.com/watch?v=a0f07M2uj_A&t=1859s
Backpropagation and the brain
https://i.ytimg.com/vi/a…_A/hqdefault.jpg