video_id (string) | text (string) | start_second (int64) | end_second (int64) | url (string) | title (string) | thumbnail (string) |
---|---|---|---|---|---|---|
v2GRWzIhaqQ | time step each time step during the episode we update the weights so how are they going to be updated let's contrast this first to classic reinforcement learning so in classic reinforcement learning we would keep these weights the same during the entire episode and then at the end of the episode right we keep those the same and at the end of the episode we'll get a reward | 660 | 682 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=660s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | and then we'll go back we'll look back and say how do we need to change the weights such that in the next episode the reward will be higher and in again in classic reinforcement learning for example in policy gradient methods you will actually calculate a gradient with respect to these weights right here actually let's let's go into that later when we contrast evolutionary methods | 682 | 706 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=682s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | so the important part right here is that we change the weights in each time step so how do we change the weights of course we don't have access to the reward right in order to change the weights the reward is going to come into play when we change the rules to change the weights but during the episode we don't have the reward at least we assume we only get kind of | 706 | 726 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=706s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | the reward at the end so we need a different uh method and the method is going to be the following right here the important things in this formula are going to be so how do we change the weights that's dependent on two quantities that appear during each time step o i and oj and these are going to be the outputs of neuron i and neuron j so how do we change the | 726 | 754 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=726s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | connection that's going to be dependent on the output of neuron i which is here called the presynaptic output and the output of neuron j which is going to be the post synaptic output the rule the kind of mantra here is the fire together wire together means that if two neurons are active at the same time regularly then they probably should be connected together because they already correlate | 754 | 784 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=754s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | and you can see right here that there is a term in this formula that is o i times o j so this here is the correlation between or the covariance um or just the product if if we're exact between these two neurons and if they are both active regularly then this quantity is going to be high and if they're both not active regularly that or if one is active and the other one | 784 | 812 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=784s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | isn't that quantity is going to be low and the a parameter here specifies how the weights are updated in response to this so the a b c d and eta parameters right here are these are the learned parameters these are going to be your learned rules to update the weights so these change once after once per learning step so once per so after the episode is done you're | 812 | 840 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=812s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | going to change these capital constants right here including the eta which is the learning rate these things right here these are per step so this is each step gives you a different oi and oj and then you'll adjust the weight based on that you'll see that these constants here they are per weight so for each weight in this neural network we learn a separate rule | 840 | 866 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=840s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | of how to update that particular weight so the algorithm can it can basically decide for a particular weight you can decide well if these two things fire together often i want to update my weight very heavily in response to that okay so if the a is very high that means the connection responds very thoroughly to when the two neurons fire together that is not the same as to say that | 866 | 897 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=866s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | connection should always be very strong it's dependent on the input so only when this quantity is high should the network or should the weight be updated and the a parameter modulates how well it's updated or how um how how strongly it's updated it can also be negative it can be zero basically meaning that you know it doesn't matter if they fire together i don't want to update the weight | 897 | 924 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=897s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | this particular weight in response to that so you can see that you can learn these rules that can adapt to different inputs because all of the changes the delta here is dependent on the inputs so on the correlation but also on the different inputs themselves and then there is also a constant right here okay this it's as you can see it's a linear function of the inputs of the oi and oj | 924 | 955 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=924s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | and their product so i hope this is clear that these are the hebbian rules you learn abcd and eta and that gives rise to an adaptive network that can change and reconfigure itself over the course of an episode depending on the inputs and one of the things right here and we'll get to how you actually learn the rules itself in a second but one of the things right here is very | 955 | 985 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=955s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | visible as i said in this first experiment where it reconfigures itself continuously but also in this experiment with this quadruped right here so this quadruped usually it's you know you simply walk in a direction that's your reward and rl is perfectly fine at this as well however this has a bit of a trick to it namely you are always in one of three | 985 | 1,010 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=985s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | situations either you have an undamaged quadruped or its kind of left leg front left leg is damaged or its front right leg is damaged okay and you don't tell the you simply sample these situations uh uniformly and you don't tell the algorithm which situation it is in now if you look at if you compare two methods one where you directly learn the weights you learn a fixed | 1,010 | 1,040 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1010s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | policy to solve you know this is one task right this is one task and all of these three things appear with equal probability so you have to learn one policy to make all of this work if you learn the weights directly and um you don't have a power like there's no doubt that like a powerful rl approach could deal with this task but if in this case if you just put a | 1,040 | 1,066 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1040s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | standard weight learner with this same number of the same size of policy as the hebbian one they compare to if you put a weight learner on it it will not be able to solve this task satisfactorily what it will do is it will say well i need one set of rules that make me walk as far as possible as often as possible so if you can see at the table i'm already showing you the results | 1,066 | 1,093 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1066s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | right here the table right here if you have these static weights you can see that it's performing pretty well in two out of three situations right so it what it basically does it says okay um here is what where there's damage what it does is it says i'm going to learn to walk with my left leg using my left front leg that means when i have no damage or damage to the | 1,093 | 1,122 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1093s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | right front leg i'm just fine and i'm just going to take the hit basically where i have damage to the left front leg because i'm it's just going to suck so they solved they solve this like walk more than 100 steps so that doesn't it since it can only learn a fixed policy it um basically discards the case where there's damage to the left front leg it takes that hit | 1,122 | 1,147 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1122s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | in order to be better in the other two methods you can see it's outperforming the hebbian rule in the other two methods but this shows you kind of the the difference and the power that these hebbian rules or generally this neuroplasticity might have because the hebbian one is perfectly capable of at least in part adapting to the different situations now you can see | 1,147 | 1,175 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1147s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | that is not symmetric also the hebbian rules they learn to you know there's 860 and there's 440 of a thing that should actually be symmetric we do expect a drop when there's damage but um it's not symmetric which means that also the hebbian rules they kind of randomly focus on one over the other but at least they're able in some degree to adapt to both and that's because | 1,175 | 1,204 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1175s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | it depending on the input you know it has a rule in there that basically says well if the if the back left leg and the front right leg you know if they fire together then i want to um if they if they fire together the sensors that show me that they're moving if they fire together i'm going to wire them together because that's how i walk you know front right back left and then the other way around | 1,204 | 1,230 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1204s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | and if that's not the case i'm not going to wire them together so that would be the situation where you have damage instead if they are not wired together i'm going to and can do this in the next layer of the neural network wire these other two things together you know if if the first thing is not the case i'm going to wire these other two things together to make up for that loss and | 1,230 | 1,253 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1230s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | there you can see there is kind of this logic built into the network now again i know you can do this with learning a fixed policy you can achieve the same effects the point here is just to show that um given kind of the same size networks and so on there you that there might be there might be like a qualitative difference in certain situations again by no means | 1,253 | 1,279 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1253s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | this is meant to out compete rl or anything like this okay so we'll we went there now how are these rules actually learned and there we have to again make a distinction that is completely separate from the hebbian versus non-hebbian way okay so the hebbian versus non-hebbian distinction was do we learn the weights of the policy network directly or do we learn the rules to update the | 1,279 | 1,307 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1279s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | weights now the question is whatever we learn how do we learn it and again we have to draw the distinction this time between i'm going to say classic or even though the terminology is not really correct classic rl and evolutionary methods okay so in classic rl what i would do is i would use my weights in order to obtain a reward and then i would update my weights | 1,307 | 1,337 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1307s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | so my delta w would be proportional to the gradient with respect to w of the reward okay so in the classic rl especially this is a policy gradient method right now so i use my policy my weights to get the reward and then i would calculate a gradient and you know usually the reward isn't differentiable so you have this uh reinforce trick in order to pull the reward out and you you can | 1,337 | 1,366 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1337s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | read all of this up if you look at policy gradient uh the basic policy gradient methods but this here tells me i need a gradient usually this is going to be the reward times the gradient of my fw of my input so what this means is what this means is that if my reward is high then i i just want to know what do i need to do to make more of what i just did okay and the gradient ensures that for | 1,366 | 1,405 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1366s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | every single weight in your neural network you know what to do so the gradient means that i have an exact handle on how do i need to change this weight how do i need to change that weight how do i need to change this weight in order if the reward is high and because of this multiplication here i want to make more of what i just did and the gradient tells me how | 1,405 | 1,430 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1405s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | if the reward is low on the other hand i want to make less of what i just did but also the gradient tells me how that can be achieved i simply go into the other direction than i would if the reward is high in evolutionary methods we don't have we don't do this gradient calculation okay now there can be advantages to not doing gradient calculation sometimes back propagation simply isn't | 1,430 | 1,455 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1430s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | possible even if it is possible and this is maybe the case where we are now what we need to learn in our case is these rules to update the weights and imagine you have an episode and that's kind of episode so you have step step step step and in each step these rules are applied right in each of these steps the rules are applied and at the end you get a reward so what | 1,455 | 1,480 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1455s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | you need to do is to back propagate that reward through all the steps and then through all the rules okay and that might be just computationally not feasible or the rules the rules right here are pretty um pretty easy but the rules might not be differentiable you actually have the same problem in general in classic rl as well but you know you can cut off time steps and so on there are various | 1,480 | 1,507 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1480s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | hacks in any case there can be advantages to not having that gradient and evolutionary methods are a way to do that in evolutionary methods usually you don't train one agent you train a population of agents so you have a bunch of these uh neural network agents in here and the way you update the neural network agent is you simply let them run you know you let them run | 1,507 | 1,534 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1507s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | the episode so this is your w one of them you let them run the episode they get a reward and then you can do multiple things so this depends on the evolutionary method so you can either pick out the best performing agent or you can update each agent according to some rule the goal here is simply to basically you always want to take your weights you want to add some noise to them | 1,534 | 1,564 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1534s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | and you want to see does it get better or worse if it gets better good if it gets worse not good okay the difference is without the gradient you don't have a handle on how do you need to change each individual weight all you can do is basically random walk and observe what happens and if the random walk is you know turns out to be good you go more into that direction of that random | 1,564 | 1,587 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1564s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | walk so it's sort of a sort of a poor poor man's gradient method in these evolutionary methods again completely independent of what we learn you can use the evolution evolutionary method to learn the fixed weights and that's what actually what happens in the table i've shown you uh below or you can use the other evolutionary method to learn the hebbian update rules | 1,587 | 1,611 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1587s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | as well you can use rl to learn the fixed weight or the update rules in this paper they use evolutionary methods to learn the hebbian update rules and they compare mostly with using evolutionary methods to learn the fixed weights okay the exact evolutionary step they use right here is the following so ht here is going to be the thing that you learn you know as compared to w being the | 1,611 | 1,639 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1611s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | network weights h is going to be the hebbian weights since we learn the hebbian weights so how they'll update um each agent is going to be they'll take the hebbian weights and this this here is how you update right this is your delta h how do you update the hebbian weights well what you do is you you perform n random perturbations so i take my weights and i add noise i just add noise okay so i i'm | 1,639 | 1,672 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1639s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | here and i just make a bunch of versions of it and then i observe how well are these versions doing so how well are my random perturbations doing this is going to be the fitness fi right here is going to be the fitness and then i'm just going to perform a weighted average so this is my weighted average of these new solutions okay so if this solution here did pretty well | 1,672 | 1,701 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1672s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | and this solution did pretty poorly i want to walk you know in this direction and then again i do the same thing here from here i do a bunch of perturbations and maybe this one did pretty well and this one did pretty poorly i want to walk in this direction and so on okay so that's how you you'll change the um you'll change weights or rules or whatever you want in an | 1,701 | 1,729 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1701s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | evolutionary method it's you know it's pretty easy it's easier than reinforcement learning no back prop no nothing basically black box optimizer there are more complicated evolutionary methods but no we don't go into those here right now okay so again i've already shown you these results now i said these static weights are also with evolutionary method they also report what you would get with | 1,729 | 1,760 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1729s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | like an rl approach like ppo you would get kind of the same thing as they get um as they get here so oh sorry this is not the same as the table yeah i was confused for for a second this here is for the car environment okay this is this vision based environment so with their method they get like an 870 rewards with the hebbian based approach with the static weight but still | 1,760 | 1,794 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1760s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | evolutionary method they get a much lower reward in fact the hebbian based approach is about the same as you get here with an rl algorithm and as we said an rl algorithm is more complicated and if you use like a if you use like a state-of-the-art rl algorithm not just ppo you get a bit of a better performance but not that much if you look at if you look at the actual | 1,794 | 1,822 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1794s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | numbers so you know pretty cool to see that again this is not outperforming anything this is simply showing that um you can do that they do a number of experiments where they go in the episode and they kind of change stuff in the episode and one cool thing here is that they go and you know this is an episode so at the episode you start with a random network each time in | 1,822 | 1,851 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1822s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | this hebbian setting and then pretty quickly the rules adapt for a high performing right so it it starts to walk it reconfigures itself and starts to walk the reward here again it doesn't have access to that but we can measure it of course and then at this step a right here they simply go to the weights and zero them out so they just delete these weights right here | 1,851 | 1,877 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1851s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | and only 10 time steps later it has reconfigured itself as you can see right here in order to walk again so 10 time steps later reconfigures itself reconfigures itself and after a short while right here it's back to its kind of original performance as you can see so that's i i'd say that's fairly um fairly impressive uh in this very short amount of time able to recover from such | 1,877 | 1,907 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1877s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | and such an intervention if you do this i mean of course if you do this to your policy network that's statically learned it's going to be garbage but i guess the fair comparison would be to delete the hebbian rules themselves and you know so it's not like it's not like this can adapt to new situations or something like this this is still learned for particular environments | 1,907 | 1,932 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1907s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | right but the the point here is that you learn the rules and this is kind of a study on neuroplasticity now my question actually would be why this diagonal pattern appears and i have not seen a like a clear explanation um especially is this anti-diagonal pattern it's not so much here in the output layer right this is the output layer there are what 21 actions or so and this | 1,932 | 1,960 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1932s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | one is this this dimension um so not that much there but there seems to be this rule and this is not the case at the beginning right you saw the beginning you saw at the beginning it was pretty random matrix so why why yeah here pretty random and then there's this diagonal pattern i don't know why if you know let me know i mean it's anti-diagonal maybe it it is | 1,960 | 1,990 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1960s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | actually diagonal and the forward the fully connected layer is just defined as something like wt times x and um but maybe this also depends on the random initialization but there is no inherent way why a particular neuron would you know care about sending information to like the same height of neuron on the other side or is there i don't know i'm so is this a property of the evolutionary | 1,990 | 2,025 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=1990s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | or of the learning rules it seems not because the learning rules don't depend on the position i'm genuinely confused about this and maybe you know maybe they've written it somewhere and i've just overlooked it though i they they do reference it they say oh there's this diagonal pattern appearing but i don't think they ever say why it is diagonal um okay i might just be i might just be | 2,025 | 2,054 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2025s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | real dumb yeah so they also you know they do some more experiments they show for example that if you just have random hebbian coefficients then your algorithm just jumps around kind of um in in weight space around the zero point however if you actually learn these hebbian coefficients as they do you have like this clear attractor here and you have these kind of | 2,054 | 2,078 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2054s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | oscillating curves uh when you know when when you do that and you can see here in the different situations where things are damaged and so on so all in all i think it's a pretty interesting study and i think this neuroplasticity is it's a different way you know it's unclear to say if it will ever deliver the the performance that rl delivers but certainly there are situations where | 2,078 | 2,105 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2078s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | such plasticity is desired and if we can also combine this with greater generalization performance then you know we have agents that can quickly kind of reconfigure and a lot of work by these this kind of open-ended learning community also plays into these roles all in all pretty pretty cool uh non-standard way of doing things last thing the broader impact statement uh every now | 2,105 | 2,133 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2105s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | and then we'll look at a broader impact statement since these are new just to get kind of an overview of what they look like so they say the ethical and societal consequences of this work are hard to predict but likely similar to other work dealing with more adaptive agents and robots in particular by giving robots the ability to still function when injured could make it easier for them | 2,133 | 2,154 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2133s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | being deployed in areas that have both a positive and negative impact on society okay well again this it's it's not really giving robots the ability to still function when they're injured i first i thought first i thought okay they train it when it's fully functioning but then they damage it during test time but as i understand it as i understand the paper they already train it with the damaged | 2,154 | 2,184 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2154s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | versions they just don't tell the algorithm in which version it is right now so um it's not the same as being able to work when injured unless you've specifically trained for it in this case again i could be wrong about this yeah in the very long term robots that can adapt could help in industrial automation or help to care for the elderly on the other hand more adaptive robots could also be more | 2,184 | 2,213 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2184s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | easily used for military applications the approach presented in this paper is far from being deployed in these areas but it is important to discuss its potential long-term consequences early on now okay so let's evaluate the broader impact statement let's well the first check to do is always to simply replace um whatever their method is with the word technology okay so let's do that | 2,213 | 2,241 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2213s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | in the very long term technology could help in industrial automation or help to care for the elderly check on the other hand technology could also be more easily used for military application check the technology is far from being deployed in these areas okay i guess some technology isn't but advanced technology yeah so again the rule for broader impact statements seem to be | 2,241 | 2,271 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2241s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | you take whatever your method is and you go up until uh you find you know you're basically at technology or something equivalent uh because no one actually i've never seen a broader impact statement that writes about the actual thing in the paper they always go up like one layer or two and then it basically regresses to technology even even though very few papers actually would be able | 2,271 | 2,300 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2271s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | to discuss their particular thing but you know um and that and then in terms of guidelines on broader impact statement this one is missing there's there's always this um the holy trifecta so the holy trifecta is you go like a you know like you're a you're a catholic uh you go with your finger to your head chest left and right and you say technology good technology bad technology biased okay | 2,300 | 2,327 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2300s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
v2GRWzIhaqQ | so if you want to write a broader impact statement go up the layers technology good bad bias and we're missing the bias here so that's you know i'm just following what these guidelines two broader impact statements are i don't make the rules i'm sorry the the hebbians make the rules apparently um i'm not having okay i've i hope you've enjoyed this paper and this video | 2,327 | 2,353 | https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=2327s | Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained) | |
hg2Q_O5b9w4 | hi there today we're going to look at curl contrastive unsupervised representations for reinforcement learning by Aravind Srinivas Michael Laskin and Pieter Abbeel so this is a general framework for unsupervised representation learning for RL so let's untangle the title a little bit it is for reinforcement learning which it if you don't know what reinforcement | 0 | 28 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=0s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | learning is I've done a bunch of videos on RL frameworks so it's for general reinforcement learning that means it can be paired with almost any RL algorithm out there so we're not going to you know dive into specific RL algorithms today it is unsupervised which means it doesn't need any sort of labels and it also doesn't need a reward signal for RL which is pretty cool | 28 | 58 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=28s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | because usually the entire RL pipelines rely on some sort of a reward or auxiliary reward signal now there is a training objective here but it doesn't have to do with the RL reward and then in the it is learning representations which means it learns it learns intermediate representations of the input data that is useful and in the end it is contrastive and that is the the | 58 | 86 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=58s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | kind of secret sauce in here the training objective it's what's called contrastive learning and that's what we're going to spend most of our time on today exploring what that means alright so here's the general framework you can see it down here sorry about that so you can see that reinforcement learning is just a box which is we don't care about the RL algorithm you use that's just you | 86 | 116 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=86s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | know what what comes at the end what comes at the beginning oh here is the observation so the observation in an RL algorithm is kind of fundamental now if someone explains RL to you or reinforcement learning usually what they'll say is there is some kind of actor and there is some kind of environment right and the environment will give you an observation right | 116 | 143 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=116s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | observation Oh which is some sort of let's say here is an image right so in this in this RL framework specifically the examples they give are of image based reinforcement learning so let's say the Atari game where you have this little spaceship here and there are meteorites up here and you need to shoot them so there is a little shot here right you need to shoot those meteorites | 143 | 173 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=143s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | right so this is the observation O and then as an agent as an actor you have to come up with some sort of action and the actions here can be something like move to the left move to the right press the button that you know does the shooting so you have to come up with an action somehow given this observation and then the environment will give you back a reward along with the next observation | 173 | 199 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=173s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | like the next frame of the game and you're gonna have to come up with another action in response to that and the environments going to give you back another reward and the next observation and so on so what you want to do is you want to find a mapping from observation to action such that your reward is going to be as high as possible right this is the fundamental problem of RL and | 199 | 225 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=199s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | usually what people do is they take this act this mapping here from observation to action to be some sort of function some sort of function that is parameterised maybe and nowadays of course it's often a neural network but you're trying to learn given the input observation what output action you need to do and you can think of the same here so you have this input observation up | 225 | 252 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=225s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | here and down here after the reinforcement learning the output is going to be an action right and so this this function we talked about up here is usually implemented sorry is usually implemented as you put the observation into the RL framework and then the RL framework learns this f of theta function to give you an action now here you can see the pipeline is a bit different we don't | 252 | 279 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=252s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | want to shove the observation in directly right we don't want the observation directly but what we put into the RL framework is this Q thing now the Q is supposed to be a representation of the observation and a useful representation so if we think of this of this game here of this Atari game up here what could be the what could be a useful representation if if I | 279 | 308 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=279s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | had to craft one by hand how would I construct a useful representation keep in mind the representation the goal is to have a representation of the observation that is more useful to the RL algorithm than just the pure pixels of the image right so if I have to craft a representation let's say it's a vector right let's say our our our representations need to be vectors what | 308 | 336 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=308s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | I would do is I would probably take the x and y coordinates of the little spaceship right x and y and put it in the vector that's pretty useful and then I would probably take the x and y coordinates of the meteorites that are around right let's say there are maximum two XY XY here I would probably take the angle right the angle where my spaceship is pointing to that should be pretty | 336 | 371 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=336s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | useful because if I shoot I want to know where I shoot right so theta here and then probably maybe the X and y coordinate of the of the shot here of the red shot that I fired if there is one right also going to put that into my representation so x and y and maybe Delta X Delta Y something like this right so you can see if I had to handcraft something if I I can pretty | 371 | 401 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=371s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | much guarantee that if I put in this representation right here into the RL algorithm but put this in here it would turn out guaranteed it would turn out to be a better RL agent that learns faster than if I put in the original observation which is the the pixel image of the game right because of course in order to play the game correctly in order to play the game to win you need | 401 | 434 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=401s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | to extract this information right you need to get our there's something like a spaceship there's something like meteorites these are all things that the RL agent doesn't know per se and would have to learn from the pixels right but if I already give it the information that is useful it can learn much faster all right so you can see if I handcraft a good representation it's pretty easy | 434 | 458 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=434s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | for the RL algorithm to improve now we want to come up with a framework that automatically comes up with a good representation right so it alleviates the RL algorithm here that reinforcement it alleviates that from learn from having to learn a good representation right it already is burdened with learning the what a good action is in any given situation right we want to | 458 | 487 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=458s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | alleviate it of the burden to also extract useful information from the from the observation space right so how do we do this this is Q here is supposed to be exactly that it's supposed to be a good representation but not one that we handcrafted but one obtained with a technique that can be employed pretty much everywhere and the goal sorry that the secret sauce here is this contrastive | 487 | 520 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=487s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | loss thing okay this contrastive learning is this this kind of magic thing that will make us good representations so what is contrastive learning in this case I'm going to explain in this case for this kind of image based for image based reinforcement learning but just for image based neural networks how can we come up with a contrastive loss so you see there's kind | 520 | 554 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=520s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | of a two pipeline thing going on here there is like this and this and then one of them is going to be the good encoding all right so let's check it out let's say we have this image that we had before right draw it again this little spaceship this and this and so right and we want to we want to do this what we need to do is we need to produce three different things from it we need to | 554 | 595 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=554s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | produce an anchor what's called an anchor so we need to produce a positive sample positive sample and we need to produce negative samples let's just go with one negative sample for now right so the goal is to come up with a task that where we produce our own labels right so we want since we're training a encoder and the encoder is a neural network that's parametrized we need some | 595 | 627 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=595s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | sort of loss function so the goal is to come up with a method where we can create our own labels to a task but that we construct the task in a way such that the neural network has no choice but learn something meaningful even though we made the task up ourselves all right I hope this was kind of clear so how are we gonna do this our method of choice here is going to be random cropping now | 627 | 655 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=627s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | random cropping means that I just I take an image right and I crop a a piece from it so a smaller piece from the image I just take a view inside the image so in case of the anchor right I'm gonna draw the same picture here bear with me I'm gonna draw the same picture here a couple of times this is all supposed to be the same picture and with the negative sample I'm just gonna leave it | 655 | 685 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=655s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | empty for now there are two meteorites two meteorites shot shot right so for the anchor we're going to actually not random crop but center crop right so we're going to take here the center image right so the assumption is kind of that if I Center if I Center crop I won't lose you know too much of the image I can actually make the crop bigger such that almost everything of | 685 | 719 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=685s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | the image is somewhat contained in this and that yeah all right so this is going to be my anchor and then the positive sample is going to be a random crop of the same image so I'm just randomly going to select a same size same size section from that image let's say this is up right here all right and the negative sample is going to be a random crop from a different image right so | 719 | 753 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=719s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | a different image might be from the same game right but might be there is a meteorite here right and there is no shot I don't I don't shoot and I'm going to take a random crop from this let's say I'm going to take a random crop here let's put a meteorite here as well just for fun all right so these are going to be our three samples and now the question is going to be if I give the | 753 | 789 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=753s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | anchor to the neural network I'm going to say I give you the anchor right but I'm also going to give you this and this thing and I'm not going to give any of this I'm just going to give whatever I cropped right so just just these things so I asked the neural network neural network I give you the anchor now which one of these two which one of these two crops comes from the | 789 | 826 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=789s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | same image right so as human you look at this and if you just see the center crop you see oh okay down here there's this this tip of this thing and then there's the shot right and in relation to the shot there is a meteor here right and then you look at the second one and you say okay I don't see the spaceship but there's the same relation here from the shot to the meteor and I can kind of see | 826 | 852 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=826s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | the meteor up here and this also fits with that right and the the spaceship must be you know down here somewhere and then I go over here and I try to do the same thing is okay here's the meteor and you know it it might be it might be in the original image it might be over here somewhere so that's possible I don't see it right that's possible but then there should be there should be a shot right | 852 | 883 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=852s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | somewhere here or sorry further up oops T there should be a shot somewhere here right I'm pretty sure because there's there's one over here and I don't see it right so I am fairly sure mister task asker that this image here is the positive sample while this image here is the negative sample right so this is the task that you ask of the neural network give it the anchor and | 883 | 913 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=883s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | you ask which one of the of these two comes from the same image right this is called contrastive learning now is a bit more complicated in that of course what you do is you encode these things using neural networks and then so each of the things you encode so the anchor you're going to encode all of these things using a neural network right and then this is what's going to | 913 | 950 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=913s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | become the query and these are becoming the keys so key one or key two and then you're going to feed it always two of them into a bilinear product right the bilinear product is simply you can think of it as an inner product in a perturbed space that you can learn so you're going to have this you have these two here these go into q WK one and then these two here sorry this and this go into q w | 950 | 985 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=950s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | k 2 now W here is a learnable parameter right so you have some freedom and then you basically take whichever one of those two is highest right so this might be this high and this might only be this high and then you say aha cool this one's higher so this one must be the positive right and you train the W specifically to make this higher to make the positive ones higher and the | 985 | 1,016 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=985s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | negative ones lower so this is a supervised learning task right where these things here are going to be the logits or or the so their inner product but you basically then pick the one that is highest as a in a soft max way and they put this in the paper so if we go down here the objective that they use to do the contrastive learning is this one so as you can see it's a soft max like | 1,016 | 1,051 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1016s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | in multi-class classification of the inner product the bilinear product with the positive samples over the bilinear product with the positive samples plus the bilinear product with all of the negative samples so you're going to come up with more than one negative sample all right now the only thing left that we don't have here is that the encoding how you're going to come from the image | 1,051 | 1,082 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1051s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | space to this space here is going to be slightly different and depending on whether you're talking on the anchor or on the what what are called the keys the things you compare to and this is out of a kind of a stability criterion you already maybe you don't you know like something like double q-learning or things like this it sometimes when you train with your own thing so in | 1,082 | 1,112 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1082s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | q-learning you're kind of trying to to come up with an actor and a critic or it's not the same thing but you're kind of using the same neural network twice in in your in your setup and then you compare the outputs to each other which isn't you know it leads to instability so in our case we took it three times here or multiple times especially for the same objective here | 1,112 | 1,147 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1112s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | we have twice something that was encoded by the same neural network in the two sides of this bilinear product so if we were to use the same neural network that tends to be somewhat unstable so we have different neural networks one that will encode the query which is this F Q and one which will encode the keys sorry F K now we don't want to learn two neural networks and that's why | 1,147 | 1,176 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1147s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | |
hg2Q_O5b9w4 | there's a bit of a compromise where we say it is the same neural network but but basically this one is the one we learn and then we always every now and then we transfer over the parameters to that one and in fact each step we transfer over the parameters and do an exponential moving average with the parameters of this momentum encoder from the step before so the | 1,176 | 1,209 | https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1176s | CURL: Contrastive Unsupervised Representations for Reinforcement Learning |
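The transcript rows above come from two paper-explainer videos; the sketches below illustrate the main update rules they describe. The v2GRWzIhaqQ segments around t=726s–985s describe a per-weight Hebbian update of the form delta w_ij = eta_ij * (A_ij * o_i * o_j + B_ij * o_i + C_ij * o_j + D_ij), applied at every time step while the weights themselves are never trained directly. A minimal NumPy sketch, assuming a weight matrix whose rows index postsynaptic neurons (the layout is an illustrative choice, not taken from the paper):

```python
import numpy as np

def hebbian_step(W, o_pre, o_post, A, B, C, D, eta):
    """One per-time-step weight update with the ABCD Hebbian rule described in the
    transcript: dW = eta * (A * o_i * o_j + B * o_i + C * o_j + D).
    A, B, C, D and eta are learned per weight, so they share W's shape (n_post, n_pre)."""
    corr = np.outer(o_post, o_pre)   # o_i * o_j: the "fire together, wire together" term
    pre = o_pre[None, :]             # presynaptic output, broadcast over postsynaptic rows
    post = o_post[:, None]           # postsynaptic output, broadcast over presynaptic columns
    dW = eta * (A * corr + B * pre + C * post + D)
    return W + dW                    # applied at every step; only A, B, C, D, eta are evolved
```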
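The segments around t=1337s–1430s contrast this with classic policy-gradient RL, where the weight change is proportional to the reward times the gradient of the log-probability of the actions taken. A hedged REINFORCE-style sketch; the reward-to-go computation is an assumed design choice, not a detail from the video:

```python
import torch

def reinforce_loss(log_probs, rewards):
    """Classic policy-gradient surrogate mentioned in the transcript: weight the
    log-probability of each taken action by the return that followed it, so that
    gradient descent makes 'more of what I just did' when the reward was high."""
    returns = torch.as_tensor(rewards, dtype=torch.float32).flip(0).cumsum(0).flip(0)
    return -(torch.stack(log_probs) * returns).sum()  # minimizing this ascends the reward
```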
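The segments around t=1507s–1729s describe the evolutionary update actually used for the Hebbian coefficients: perturb them with noise, run one episode per perturbation, and move along the fitness-weighted average of the perturbations. A small sketch under assumed hyperparameters (population size, noise scale, learning rate and the fitness normalization are illustrative, not the paper's values):

```python
import numpy as np

def es_step(h, episode_fitness, pop_size=200, sigma=0.1, lr=0.2, rng=None):
    """One evolution-strategies step over the Hebbian coefficients h (a flat vector):
    sample n random perturbations, evaluate each one's episode fitness, and take the
    fitness-weighted average of the perturbations as the update direction."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((pop_size, h.size))                  # n random perturbations
    fit = np.array([episode_fitness(h + sigma * e) for e in eps])  # run one episode each
    fit = (fit - fit.mean()) / (fit.std() + 1e-8)                  # normalize fitness scores
    direction = eps.T @ fit / (pop_size * sigma)                   # weighted average of the noise
    return h + lr * direction                                      # no gradient, no backprop
```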
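For the CURL transcript (hg2Q_O5b9w4, roughly t=913s–1082s), the objective is a softmax over bilinear similarities q^T W k: the anchor's query embedding should score higher with the key of its own image (the positive crop) than with keys from other images. A PyTorch sketch; the batch layout and the use of cross_entropy to implement the softmax objective are assumptions about one reasonable implementation, not the authors' code:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q, k_pos, k_neg, W):
    """InfoNCE-style objective described in the transcript.
    q:     (B, D) query embeddings of the anchor crops
    k_pos: (B, D) key embeddings of the positive crops (same images)
    k_neg: (N, D) key embeddings of negative crops (other images)
    W:     (D, D) learnable matrix of the bilinear product q^T W k"""
    keys = torch.cat(
        [k_pos.unsqueeze(1), k_neg.unsqueeze(0).expand(q.size(0), -1, -1)], dim=1
    )                                                   # (B, 1 + N, D), positive key first
    logits = torch.einsum("bd,de,bke->bk", q, W, keys)  # bilinear similarities q^T W k
    labels = torch.zeros(q.size(0), dtype=torch.long)   # index 0 is the positive key
    return F.cross_entropy(logits, labels)              # softmax of positive over all keys
```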
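Finally, the segments around t=1082s–1209s explain why the keys are encoded by a separate momentum encoder whose parameters are an exponential moving average of the query encoder's. A minimal sketch; the momentum value is an assumed placeholder:

```python
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.95):
    """After each training step, move the key encoder's parameters toward the query
    encoder's instead of training the key encoder by backpropagation, as described
    in the transcript (exponential moving average / momentum encoder)."""
    for k, q in zip(key_encoder.parameters(), query_encoder.parameters()):
        k.mul_(m).add_(q, alpha=1.0 - m)
```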