L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3383s (transcript, 3383s-5900s)
...an example of state representation learning in action. Going relatively quickly here — the goal is just to get a lot of different ideas across — we've covered the beta-VAE in one of the early lectures. The beta-VAE is a variational auto-encoder where we put a coefficient beta in front of the KL loss on the prior, and by making that coefficient beta bigger than one, effectively what we're doing is trying to make the latent variables z maximally independent. So we're trying to find a disentangled representation of the scene. The thinking here is that if we want to find something we can think of as state from raw pixel values, we probably need to find something that's really strongly disentangled, so this builds that prior in. And they show that by having this beta-VAE you actually get much better transfer.
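As a rough sketch of the objective being described — a standard VAE reconstruction term plus a beta-weighted KL term to the prior — the following is illustrative only (the Gaussian-posterior parameterization and shapes are the usual assumptions, not taken from the slides):

    import torch
    import torch.nn.functional as F

    def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
        # Reconstruction term: how well the decoder reproduces the input.
        recon = F.mse_loss(x_recon, x, reduction="sum")
        # KL divergence between q(z|x) = N(mu, diag(exp(log_var))) and the prior N(0, I).
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        # beta > 1 puts extra weight on the KL term, pushing toward
        # (approximately) independent, disentangled latent dimensions.
        return recon + beta * kl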
They train a beta-VAE and then do Q-learning with a network that takes the embeddings from the beta-VAE, and compare with regular Q-learning. On the left we see what happens in the training environments: there, regular Q-learning and DARLA — which is Q-learning on top of the beta-VAE representation — do about equally well. But when we look at a new, related task that looks very different, doing the representation learning, shown at the bottom right, gets much better performance. The top left is not actually getting the job done — it's not collecting these yellow targets — whereas the bottom variant does collect the yellow targets. And what has changed? The walls in the background have changed to pink rather than green, and the ground has changed to blue rather than yellow — a relatively small change. The original DQN, which doesn't do representation learning per se, hasn't learned those notions, whereas DARLA has somehow learned a representation that allows it to transfer zero-shot to this new environment much better.
The next idea is representing state and dynamics. We looked at GANs in, I think, lecture four of this class. Now, if you just train a GAN, you just generate each frame independently; what we want is to learn transitions that are consistent over time. So what we're going to do is have a discriminator that looks at two consecutive observations and decides whether those two are consecutive observations from the real world or consecutive observations produced by a generator. The generator is trying to generate fake sequences of observations that fool the discriminator, and at convergence that means the generator generates observation sequences that are indistinguishable from real-world observation sequences. Once you have that, you can use the generator as a simulator and learn against it or plan against it — in this case we did planning to try to achieve goals.

We actually did this for rope manipulation. On the left is the initial configuration of the rope, on the right the desired end state of the rope, and with Causal InfoGAN these are what it thinks the interpolated states are — the sequence of states you have to go through to get from the initial state to the end state; same for the next row, and the next, and the next. Compare that with a regular GAN baseline, which doesn't look at transitions, just at individual frames: the interpolation there doesn't necessarily lead to intermediate states that are meaningful for a robot to follow as a sequence of rope configurations from start to goal. So by training Causal InfoGAN, which looks at realism of transitions rather than just realism of individual frames, we're able to learn a dynamics model in a latent space that a robot can use to make plans.
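A minimal sketch of the kind of discriminator just described — one that scores pairs of consecutive observations rather than single frames (a simple MLP stand-in; the architecture and sizes are placeholders, not the ones from the paper):

    import torch
    import torch.nn as nn

    class PairDiscriminator(nn.Module):
        # Scores whether (o_t, o_{t+1}) is a real consecutive pair or a generated one.
        def __init__(self, obs_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * obs_dim, hidden), nn.LeakyReLU(0.2),
                nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
                nn.Linear(hidden, 1),  # logit: real vs. fake transition
            )

        def forward(self, o_t, o_next):
            return self.net(torch.cat([o_t, o_next], dim=-1))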
Now, one of the first things we covered was World Models, which showed that you can learn a latent space, then learn an RNN on top of the latent space for the dynamics, and then learn a linear controller on top of that. Of course that's a very simple setup — you might think it's almost surprising that it works — and what's interesting is that it actually does work in a range of environments, but it's not likely to be the final answer to keep it that simple. So here's a paper called PlaNet, which learns a latent dynamics model from pixels and then plans in it. What's new here is that after learning the latent-space dynamics model, it's not deploying a learned policy — it's using a planner with look-ahead: for which sequence of actions do I get the most reward; take the first action of that sequence; repeat. And it learns the latent-space encoding together with the dynamics, so it's joint learning of encoding and dynamics.

More recently there has been an improvement called Dreamer, from roughly the same authors. What they show is that instead of running online planning in latent space, you can train an actor-critic in the latent-space simulator, and that will actually do better than PlaNet. They also showed that it's better in these environments to learn a stochastic dynamics model rather than a deterministic one. So those are the two big differences between PlaNet and Dreamer: going from planning to learning an actor-critic agent, and using a stochastic model.
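As a sketch of what "planning with look-ahead in a learned latent space" can look like, here is a random-shooting model-predictive-control loop (PlaNet itself uses the cross-entropy method and a recurrent state-space model; encode, dynamics and reward below are assumed, already-trained modules):

    import torch

    def plan_action(obs, encode, dynamics, reward, action_dim,
                    horizon=12, n_candidates=1000):
        z0 = encode(obs)                                    # current latent state
        # Sample candidate action sequences and roll them out in latent space.
        actions = torch.randn(n_candidates, horizon, action_dim)
        z = z0.expand(n_candidates, -1)
        total_reward = torch.zeros(n_candidates)
        for t in range(horizon):
            z = dynamics(z, actions[:, t])                  # predicted next latent
            total_reward += reward(z).reshape(-1)           # predicted reward per candidate
        # Model-predictive control: execute only the first action of the best
        # sequence, then replan at the next step.
        best = total_reward.argmax()
        return actions[best, 0]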
Now, so far we've talked about latent-space models and directly learning to control in the latent space. There is also work that goes back to image space. Here are some example executions of a robot moving objects to target locations. This is done by a system that learned a video prediction model: as a function of the action the robot takes, what will the next frame be; given the next action, what will the frame after that be; and so forth. Once you have an action-conditional video prediction model, and you have a target frame or target property that you want to achieve, you can use this action-conditional video prediction model as your simulator. This can give really good results — some examples are shown on the slide. The downside is that planning often takes a long time, because generating an action-conditional video prediction can be fairly expensive, and you need to generate many of them because you're trying different sequences of actions to see which one might work best. Then, after you find the one that seems best — it might be a sequence of ten actions — you take the first of those ten actions and repeat the whole process. So these methods tend not to be as real-time as some of the other things we looked at, but it's quite surprising that this works at all: you can do full action-conditional video prediction and manipulate objects that way.
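A sketch of that planning loop at the pixel level: sample candidate action sequences, run them through the action-conditional video prediction model, score each by how close the final predicted frame is to a goal image, and execute the first action of the best sequence (predict_frames and the simple goal-image cost are assumptions for illustration; the actual systems use richer costs, such as the predicted motion of user-designated pixels):

    import torch

    def visual_mpc_step(frame, goal_image, predict_frames, action_dim,
                        horizon=10, n_candidates=200):
        # Candidate action sequences to evaluate.
        actions = torch.randn(n_candidates, horizon, action_dim)
        # Action-conditional video prediction:
        # predict_frames(frame, actions) -> (n_candidates, horizon, C, H, W)
        futures = predict_frames(frame, actions)
        # Cost: pixel distance between the last predicted frame and the goal image.
        cost = ((futures[:, -1] - goal_image) ** 2).flatten(1).mean(dim=1)
        best = cost.argmin()
        return actions[best, 0]   # execute the first action, then replan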
Now, one thing you might wonder: it's all good and well to do full, detailed video prediction, but is it always meaningful? Imagine you drop a glass bottle of water and it falls on the floor. How are you going to do video prediction for what happens there? It's very, very hard. You're never going to have access to all the details of the state of the water in the bottle, all the little defects that might be in the bottle's material, and so forth, which determine how exactly the thing fractures. The best you'll be able to do is probably to predict that it's going to break into a lot of pieces of different sizes, and maybe the neck stays together because it doesn't hit the ground — it's the bottom that hits the ground — and so forth. And you also don't need those details to make decisions; you just need to know it's going to break.

So instead of learning a full forward dynamics model — saying I need to be able to predict exactly what the future will look like — you could say: what if I can predict what action I took? For example, seeing this shattered bottle and saying, well, the action taken was dropping the bottle. If I can make that prediction, then I can also work out, when I want to achieve a certain goal, which action might get me there and which might not. This is called inverse dynamics, and it's at the core of many of the dynamics models being learned today: rather than only a forward dynamics model, learn an inverse dynamics model — effectively a goal-conditioned action strategy.

So the paper here says the following: we want to learn a forward model in latent space, and we want the latent space to represent the things that matter. But if all we care about is latent-space predictions, the problem is that we might make our latent space always zero — if we predict all zeros, we're always right, but we don't have anything interesting. So they say: we want to learn a latent space in which we can predict the next latent state, but to avoid it being all zeros, or degenerate in any other way, we're going to require that from the latent state at the next time step t+1 and the latent state at the current time step t, we can predict the action that was taken at time t. So we learn two dynamics models at the same time: an inverse dynamics model and a forward dynamics model, both in this latent space.
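A sketch of that joint objective — a forward model that predicts the next latent and an inverse model that predicts the taken action from consecutive latents, which is what keeps the encoder from collapsing to a constant (module interfaces are placeholders; the continuous-action MSE is an assumption, a discrete-action version would use cross-entropy):

    import torch
    import torch.nn.functional as F

    def forward_inverse_loss(encoder, forward_model, inverse_model, o_t, a_t, o_next):
        z_t, z_next = encoder(o_t), encoder(o_next)
        # Forward model: predict the next latent from current latent and action.
        z_pred = forward_model(z_t, a_t)
        fwd_loss = F.mse_loss(z_pred, z_next.detach())
        # Inverse model: predict the action from the two consecutive latents.
        a_pred = inverse_model(z_t, z_next)
        inv_loss = F.mse_loss(a_pred, a_t)
        return fwd_loss + inv_loss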
This was applied to learning to poke objects. What you see on the left is data collection — you can set this up for autonomous data collection. On the right is the learned controller: it has learned the dynamics model, and now it can look at the current state and the goal state, predict which action is going to help the most to get close to that goal state, and repeatedly do that until it finally reaches something very close to the goal state.

Okay. Now, reinforcement learning is about reward, and so far we've mostly ignored the reward when learning representations. So let's switch that up: let's not just learn to predict the next state, but also learn to predict future reward. The first recent paper that looked at this in the deep reinforcement learning context is the Predictron paper, on end-to-end learning and planning. What they said is: it's difficult to know what needs to go into the latent state, and we don't necessarily want to reconstruct the full observation — that's just so many more things to reconstruct than the essence we really want to focus on. Well, if what we care about is getting high reward, shouldn't we just focus on predicting future rewards? If for every sequence of actions we can predict the future reward, we should be good to go: we can just pick the sequence of actions that leads to the highest future reward. The Predictron did this for some relatively simple environments — shown here is billiards, predicting where the billiard balls end up as a function of the action you take — and it did pretty well on that task; they also looked at maze navigation.

Now, the most recent famous algorithm that builds on top of this very directly, and that you might have heard of, is MuZero. MuZero is also learning a latent dynamics model that predicts rewards and doesn't worry about reconstruction: what's the sequence of latent states that allows me to predict reward in the future — the same idea, but now action-conditional — and it was able to solve a very wide range of games.
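A sketch of the kind of training signal being described: unroll a latent model under the actions that were actually taken and train it only to predict the observed rewards, with no reconstruction term (this is a simplification; Predictron/MuZero also predict values, and MuZero additionally predicts policies and plans with tree search):

    import torch
    import torch.nn.functional as F

    def reward_prediction_loss(encode, dynamics, reward_head, obs0, actions, rewards):
        # obs0: first observation; actions, rewards: observed sequences of length K.
        z = encode(obs0)
        loss = 0.0
        for k in range(actions.shape[0]):
            z = dynamics(z, actions[k])        # action-conditional latent step
            r_pred = reward_head(z)            # predicted reward at step k
            loss = loss + F.mse_loss(r_pred, rewards[k])
        return loss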
A related variation is successor features. You might say: is it enough to predict reward, which is just one number? What if reward consists of many components — maybe I care about the location of the robot, maybe I care about energy expended, maybe I care about other things. These are all features, and the idea is: if I have a set of features that relate to the reward, why not learn a latent-space model that allows me to predict the future sequence of features encountered.

We looked at this ourselves in the context of navigation. When you have a robot navigating a world, it does some convolutional processing of its observations; then there's an LSTM, because when you're navigating you currently see something but you also want to remember things you've seen in the past — that's the memory here; and then something that tries to predict features of future observations, for example whether there will be a collision, and so on. Here's this system in action — let me fast-forward a little bit to the experimental setup. What we see here is inside a simulator for now, with real-world experiments coming later. You see the kind of visual input it's processing, and it's trying to predict things about speed, heading, and collision; those are the features it's trying to predict: this many steps in the future, what will my heading be, what will my speed be, will I have a collision — based on what I see right now and on the actions I will take in the intervening time. Through that it's able to learn an internal representation of how the world works, but most importantly of how the world works as it relates to the features that matter for navigation, versus trying to learn everything about the world, which might be a lot to learn relative to what you actually need to be successful at your task. Based on this, it's able to learn to navigate these environments pretty well.

Then the real robot: here we have the actual robot that's going to learn to navigate the hallways of Cory Hall, the electrical engineering building at Berkeley. While it's still learning it has a lot of collisions, but it learns to predict them — it learns something like: if I see this and take that sequence of actions, I will have a collision in five time steps, or my heading will change in that way, and so forth. After training it has internalized a lot of how the world works, and it can plan against that when it needs to act. At test time we can see that it has learned to avoid collisions: it predicts, as a function of the actions taken, whether a collision is likely and which heading it might end up with, and then takes actions accordingly.
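A sketch of the kind of model being described — a convnet plus an LSTM summarize what the robot has seen, and a head predicts task-relevant features (e.g. collision, speed, heading) several steps ahead as a function of the candidate future actions (all layer sizes and the choice of features below are illustrative, not the ones from the paper):

    import torch
    import torch.nn as nn

    class FutureFeaturePredictor(nn.Module):
        def __init__(self, feat_dim=256, action_dim=2, horizon=8, n_features=3):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
            self.memory = nn.LSTM(feat_dim, 256, batch_first=True)
            self.head = nn.Linear(256 + horizon * action_dim, horizon * n_features)
            self.horizon, self.n_features = horizon, n_features

        def forward(self, frames, future_actions):
            # frames: (B, T, 3, H, W); future_actions: (B, horizon, action_dim)
            B, T = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).view(B, T, -1)
            _, (h, _) = self.memory(feats)        # summary of what has been seen
            x = torch.cat([h[-1], future_actions.flatten(1)], dim=-1)
            # Predicted collision / speed / heading for each of the next steps.
            return self.head(x).view(B, self.horizon, self.n_features)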
Again, the reason I'm showing all these videos is that, as you see, different approaches are tested in very different environments; this is by no means a converged research field, and there's a lot of variation in how things get tested. Looking at how something is tested gives you a sense of how complex an environment a given approach might be able to handle.

Now, a natural question you might have is: this is all great, there are all these different ways of learning representations, but could we come up with a way of optimally representing the world? What would that even mean — what does it mean to have an optimal representation of the world? There is some work, fairly theoretical to be fair, trying to get at this; here are some references on trying to understand what it means to have a good representation of the world. One thing you'll often see come back is the word homomorphism. What it refers to is that essentially you have the real world and you have a simulator, and you want it to be the case that if you go from the real world to some latent-space simulator — so there is a one-to-one match as you go from the real world to this latent-space representation — then you can simulate forward in both worlds, and when you map back after a while it still corresponds. A homomorphism means you keep that correspondence any number of steps into the future. That is the bisimulation / homomorphism type of approach, and the question then becomes: what's the minimal latent space that lets you do that? The more minimal that latent space, the fewer variables you have to deal with as a reinforcement learner or planner trying to achieve good reward in the environment.
Now, one thing that's very well known in traditional control is the separation principle. The separation principle in traditional control says the following — and it's a very specific scenario: if I have a linear dynamical system, and I have noisy observations of the state — so I don't have access to the state, only noisy observations, and these noisy observations are linear functions of the state; so linear dynamics, observations a linear function of the state — then to do optimal control in this environment where I don't have full access to the state, all I need to do is find the optimal estimator of the state, which will be a Kalman filter, and feed its output — my best estimate of the state at every time — into the optimal controller designed as if I had full access to the state. So the separation principle says I can design an estimator and a controller separately and then combine them, and that's actually optimal.
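In symbols, this is the standard linear-quadratic-Gaussian setting (a textbook statement, included here only as a reminder):

    x_{t+1} = A x_t + B u_t + w_t,        y_t = C x_t + v_t,

with w_t, v_t zero-mean Gaussian noise. For a quadratic cost J = E[ \sum_t ( x_t^T Q x_t + u_t^T R u_t ) ], the optimal output-feedback controller is

    u_t = -K \hat{x}_{t|t},

where \hat{x}_{t|t} is the Kalman-filter estimate of the state and K is the LQR gain computed as if the state were fully observed; estimator and controller can be designed separately and then combined.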
That's very related to what we've been talking about: we learn a representation and then we want to control, and we'd like it to be the case that if we do the right kind of representation learning, the representation that comes out can simply be fed into optimal control and we get the optimal result. So there is some work now looking at what happens when you have a nonlinear system and you use deep neural networks: what does it mean to have optimal estimation of state from your observations, when is that compatible with your controller, and so forth — a very interesting theoretical direction if you're more theory-inclined.

Another way to think about it is to say: shouldn't I just treat this end-to-end? Often in deep learning you have two paths: one is that
you try to design everything by hand, module by module; in the other you say: let me just think about the result I want, define a loss function on that result, and then train the whole stack, instead of putting all the modules together in detail myself. In this case, what that might mean is: instead of learning a representation for a dynamics model and then bolting a planner or a reinforcement learning agent on top, why not train the dynamics model end-to-end, such that what I learn is maximally compatible with the planner I will use later. This goes back a little bit to something we covered early on, Embed to Control, where we said that if we can learn a linear dynamics model in latent space, planning becomes easy. You might instead say: what if I have a more general planner, one that might work well in a wide range of situations — can we learn a representation such that, when we combine it with that more general planner, together they function well? If so, then we've learned a good representation.
We did this in some early work, Value Iteration Networks, led by then-postdoc Aviv Tamar, now a professor at the Technion. We showed that value iteration — a very common way of doing planning for tabular Markov decision processes — can be turned into a neural network representation. We can then bolt this value iteration network onto a representation learning network and optimize them together, to turn image input into a representation on which value iteration runs. The encoding of the image input needs to be such that the value iteration process actually gives good results, and we even gave the value iteration process some flexibility to learn parts of itself. We showed that this way you can get very good performance on planning tasks.

You might say: for planning with visual inputs, shouldn't you just be able to learn a convnet that looks at the input and makes the right decision? It turns out that what we're doing here is building a very strong prior into the network by building the value iteration structure into it. That's a bit like why we use a convnet in the first place: we use a convnet to encode translation invariance, and then we can learn more efficiently than with a fully connected network. It's the same idea here: if we're learning a network that should solve a control problem that under the hood requires planning, then we should put the planning structure into the network so we can learn it all end-to-end.

One question that has often come up in this context is: should we ever do pixel-level video prediction? That's a good question — often you're just looking at noise, and what's the point in trying to predict that; what really matters is predicting the things that affect your decisions. So how do you do that more directly? Here we're going to use plannability itself as the criterion for representation learning.
Value iteration networks, as I just described — in a little more detail — work as follows. An observation goes into a module that outputs a value function, which says how good each state is, for every state you could be in, in parallel. Then an attention mechanism looks at the current observation, figures out which of all those possible states it should index into, and makes a decision about what to do in the current state. The value iteration module is essentially value iteration as you may remember it: it takes a reward and a dynamics model, and with that reward and dynamics model it does a recurrent calculation to compute the value of each state. So it's just a recurrent calculation, repeatedly applying the same operation — a recurrent network with a local, convolutional calculation, because states next to each other can be reached from each other, which is exactly what shows up in this dynamic programming computation. It turns out that a recurrent convolutional component is enough to represent the value iteration calculation. But exact value iteration only applies when we can have a tabular representation of the world, which means relatively small, discrete state spaces.
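A sketch of that recurrent convolutional module — a simplified Value Iteration Network block on a 2-D grid (channel counts, kernel size and the grid-world assumption are illustrative):

    import torch
    import torch.nn as nn

    class ValueIterationModule(nn.Module):
        # Approximate value iteration on a grid: Q = conv([R, V]); V = max_a Q.
        def __init__(self, n_actions=8, k_iterations=20):
            super().__init__()
            self.q_conv = nn.Conv2d(2, n_actions, kernel_size=3, padding=1, bias=False)
            self.k = k_iterations

        def forward(self, reward_map):
            # reward_map: (B, 1, H, W) predicted reward for each grid cell.
            v = torch.zeros_like(reward_map)
            for _ in range(self.k):
                q = self.q_conv(torch.cat([reward_map, v], dim=1))  # (B, A, H, W)
                v, _ = q.max(dim=1, keepdim=True)                   # Bellman backup
            return v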
To go beyond that, what we're looking at here is the Universal Planning Network. The Universal Planning Network says: we have an observation and we want to achieve a goal observation. We take our initial observation and encode it into a latent state; then we take an action and get a new latent state, take another action and get another latent state, and so forth. After that series of actions, we want the resulting latent state to match the latent state of the goal we want to reach. So what we can do is search over the actions that get us close: if we had already trained this latent-space dynamics model, all we would need to do is optimize this sequence of actions, and if the actions are continuous we can optimize them with backpropagation — standard gradient descent to find a sequence of actions that optimizes how close we get to the goal. So that's the planning part, assuming we have the dynamics model: we can run backpropagation to plan.

How do we get the dynamics model? Here's what we do: we try to find parameters of this dynamics model such that, if we use those parameters to run this optimization over actions, the sequence of actions we find corresponds to what was shown in a demonstration we're given. So we're given a demonstration — a sequence of actions — and we have an imitation loss that says we want to be able to imitate that sequence of actions by running this very specific process of optimizing our action sequence, with gradient descent, against a dynamics model that we're going to learn.
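A sketch of that inner-loop planner: optimize a sequence of actions by gradient descent so that rolling the latent dynamics forward lands near the goal latent (a simplified stand-in; in the real method the outer imitation loss is backpropagated through this entire inner optimization to train the encoder and dynamics):

    import torch

    def plan_by_gradient_descent(z0, z_goal, dynamics, horizon, action_dim,
                                 n_steps=40, lr=0.1):
        actions = torch.zeros(horizon, action_dim, requires_grad=True)
        opt = torch.optim.SGD([actions], lr=lr)
        for _ in range(n_steps):                 # inner-loop "planning by backprop"
            z = z0
            for t in range(horizon):
                z = dynamics(z, actions[t])      # latent rollout
            loss = ((z - z_goal) ** 2).sum()     # distance to the goal latent
            opt.zero_grad()
            loss.backward()
            opt.step()
        return actions.detach()

The outer loop (not shown) would compare the resulting action sequence to the demonstrated one and backpropagate that imitation loss into the encoder and dynamics parameters.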
Once we have learned the dynamics model this way, from then on we can use this latent-space dynamics model to find sequences of actions that get us close to some other goal in the future. The benefit is that this internalizes the planning inductive bias, rather than just learning some black-box backbone for imitation; it also learns a metric in this abstract space that turns out to be useful for reinforcement learning later.

We compare against a reactive imitation learner, which just says: I need to imitate this sequence of actions. That black-box network doesn't know that the demonstrator presumably had a goal and that you're trying to find a sequence of actions that achieves that goal — it doesn't have that inductive bias — so it doesn't do as well. Something closer to our architecture is a baseline that is also a recurrent neural network but doesn't have the internal optimization process in the inner loop to find a sequence of actions that gets close to a goal. The tasks we looked at were some maze navigation tasks, and also reaching between obstacles to a target. On the horizontal axis is the number of demonstrations, on the vertical axis the average test success rate, and Universal Planning Networks outperform the baselines I just described — building in that inductive bias helps significantly in learning to solve these problems.

Now you might ask: what did it actually learn? We said we built in an inductive bias to learn to plan in that inner loop, but did it really learn to plan? Here's an experiment: we train with 40 iterations of gradient descent to find the sequence of actions, and then at test time we vary the number of planning steps, meaning the number of gradient descent steps in the inner loop. If the network is really doing planning, the hope is that running more planning iterations keeps refining the plan and ends up with a better plan than with only 40 iterations. That's indeed what we see: as we increase the number of planning steps along the horizontal axis, the test success rate goes up — same training, just a different number of planning steps at test time. This indicates that something like planning is really happening under the hood, and if you plan longer you do better.

Another thing that happens is that you learn a representation that ties into how an agent should make decisions, and that representation can be used by a reinforcement learning agent to learn more quickly.
What typically makes reinforcement learning hard is that the reward is sparse. But if you map your world into this latent space — the space in which you run gradient descent to find good actions — well, gradient descent assumes some smoothness, so once you've learned a latent space smooth enough to optimize against, distances in that latent space are probably meaningful. If you now do reinforcement learning against distances in that latent space, you're working with a reward that is not sparse but dense, giving a local signal about whether you're improving or not. We showed in a range of environments that reinforcement learning can indeed be a lot more effective when using distance in the latent space learned by the process I just described, and then doing reinforcement learning in a new environment. For example, we did imitation in 3-link and 4-link environments, switched to a 5-link environment, and ran reinforcement learning in the 5-link environment with the latent space used for reward shaping — and it learns a lot more quickly. Same thing here, where the initial learning happened with a point mass and then we actually have to control a robot: thanks to the shaping that comes from the learned representation, where distances are meaningful, learning can be a lot more efficient.
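A sketch of that reward-shaping idea: once a latent space with meaningful distances has been learned, a sparse goal-reaching reward can be augmented with a dense negative distance to the goal in that space (encoder here is the learned, frozen representation; the weighting is arbitrary):

    import torch

    def shaped_reward(obs, goal_obs, encoder, sparse_reward, alpha=1.0):
        # Dense signal: negative distance to the goal in the learned latent space.
        with torch.no_grad():
            z, z_goal = encoder(obs), encoder(goal_obs)
        dense = -torch.norm(z - z_goal).item()
        return sparse_reward + alpha * dense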
Okay, so at this point we've covered quite a few ways of combining representation learning with reinforcement learning to make it more efficient. The general theme so far has been that the raw-pixel representation certainly has the information, but it's embedded in a very high-dimensional space — a megapixel image is a million-dimensional input — and we want a more compact representation that we can learn against more efficiently; all the approaches described, mapping observations to state, and state to the next state and so forth, try to get a handle on that problem.

Now, one thing you might observe is that what we've covered so far is fairly complex — there's a wide range of ideas at play. So the question we asked ourselves recently is: is it possible, with a relatively simple idea, to get a lot of the leverage we've seen here? Let's take a look and see how far we get with a relatively simple idea — and we'll actually see it outperform essentially all the approaches we've covered so far. That doesn't mean the ideas we've covered are unimportant or can simply be skipped; there are a lot of good ideas there that we probably want to bring into this next approach. But what I'm about to cover, CURL, really focuses on simplicity and on seeing how far you can get with something very simple.

Our starting motivation was the following learning curves: the vertical axis is reward, higher is better; the horizontal axis is the number of steps taken in the environment.
At the end here, at 1e8, a hundred million steps have been taken in the environment. We see a blue learning curve that learns very quickly and green learning curves that take a long time. What's different? Blue learns from state, green learns from pixels. Same thing here: blue learns from state very fast, green from pixels not nearly as fast. The RL algorithm in this case is D4PG, a standard RL algorithm. If you think about the essence here: reinforcement learning is about learning to achieve goals, and if the underlying space is low-dimensional — there is a low-dimensional state — we should be able to recover that low-dimensional state and then learn just as efficiently from pixels as from state.

How might we do that? Well, we've seen a lot of success in past lectures with contrastive learning for computer vision. We saw with CPC that by using unlabeled ImageNet data it was possible to consistently outperform learning with labeled data alone: with the same amount of labeled data, the curve that also uses unlabeled data consistently does better. Then, very recently, SimCLR came out, which gets performance equal to supervised learning on ImageNet using just a linear classifier on top of a self-supervised representation. That means almost all the learning happens in self-supervision, and then a little bit of learning happens at the end — to attach meaning to the labels — but it only needs a linear classifier. If that's the case, then the hope is that if we do something similar in reinforcement learning, all we need is representation learning that extracts the essence, plus a little bit of extra information — the reward — to do the rest of the learning.
So what did SimCLR do? Essentially it said: I have an image, I turn it into two versions of that same image, and when I embed them with a neural network — the same network in the left and the right path — the embeddings should be close, as measured by a cosine similarity; and for a different image that I embed, the embedding should be far away — those are the negatives in the denominator. For more details, go back to our self-supervised learning lectures from a few weeks ago. What's important here is that this is a very simple idea: turn one image into two images whose embeddings should be close; take a different image, whose embedding should be far from these. And what's surprising is that even though it's relatively simple, it enables representation learning on top of which all you need is a linear classifier to get really good ImageNet classification performance. They looked at many types of augmentations — cropping, cutout, color jitter, Sobel filtering, noise, blur, rotation — and found that crop matters the most, and color matters quite a bit too, but really cropping is the one that matters most.
So now CURL, which combines this kind of contrastive representation learning with RL. What did we do? We have our replay buffer, on which we would normally just run reinforcement learning, and we take observations from it. Since this is a dynamical system, we need to consider a sequence of frames as a single observation — otherwise we cannot observe velocity; a single frame carries no velocity information. So we have a stack of sequential frames that together form one observation. That stack of frames then undergoes data augmentation — in this case two different crops. One goes into the query encoder, the other into the key encoder; those two encoders can be the same or different, you can choose. Then you do two things with this. Along the top path it just goes into the reinforcement learning loss — whether you run D4PG, soft actor-critic, PPO, and so forth, that happens along the top path. So along the top path you run your standard RL algorithm; the only thing that's changed is that the data from the replay buffer goes through some augmentation. In the bottom path you have another augmentation of the same frames and a contrastive loss — essentially the same loss, not exactly the same in the details, but at a high level the same as we saw on the SimCLR slide.

A couple of things were important to make this work. SimCLR uses a cosine-similarity loss; what we found is that having a weighting matrix between the key and the query is actually important — the red curve shows the bilinear weighting significantly outperforming plain cosine similarity. The other thing we noticed is that using a momentum encoder in one of the paths is very important, which is something we saw in the self-supervised learning lecture with the MoCo work — MoCo also has momentum in one of the paths — and the same thing was important here; again a big difference.
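A sketch of the contrastive piece just described: two independent random crops of the same frame stack, a query encoder and a momentum key encoder, a learned bilinear similarity W, and an InfoNCE-style cross-entropy in which the other elements of the batch act as negatives (a simplification of CURL with hypothetical module names):

    import torch
    import torch.nn.functional as F

    def curl_contrastive_loss(obs_batch, augment, q_encoder, k_encoder, W):
        # Two independent random crops of the same stacked-frame observations.
        q = q_encoder(augment(obs_batch))              # queries (B, D)
        with torch.no_grad():
            k = k_encoder(augment(obs_batch))          # keys    (B, D), momentum encoder
        # Bilinear similarity logits q W k^T; the diagonal entries are the positives.
        logits = q @ W @ k.t()                         # (B, B)
        labels = torch.arange(q.shape[0], device=q.device)
        return F.cross_entropy(logits, labels)

    def momentum_update(q_encoder, k_encoder, tau=0.05):
        # Key encoder is an exponential moving average of the query encoder.
        for p_q, p_k in zip(q_encoder.parameters(), k_encoder.parameters()):
            p_k.data.mul_(1 - tau).add_(tau * p_q.data)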
Once we do that, we see that CURL outperforms both prior model-based and model-free state-of-the-art methods. What we look at here is median scores on DeepMind Control 100k and DeepMind Control 500k — that is, after one hundred thousand or five hundred thousand environment steps. This is really checking whether you can learn efficiently: it's not about where you are after a hundred million steps, it's about where you are after 100k or 500k steps. We see that CURL at 100k steps is a little bit behind what you can do with access to state, but at 500k steps it's essentially all the way there — with CURL we can learn almost as well from pixels as from state. Prior methods that also tried to learn from pixels consistently were not doing nearly as well after 500k steps, and the same after 100k steps. So at both 100k and 500k steps, CURL outperforms prior RL from pixels on the DeepMind Control Suite and gets very close to state-based learning.

Here are the learning curves: in gray, state-based learning; in red, CURL. In many of these, red matches gray — there are a few exceptions, but in most of them red matches gray — meaning that with CURL, RL from pixels can be almost as efficient as RL from state, at least on these DeepMind Control tasks. And here is a table of results, with the winner among all prior methods for learning from pixels in boldface: CURL consistently outperforms the other methods at both 100k and 500k, not just on average but on essentially all of the individual tasks, except for a couple where CURL doesn't learn as fast.
If we look at the details of what happens there, those are environments where the dynamics is fairly complex, so this needs more research; our hypothesis has been that in those environments learning from pixels is particularly difficult, because the dynamics is not well captured in the sequence of frames you get to see — for example, if contact forces matter a lot, you can't easily read those off from pixels — and so having access to state makes a pretty big difference for being able to learn.

Looking at the Atari benchmark, we measure median human-normalized score across 26 Atari games at 100k frames, and compared to the prior state of the art — Rainbow DQN and SimPLe — CURL essentially outperforms the prior state of the art and reaches about 25 percent of human-normalized score. Here it is broken out by individual game, and CURL outperforms the prior state of the art fairly consistently, with SimPLe still coming in first on two of them. So can we match human data efficiency? That's a good question. Looking at human-normalized scores, we see on Freeway and on James Bond that we get pretty much the level of human efficiency; for the other games there's still a way to go, but it's already at double-digit percentage performance relative to humans on almost all of them. Okay, so we looked at two...