Dataset columns: video_id (string), text (string, 361–490 chars per segment), start_second (int64), end_second (int64), url (string), title (string), thumbnail (string).
video_id: YqvhDPd1UEw
title: L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
thumbnail: https://i.ytimg.com/vi/Y…axresdefault.jpg

[820s-844s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=820s
value, the cumulative reward over time that's coming, and a policy π, the action it should take. Both of those are outputs of the same network; that's the basis of this agent. Then all the data gets put into a replay buffer and reused in other ways, and the same neural net that is the A3C agent is given multiple heads, even more heads, so it has to make even more predictions,

[844s-865s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=844s
by giving it additional prediction tasks. If these prediction tasks are related to learning to solve the original problem, which is to achieve high reward, then hopefully it will learn something that transfers over to the real task we care about, and it will be able to learn the real task more quickly. So what are these auxiliary tasks? The first one is

[865s-889s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=865s
auxiliary Q-functions. The idea here is that you give additional heads to the neural network that are Q-functions. A Q-function predicts, for the current situation, how much reward will I get in the future if I currently take a specific action; so for each possible action I'll predict how much reward I might get. Now, the interesting thing about Q-function learning is that you

[889s-911s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=889s
can do Q-function learning off-policy, meaning you can try to solve one task but in the meantime do Q-learning against another task that has a different reward function. That's the key idea here: we're going to take reward functions that are not the ones we care about, auxiliary reward functions that are easy to automatically extract from the environment, and do Q-learning against those reward functions.
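To make the off-policy auxiliary Q-learning idea concrete, here is a minimal sketch of a one-step TD update for an auxiliary Q head trained on replayed transitions with an automatically extracted auxiliary reward. The module names, batch layout, and target-network detail are illustrative assumptions, not the UNREAL implementation.

```python
import torch
import torch.nn.functional as F

def aux_q_td_loss(encoder, q_head, target_q_head, batch, gamma=0.99):
    """One-step TD loss for an auxiliary Q head on replayed transitions.

    `batch` holds observations `obs`/`next_obs`, the actions `act` that were
    actually taken (long tensor), and an auxiliary reward `aux_rew` computed
    from the observations themselves (e.g. pixel change), not the task reward.
    """
    z, z_next = encoder(batch["obs"]), encoder(batch["next_obs"])
    q_taken = q_head(z).gather(1, batch["act"].unsqueeze(1)).squeeze(1)  # Q(s, a)
    with torch.no_grad():
        # Off-policy target: bootstrap with the greedy action, regardless of
        # which policy generated the data sitting in the replay buffer.
        target = batch["aux_rew"] + gamma * target_q_head(z_next).max(dim=1).values
    return F.mse_loss(q_taken, target)  # added to the main A3C loss with some weight
```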
[911s-939s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=911s
By doing so, the core of the neural net will learn things that are also useful for the task we actually care about. Okay, so let's dig a little deeper here. The A3C agent is the core thing, and sparse rewards mean that, in terms of LeCun's cake analogy, you only get this one tiny bit of signal from

[939s-961s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=939s
the real reward that we care about. That's not enough; we want more rewards, and that's exactly what this Q-function idea is going to give us. We're going to define many other rewards, and those many other rewards are going to allow us to learn from a lot more signal than if you only had the one reward. Okay, so the reward function that was defined here by the authors of the paper is called the

[961s-993s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=961s
pixel control reward function. What they do is turn the agent's first-person view of the maze into a coarser, grayscale representation of what it's seeing, and in this auxiliary reward task you get rewarded for how much you're able to change these coarse pixel values. So what does that mean? If your agent turns toward a direction where things are much brighter

[993s-1014s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=993s
than the direction it's facing right now, then the pixel values will change a lot, and that would be high reward. Or the other way around: if right now things look very bright at a certain pixel and the agent turns and makes that pixel darker, that would be high reward as well. Again, that's not what we actually care about, but it's a very simple auxiliary loss that we can impose and that we can run Q-learning against.
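As a concrete illustration of what a pixel-change auxiliary reward can look like, here is a small sketch assuming grayscale frames whose height and width divide evenly into a grid of cells (the transcript later mentions a 20 by 20 grid); the preprocessing details are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def pixel_change_reward(frame, next_frame, n_cells=20):
    """Auxiliary reward: mean absolute intensity change per grid cell.

    frame, next_frame: 2D grayscale arrays whose height and width are both
    divisible by n_cells (e.g. 80x80 crops). Returns an (n_cells, n_cells)
    array with one auxiliary reward per cell, i.e. one Q target per cell.
    """
    h, w = frame.shape
    ch, cw = h // n_cells, w // n_cells
    diff = np.abs(next_frame.astype(np.float32) - frame.astype(np.float32))
    # Average the absolute change within each cell of the coarse grid.
    return diff.reshape(n_cells, ch, n_cells, cw).mean(axis=(1, 3))
```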
[1014s-1037s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1014s
It also turns out that this is the auxiliary loss that mattered the most for improving learning; there are other auxiliary losses, but this is the one that matters most. The intuition for why this one matters the most is that in Q-learning you are learning about the effect of your actions, and because you're learning about the effect of many

[1037s-1058s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1037s
possible actions you could take (what would the Q-value be if I turn to the left, what would the Q-value be if I turn to the right, what's the Q-value if I look up or look down), you're really learning something about how the world works, and not just how the world works but how your actions interact with what happens in the world. Another auxiliary loss is reward prediction. What you do here is, for

[1058s-1079s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1058s
the current policy that you're executing, try to predict at future time steps how much real reward you're going to get. So maybe you get reward for eating an apple, and when you see an apple in the distance you should be able to predict that if you keep running forward for three more steps you'll get that apple. Learning to predict that in three steps you're going to get that

[1079s-1103s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1079s
apple is an auxiliary loss that's introduced here. And then the last auxiliary loss is value function replay, which says: from the current time, how much reward am I going to get over the next few steps? This is actually already present in the base A3C agent. All right, so if you look at the results here, this is DeepMind Lab, which is that 3D navigation environment where you collect
[1103s-1131s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1103s
apples and other fruits as reward, and we can compare different approaches. The bottom curve is the base A3C agent, that's the dark blue bottom curve, and the hope is that by adding auxiliary losses we can do better. If we incorporate all the ideas that we just covered, you get the UNREAL agent, which is the top curve here. What we also see here are various ablations, to see

[1131s-1157s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1131s
which one of these matters the most and which ones might not contribute very much. What we see is that if you just use the pixel control auxiliary loss, that's the yellow curve, you get almost all the juice of these auxiliary losses, but if in addition you have the reward prediction and the value replay you get yet a little better performance. Actually, another thing I want to highlight here:

[1157s-1176s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1157s
the text at the top of the graph says average of the top three agents, and that's how things are evaluated in the paper. Usually in reinforcement learning, because you need to explore and there's a lot of randomness in exploration, the results are somewhat unpredictable, meaning that some runs will be luckier than other runs; it'll be high variance. And so what they did here is

[1176s-1200s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1176s
pick the top three runs. You might say: why the top three, isn't that a bit crazy, shouldn't you look at the average performance or something like that? Yes, you could argue you should look at the average performance, and that's what's done in most papers. But the thinking here was: imagine what you're interested in is finding a policy and you have a budget of maybe 20 runs; then
[1200s-1222s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1200s
maybe what matters is what's the best one among those 20 runs, or, to be a little more robust about it, how the best three runs do. An approach where the best three runs are consistently great is an approach where, if you can afford 20 runs total, you'll have a good one among them. So it's kind of a funny way to score things, but it happens

[1222s-1247s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1222s
to be how they do things in this paper. Another thing they compared with, which we as unsupervised learning students are of course very curious about: if you do pixel control, why not do feature control, why not a Q-function for later layers in the network? For later layers in the network, I want to see whether, if I take an action, I can change the feature value at maybe layer

[1247s-1270s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1247s
five or layer six, instead of just the pixel values. Well, we see A3C plus feature control in green and A3C plus pixel control in orange, and you can see that pixel control actually works better. Of course this might depend on the environment and on the exact architecture, but the experiments that were done in this paper showed that pixel control actually slightly

[1270s-1293s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1270s
outperformed feature-based control. And again, control here refers to the auxiliary loss using the auxiliary Q-functions; the ultimate reward function that you actually optimize for, and that is scored on the vertical axis here, is the real reward function of collecting the fruits in the maze. Then here are a couple of unsupervised RL baselines. So what are some other things
[1293s-1316s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1293s
we could look at? Again, pixel control is shown in yellow, that's the top curve in both plots. Then input change prediction: just have an auxiliary loss that says, can I predict how what I see will change as a function of my action; that's really learning a dynamics model, and it's shown in blue. And then shown in green is input reconstruction, which is a

[1316s-1339s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1316s
bit like an autoencoder: I have an input, I make a small representation, and I reconstruct back out. What we see is that these things that might seem more natural and more advanced, like input reconstruction and input change prediction, are actually less effective than pixel control. Of course there could be many, many factors at play here, but the high-level intuition that most

[1339s-1372s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1339s
people have is that the reason these auxiliary Q-functions work so well is that, as we work with auxiliary Q-functions, we're actually learning not just about how the world works (which is what input change prediction learns) but about how we are able to affect what happens in the world, and that's really what matters for learning to

[1372s-1394s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1372s
achieve high reward on the task you care about. Now, another domain they looked at, rather than first-person maze navigation, is Montezuma's Revenge. Montezuma's Revenge is a famous Atari game where exploration is very difficult: there are many, many rooms, in every room there are complicated things you have to do, collecting keys, jumping over things, and if you make one mistake you're dead and
[1394s-1432s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1394s
you start back at the beginning. The plot shows that UNREAL outperforms A3C by quite a bit: A3C at the bottom, I think in black, is really not getting anywhere, whereas the UNREAL approach is actually doing a lot better. Now let's take a look at this maze navigation agent in action. This is DeepMind Lab; let's watch the agent play. The agent is collecting the apples here, and not

[1432s-1458s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1432s
collecting the lemons; apparently it's not good in this particular game to collect the lemons. This agent has learned to navigate mazes, and the way it can do that is because it has an LSTM, which gives it memory, so it can remember places it has been before and things it has tried before, to more efficiently find the next new location where there might be a fruit it hasn't collected yet.

[1458s-1481s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1458s
The reason I'm showing these specific results here is that in the space of, well, reinforcement learning in general, but especially representation learning for reinforcement learning, the evaluations aren't all in the same type of environments; there's a lot of variation in how these things get evaluated. So having a good feel for what these experiments actually look like is

[1481s-1504s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1481s
important for getting a sense of how advanced a method really is. And what we see here, this first-person navigation, well, that's pretty complicated, so this might be a pretty advanced method at play here. Here we see a bit of an inside look into the agent itself, where on the top right you see the pixel control Q-values: depending on which action I
[1504s-1531s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1504s
take, for the actions available, how high will my Q-value be, which really corresponds to an understanding of how the world works: what will change in what I see as a function of the actions I take. All right, so to summarize, the UNREAL loss consists of the original A3C loss, which is the policy gradient loss plus the value function loss; then there is the value replay loss, which looks at replayed data

[1531s-1560s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1531s
to do value prediction; then there are the pixel control Q-functions for the different coarse pixels in the view of the agent; and finally there's the reward prediction loss. One small note: for the reward prediction they ensured an equal portion of rewarding and non-rewarding examples, so a balanced training set, and the pixel control was done on a 20 by 20 grid of cells.
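Putting the pieces together, the total UNREAL objective is a weighted sum of these terms; the sketch below only shows how such a combination is typically assembled, with placeholder weight names rather than the paper's exact coefficients.

```python
def unreal_total_loss(losses, lambda_pc=1.0, lambda_rp=1.0, lambda_vr=1.0):
    """Combine the base A3C loss with the auxiliary losses described above.

    `losses` is a dict of scalar terms already computed on the appropriate
    batches (on-policy data for A3C, replayed data for the auxiliary tasks).
    The lambda weights are hyperparameters; the defaults here are placeholders.
    """
    return (losses["a3c"]                              # policy gradient + value loss
            + lambda_pc * losses["pixel_control"]      # auxiliary Q-learning on pixel change
            + lambda_rp * losses["reward_prediction"]  # balanced reward classification
            + lambda_vr * losses["value_replay"])      # extra value regression on replay
```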
[1560s-1583s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1560s
All right, so in the Atari results we see that UNREAL also helps over A3C, though not nearly as much as in the DeepMind Lab environments, but it's still a significant improvement. The vertical axis here is human-normalized performance. The way DeepMind evaluates this is by looking at what humans score on each Atari game; that's going to be a different number for every game, because every game has a

[1583s-1604s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1583s
very different scoring system, and then they normalize across games to get a total score over all games for how well the agent learns. So it's across many, many Atari games: on average, how fast does the learning curve go up. You cannot overfit to one game or another and do well on this score; you need to be able to learn well on all of the games to do well on this score.
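For reference, the human-normalized score that DeepMind's Atari papers typically report is computed per game and then aggregated; a standard form (stated from common usage, not quoted from this particular paper) is:

```latex
\text{human-normalized score} \;=\; 100 \times
\frac{\text{score}_{\text{agent}} - \text{score}_{\text{random}}}
     {\text{score}_{\text{human}} - \text{score}_{\text{random}}}
```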
[1604s-1626s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1604s
All right, and then they also look here at robustness, because there are many agents being trained, and these curves show the performance of all the agents, so it's a little evaluation of robustness. We see that there's a bit of decaying performance, not all agents learn equally well, but it's not that there's just one that does

[1626s-1648s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1626s
well and then nobody else does well, so this looks pretty good. Okay, so that's the first thing I wanted to cover, which is auxiliary losses, and UNREAL is a very good example of that. There's more work happening all the time in this space, but that was kind of the big initial result showing that this is something that can be very beneficial. Let's switch gears to state representation, which will

[1648s-1678s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1648s
have many, many subsections, it turns out. The first one is how to go from observation to state. The paper that most people might be most familiar with here is the World Models paper by David Ha and collaborators, and here's a fairly simple diagram showcasing what they investigated. What you have is an environment, and the environment produces an observation, in

[1678s-1701s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1678s
this case pixel values, which can be very high dimensional: if you take a 100 by 100 image, that's 10,000 pixels, a very high-dimensional input. Then they say, well, we want our agent to work on something lower dimensional, because we know that under the hood there is a state of the world, and the state of the world might be summarized with just a small set of numbers;
[1701s-1721s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1701s
maybe 10, 20, 30 numbers is enough. So my agent shouldn't have to do reinforcement learning on that 10,000-number input; it should be doing reinforcement learning on this 30-number input, and it might be able to learn a lot more quickly, because credit assignment should be easier if we only have to look at 30 numbers instead of 10,000 numbers. So the idea is to use a

[1721s-1746s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1721s
variational autoencoder, which of course we covered earlier in this course, to find a latent representation from which we can reconstruct the raw observation, and then to use that latent representation as the input to the reinforcement learning agent, which now hopefully will be more efficient. What they then do in this approach is train a recurrent neural network that learns to

[1746s-1773s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1746s
predict the next latent state. What's going to happen here is that we're going to learn a way to simulate how the world works with this recurrent neural network, but not by simulating directly in pixel space; by simulating in the latent space instead, which can go a lot faster since it's much lower dimensional. We don't have to render the world at all times, we can just simulate how the latent variables evolve

[1773s-1793s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1773s
over time. Of course this will also depend on the actions taken, so the RNN takes the action and the previous latent state and generates the next latent state, and of course you want that to match up with the actual next latent state that your VAE would output when you get to observe the next frame. The action also gets fed into the real environment,
[1793s-1821s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1793s
so you have two paths here: the actual environment path and the RNN prediction path, and you hope that they line up; you're really training to make them line up. The thing in blue is called the world model; it's the thing that looks at the latent state z and the action and turns them into the next latent state. All right, so

[1821s-1842s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1821s
they looked at this in the context of car racing. On the left you see the environment, which renders the road; you're supposed to stay on the road here, and the way the reward is set up you want to race down this road as quickly as possible. This is from pixel input, so you get a lot of numbers as input, and somehow you hope that gets turned into, effectively, an understanding of roughly

[1842s-1864s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1842s
where the road is, where your car is on that road, and which direction your car is facing, and then an understanding of how to steer it to go down the road as quickly as possible. The procedure they followed is: collect 10,000 rollouts from a random policy, then train a VAE to encode frames into z-space, just a thirty-two dimensional z-space, so low dimensional compared to the pixel

[1864s-1892s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1864s
input space. Then they train an RNN model to predict the next latent state from the previous latent state and action, with an additional hidden state inside the RNN. Then they use evolution (CMA-ES), which is just one of many possible RL approaches, to train a linear controller to maximize the expected reward of a rollout. So steps one, two, and three are the unsupervised learning that can happen ahead of time, and then you can run RL on the representation that you've learned.
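Schematically, the resulting agent looks something like the sketch below: encode pixels with the VAE, act with a tiny linear controller on the latent code and the RNN hidden state, and step the RNN forward. The class and method names (vae.encode, rnn.step, controller.act) are made-up interfaces for illustration, not the released World Models code.

```python
import numpy as np

def rollout(env, vae, rnn, controller, max_steps=1000):
    """Run one episode with a world-models-style agent and return the reward."""
    obs = env.reset()
    h = rnn.initial_state()                      # RNN hidden state, e.g. 256-dim
    total_reward = 0.0
    for _ in range(max_steps):
        z = vae.encode(obs)                      # compress pixels to e.g. 32-dim z
        action = controller.act(np.concatenate([z, h]))  # linear map on [z, h]
        obs, reward, done, _ = env.step(action)  # real environment step
        h = rnn.step(z, action, h)               # track/predict the next latent state
        total_reward += reward
        if done:
            break
    return total_reward
```

The same loop can be run entirely inside the RNN by replacing the env.step call with the RNN's own prediction, which is what training "in the dream" refers to later in this lecture.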
[1892s-1919s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1892s
So one thing that's really interesting here: remember the cake analogy, where Yann LeCun would say that reinforcement learning is the cherry on the cake, which is tiny compared to the cake itself. And why is that? Because there's not a lot of reward; it's just a small amount of reward signal,

[1919s-1940s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1919s
whereas there's a lot of signal coming from self-supervised learning, and that's the foundation of the cake. So if you look at what's happening here, the VAE neural network has four million parameters, the RNN dynamics model network has four hundred thousand parameters, and then the controller, the thing that is learned

[1940s-1967s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1940s
with RL, only has eight hundred something parameters. There's a massive difference: RL only has to learn a small number of parameters, which matches it only having a lesser amount of signal, whereas the self-supervised part has to learn most of the parameters, millions of parameters, and that's done from unsupervised data.
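As a sanity check on those numbers: a linear controller acting on the concatenation of a 32-dimensional latent code and a 256-dimensional RNN hidden state, with three continuous action outputs, has the parameter count below, which matches "eight hundred something". The 32/256/3 sizes are the commonly cited World Models settings for car racing, stated here from memory rather than read off this slide.

```latex
(\underbrace{32}_{z} + \underbrace{256}_{h} + \underbrace{1}_{\text{bias}}) \times \underbrace{3}_{\text{action dims}} = 867 \ \text{parameters}
```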
[1967s-1999s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1967s
Okay, so here's an example of an input frame, 64 by 64 pixels, and a frame reconstruction, which roughly matches up, not perfectly, but it gets the gist. Then here we compare using just z versus z together with h, where h is the RNN's hidden state, and it shows that it's important that the RNN hidden state captures something about the world. Let's look at results. What we see in the table are the scores obtained with the model described; it's the highest score

[1999s-2024s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1999s
in this car racing environment compared to previous methods. Obviously, in principle, pure RL should be able to learn this too, but when you limit the amount of time you get to train, then using self-supervised learning to learn a representation, combined with reinforcement learning to learn the controller, allows us to get higher scores than what previous methods that were pure RL were

[2024s-2053s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2024s
able to achieve. So this is the model we looked at before, and the one experiment we've seen so far is the car racing environment. The second experiment is one where you have to dodge things being shot at you, in a VizDoom environment. The input will look something like what we see on the left, but sometimes you'll see fireballs coming at you when they're shooting at you, and you've got to dodge those fireballs to

[2053s-2084s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2053s
stay alive and get high reward. It's the same approach: train the VAE, train the RNN world model, then train a linear controller with RL on top of that. And again, this linear controller is trained in the RNN simulator itself, so you don't need to simulate what things will look like; rendering is often computationally expensive, and if you need to go all
[2084s-2107s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2084s
the way to rendering to train your policy, it'll take a lot longer to do the same number of rollouts; here they're all done in that low-dimensional latent space to train the policy. The task is called Doom Take Cover, and here's a higher-resolution version of what this looks like if you were to play this game yourself. It's the same approach laid out here: again, unsupervised learning does all the stuff

[2107s-2138s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2107s
at the top here, millions of parameters learned, and then the RL only needs to learn about a thousand parameters; again, a beautiful illustration of the LeCun cake idea. So here's what this looks like. One thing to keep in mind here is that sometimes you can get quirky results, where the learned simulator of the world allows you to do

[2138s-2161s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2138s
things you cannot do in the real environment, and that's something to look out for, which they highlight on their website. If you go look at the normal-temperature versus higher-temperature settings, you'll see some differences there. So here are the results: depending on the temperature we have different discrepancies. For low temperature we see a very high virtual score, but the

[2161s-2196s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2161s
actual score is not so great; for higher temperatures we have a closer match between the virtual score and the actual score. Actually, I should quickly highlight what is meant by temperature here. Typically in RL you have a policy with stochastic output, so you have a distribution over actions, and that distribution over actions can have a temperature parameter in
[2196s-2222s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2196s
terms of how much you favor your favorite action. If you make that temperature parameter small, close to zero, then you'll almost always take your most preferred action, and you end up with a close-to-deterministic policy. With a close-to-deterministic policy you can often exploit quirks in your simulator.
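For concreteness, here is a minimal sketch of temperature-scaled sampling from a categorical policy; the logits are placeholders, but it shows why a near-zero temperature collapses onto the single preferred action while larger temperatures keep some randomness.

```python
import numpy as np

def sample_action(logits, temperature=1.0, rng=None):
    """Sample an action index from softmax(logits / temperature).

    temperature -> 0 approaches argmax (a near-deterministic policy);
    larger temperatures spread probability mass over more actions.
    """
    rng = np.random.default_rng() if rng is None else rng
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    scaled -= scaled.max()                         # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)
```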
[2222s-2241s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2222s
Whereas if you have some randomness in your policy, a higher temperature, and allow yourself a little bit of randomness, then you cannot exploit the very specific quirks of the learned simulator, because the randomness will prevent you from following that very, very quirky path where all of a sudden you get a high score even though, really, you can't do that in the real environment; your simulator just has a small

[2241s-2268s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2241s
little bug, and you won't be able to trigger that small little bug. That's what's going on here with temperature: at higher temperature we are not able to exploit tiny little bugs in the learned simulator, we have to learn something more robust, and that leads to a better match between performance in the real environment and in the learned simulator. Okay, so that was the World Models paper by David Ha and collaborators.

[2268s-2297s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2268s
Now, one question you could ask yourself: if we're going to learn a world model, a simulator, some latent-space simulator, wouldn't it make sense to try to learn a latent space such that control becomes easier? What do I mean by that? If you look at the control literature, some control problems are easy to solve and some control problems are very hard to solve,

[2297s-2326s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2297s
and maybe we can map our pixel observations, and the world dynamics in pixel space, into latent-space dynamics that satisfy certain properties which make the resulting control problem easier to solve. A good example of this is linear dynamical systems: if you have a linear dynamical system, then the control problem tends to be relatively straightforward to solve. So how about
[2326s-2370s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2326s
doing that? That's what the paper we're going to cover here does. Hold on, give me one second; let me cover something else first. One thing that might happen is that you train the world model on your randomly collected data, then train your policy and test it in the real world, and it might not always work. The reason it might not work is that the randomly collected

[2370s-2395s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2370s
data might not have been interesting enough to cover the parts of the space where you would get high reward. What you'd then want to do is iterate this process, at which point you effectively have a model-based reinforcement learning procedure: you collect data, you learn a model, you find a policy in the learned model, you deploy that policy, you collect new data, improve your world model, and

[2395s-2421s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2395s
repeat. That's what they did, and this is shown for the cartpole swing-up; after about twenty iterations of this, it's able to learn to swing it up. Now, a couple of other world-model papers: there's action-conditional video prediction using deep networks in Atari games, at the top here, worth checking out; model-based reinforcement learning for Atari is another one

[2421s-2449s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2421s
worth checking out; and then learning latent dynamics for planning from pixels, PlaNet, which we'll look at a little bit later as well. If you want to look more closely at the specifics of what was covered, there's a really nice website, worldmodels.github.io, which has the code and many demos for you to play with, to see what the latent variables are actually doing in the
[2449s-2483s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2449s
VAE and so forth for these environments, so I highly recommend checking that out. And here is a video of the VizDoom take-cover agent in action: you get these fireballs coming at you, and the agent has learned to get out of the way so it doesn't get killed. All right, so what we've looked at so far is how to go from observation to state and then learn a model in that latent state space.

[2483s-2510s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2483s
Now we're going to take a look not only at going from observation to state, but also at going from state and action to next state. This is what I alluded to earlier, when I jumped the gun a little bit. We're now going to do representation learning that isn't just ahead-of-time learning of a map from pixels to, hopefully, state or something like state, but representation learning that already looks

[2510s-2534s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2510s
at the dynamics. And when we bring the dynamics into representation learning, why not learn a representation where the dynamics is such that control becomes easier? For example, learn a representation such that in this new representation space the dynamics is linear, because if the dynamics is linear then all of a sudden control becomes easy, and you turn your original pixel-

[2534s-2560s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2534s
space problem, which might be highly nonlinear and very complex to find a control methodology for, into a latent-space problem where it's linear and very simple to solve. That's the main idea behind the Embed to Control paper we're covering now. The environments they considered were pendulum, cartpole, and a three-link arm, but again this is from pixel input:
[2560s-2586s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2560s
they learn a latent representation where hopefully the dynamics is close to linear, and hence control becomes easy. The method they apply is stochastic optimal control, a fairly standard control method that you can apply to linear systems, and Embed to Control learns a latent-space model using a variational autoencoder while forcing a locally linear latent-space dynamics

[2586s-2609s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2586s
model. Once you have a locally linear model, you can apply stochastic optimal control. Here's an example of that in action: once you have such a model, it's very easy to find the controller that brings you to a target, say a stable fixed point. For that controller to work well, you need linear dynamics models locally along the trajectory, and in

[2609s-2631s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2609s
fact the way these methods usually work is that they linearize the dynamics along trajectories. But if you learn a latent-space model where the dynamics is already linear, you're already good to go, and that linearization is not an approximation; it's the actual model that you learned, so you get a very good fit of your linear model to the actual dynamics. The costs are

[2631s-2652s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2631s
often assumed to be quadratic; that's an assumption you make. This class of problems is called LQR problems, linear quadratic regulator problems, or sometimes LQG problems if you also have some stochasticity in there, and they assume that you have linear dynamics and quadratic costs, with the cost meaning there are quadratic penalties for being away from the state where you're supposed to be.
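Written out, the finite-horizon problem this setup reduces to is the standard LQR formulation below (textbook form, with z the latent state, u the control, and Q, R positive semi-definite cost matrices):

```latex
\min_{u_0, \dots, u_{T-1}} \; \sum_{t=0}^{T-1} \left( z_t^{\top} Q\, z_t + u_t^{\top} R\, u_t \right)
\quad \text{s.t.} \quad z_{t+1} = A z_t + B u_t
```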
[2652s-2676s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2652s
Okay, so of course we can't just map from our original pixel observations to some space where the dynamics is linear and ignore the real-world dynamics; it still has to map back out to the real world. So let's look at the complete loss function. First of all, going to the latent space z, you need to be able to reconstruct the original image, so z

[2676s-2699s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2676s
should not lose important information about what is happening, about what the situation is. Then we have this temporal aspect: ultimately we want to reach a goal, and we want accurate long-term prediction, so that when we feed in the sequence of actions that achieves the goal, the model also predicts that that's going to be the case. So every step along the way we're going to have a prediction, for

[2699s-2725s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2699s
which we want to use linear models, so the prediction must be locally linearizable for all valid control magnitudes, such that when we optimize our controls we get something that, when it works in simulation, also works in the real world. Now, we're going to force that to be true by learning a model that does this by construction. So let's look at that model. Here's the next component: we already

[2725s-2750s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2725s
have our encoder and decoder; we have our control input u (in controls, u is usually used for the control input, whereas in reinforcement learning a is often used for the action). Then we have our next latent state z at time t+1. Now, for this to be meaningful, the same decoder should be able to reconstruct the image input at time t+1; if that's the case, then that latent-space dynamics was correct.
[2750s-2776s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2750s
Okay, so we're going to learn a locally linear model for that transition to make that work. Then, once we have all of that in place, we're pretty much good to go: we're going to use this model over long horizons, to make sure that we don't just do this over one step but actually lay it out over longer horizons as we train the model.
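The locally linear transition model being referred to has the form below, where the matrices and offset are themselves produced by a network as a function of the current latent state (notation mine, in the usual E2C-style presentation):

```latex
z_{t+1} \approx A_t z_t + B_t u_t + o_t,
\qquad (A_t, B_t, o_t) = f_{\text{trans}}(z_t)
```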
[2776s-2800s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2776s
We have multi-step predictions over which we have this loss function. You might say, why do we need all this? Well, it turns out that if you make a small mistake in your prediction for the next state, you might say, ah, it's just a small mistake, no big deal; but the problem is that you land in a new latent state for which your model might not have been trained,

[2800s-2822s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2800s
and when you make the next prediction, to go to time t+2, you're doing it from a time t+1 latent state that you're not familiar with, that doesn't lie in your training distribution, and now you might make a not-so-good prediction and make it even worse. This accumulation of errors over time can lead to divergence, so any kind of simulation you run

[2822s-2846s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2822s
over longer horizons needs some mechanism to avoid that. One mechanism is to explicitly have a multi-step loss; another mechanism is to ensure that your next-state prediction comes from the correct distribution. So if you embed into, say, a unit Gaussian latent space, then after you do your next-state prediction, what you get there should also come from

[2846s-2868s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2846s
a unit Gaussian distribution, to ensure that when you go from there to the next state you're ready to make your predictions. All right, so those are the components: we have an autoencoder turning image x into a latent state, with accurate long-term prediction of latent states because we ensure that the next latent state comes from the correct distribution, a unit Gaussian, just like
[2868s-2890s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2868s
our autoencoder forces it to be. And then the prediction must be locally linearizable, so we don't get to use some fancy neural network to predict the next latent state from the current latent state; it has to be feasible with just a linear prediction. Okay, so this is the full system that they proposed, with all the loss terms shown at the bottom. Now let's take a look at how all of this works.

[2890s-2920s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2890s
They applied this to cartpole and showed a good amount of success there, and then here are some evaluations showing that Embed to Control can indeed do inverted pendulum swing-up pretty well, it can do cartpole balance, and it can do the three-link arm. So good results on the three environments that they experimented with. And here's what these environments look like; this is from

[2920s-2950s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2920s
raw images, so what we're watching is effectively also what the agent sees after downsampling, and we can look at the environments themselves side by side. Here we have cartpole balancing in action, and this gives you some idea of how capable this approach is. It does very well; at the same time, clearly these

[2950s-2983s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2950s
environments are not nearly as complicated as what we saw in the UNREAL experiments, where it was DeepMind Lab navigation tasks, versus these 2D, relatively low-resolution, single-robot-that-you-fully-control kinds of tasks. Now, in Embed to Control the idea was to have a single linear system, and for your full dynamics that might be difficult, but it's been shown in controls that very
[2983s-3006s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=2983s
often, even though your real system is highly nonlinear, locally it can be linearized. So you might ask the question: can we instead follow the same philosophy as in Embed to Control, but instead of learning a single linear model, learn a collection of linear models, in some way that allows us to apply time-varying linear control methods, which are also

[3006s-3032s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3006s
extremely efficient, and maybe handle a richer set of environments, because time-varying linear models can cover more than a single linear model can? That's actually what we did in this work called SOLAR, shown in action on the right here. We now have different linear models at different times, so we learn to embed into a space where at each time a local linear

[3032s-3062s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3032s
model can capture the transition very well. You still get initial random rollouts, followed by learning a representation and latent dynamics, but now not with a single linear model but with a sequence of linear models. Once we've done that, we can start running on the robot: infer where we are in this sequence of linear models, find the corresponding sequence of controllers, execute that, get

[3062s-3088s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3062s
new data, and repeat. So this is model-based reinforcement learning in action, in a setting where we make the latent space very efficient for finding optimal policies. It might not succeed the first time around, so you get the new data, update the representation, infer where we are in terms of linear dynamics models, find an updated policy, and repeat, and
[3088s-3111s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3088s
this can actually learn, in about 20 minutes, to stack a Lego block, learning from pixels as input. Okay, so we've looked at state representation, which is how to go from raw observation to state: learned ahead of time with a VAE in the World Models paper that we looked at, and after that, learning a dynamics model and a mapping from pixels to state at the same time, and

[3111s-3141s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3111s
maybe benefiting from that. Now, here's another way we can think about this: we could think of putting in some prior information. When we have pixels as input, we know that under the hood there is a state, and we know that state is just a bunch of real numbers. So what they did in this paper is say, okay, we're going to learn a latent representation which is created by a

[3141s-3164s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3141s
sequence of convolutional filters; then we're going to apply a spatial softmax, meaning that for each of these 16 filters we look at where the filter is most active, through a spatial softmax, and output the corresponding coordinates. Those coordinates should allow us to reconstruct the original image, because they essentially capture the coordinates of the objects in the scene: if you know the coordinates of the objects in the scene, you can reconstruct the scene.
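A minimal sketch of a spatial softmax layer, which turns each feature map into an expected (x, y) coordinate; written here in plain NumPy for a single image, with the softmax temperature fixed to 1 as a simplifying assumption.

```python
import numpy as np

def spatial_softmax(feature_maps):
    """feature_maps: array of shape (C, H, W); returns (C, 2) expected (x, y).

    For each channel, a softmax over all H*W locations gives a spatial
    probability map; the expected pixel coordinates under that map are the
    output features, one 2-D keypoint per filter (so 16 filters -> 32 numbers).
    """
    c, h, w = feature_maps.shape
    flat = feature_maps.reshape(c, -1)
    flat = flat - flat.max(axis=1, keepdims=True)          # numerical stability
    probs = (np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)).reshape(c, h, w)
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    expected_x = (probs * xs).sum(axis=(1, 2))
    expected_y = (probs * ys).sum(axis=(1, 2))
    return np.stack([expected_x, expected_y], axis=1)
```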
[3164s-3190s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3164s
Then, once we have learned that representation, we can learn to control with just a 32-dimensional input, rather than needing to take in a 240 by 240 input, which is much higher dimensional and much more expensive to do reinforcement learning against. This is actually capable of learning a pretty

[3190s-3223s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3190s
wide range of skills. Here is the data collection: the robot is just randomly moving around collecting data; that data is used to train the spatial autoencoder; then we present the goal situation and do reinforcement learning in the feature space, the thirty-two dimensional feature space, and learn in a relatively short amount of time how

[3223s-3253s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3223s
to push the block to the target location. Now, there's another method you can look at for how to go from image observations to state, or something like state, a kind of interesting method that actually doesn't bother with reconstruction. It says all we need to do is think about physics. What does physics tell us? Well, we want to find an encoding of the underlying state, computed from the

[3253s-3275s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3253s
observation; phi here will be the big neural network that turns the image observation into the underlying state. Well, what do we know about state? We know that in physics there will be coordinates, and then derivatives of those coordinates, which are the velocities of these objects. So there is a state variable corresponding to velocity and another state variable corresponding
[3275s-3301s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3275s
to position, and the change in position is velocity; we know that velocity is the derivative of position. Then what else do we know? We know that when the world is in different states, we're going to need different state values. So by default, if random situations are presented to us, we want the embeddings of two different situations

[3301s-3329s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3301s
to be far apart. That's what this loss is saying: we want embeddings to be far apart. But if all you do is push the embeddings far apart, that's not enough to get any structure, so the next loss here says that for consecutive times the position state variables should be close. It also says that between time t and t-1 the velocity state variables should be close,

[3329s-3356s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3329s
because velocity cannot change quickly; this is saying that acceleration is going to be small on average. Then conservation of momentum and energy is captured in here. And the last part here is saying that we need a representation where the actions are able to influence what the next state is going to be, so we want correlation between the action and the next state.
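To make these physics-based priors concrete, here is a rough sketch of what such loss terms can look like on a batch of encoded transitions. The exact terms, weights, and pair-sampling scheme are my own illustrative assumptions in the spirit of these "robotic priors"-style losses, not a reproduction of the paper's formulation.

```python
import torch

def physics_prior_losses(s, s_next, a):
    """s, s_next: (B, D) encoded states at times t and t+1; a: (B, A) actions.

    Returns illustrative versions of the priors discussed above: variation
    (different situations should be far apart), slowness (consecutive states
    should be close), and an action-consistency term (similar actions should
    produce similar state changes).
    """
    ds = s_next - s                                   # state change per transition
    perm = torch.randperm(s.shape[0])                 # random pairing within the batch
    # Variation: penalize randomly paired states for being close together.
    variation = torch.exp(-((s - s[perm]) ** 2).sum(dim=1)).mean()
    # Slowness: state should change slowly between consecutive time steps.
    slowness = (ds ** 2).sum(dim=1).mean()
    # Action consistency: pairs with similar actions should have similar state changes.
    action_gap = ((a - a[perm]) ** 2).sum(dim=1)
    consistency = (torch.exp(-action_gap) * ((ds - ds[perm]) ** 2).sum(dim=1)).mean()
    return variation + slowness + consistency
```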
[3356s-3383s] https://www.youtube.com/watch?v=YqvhDPd1UEw&t=3356s
All right, so this was tested on a couple of environments, where they would just collect data in these environments from pixel input and then learn a state representation that doesn't do reconstruction, that just tries to satisfy those invariants that are expected, based on physics, to be good loss functions, and they learn pretty interesting state representations that way. Here's another