Columns: video_id (string, length 11), text (string, length 361-490), start_second (int64, 0-11.3k), end_second (int64, 18-11.3k), url (string, length 48-52), title (string, length 0-100), thumbnail (string, length 0-52)
YqvhDPd1UEw
main directions in representation learning in reinforcement learning so far: using auxiliary losses, and doing things that come down to trying to recover the underlying state with a self-supervised type loss. Now there are other ways representation learning can help, mainly in exploration, which is one of the big challenges in reinforcement learning, and in unsupervised
5,900
5,923
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5900s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
skill discovery. So let's look at those two now. First, one way representation learning can help exploration is through exploration bonuses. What's the idea here? In a tabular scenario, meaning a very small reinforcement learning problem where you can count the number of states you can visit, say there's only, you know, a grid world where the agent can be in only one of sixteen squares, that's it, one of sixteen
5,923
5,946
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5923s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
possible states, then a very simple thing you can do is give a bonus to the agent for visiting grid squares it hasn't been to before, or hasn't been to frequently before. That encourages going and checking out things that you don't have much experience with yet. That can be very effective in small environments, but it's much harder in a large or continuous state space.
5,946
5,969
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5946s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
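A minimal sketch of the count-based bonus just described, for a small, enumerable state space. The bonus form beta / sqrt(N(s)) and the value of beta are common illustrative choices, not values from the lecture.

```python
from collections import defaultdict

class CountBonus:
    """Count-based exploration bonus for a small, enumerable state space."""

    def __init__(self, beta=0.1):
        self.beta = beta                 # illustrative bonus scale
        self.counts = defaultdict(int)   # N(s): visit count per state

    def bonus(self, state):
        # Increment the visit count, then return a bonus that shrinks
        # as the state becomes familiar.
        self.counts[state] += 1
        return self.beta / (self.counts[state] ** 0.5)

# Usage: add the bonus to the environment reward during training.
explorer = CountBonus(beta=0.1)
shaped_reward = 1.0 + explorer.bonus((2, 3))   # reward + bonus for grid cell (2, 3)
```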
YqvhDPd1UEw
In a large, maybe infinite state space with infinitely many states, there's always more stuff you haven't seen, so you need a different way of measuring what makes something new versus something already, maybe, understood. One big breakthrough in this space was to look at using a generative model, in this case a PixelCNN, for density estimation. The idea here is: you're playing an Atari game, or the agent is
5,969
5,996
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5969s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
playing an Atari game, and you want to measure how often the agent has been in this state. But you'll essentially never revisit a specific state; there are too many of them. So instead, what we're going to do is train a PixelCNN model on what you see on the screen, on the things you've seen so far. The more often you've seen something, the higher its log-likelihood under that PixelCNN model. But when you, let's say,
5,996
6,020
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5996s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
enter a new room in this game, the first time you enter the new room, the log-likelihood of that new thing you see on the screen will be very, very low; it'll be a bad score. That's a signal that this is something you need to explore, because you're unfamiliar with it as measured by the low log-likelihood score. So you can effectively give exploration bonuses now based on the log-
6,020
6,043
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6020s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
likelihood scores under your PixelCNN model that you train online as your agent is acting in the world. There's a comparison here between using this versus just using random exploration, and it helps a lot.
6,043
6,068
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6043s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
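A sketch of turning an online density model such as a PixelCNN into an exploration bonus via a pseudo-count derived from the model's prediction gain, in the spirit of the pseudo-count line of work the lecture refers to. The `density_model` interface (`log_prob`, `update`) is a hypothetical stand-in, and the constants are assumptions, not the recipe of any specific paper.

```python
import math

def pseudo_count_bonus(density_model, obs, scale=0.01):
    """Exploration bonus from an online density model (e.g. a PixelCNN).

    Assumed interface of `density_model`:
      log_prob(obs) -> float   # log-likelihood of an observation
      update(obs)              # one training step on that observation
    """
    log_p_before = density_model.log_prob(obs)   # familiarity before training on obs
    density_model.update(obs)                    # train on the new frame
    log_p_after = density_model.log_prob(obs)

    # Prediction gain: how much the model improved on this observation.
    gain = max(log_p_after - log_p_before, 1e-8)
    # Approximate pseudo-count: large when the gain is tiny (familiar frames),
    # small when the gain is big (novel frames).
    pseudo_count = 1.0 / math.expm1(gain)
    return scale / math.sqrt(pseudo_count + 1e-8)
```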
YqvhDPd1UEw
Another way to do this: you can train a variational autoencoder, which gives you an embedding, map these embeddings into a hash table, and just do counting in that hash table. That's something we did a couple of years ago, and it helps a lot in terms of giving the right kind of exploration incentives to explore difficult-to-explore environments more efficiently.
6,068
6,089
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6068s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
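A sketch of the embedding-plus-hash-table counting mentioned here: hash a learned latent (for instance a VAE encoding) to a short binary code and count visits per code. The SimHash-style random projection, the number of bits, and the bonus constant are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

class HashCounter:
    """Count visits to discretized embeddings (e.g. VAE latents)."""

    def __init__(self, latent_dim, n_bits=16, beta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(size=(n_bits, latent_dim))  # random projection (SimHash-style)
        self.beta = beta
        self.counts = defaultdict(int)

    def bonus(self, z):
        # Hash the embedding to a binary code, count it, return a shrinking bonus.
        code = tuple((self.A @ z > 0).astype(np.int8))
        self.counts[code] += 1
        return self.beta / np.sqrt(self.counts[code])

# Usage with an encoder mapping observations to latent vectors z:
counter = HashCounter(latent_dim=32)
z = np.zeros(32)                 # stand-in for encoder(observation)
r_bonus = counter.bonus(z)
```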
YqvhDPd1UEw
Another thing you can do, which maybe gets more at the core of what you really want but is a little more complicated to set up, is variational information maximizing exploration (VIME). The idea here is the following: when you are in a new situation, what makes it interesting that it is new? Well, one way to measure this is to say, hey, if I'm in a situation where, after taking an action, I cannot predict what's happening next very well, then I'm not familiar with this, so I should give a bonus for, you know, having
6,089
6,120
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6089s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
gone into unfamiliar territory. That's called curiosity; we'll cover that in a moment, and it's been pretty successful. But actually it's also a little deficient, because if you just have something that's stochastic in the world, let's say you roll some dice, well, it's going to be unpredictable. So to make this work better, one thing you can do is say, hey, I
6,120
6,142
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6120s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
don't want to be getting exploration bonuses when something is inherently unpredictable; I only want to get them when something is unpredictable because I have not learned enough yet about it. And so the way we did this in VIME: you set up a dynamics model that you're learning, and as you learn the dynamics model, as new data comes in, you actually
6,142
6,167
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6142s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
maintain a posterior over dynamics models, a distribution over possible dynamics models. As new data comes in, you update that posterior. If that updated posterior is very different from the previous posterior, it means that you got interesting information; it allowed you to learn something about how the world works. So that should give you an exploration bonus, because you did something
6,167
6,188
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6167s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
interesting to learn about the world. But with throwing the dice, once the dice have been rolled many, many times and are rolled again, you couldn't predict the outcome because that's just randomness you cannot predict; your model for the dice will already say it's uniform over all possible outcomes. That model will not see much update, if any, and you will not be given an exploration bonus.
6,188
6,208
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6188s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
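A sketch of the VIME-style information-gain bonus described above: reward the agent in proportion to how much the (variational) posterior over dynamics-model weights moves after seeing the latest transition. Diagonal-Gaussian posteriors and the scale `eta` are assumptions for illustration.

```python
import numpy as np

def gaussian_kl(mu_new, log_std_new, mu_old, log_std_old):
    """KL( N(mu_new, std_new^2) || N(mu_old, std_old^2) ) for diagonal Gaussians,
    summed over all dynamics-model weights."""
    var_new = np.exp(2.0 * log_std_new)
    var_old = np.exp(2.0 * log_std_old)
    return float(np.sum(
        log_std_old - log_std_new
        + (var_new + (mu_new - mu_old) ** 2) / (2.0 * var_old)
        - 0.5
    ))

def information_gain_bonus(posterior_before, posterior_after, eta=0.1):
    """Bonus = how much the posterior over dynamics models changed.
    Each posterior is a (mu, log_std) pair of arrays over model weights."""
    mu_old, log_std_old = posterior_before
    mu_new, log_std_new = posterior_after
    return eta * gaussian_kl(mu_new, log_std_new, mu_old, log_std_old)

# A die that has been rolled many times barely moves the posterior, so the
# bonus is near zero; a genuinely new dynamic moves it a lot.
```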
YqvhDPd1UEw
So that's the idea in VIME: you only get exploration bonuses when the new data updates your posterior over how the world works. And again, the results show that this helps a lot in terms of exploring more efficiently. Under the hood these are really self-supervised-type ideas: you fit densities, ensembles, or representations with dynamics models, and you are given exploration bonuses based on
6,208
6,231
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6208s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
that. The simpler version of that is called curiosity, where you more directly look at, you know, whether something was predictable or not. For a mostly deterministic environment that's often actually enough, and it's had a lot of success in many of these game environments.
6,231
6,255
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6231s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
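For the simpler curiosity variant just mentioned, the bonus is typically the prediction error of a learned forward model. A minimal sketch, where `forward_model` and the feature encodings `phi_*` are hypothetical stand-ins and `eta` is an illustrative scale:

```python
import numpy as np

def curiosity_bonus(forward_model, phi_s, action, phi_next, eta=0.5):
    """Curiosity-style bonus: squared error of a learned forward model
    predicting the next feature encoding from the current one and the action."""
    phi_pred = forward_model(phi_s, action)
    return eta * float(np.sum((phi_pred - phi_next) ** 2))

# Usage with a trivial stand-in forward model (identity prediction):
bonus = curiosity_bonus(lambda phi, a: phi, np.ones(8), 0, np.zeros(8))
```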
YqvhDPd1UEw
Another thing you can do with self-supervised representation learning for exploration is to think about it in a more deliberate way. You could say, hey, it's not just about getting bonuses after seeing something new; it should also be about thinking about what I should even do before I experience it. Can I set a goal for myself, and what makes for a good goal when I'm trying to explore? In Goal GAN, the idea is the following. Let's look at iteration 5 down here: you have
6,255
6,280
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6255s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
a set of points that you've reached in this maze. You start at the bottom left, you did a bunch of runs to reach a set of points, and what you notice is that when you set goals in the green area you're able to consistently achieve your goals, whereas in the blue area it's high variance, and in the red area you usually don't achieve your goals. You can then conclude and say, oh, actually, in the
6,280
6,308
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6280s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
future I should set my goals in the blue/red area, because that's the frontier of what I know how to do. And so how are you going to do that? You're going to learn some kind of generative model to generate goals in that regime. In Goal GAN, a generative adversarial network is trained to generate goals at the frontier of what you're capable of, and this allows you to explore environments much more efficiently.
6,308
6,329
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6308s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
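A sketch of the goal-labeling step implied here: keep goals of intermediate difficulty (achieved sometimes but not always) and use them as the training set for a goal generator, which in Goal GAN is a GAN. The thresholds are illustrative assumptions.

```python
def goals_at_the_frontier(success_rates, low=0.1, high=0.9):
    """Select goals whose empirical success rate lies at the frontier:
    not trivially easy (>= high) and not currently hopeless (<= low).
    `success_rates` maps goal -> fraction of recent attempts that succeeded."""
    return [g for g, rate in success_rates.items() if low < rate < high]

# Usage: estimate success rates from recent rollouts, train the goal
# generator on the selected goals, then sample new training goals from it.
frontier = goals_at_the_frontier({(1, 2): 0.95, (3, 4): 0.5, (8, 9): 0.0})
# frontier == [(3, 4)]
```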
YqvhDPd1UEw
It keeps setting goals to go to places at the frontier of your capability, so you continue expanding your skills. You can also do this with a variational autoencoder; that's done in RIG, where the variational autoencoder is generating new goals. Those goals are initially not at this frontier in the same way; they're essentially goals that are similar to things you've seen
6,329
6,351
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6329s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
in the past, but the hope is that you're at the frontier frequently enough that you learn relatively quickly. You can also reweight those goals based on, you know, how much they're at the frontier; that's done in something called Skew-Fit, which is an extension of this paper that essentially changes the sampling in latent space to get closer to sampling from the frontier rather than just from what you've seen in the past.
6,351
6,374
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6351s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
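A sketch of the Skew-Fit-style reweighting just described: sample previously seen states as goals with weights proportional to p(x)^alpha for alpha < 0, so rarely seen states are proposed more often. The alpha value is an illustrative assumption.

```python
import numpy as np

def skewed_sampling_weights(densities, alpha=-1.0):
    """Weights proportional to p(x)^alpha with alpha < 0, so low-density
    (rarely seen) states are oversampled when proposing goals.
    `densities` are the generative model's density estimates for candidate states."""
    densities = np.asarray(densities, dtype=np.float64)
    w = densities ** alpha
    return w / w.sum()

# Usage: sample goals from the replay buffer with these weights, and keep
# training the generative model (e.g. the VAE) on the reweighted samples.
weights = skewed_sampling_weights([0.5, 0.1, 0.01])
# the rarest state (density 0.01) gets the largest weight
```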
YqvhDPd1UEw
So for RIG itself, here are some examples of this in action: you see robots learning to reach and to push. That's the kind of thing that in general is pretty hard to explore for, because normally the robot would just be waving around in the air and so forth. Here you can, you know, set goals that relate to moving objects around, and then it would be
6,374
6,400
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6374s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
inclined to move towards objects and move them. Now, another thing you can do in terms of exploration, leveraging generative models or unsupervised models, is skill transfer, and this should remind you of how we initially motivated unsupervised learning, or some of the motivation, which was that transfer learning can be very effective with deep neural nets. Wouldn't it be nice if we could be
6,400
6,427
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6400s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
transferring from a task that does not require labels to a task that requires labels, that is, transfer from an unsupervised learning task and then fine-tune on a supervised task? Well, similar ideas can be applied in reinforcement learning. So what's going on here? So far we mostly talked about going from observations to state, that kind of representation learning, but there's another
6,427
6,448
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6427s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
type of representation that matters for reinforcement learning, around objectives, behaviors, tasks. The question here is how do you do unsupervised learning for these things? What's the contrast? What's done now to explore: you maybe put some noise on your actions, and that way you have some random behavior and you might explore something interesting, but it's going to take a long time.
6,448
6,469
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6448s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
Sometimes it's shown to be a bit more effective if you explore by putting randomness on the weights in your neural network, so you more consistently deviate in one way or the other. A good example of why the thing on the right works better than the thing on the left: let's say you're supposed to explore a hallway. The one on the left, with a random walk left-right, will take very long to get to the end of
6,469
6,490
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6469s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
the hallway and explore both ends of the hallway. The one on the right would induce a bias to walk to the right, and maybe with another random perturbation induce a bias to go to the left, and maybe after a couple of rollouts it would have gone to both ends, and that's it. But it's still really counting on randomness; it's not really using any knowledge or experience from the past
6,490
6,512
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6490s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
to explore something new more quickly. And that's the question we're after: can we use experience from the past to now learn to explore something new more quickly? For example, consider the environment shown on the left here, where when you're in the environment you don't get to see the red dots; the red dots are just for us. Imagine the agent cannot see the red dots, and
6,512
6,533
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6512s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
any time it gets dropped in the environment, the reward is at a spot on that semicircle, but it doesn't know which spot, and so it has to go find that reward. After a while it should realize: I should go to the semicircle and see which point on the semicircle has the reward. That will be more efficient exploration than to just randomly walk around in this 2D world and then
6,533
6,554
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6533s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
randomly maybe run into the reward on that semicircle. Or, shown on the right, imagine you're supposed to push a block onto the red flat target in the back, but you don't know which block you're supposed to push. Well, you'd have a very good strategy: I push the purple one, hmm, no reward; okay, I'm going to try the green one, no reward; then I try to push the
6,554
6,575
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6554s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
next one, no reward; then the yellow one, I get reward, so I push the yellow one again and keep collecting reward. That's what we would do as humans. But how do we get that kind of exploration behavior, which is much more targeted than random motions, into an agent, and how does it learn to do that? Well, what we really want then is somehow a representation of behaviors. For example, pushing objects makes for an
6,575
6,600
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6575s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
interesting behavior that often relates to reward, whereas random motion where the gripper does not interact with objects will rarely be interesting and rarely lead to rewards. That's the kind of thing we want to capture in our representation of behaviors. Here is one way we can do that. This one is supervised, but it just serves to set some context; don't
6,600
6,621
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6600s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
worry about it being supervised for now, because we will go from supervised plus transfer to unsupervised plus transfer very soon. Imagine you have many, many tasks. For each task you have a discrete index at the top, which is turned into an embedding; the embedding is fed to the policy, the current state or observation is fed into the policy, and the policy takes an action. If you train this policy on many, many tasks at the
6,621
6,646
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6621s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
same time, then it'll learn, depending on what task is represented with this index, to take a good action for that task. But now the additional thing done here is that this latent code z is forced to come from a normal distribution. What does that do? The normal distribution means that in the future, when we don't know what the task is, nobody tells us what the task is, there might be
6,646
6,672
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6646s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
a new task, we can actually sample from this distribution to get exploratory behavior. So you say, oh, let's sample a z, and the policy will still do something very directed, something that relates to maybe interacting with objects, as opposed to just some random jittering. To make this even stronger, there's a mutual information objective between each
6,672
6,694
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6672s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
trajectory and the latent variable z here, which turns out to actually help. So you learn on a bunch of tasks this way, and then you have a new task, and you explore by generating latent codes z; at some point you'll find a latent code that actually leads to good behavior and you'll start collecting higher reward.
6,694
6,717
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6694s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
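A minimal PyTorch-style sketch of the setup described above: a per-task embedding produces a latent z that is pushed toward N(0, I), and the policy conditions on the observation and z. Network sizes are illustrative, and the mutual-information term between trajectories and z (usually estimated with a learned classifier q(z | trajectory) and added as an extra reward) is omitted here.

```python
import torch
import torch.nn as nn

class TaskConditionedPolicy(nn.Module):
    """Policy conditioned on the observation and a task latent z.
    During training z comes from a per-task embedding regularized toward
    N(0, I); at test time z can simply be sampled from N(0, I) to get
    directed exploratory behavior."""

    def __init__(self, obs_dim=16, z_dim=4, act_dim=4, n_tasks=10):
        super().__init__()
        self.task_mu = nn.Embedding(n_tasks, z_dim)        # per-task mean
        self.task_log_std = nn.Embedding(n_tasks, z_dim)   # per-task log std
        self.net = nn.Sequential(
            nn.Linear(obs_dim + z_dim, 64), nn.Tanh(), nn.Linear(64, act_dim)
        )

    def sample_z(self, task_id):
        mu = self.task_mu(task_id)
        std = self.task_log_std(task_id).exp()
        z = mu + std * torch.randn_like(std)               # reparameterized sample
        # KL(q(z | task) || N(0, I)) keeps z usable as a test-time sampling space.
        kl = 0.5 * (mu ** 2 + std ** 2 - 2.0 * std.log() - 1.0).sum()
        return z, kl

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))       # action preferences

# Usage (shapes illustrative):
policy = TaskConditionedPolicy()
z, kl = policy.sample_z(torch.tensor([3]))
action = policy(torch.zeros(1, 16), z)
```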
YqvhDPd1UEw
A related approach is a little less supervised and set up a little differently: we can say, well, let's not even have discrete task indexing; let's just have a latent code going in, and learn a policy that pays attention to the latent code while collecting reward. Why would that happen? Well, there will still be many tasks under the hood, but we're not telling it the indices of the tasks; we're just letting it experience reward. And so what it will learn to do is
6,717
6,738
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6717s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
sample a z; if that z leads to successful behavior on the task at hand, it'll reinforce that z, and if it doesn't, it'll have to sample a different z, and so forth. So here are some task families: every dot in the semicircle corresponds to a different task. We hope here that it would learn to associate different z's with different spots on the semicircle, such that when
6,738
6,764
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6738s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
it later explores by sampling different z's, it would go to different spots on the semicircle, and the one that's successful it would be able to reinforce. Same for the wheeled robot here, and here are the block-pushing tasks. Looking at the learning curves, we see that indeed, by getting to pre-train on this notion of indexing into tasks, or a distribution over
6,764
6,788
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6764s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
tasks, and then being able to explore by sampling possible tasks, it's able, in blue here, to learn very quickly to solve new tasks compared to other approaches. The generated behaviors we see are also very exploratory: the exploratory behaviors indeed correspond to visiting the semicircle, and the same goes for the wheeled robot in the middle here and the walking robot; on the
6,788
6,811
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6788s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
right is the block pushing. What would it look like if you didn't do representation learning for exploration behaviors? Instead of having this nice pushing behavior, you would just have some jittery behavior of the robot gripper that wouldn't really interact with the blocks or get any block to the target area. After it's done those exploratory behaviors, of course, the next thing that
6,811
6,833
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6811s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
happens is a policy gradient update that will update the policy to essentially sample z from a more focused distribution, one that focuses on the part of the latent space that corresponds to the part of the semicircle where the target is, or the block that needs to be pushed. Okay, now what we did here was transfer from having a set of tasks to solving a new task relatively quickly by having good
6,833
6,860
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6833s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
exploration behavior, but we still needed to define a set of tasks and then transfer from that. The question is: can we go completely unsupervised? We just have the robot roam on its own to learn a range of behaviors, and then at test time explore in a meaningful way to zone in on a specific skill quickly. Let's take a look. There are actually multiple lines of work that
6,860
6,885
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6860s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
effectively do the same thing but try different objectives, with the same high-level idea. The high-level idea: we're still going to have a policy pi that conditions its actions on the observation, the current state, and a latent code, which might or might not be discrete; it could come from a latent code drawn from a normal distribution, so we can resample it in the future. This
6,885
6,909
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6885s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
will result in trajectories, and the way we're going to pre-train this is by saying that there needs to be high mutual information between the trajectory that results from this policy and the latent code it is acting upon. So at the beginning of a rollout you sample z, you keep z fixed for the entire rollout to get a trajectory, and you want the trajectory to
6,909
6,930
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6909s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
say something about whatever z was that you used for this trajectory. What does it mean to have high mutual information between the trajectory and z? It can be measured in many ways, and that's what these four different papers do. The first paper relates an actually discrete variable to the trajectory, the second paper looks at z and the final state, the third paper looks at z and every
6,930
6,954
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6930s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
intermediate state independently, summed together, and then the fourth one looks at z and the full trajectory as a whole. They all get fairly similar results, actually. So here's the third paper, the Eysenbach et al. paper, showing a range of behaviors that comes out of this when you apply it to the cheetah robot. For different z's you get different behaviors.
6,954
6,981
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6954s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
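A sketch of the kind of mutual-information pseudo-reward these papers use, here in the style of the Eysenbach et al. objective: reward states from which a learned discriminator can recover the skill z. The `discriminator_log_prob` callable is a hypothetical stand-in for that discriminator, which is trained jointly to predict z from visited states.

```python
import math

def skill_discovery_reward(discriminator_log_prob, state, z, n_skills):
    """Pseudo-reward r = log q(z | s) - log p(z), with p(z) uniform over skills.
    High reward means the current state is informative about which skill is active."""
    log_q = discriminator_log_prob(state, z)   # learned skill discriminator
    log_p = -math.log(n_skills)                # uniform prior over discrete skills
    return log_q - log_p

# Usage: sample z ~ p(z) at the start of each rollout, keep it fixed, add this
# reward at every step, and train the policy and discriminator together.
```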
YqvhDPd1UEw
Here we see how the mutual information objective pulls different z's apart: the trajectories look very different to us, and indeed a different z results in a very different trajectory. And of course the nice thing is that it learns to check out all these behaviors for different z's. Now at test time, if you need to do something else, say you need to run at a certain speed, either there will be z's that already
6,981
6,999
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6981s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
correspond to running forward, and then you can fine-tune that z directly, or you can learn a policy over z to figure out the z that will result in the behavior that you want. Here are some videos from one of these papers, looking at and generating all kinds of different trajectories corresponding to all kinds of different latent variables z. So we see that for the
6,999
7,029
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6999s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
same latent variable z, the same kind of trajectory gets output. And here are some more videos; well, some of these cannot be played for some reason, but here's the cheetah robot with the Achiam et al. approach. This is not to show that, you know, the Achiam et al. approach might be better than the Eysenbach et al. one; I think it just shows that they are actually very similar, so
7,029
7,062
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7029s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
the difference in those four objectives might not be too important, actually. One limitation of this approach, and this observation comes from the Achiam et al. paper, is that when you have a humanoid, which is very high dimensional compared to the cheetah, which essentially just kind of stands up or runs or lands on its head, the humanoid is high dimensional, and when you try to find high-mutual-information
7,062
7,087
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7062s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
behaviors between z and trajectories, it can take a long time, or it can end up with a lot of mutual information while all trajectories are actually on the ground, because there are a lot of different things you can do on the ground, and it's not something where you necessarily automatically get it to run around. Running is very hard to learn, whereas doing all kinds of
7,087
7,107
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7087s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
different tricks on the ground is much, much easier. Okay, so let me summarize what we covered today. We covered a lot of ground much more quickly than in most of our other lectures, because this lecture is more of a sampling of ideas of how representation learning and reinforcement learning have come together in recent years, rather than a very deep dive into any one
7,107
7,133
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7107s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
of them as we've done in previous lectures. The big high-level ideas are: when we train a neural network in deep reinforcement learning, one option is adding auxiliary losses, and if those losses are related to your task, well, it might help you to learn more quickly than if you did not have those auxiliary losses, and of course the most canonical paper there was the UNREAL paper. Under
7,133
7,159
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7133s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
the hood, a lot of this comes down to state representation: if we have high-dimensional image inputs, well, hopefully under the hood in this task there is often a low-dimensional state, and so there are many things you can do to try to extract a latent representation that is closer to state than the raw pixels are. Once you're working with a latent representation closer to state, or maybe
7,159
7,182
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7159s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
even matched to a state, learning might go along more quickly, and in fact we've seen that with the CURL approach it's possible to learn almost as quickly from pixels as from state. It's not just about turning a raw sensor observation into a state; there are other things you can do with representation learning in RL. You can have it help with exploration; you can have it help with exploration by
7,182
7,208
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7182s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
helping you generate exploration bonuses, essentially measuring which things are new. Canonically, in tabular environments, this is measured by, you know, visitation counts, but in high-dimensional spaces you'll always visit new states, so you need to measure how different a new state is from past states, which you can do with generative models and their likelihoods. Another thing you can do in terms of
7,208
7,232
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7208s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
exploration is you can think about generative models for behaviors that are interesting, such that exploration becomes a matter of behavior generation rather than random actions all the time, or you can learn generative models for goals that might be interesting to set, and then set goals with your generative model for a reinforcement learning agent to try to achieve, to expand its frontier of
7,232
7,259
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7232s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
capabilities. And another thing you can do is unsupervised skill discovery. In unsupervised skill discovery we essentially have no reward at all in a pre-training phase, but the hope is that the agent nevertheless starts exhibiting interesting behaviors that are reusable, that lead to reusable skills for future learning against rewards that we actually care about.
7,259
7,286
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7259s
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
https://i.ytimg.com/vi/Y…axresdefault.jpg
HGYYEUSm-0Q
all right so welcome to the third tutorial session this one's on generative adversarial networks it is actually my great pleasure to introduce Dr. Ian Goodfellow he did a master's and bachelor's at Stanford University finishing there in 2009 at which point he moved to the University of Montreal where he did a PhD with Yoshua Bengio and I and after that he
0
26
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=0s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
moved to the Google Brain group that same year and after that he moved just recently earlier this year to OpenAI where he currently is so I think that Ian is quite simply one of the most creative and influential researchers in our community today and I think that we have a room full of people ready to hear about a topic GANs or generative adversarial networks that he invented
26
54
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=26s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
two years ago in a bar in Montreal I might add is testament to that so yeah well so without further ado I give you Ian Goodfellow yeah I forgot to mention he's requested that we have questions throughout so if you actually have a question just go to the mic and he'll maybe stop and try to answer your question I'll try not to do that again thank you very much for the
54
89
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=54s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
introduction Aaron thank you everybody for coming today let me tell you a little bit about the format here despite the size of the event I'd still like it to be a little bit interactive and let you feel like you can make the tutorial what you want it to be for yourself I believe a lot that the tutorial should be a chance for you to get some hands-on experience and to feel like you're
89
111
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=89s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
building your own mastery of this subject so I've included three exercises that will appear throughout the presentation every time there's an exercise you can choose whether you want to work on it or not I'll give a little five-minute break since I know it's hard to pay attention to a presentation for two hours straight and if you'd like to work through the exercise you can work
111
130
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=111s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
through it otherwise just take a break and chat with your neighbors the basic topic of today's tutorial is really generative modeling in general it's impossible to describe generative adversarial networks without contrasting them with some of the other approaches and describing some of the overall goals in this area that we're working on the basic idea of generative modeling is to
130
153
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=130s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
take a collection of training examples and form some representation of a probability distribution that explains where those training examples came from there are two basic things that you can do with a generative model one is you can take a collection of points and infer a density function that describes the probability distribution that generated them I show that in the upper
153
175
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=153s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
row of this slide where I have taken several points on a one-dimensional number line and fitted a Gaussian density to them that's what we usually think of when we describe generative modeling but there's another way that you can build a generative model which is to take a machine that observes many samples from a distribution and then is able to create more samples from that
175
196
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=175s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
same distribution generative adversarial networks primarily lie in the second category where what we want to do is simply generate more samples rather than find the density function as a brief outline of the presentation today I'm first going to describe why we should study generative modeling at all it might seem a little bit silly to just make more images when we already have millions of
196
220
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=196s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
images lying around next I'll describe how generative models work in general and situate generative adversarial networks among the family of generative models explaining exactly what is different about them and other approaches then I'll describe in detail how generative adversarial networks work and I'll move on to special tips and tricks that practitioners have developed
220
241
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=220s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
that are less theoretically motivated but seem to work well in practice then I'll describe some research frontiers and I'll conclude by describing the latest state of the art in generative modeling which combines generative adversarial networks with other methods so the first section of this presentation is about why we should study generative models at all most of
241
262
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=241s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
the time in machine learning we use models that take an input and map that input to a single output that's really great for things like looking at an image and saying what kind of object is in that image or looking at a sentence and saying whether that sentence is positive or negative why exactly would you want to learn a distribution over different training examples
262
283
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=262s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
well first off high dimensional probability distributions are an important object in many branches of engineering and applied math and this exercises our ability to manipulate them but more concretely there are several ways that we could imagine using generative models once we have perfected them one is that we could use the generative model to simulate possible
283
302
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=283s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
futures for reinforcement learning there are at least two different ways that you could use this one is you could train your agent in a simulated environment that's built entirely by the generative model rather than needing to build an environment by hand the advantage of using this simulated environment over the real world is that it could be more easily parallelized across many machines and
302
322
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=302s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
the mistakes in this environment are not as costly as if you actually make a mistake in the physical world and do real harm similarly an agent that is able to imagine future states of the world using a generative model can plan for the future by simulating many different ideas of plans that it could execute and testing which of them works out as best as possible there's a
322
346
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=322s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
paper on that subject where Chelsea Finn is the first author where we evaluated generative models on the robot pushing data set to start working toward this goal of using generative models to plan actions another major use of generative models is that they are able to handle missing data much more effectively than the standard input to output mappings of machine learning models that we usually
346
370
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=346s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
use generative models are able to fill in missing inputs and they're also able to learn when some of the labels in the data set are missing semi-supervised learning is a particularly useful application of generative modeling where we may have very few labeled inputs but by leveraging many more unlabeled examples we were able to obtain very good error rates on the test set many
370
396
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=370s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
other tasks also intrinsically require that we use multimodal outputs rather than mapping one input to a single output there are many possible outputs and the model needs to capture all of them and finally there are several tasks that just plain require realistic generation of images or audio waveforms as the actual specification of the task itself and these clearly require
396
420
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=396s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
generative modeling intrinsically one example of a task that requires multimodal outputs is predicting the next frame in a video because there are many different things that can happen in the next time step there are many different frames that can appear in a sequence after the current image because there are so many different things that can happen traditional approaches for predicting
420
445
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=420s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
the next video frame often become very blurry when they try to represent the distribution over the next frame using a single image many different possible next frame images are averaged together and result in a blurry mess I'm showing here some images from a paper by William Lotter and his collaborators that was published earlier this year on the left I show you the ground truth image the
445
468
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=445s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
image that should be predicted next in a video of a 3d rendering of a rotated head in the middle I show you the image that is predicted when we take a traditional model that is trained using mean squared error because this mean squared error model is predicting many different possible futures and then averaging them together to hedge its bets we end up with a blurry image where
468
488
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=468s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
the eyes are not particularly crisply defined small variations in the amount that the head rotates can place the eyes in very different positions and when we average all those different positions together we get a blurry image of the eyes likewise the ears on this person's head have more or less disappeared on the right I show you what happens when we bring in a more generative modeling
488
512
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=488s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
type approach and in particular when we use an adversarial loss to train the model in the image on the right the model has successfully predicted the presence of the ear and has successfully drawn a crisp image of the eyes with dark pixels in that area and sharp edges on the features of the eyes another task that intrinsically requires being able to generate good data is super
512
537
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=512s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
resolution of images in this example we begin with the original image on the left and then not pictured we downsample that image to about half its original resolution we then show several different ways of reconstructing the high resolution version of the image if we just use the bicubic interpolation method just a hand designed mathematical formula for what the pixels ought to be
537
563
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=537s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
based on sampling Theory we get a relatively blurry image that's shown second from the left the remaining two images show different ways of using machine learning to actually learn to create high resolution images that look like the data distribution so here the model is actually able to use its knowledge of what high resolution images look like to provide details that have
563
586
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=563s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
been lost in the down sampling process the new high resolution image may not be perfectly accurate and may not perfectly agree with reality but it at least looks like something that is plausible and is visually pleasing there are many different applications that involve interaction between a human being and an image generation process one of these is a collaboration between
586
612
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=586s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
Berkeley and Adobe called iGAN where the I stands for interactive the basic idea of iGAN is that it assists a human to create artwork the human artist draws a few squiggly green lines and then a generative model is used to search over the space of possible images that resemble what the human has begun to draw even though the human doesn't have much artistic ability they can draw a
612
637
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=612s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
simple black triangle and it will be turned into a photo-quality mountain this is such a popular area that there have actually been two papers on this subject that came out just in the last few months introspective adversarial networks also offer this ability to provide interactive photo editing and have demonstrated their results mostly in the context of editing faces so the same
637
663
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=637s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
idea still applies that a human can begin editing a photo and the generative model will automatically update the photo to keep it appearing realistic even though the human is making very poorly controlled mouse controlled movements that are not nearly as fine as would be needed to make nice photorealistic details there are also just a long tail of different applications that require generating
663
693
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=663s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
really good images a recent paper called image to image translation shows how conditional generative adversarial networks can be trained to implement many of these multimodal output distributions where an input can be mapped to many different possible outputs one example is taking sketches and turning them into photos in this case it's very easy to train the model
693
714
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=693s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
because photos can be converted to sketches just by using an edge extractor and that provides a very large training set for the mapping from sketch to image essentially in this case the generative model learns to invert the edge detection process even though many possible inputs correspond to the same output and vice versa the same kind of model can also
714
742
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=714s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
convert aerial photographs into maps and can take descriptions of scenes in terms of which object category should appear at each pixel and turn them into photorealistic images so these are all several different reasons that we might want to study generative models ranging from the different kinds of mathematical abilities they force us to develop to the many different applications that we
742
763
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=742s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
can carry out once we have these kinds of models so next we might wonder how exactly do generative models work and in particular how do generative adversarial networks compare in terms of the way that they work to other models it's easiest to compare many different models if I describe all of them as performing maximum likelihood there are in fact other approaches to generative modeling
763
787
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=763s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
besides maximum likelihood but for the purpose of making a nice crisp comparison of several different models I'm going to pretend that they all do maximum likelihood for the moment and the basic idea of maximum likelihood is that we write down a density function that the model describes that I represent with P model of X X is a vector describing the input and P model
787
808
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=787s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
of X is a distribution controlled by parameters theta that describes exactly where the data concentrates and where it is spread more thinly maximum likelihood consists in measuring the log probability that this density function assigns to all the training data points and adjusting the parameters theta to increase that probability
808
833
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=808s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
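Written out as a standard formulation (not a quote from the slides), the maximum-likelihood objective described here is:

```latex
\theta^{*}
  = \arg\max_{\theta}\; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log p_{\text{model}}(x;\theta)\big]
  \approx \arg\max_{\theta}\; \frac{1}{m}\sum_{i=1}^{m} \log p_{\text{model}}\big(x^{(i)};\theta\big)
```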
HGYYEUSm-0Q
the way that different models go about accomplishing this is what makes the models different from each other so among all the different models that can be described as implementing maximum likelihood we can draw them in a family tree where the first place where this tree forks is we ask whether the model represents the data density with an explicit function or not so when we have an explicit density function it looks
833
854
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=833s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
exactly like what I showed on the previous slide we actually write down a function P model and we're able to evaluate log P model and increase it on the training data within the family of models that have an explicit density we may then ask whether that density function is actually tractable or not when we want to model very complicated distributions like the distribution of natural
854
877
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=854s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
images or the distribution of speech waveforms it can be challenging to design a parametric function that is able to capture the distribution efficiently and this means that many of the distributions we have studied are not actually tractable however with careful design it has been possible to design a few different density functions that actually are tractable that's the family of models
877
900
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=877s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
like PixelRNN, PixelCNN, and other fully visible belief networks like NADE and MADE the other major family of distributions that have a tractable density is the nonlinear ICA family this family of models is based on taking a simple distribution like a Gaussian distribution and then using a nonlinear transformation of samples from that distribution to warp the samples into
900
926
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=900s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
the space that we care about if we're able to measure the determinant of the Jacobian of that transformation we can determine the density in the new space that results from that warping
926
954
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=926s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
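The density computation referred to here is the standard change-of-variables formula: if samples are produced as x = g(z) with z drawn from a simple base distribution p_z and g invertible, then

```latex
p_{x}(x) \;=\; p_{z}\big(g^{-1}(x)\big)\,
\left|\det\!\left(\frac{\partial\, g^{-1}(x)}{\partial x}\right)\right|
```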
HGYYEUSm-0Q
within the family of models that use an explicit density the other set of approaches is those that do not actually have a tractable density function there are two basic approaches within this family one of these is the model family that approximates an intractable density function by placing a lower bound on the log-likelihood and then maximizing that lower bound another approach is to use a Markov chain to make an estimate of the density function or of its gradient both of these families incur some disadvantages from the approximations that they use finally we may give up altogether on
954
982
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=954s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg
HGYYEUSm-0Q
having an explicit density function and instead we represent the density function implicitly this is the rightmost branch of the tree one of the main ways that you can implicitly represent a probability distribution is to design a procedure that can draw samples from that probability distribution even if we don't necessarily know the density function if we draw samples using a Markov
982
1,005
https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=982s
Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
https://i.ytimg.com/vi/H…axresdefault.jpg