Lecture 11 Probability Review, Bayes Filters, Gaussians -- CS287-FA19 Advanced Robotics
https://www.youtube.com/watch?v=xamzdNUN1o0

[36:25] ...we compute this for every possible value of x. Because y will be measured in this scenario, and z will be measured, y and z will be numbers available to us; x will not be known and can take on many values, so we compute this for every value of x. After we've computed it for every value of x, we have a table with one entry per value of x, and those entries, if we ignore the denominator, will not sum to one. But we know it's supposed to be a distribution over x given y and z, and we also know that the denominator we ignored does not depend on x; that's the key here, there is no x in it. So what we're computing is a table of entries that depend on x, every x with its own entry, and all of them are missing the same rescaling, which we've ignored. We can find the rescaling by remembering that all the entries in the table have to sum to one, so summing all the entries in the table actually computes the denominator. Mathematically we know that's a valid way to compute it; in practice we often don't pay attention to what it really means. We just say: we want this, this term here does not depend on x, so we can ignore it, as long as at the end we rescale all entries by the same constant. You cannot rescale the entry for x1 by one factor and some other entry by a different factor; you need a fixed rescaling across all the entries you computed.

[37:33] The things written on the board we also have on the slides, where we often pair the discrete version, which has a summation, with the continuous version, which just replaces the sum sign with an integral sign: you put a "d"-something after the expression instead of putting the variable under the sum, and you get the same thing for continuous spaces.
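As a concrete illustration of the normalization trick, here is a minimal Python sketch; the state values and all the probabilities are made up purely for illustration:

```python
# Sketch of the normalization trick for p(x | y, z), with made-up numbers.
# The numerator p(y, z | x) p(x) is computed for every value of x; the
# denominator p(y, z) has no x in it, so it is recovered at the end by
# summing the table and rescaling every entry by the same constant.
p_x = {"x1": 0.5, "x2": 0.3, "x3": 0.2}              # prior over x
p_yz_given_x = {"x1": 0.07, "x2": 0.12, "x3": 0.20}  # p(y, z | x) at the measured y, z

unnormalized = {x: p_yz_given_x[x] * p_x[x] for x in p_x}  # one table entry per x
normalizer = sum(unnormalized.values())                    # this equals p(y, z)
posterior = {x: v / normalizer for x, v in unnormalized.items()}

assert abs(sum(posterior.values()) - 1.0) < 1e-9           # now a proper distribution
```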
[38:47] For Bayes rule we didn't name the terms on the board, but commonly p(x) is called the prior, because we have it ahead of time; p(y|x) is called the likelihood, the likelihood of a measurement y given that the state is x, which is often what it ends up being, hence the name; and p(y) is called the evidence: y is what you measure, so it's the probability of getting that evidence. As we just discussed, when you do these calculations, algorithmically you effectively compute the top part of the fraction for all x's, and after you've done that you know a rescaling is needed, so you compute the rescaling and multiply by it. In the Probabilistic Robotics book the notation is a little funny in that sometimes η (eta) is the thing you multiply by and sometimes, I think, the thing you divide by; I guess just be okay with that. It has different meanings in different contexts, but ultimately it's just a rescaling, and some people like to divide by the sum while others like to multiply by one over that sum, which is the same thing.
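In symbols, the Bayes rule being discussed, with the terms labeled (η is the book's notation for the normalizer):

$$
\underbrace{p(x \mid y)}_{\text{posterior}}
= \frac{\overbrace{p(y \mid x)}^{\text{likelihood}}\;\overbrace{p(x)}^{\text{prior}}}{\underbrace{p(y)}_{\text{evidence}}}
= \eta\, p(y \mid x)\, p(x),
\qquad
\eta = \frac{1}{p(y)} = \frac{1}{\sum_x p(y \mid x)\, p(x)} .
$$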
[39:38] We also saw the law of total probability with integrals, Bayes rule with conditioning, and conditional independence: what it means, and that it's an assumption we often make that will hopefully be approximately true, or maybe even exactly true, so we can use it to simplify our math.

[40:02] Here's a simple example: a robot is trying to tell whether a door is open or not. What's the probability the door is open, given some measurement from the robot's sensor? This illustrates the reasoning behind why we so often use Bayes rule. The probability of the door being open given some measurement is a diagnostic distribution: we're trying to diagnose the state given a measurement. But those distributions often cannot be built; we cannot ship a sensor with that distribution. Imagine you deliver the sensor to somebody, they lock the door, and nobody has the key anymore: your sensor can't know that, because you didn't know it ahead of time. What you really can build is the opposite, causal distribution: given a situation, what am I going to measure? That's what's available to us, and that's what we have to use, but it's not what we want for reasoning about the world. So we'll use Bayes rule to get what we want, which is the opposite direction: going from the distribution we know how to model, the measurement given the state, to the distribution we want for taking action, which is whether the door is open or not, or at least a distribution over that. For the causal model it's really just a matter of counting frequencies. You might have a physical model that makes it easy, but often you just take a bunch of measurements and see how often the sensor reads this or that as a function of whether the door is open or not, and now you have your model.

[41:36] Let's do a concrete example; we need to put some numbers into it. The probability of a measurement z given the door is open is 0.6; given the door is not open, it's 0.3. So when the door is open we are more likely to measure z than when the door is not open, and if we measure z, that should increase our probability that the door is open. Our prior is 50%. We apply Bayes rule: we measured z, we fill things in, do the math as described, and find that indeed, after measuring z, which favors the door being open, we have a 0.67 probability that the door is open.
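Plugging the lecture's numbers into Bayes rule, as a quick check in Python:

```python
# Door example from the lecture: P(z | open) = 0.6, P(z | not open) = 0.3,
# prior P(open) = 0.5. Bayes rule gives the posterior after measuring z.
p_z_given_open = 0.6
p_z_given_closed = 0.3
p_open = 0.5

numerator = p_z_given_open * p_open
evidence = p_z_given_open * p_open + p_z_given_closed * (1 - p_open)
p_open_given_z = numerator / evidence

print(p_open_given_z)  # 0.666..., i.e. the 0.67 from the lecture
```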
[42:28] Now imagine another measurement from another sensor, z2. How can we integrate that? We can again apply Bayes rule; we already know we can apply Bayes rule even with conditioning in the back. We've already conditioned on one measurement, so we just condition on the next measurement too, and we can actually do this as many times as we want: to condition on many measurements, we do it one at a time and build it up. Here, the assumption that's often applied to simplify the math is that the sensor measurements are all independent given the state of the world. The state of the world is causing what you measure, so if the state of the world is a known quantity, each outcome is independent given that state. That simplifies the math, because when we look at z_n, the n-th sensor measurement, given the state of the world x and all the other sensor measurements, we don't need to build a model for z_n given x and the other measurements; we just need a model for z_n given x, the state of the world, which is exactly the kind of sensor model we like to build. So we get a simplification here. We had z_n conditioned on everything; that's standard Bayes rule with conditioning. But then we made an assumption (this is a loss of generality) that z_n is independent of the other z's if we already know the state x, and this term simplifies. Again, we don't worry about the denominator: there's no x in it, so we compute the top part for every value of x, and once we're done we sum it all together to find the denominator, which normalizes everything. We can apply this repeatedly, which is the idea we just saw: this becomes p(z_n | x) times p(x | all the previous measurements), and since there's nothing specific about z_n, unrolling gives a product over all the p(z_i | x) terms times a prior over x. So all we need is a sensor model for each sensor given x; we multiply the probabilities together, normalize, and we get our posterior distribution.
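Written out, the update with conditioning and the product form it unrolls into under the conditional-independence assumption:

$$
p(x \mid z_1, \dots, z_n)
= \eta\, p(z_n \mid x)\, p(x \mid z_1, \dots, z_{n-1})
= \eta' \Big[\prod_{i=1}^{n} p(z_i \mid x)\Big]\, p(x),
$$

where η and η' are normalizers recovered by summing over x.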
[44:37] For the door example: in this case the new sensor is z2, and z2 is actually more likely to be seen when the door is not open. We observe z2, so what do you expect to happen? We had a 2/3 probability of the door being open after sensor measurement 1; after observing z2, that probability should drop. Let's check: we do the math, apply Bayes rule, fill in the numbers, and indeed the probability drops, as we expected.
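A sketch of incorporating the second measurement sequentially. The slide's exact likelihoods for z2 aren't in this transcript, so the z2 numbers below are placeholders, chosen only so that z2 is more likely when the door is closed, as described:

```python
def bayes_update(p_open, p_z_open, p_z_closed):
    """One Bayes-rule update of P(open) given a measurement's likelihoods."""
    num = p_z_open * p_open
    return num / (num + p_z_closed * (1 - p_open))

p_open = 0.5
p_open = bayes_update(p_open, 0.6, 0.3)  # first sensor: z favors open -> 2/3
# Hypothetical likelihoods for z2: more likely when the door is NOT open.
p_open = bayes_update(p_open, 0.5, 0.6)  # placeholder numbers, not from the slide
print(p_open)  # drops below 2/3, as the lecture predicts
```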
[45:04] All right, now there is one thing I want to point out here. A typical pitfall is that real-world sensors are not always truly independent. You might have a laser rangefinder that sends out two beams next to each other, and those two beams might encounter the same kind of noise or the same kind of interference along their paths. So those two measurements might not be independent, and the assumption we just went through, where everything simplifies into a product of terms, is violated. Even though in the math we'll often want this, in practice be careful: you need to think carefully about whether these sensor readings are really independent. Or imagine the laser rangefinder reads something at time t and you run it again at time t+1, but nothing has changed in the world. Are those really independent readings? Not necessarily, because you're measuring the exact same configuration with the exact same sensor, so maybe it's not completely independent. So be careful.

[46:29] What can the effect be? Imagine you think these readings are independent and they're not. What do you think will happen? Let's think about it. Imagine there is really only one reading that should enter this equation, because only one measurement happened, and all the others are just copies of it, not independent information; that's how dependent they are, an extreme case of dependence. Instead of feeding it into our Bayes rule update once, we say: hey, why don't we feed it in ten times, because that way we can incorporate the information even more. What would happen? You feed it in ten times, you multiply the term in ten times, and whatever that measurement favors, you become overconfident in. If it favors the door being open, then where normally one incorporation of that measurement takes you from 1/2 to 2/3, if you incorporate it a hundred times, you multiply it in a hundred times and all of a sudden you're at 99% probability, instead of where you should be, which is 2/3. That's exactly what happens in the graph here: the horizontal axis is the number of times the same measurement is incorporated, using math that assumes the measurements are independent.
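To see the overconfidence effect numerically, here is a sketch that feeds the same measurement in repeatedly, wrongly treating the copies as independent (the update function is redefined here so the snippet stands alone):

```python
# Feeding the SAME measurement into Bayes rule n times, wrongly treating the
# copies as independent readings. P(z | open) = 0.6, P(z | closed) = 0.3.
def bayes_update(p_open, p_z_open=0.6, p_z_closed=0.3):
    num = p_z_open * p_open
    return num / (num + p_z_closed * (1 - p_open))

p = 0.5
for n in range(1, 11):
    p = bayes_update(p)
    print(n, round(p, 4))
# n=1 gives 2/3, which is where the estimate SHOULD stay; by n=7 the
# estimate is already past 0.99, i.e. wildly overconfident.
```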
[48:16] We see that very quickly you can actually flip the probability from 99% on one state of the world to 99% on the other, x1 versus x2. So be very careful about that. Another place this can come up: you might have, say, an accelerometer or a gyro, and the gyro is supposed to independently measure the angular rate of your system. But the gyro is a physical system under the hood; something physical is happening there, and it might take time for that physical process to settle and really produce the measurement. If you read it off, I don't know, ten thousand times per second, that might not be valid; there might really be only 10 or 100 independent readings in there. And if you then use the 10,000 readings instead of just the ones that are actually independent, you'll become wildly overconfident in whatever it is you're measuring, rather than keeping a reasonable distribution over possible states of the world.

[49:05] OK, let's take a couple minutes' break here, and then let's look at Bayes filters and Gaussians. [break] All right, let's restart.
Any questions about the first half?

[51:37] Question: "In a lot of settings we assume a first-order Markovian system, and touching on the example you talked about at the end: have you seen in practice that the first-order Markov assumption isn't appropriate but a certain n-step Markov assumption is, so that we can find a balance between something we can do accurate inference with, while the joint distributions don't become intractable?"

[52:15] Yeah, so it's always a trade-off between tractability and match with the real world. In general, if you have full state, then by definition of full state the first-order assumption will be true; otherwise it's not full state. But in practice your state will not be the full state of the world; it will be an approximation that you use as your representation of the state space, and as a consequence the first-order assumption might not exactly hold. I think it's often dealt with in a pretty much ad hoc way, where people just look at the system and say: for this system, it seems we need to look this far back, or that far back, to get what is effectively the state of the system by including enough history. I haven't seen a very clear-cut way to define it in general. And this ties directly into what we're going to do now, which is Bayes filters.
[53:35] Often the world is dynamic: the robot does things, other agents do things, time passes, and the world changes in various ways as a consequence. So how do we incorporate actions into what we're doing, in addition to our sensor readings? Typical actions: the robot turns its wheels to move, the robot uses its manipulator to grasp an object, a plant grows over time and takes up more space. Actions are never carried out with absolute certainty, and in contrast to measurements, actions generally increase uncertainty. Typically what you have is: measurements reduce the uncertainty in your distribution, then an action gets taken, which introduces new uncertainty, and the distribution becomes higher-variance.

[54:00] How do we model actions? Again with a probability distribution: the probability of the next state x' given the current state x and the action u. That, again, is a causal model, something we can build, which we then use to calculate the distribution over next states. For example, a robot might try to close a door, and it might or might not succeed; you might have a model for that. You might say: if the door is open and the robot tries to close it, there's a 10% chance it just stays open because the action fails, and a 90% chance the robot manages to close it; if the door is already closed, it stays closed. That model might or might not be correct, but these are the kinds of models we'll be working with, so that we can do calculations about possible states of the world based on the actions we took and the sensor readings we got.

[55:11] This is exactly the law of total probability being applied here. The conditional distribution for the next state x' given the action u: we have a distribution for the current state, which we assume we know; we multiply in p(x' | u, x) to get the joint over x' and x; and since we actually just want x', we sum out, or integrate out, x to get the marginal for x'. That's exactly what's happening here: the law of total probability with some conditioning. We can run this for the robot example; I'm not going to step through the details, but with those numbers you can do the math and see that, given the action we took, the probability of closed becomes 15/16 and the probability of open is 1/16.
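A sketch of this action update for the door example. The transition model is the one stated in the lecture; the incoming belief P(open) = 5/8 is an assumption here, chosen because it reproduces the stated 1/16 vs. 15/16 result:

```python
# Action update (law of total probability) for "robot pushes door closed".
# Transition model from the lecture; the prior belief P(open) = 5/8 is an
# assumed value that makes the stated 1/16 and 15/16 come out.
p_open = 5 / 8

p_open_after = 0.1 * p_open + 0.0 * (1 - p_open)    # open door stays open 10% of the time
p_closed_after = 0.9 * p_open + 1.0 * (1 - p_open)  # closed doors stay closed

print(p_open_after, p_closed_after)  # 0.0625 = 1/16, 0.9375 = 15/16
```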
[56:04] How about measurements? Measurements are what we've been talking about the most so far. We use Bayes rule; why, again? Because we have an easy time getting a model for the sensor reading given the state, but we don't know how to get a model for the state given the sensor reading directly; for that we'd need the prior over states, and we want to use Bayes rule to get it.

[56:29] In Bayes filters, the framework is that we get a stream of actions and observations, and we have a sensor model, an action model (or dynamics model), and a prior distribution for where the state of the system starts. These are our givens. What we want is an estimate of the state of the system at all times: a running distribution over possible states for each time t. Pictorially it's often drawn as a graph that captures the causal relationships present in our set of assumptions. Again, x is the state over time and u are the actions we take: actions affect the next state, the previous state affects the next state, and the observation is, in this case, assumed to depend only on the state; the state of the world determines what we might measure. Looking at this graph, the explicit assumption being made is that the measurement at time t, given everything that happened before, depends only on x_t: once we know x_t, no past states matter, no past measurements matter, no past control inputs matter; it's all summarized in the state x_t. Going back to the earlier question: of course, if your state space doesn't capture everything about the world, then this will not be true and the assumption will be violated.
[58:14] And typically that's the case; you never capture the whole world state in what you do, but hopefully you're close enough that this actually works. The same goes for the dynamics: the next state could in principle depend on everything that came before, but the assumption we make is that it depends only on the previous state and the action taken. Again, the Probabilistic Robotics book uses a somewhat different indexing than I'm personally used to, but since it's probably the best book to go read on this if you want more, I'm sticking with its notation. The book says the action u_t is the one that lands you in x_t. Normally you would call that u_{t-1}; I think that's what almost everybody calls it, but in the book it's u_t, the action you take right before you land in state x_t. That's also reflected in the graph over there. A further underlying assumption is independent noise, meaning that these two assumptions imply that if you have, say, a deterministic model plus noise, the "plus noise" part is independent at different times.
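The two assumptions the graph encodes, written out in the book's indexing (u_t being the action taken just before landing in x_t):

$$
p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t),
\qquad
p(x_t \mid x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t).
$$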
[59:30] It also means that if you're going to say "whatever I get out of these calculations is the correct distribution", that's only true if these assumptions are correct. If you make any approximations in your dynamics model or your sensor model, those approximations will show up as misestimates in what you end up with. It's the best you can do, but keep it in mind: there are a lot of assumptions made here, and you only get results whose quality is relative to the quality of the assumptions you made and the models you built in.
[59:51] OK, let's step through Bayes filters. What we build up step by step here is probably the key thing for the second part of the lecture. We want the belief, which is the distribution over x_t given everything from the past, including the current observation at time t. Remember, it's going to be a recursive computation: we presumably already have this for time t-1, and we'll see how, if we already had it for t-1, we get it for time t. What's the latest thing that happened in that progression? The measurement z_t. How do you incorporate a measurement? Bayes rule. So ignore everything but x_t and z_t: for p(x_t | z_t), Bayes rule says p(z_t | x_t) times p(x_t), then some normalization. That's exactly what we do: apply Bayes rule and normalize. But remember there's also all the other stuff in the conditioning; we just carry it around everywhere, and we know we can carry things around everywhere as long as we do it consistently. So it's just Bayes rule applied with conditioning. Then we use our assumption that the measurement at time t, given x_t, does not depend on anything else, and we simplify accordingly.

[1:01:08] We've not yet gone from x_{t-1} to x_t; we've only incorporated the last step, the measurement. Can we now do the transition from x_{t-1} to x_t? Let's think about it. We already have the distribution over x_{t-1}; we multiply in the dynamics model to get the joint, and then sum out x_{t-1}. That's exactly what we do: the distribution for x_t is p(x_t | x_{t-1}) times the distribution for x_{t-1}, integrating or summing out x_{t-1}. Sure, we're carrying around all the extra stuff, the u's and the z's and so forth, but we know that's just extra conditioning we can carry along consistently; this is just the law of total probability for x_t and x_{t-1}. Then we ask: can we simplify this, because of our Markov assumption? Yes, we can: we don't need to condition on everything that's conditioned on there; x_t given everything in the past depends only on u_t and x_{t-1}.

[1:02:02] And this is now our complete recursion, because the thing we assume we already have, the belief for x_{t-1}, is the same object we wanted to find for x_t, and we can just repeat all the way back to x_0, so we're good to go. At every time step, as we track our system, we can incorporate the controls we applied: u at time t gives us the distribution over x_t, given what we already knew about x_{t-1} and the controls we applied. Then we get a measurement: we multiply in the likelihood of the measurement and renormalize, and we have our distribution for x_t, and we can keep tracking over time. The Markov assumption makes the notation even slightly simpler, and we're good to go.

[1:03:15] One last thing here is notation: the book uses Bel(x_{t-1}), the belief of x_{t-1}, as a shorthand. There's no assumption hidden in it; it's not that all the conditioning is being deleted by some assumption. To be clear, Bel(x_{t-1}) by definition means p(x_{t-1} | all of this); it's just a shorthand notation.
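Putting the measurement step and the transition step together gives the Bayes filter recursion (continuous form; replace the integral with a sum for discrete states):

$$
\mathrm{Bel}(x_t) \;=\; \eta\, p(z_t \mid x_t) \int p(x_t \mid u_t, x_{t-1})\, \mathrm{Bel}(x_{t-1})\, \mathrm{d}x_{t-1},
$$

with Bel(x_t) = p(x_t | u_1, z_1, ..., u_t, z_t) and η the normalizer.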
[1:03:44] Question: "In the second-to-last equation, the third line from the bottom versus the second line: one has u_t in the conditioning and the other ends at z_{t-1}. What happened there with the conditioning?" Oh, OK: the only thing that happened there is that u_t was removed. It's not fully spelled out, but essentially u_t plays no role for x_{t-1}, because it comes afterwards, so it was dropped from the conditioning. That's all that happened.

[1:04:12] So this is the final result, and we can put it into an algorithm, the algorithm we'll actually be running. This is the version for discrete states, where we can just track the belief explicitly, and computing it is actually very simple. You start out with some prior distribution over x. When an observation comes in, you multiply in the likelihood of the observation given x; you do that for all possible values of x, sum the results together, and normalize by that sum, and you have your new belief over x after the observation. If instead it's not an observation that came in but an action that was taken, you use the law of total probability: the dynamics model gets multiplied in and the previous state summed out, and you get the distribution over the next state.
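A minimal discrete Bayes filter along these lines; a sketch, with dictionaries over states and the sensor and dynamics models passed in as functions (the function names here are mine, not the book's):

```python
def measurement_update(belief, z, p_z_given_x):
    """Multiply in the likelihood of observation z, then renormalize."""
    new = {x: p_z_given_x(z, x) * b for x, b in belief.items()}
    eta = sum(new.values())
    return {x: v / eta for x, v in new.items()}

def action_update(belief, u, p_xnew_given_u_x):
    """Law of total probability: sum out the previous state."""
    return {
        x_new: sum(p_xnew_given_u_x(x_new, u, x) * b for x, b in belief.items())
        for x_new in belief
    }
```

Whenever a measurement z arrives you call measurement_update, and whenever an action u is taken you call action_update; interleaving the two in whatever order events arrive tracks the belief over time.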
[1:05:07] You don't have to strictly alternate between z and u: you can have a bunch of z's come in and incorporate them, then take some actions and incorporate those, and just repeat over time; whenever something comes in, act upon what happened. So, in summary: Bayes rule allows us to compute probabilities that are hard to assess otherwise. That's really what's going on under the hood: we don't have direct access to the distribution we want, but we do have access to the reverse distribution, which we can use to get what we want. Under the Markov assumption, this recursive Bayesian updating can be done very efficiently, and Bayes filters are the common tool for estimating the state of dynamic systems.
[1:05:51] Here's a simple example. Imagine a robot that is supposed to localize itself. It has four measurements, meaning it measures in all four directions and checks whether there's a wall or not in each of those directions, with a noisy sensor. We're not going to go through the math here; qualitatively, let's see what happens. The robot is really over there, but of course it doesn't know where it is: the uniform gray means our distribution is uniform over all possible locations. We show the true location of the robot just so we know what's going on, but the robot doesn't know it; the robot could be anywhere. A measurement comes in: "wow, I only see walls above and below me, not to the sides", so the Bayes rule update produces the new distribution. Then the robot decides to move, but it has noisy motion, so the motion update spreads the distribution out a little. Then it measures again, which shrinks the distribution again, and this process repeats over time; over time, the robot may localize itself in this building. Of course, this assumes access to a map of the building; otherwise this whole Bayes update scheme wouldn't work out.
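To make the qualitative behavior concrete, here is a toy 1-D version of this kind of hallway localization; the map, the sensor noise, and the motion noise are all invented for illustration:

```python
# Toy 1-D localization: 10 cells in a loop, doors at cells 1, 4, 8 (invented map).
# The sensor reports door / no-door with noise; motion shifts right, noisily.
doors = {1, 4, 8}
n = 10
belief = [1.0 / n] * n  # uniform prior: the robot could be anywhere

def sense(belief, saw_door, p_hit=0.8, p_miss=0.2):
    # Bayes rule: weight each cell by how well it explains the reading, renormalize.
    w = [p_hit if ((i in doors) == saw_door) else p_miss for i in range(n)]
    b = [w[i] * belief[i] for i in range(n)]
    s = sum(b)
    return [v / s for v in b]

def move_right(belief, p_exact=0.8, p_under=0.1, p_over=0.1):
    # Law of total probability: landed in i from i-1 (exact), i (undershoot), i-2 (overshoot).
    return [
        p_exact * belief[(i - 1) % n]
        + p_under * belief[i % n]
        + p_over * belief[(i - 2) % n]
        for i in range(n)
    ]

belief = sense(belief, saw_door=True)   # measurement sharpens the belief
belief = move_right(belief)             # noisy motion spreads it out again
belief = sense(belief, saw_door=False)  # the next measurement sharpens it further
print([round(b, 3) for b in belief])
```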
[1:06:56] OK, that's it for Bayes filters. I want to move on now to Gaussians. We'll look at univariate Gaussians and multivariate Gaussians; we'll look at the law of total probability for Gaussians, which we already covered for the discrete case, and at what the math looks like for Gaussians (we'll do that next lecture, but it's on the menu for us); and we'll look at conditioning and Bayes rule, how they shape up for Gaussians.
[1:07:27] Univariate Gaussians: what do they look like? It's this bump-shaped distribution where most of the mass is in the middle, but it actually extends all the way to infinity on both sides. There's something called the standard deviation sigma, and 68% of the mass lies within one standard deviation of the mean, decaying out from there. The mean is usually denoted by μ, the standard deviation by σ, and the variance is σ². The density itself is

p(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²)).

Let's think about that. When x is close to μ, and in particular when x equals μ, the exponent is zero, and e to the power zero gives you one. The further you go away from μ, the more negative the exponent becomes, and that more negative number makes the density drop. Exponentials drop very quickly, and that's why this still integrates to one: because the exponential falls off fast enough, there isn't much probability mass out in the faraway parts. The normalization constant up front is just calculus: you work out the integral and find that to make this quantity integrate to one, you need to put 1/σ times 1/√(2π) up front.
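The density just described, in code, plus a quick Monte Carlo check of the 68%-within-one-sigma claim:

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density: (1 / (sigma * sqrt(2 pi))) * exp(-(x-mu)^2 / (2 sigma^2))."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 0.0, 1.0
samples = [random.gauss(mu, sigma) for _ in range(100_000)]
within_one_sigma = sum(abs(s - mu) <= sigma for s in samples) / len(samples)
print(within_one_sigma)  # approximately 0.68
```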
[1:09:12] So what are some properties of Gaussians? Well, it integrates to one; we know that. The expected value of the variable x is the integral over all possible values x can take, times the density of taking that value. If you work out the integral (I'm not proving it here, just stating it), you'll see the result is μ. So the expected value under a Gaussian is μ, which is not too unexpected: the name "mean" already says it's the mean, but you can actually do the math in principle and verify that μ is indeed the expected value under a Gaussian distribution. The variance is defined as the expectation of (x − μ)²: on average, what is your squared deviation from the mean? You can do the math, compute this integral, and you'll see σ² is the variance, which is what we already called it; but in principle you have to do the calculation to verify that the name we use for σ² really is what "variance" means, and indeed it is.
[1:10:27] Now you might ask why we care about Gaussians. These integrals: we figured them out, or somebody figured them out, and they're not necessarily the easiest, but at least they can be done. But there's another reason to care about them, aside from their being convenient to work with: the central limit theorem. The classical CLT says: let x1, x2, ... be an infinite sequence of independent random variables with expected values μ_i and variances σ_i². Then define a new variable Z_n as the sum of all of them minus the sum of their expectations, so that it's centered around zero, normalized by a scaling quantity (the square root of the summed variances) to scale it down. Then, in the limit of n going to infinity, with infinitely many of those variables put together in a sum, centered around zero, and normalized, Z_n converges to a N(0, 1) Gaussian: zero mean, standard deviation one. What that means is that if whatever you care about is the net effect of multiple independent factors, the resulting variable is often distributed more or less like a Gaussian, and the more independent factors contribute to the variable you care about, the closer that variable will be to a Gaussian.
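A quick simulation of the classical CLT statement; uniform variables are an arbitrary choice of "independent factors" here:

```python
import random
import statistics

# Each Z is a sum of n iid Uniform(0,1) variables, centered by n*mu and
# scaled by sqrt(n)*sigma, where mu = 1/2 and sigma^2 = 1/12 for Uniform(0,1).
n = 1000
mu, sigma = 0.5, (1 / 12) ** 0.5
zs = [
    (sum(random.random() for _ in range(n)) - n * mu) / (n ** 0.5 * sigma)
    for _ in range(10_000)
]
print(statistics.mean(zs), statistics.stdev(zs))  # close to 0 and 1
print(sum(abs(z) <= 1 for z in zs) / len(zs))     # close to 0.68, like a Gaussian
```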
[1:11:48] So it's not just about convenience of math, though that's a big part of it. Another big reason Gaussians are a reasonable thing to use is that if there are enough underlying independent factors, the thing you're looking at might really be close to a Gaussian. What about multivariate Gaussians?
[1:12:10] Here's what they look like: these are densities where x is now a vector. We again see a normalization up front, which has the determinant of Σ in it; Σ is a symmetric matrix, the covariance matrix; and the exponent contains (x − μ)ᵀ Σ⁻¹ (x − μ). Whenever you see something like this, it's often a little hard to make sense of directly if you haven't seen it before, so the simplest thing is to say: OK, there are matrices and vectors here, but suppose everything were scalar. If everything is scalar, it looks like a single univariate Gaussian again: you just have (x − μ)² divided by the variance. If it's not scalar, what do we have? Σ is a symmetric matrix, so its inverse Σ⁻¹ is also a symmetric matrix, and symmetric matrices, as we know from when we covered LQR, are just a rotation away from being diagonal, just a coordinate transformation away from diagonal. So we can just as well think of them as diagonal matrices, as if we were working in the right coordinate system.

[1:13:23] OK, so imagine Σ is diagonal. Then we really see x1 and μ1 interacting with the first entry on the diagonal, and x2 and μ2 with the second entry, and no interaction between them: completely independent. So x1 and x2 are their own Gaussians in that coordinate system. They might have different variances, because the diagonal can have different entries for x1 and x2, but all it's saying is: we have two Gaussians, one for x1 and one for x2, possibly with different variances, and that's it. Now if we're not in that aligned coordinate system, Σ won't look diagonal, but the intuition remains the same: there exists a coordinate system where it is diagonal and easy to think about. Also remember, back when we did LQR, I told you that any symmetric matrix in a quadratic form is fully general: if you had a non-symmetric matrix, the non-symmetric part would cancel out anyway. We did that math on the board; feel free to revisit it. So there's no reason to ever consider a non-symmetric matrix Σ, because the non-symmetric part just cancels out in the quadratic form.
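The multivariate density in code (NumPy assumed); the eigendecomposition at the end illustrates the "a rotation away from diagonal" point:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Multivariate Gaussian density with mean mu and covariance Sigma."""
    k = len(mu)
    diff = x - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** k * np.linalg.det(Sigma))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff)

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
print(mvn_pdf(np.array([0.5, -0.5]), mu, Sigma))

# A symmetric Sigma is a rotation away from diagonal: U is the coordinate
# change, and in those coordinates the two components are independent
# Gaussians whose variances are the eigenvalues.
eigvals, U = np.linalg.eigh(Sigma)
print(eigvals)  # variances along the principal axes
```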
[1:14:09] All right, so we can compute expectations here too. The expected value of the variable x is, of course, μ. The expected deviations of x from the mean are exactly the entries of Σ: Σ_ij is the expected value of (x_i − μ_i)(x_j − μ_j), that is, how much bigger x_i is than its mean, times how much bigger x_j is than its mean. If they tend to be bigger together and smaller together, the product will always be a positive quantity and a positive number will come out. If one of them tends to be bigger when the other is smaller, and the other way around, they'll have opposite signs and a negative entry will come out. And if they're completely independent, one's being bigger or smaller telling you nothing about the other, it cancels out to zero. So Σ_ij is positive when x_i and x_j tend to be above average together, negative when they're anti-correlated in terms of being above or below average, and zero when there's no relationship between when they're above or below average.
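The sign rule for Σ_ij, checked empirically by sampling (NumPy assumed; the 0.8 covariance matches the strongly correlated example on the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])  # positively correlated pair
samples = rng.multivariate_normal(mean=[0, 0], cov=Sigma, size=100_000)

# Averaging (x_i - mu_i)(x_j - mu_j) over samples recovers the covariance entries:
centered = samples - samples.mean(axis=0)
emp = centered.T @ centered / len(samples)
print(emp)  # close to Sigma; the off-diagonal entry is positive, as expected
```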
[1:15:20] Let's look at some examples of what this looks like. Here is a plot of the density of a Gaussian with mean at (1, 0); the first axis is this one, the second axis is that one, and in all these plots the standard deviation is 1 for each coordinate, so it's a symmetric Gaussian. Now we shift it: the mean is at −0.5 for the first coordinate, and we see it shifted. What if we shift it even more, also shifting the mean of the second coordinate? Again, the bump just moves around.

[1:15:46] Let's look at some more examples. Again, the starting point is a zero-centered, variance-one Gaussian. We then reduce the entries of Σ; Σ is diagonal here, so things stay axis-aligned and nicely symmetric: 0.6 and 0.6. It becomes a taller peak, because all the density has to be closer in; you're not allowed to be as far away from the mean as often, so the mass has to be more concentrated. You can make the standard deviation larger and things will spread out. You can also have covariance matrices that are not diagonal. Here again, on the left, is the standard one; in the middle we made x1 and x2 positively correlated. So what do we expect? We expect a lot of mass along the axis where x1 and x2 are both above the mean or both below the mean; the mean here is zero, so along the main diagonal we expect a lot of mass, which is exactly what we see. Then here we made it even stronger, 0.8, and we get even more mass along that main diagonal, because whenever one is above average the other should also be above average, and whenever one is below average the other should be below average, most of the time.

[1:16:40] You can also draw this differently: drawing these 3D-style sketches is sometimes hard, and it's much easier to draw contours, so the contours of the densities are shown below the 3D plots. Here's the corresponding density contour for the middle one, and for the last one you see the ellipse indeed runs along that main diagonal. How about some other examples? Here is negative correlation, so the ellipse runs along the opposite diagonal: when x1 is above average, x2 is below average, so it runs the opposite way. We can make it even more negative, −0.8, for an even stronger negative correlation; or we can make it positive again and also give one coordinate higher variance than the other, so the first coordinate has more variance than the second. Any questions about these?

[1:18:09] OK, then next lecture we'll do the math for the law of total probability and Bayes rule for Gaussians. Actually, one quick extra announcement for today: today's lecture is probably the one where the pace is most off for most people, because you all have very different backgrounds. So if it was too fast for you, make sure to review everything really carefully, to be ready for the next few lectures.
[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)
https://www.youtube.com/watch?v=LfUsGv-ESbc

[0:00] Howdy-ho, how's it going? So today we are going to try out DETR, End-to-End Object Detection with Transformers, from Facebook AI Research. They have a GitHub repo, and they pretty much give you everything: the model, the pretrained weights, and so on. So today we're going to check out how easy it is to get started with it. They do have a colab, but we won't look at it too much; I've glanced at it, and we'll basically see how far we can get without looking at it much, and how easy that is. What I've done is spin up a colab, which I will share at the end; I've imported torch and already loaded the model, so you don't have to wait for that to happen. It's loaded and now in the cache, so now we can basically go ahead and load an image into the model and try to detect objects in it.

[0:53] First of all, this is super easy: you simply load the model from torch hub, which is kind of like TensorFlow Hub. You give the name of the model, you say you want the pretrained one, and boom, you now have a model. If we look at that model, it's the entire DETR model right here, with the transformer and the ResNet and whatnot; OK, that's almost a bit too much to print.
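The loading step he describes looks like this; detr_resnet50 is the torch hub entry point the DETR repo exposes:

```python
import torch

# Load the pretrained DETR model (ResNet-50 backbone) straight from torch hub,
# as done in the video; the weights get cached after the first download.
model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)
model.eval()  # inference mode
```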
[1:20] So what we want is an image. Where better to find an image than Google? Let's find an image of dogs, because "dog" is one of the classes in the COCO dataset. This one's nice. OK, so we want the image address, and we want to load it in here somehow. Let's make this into some sort of input field where we can paste the URL. OK, there we go; so we have this right here, and that's the URL... no, that's not the URL at all, is it? Bada-bim, bada-boom... what about... cool, better. Now we need to load it, and for that we're going to use the requests library; always a pleasure. The way to load a binary file is: you put in the URL and you say stream=True (I glanced this from their notebook), and the raw attribute will get you the actual bytes of the image. Then you just say Image.open, and of course we need Image from the PIL library, the Python Imaging Library. So: import Image, we've got that, and we can open that image up, and with a bit of luck... yeah.
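The image-loading pattern he describes, as code (the URL here is a placeholder; any image URL works):

```python
import requests
from PIL import Image

# Placeholder for the image address pasted in during the video.
url = 'https://example.com/dogs.jpg'

# stream=True plus .raw gives a file-like object over the raw bytes,
# which PIL can open directly.
img = Image.open(requests.get(url, stream=True).raw)
```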
[3:11] So, this model expects... I think the COCO dataset is 640-by-480 images, but if you look right here, taking a quick glance at their transforms, they resize to 800. So we're going to steal that part right here; last time some people found that really funny.