Columns: video_id (string, length 11) · text (string, 361–490 chars) · start_second (int64, 0–11.3k) · end_second (int64, 18–11.3k) · url (string, 48–52 chars) · title (string, 0–100 chars) · thumbnail (string, 0–52 chars)
A7AnCvYDQrU
so the BERT system that's used for NLP, those masked autoencoders or denoising autoencoders, the diagram looks like this: you start with a piece of data, you corrupt it, which means you remove some pieces, and you run it through a few layers of neural net. There's a latent variable, which is implicit in those models, which is like which of the outputs is picked, as a function of
2,377
2,402
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2377s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
the probability distribution on the output. And then you compare this with the actual data that you observed, and you train the entire system to minimize the reconstruction error. In continuous space, conceptually, what that does is: if you imagine that your data manifold is this, okay, those points, you take a point, you corrupt it, so you add noise to it, for example,
2,402
2,429
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2402s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
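The passage above describes the denoising-autoencoder setup: corrupt a sample, run it through a network, and train the network to reconstruct the clean sample. Below is a minimal sketch of that training loop, assuming PyTorch and a toy 2-D dataset; the circle data, architecture, and noise level are illustrative stand-ins, not the ones used in the talk.

```python
# Minimal denoising autoencoder on 2-D points (assumes PyTorch).
# Corrupt each sample with Gaussian noise, train the net to map it back.
import torch
import torch.nn as nn

torch.manual_seed(0)
# toy "manifold": points on a circle (stand-in for the spiral in the slides)
theta = torch.rand(1024, 1) * 2 * torch.pi
data = torch.cat([torch.cos(theta), torch.sin(theta)], dim=1)

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    y = data[torch.randint(0, len(data), (128,))]
    y_corrupt = y + 0.3 * torch.randn_like(y)   # corruption = additive noise
    loss = ((net(y_corrupt) - y) ** 2).mean()   # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()

# net(x) - x now approximates a vector field pointing back toward the manifold,
# which is the vector-field picture described in the next passage.
```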
A7AnCvYDQrU
in this case, and then you train your parameterized neural net to map this input to the output. Okay, you feed this as input and you tell it: you should map it here. Once the system is trained, you can actually plot the vector field; you know, those are little vectors that point in the direction where the neural net, if you feed it with this input, would take you. I mean, you have to
2,429
2,453
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2429s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
lengthen them a bit, but they almost all take you to the manifold here. And the color here indicates the energy: the energy is low on the manifold, which is what you want, and high outside. Except there's a problem right here: there's a ridge here, and it's kind of a flat ridge, which is not good. So here the reconstruction error is actually zero, because the system, when
2,453
2,474
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2453s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
it's trained, can't decide whether to go this way or that way. So there's a flaw with this thing; there are ways to fix it, but none are clear. The main issue with this is that it doesn't scale well in high dimension, because in high dimension there are many, many ways to be different from a sample, and you're never going to explore the entire space.
2,474
2,501
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2474s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
Right, it could be either. But the problem I'm alluding to with the reconstruction error is that here the reconstruction error is zero, which means the energy is zero, so it's a phantom low energy. I think there are ways to fix it, but they're not cheap, so I'm not going to go into them. Okay, so prediction with latent variables: as I told you before, I give you an X, I give you a Y, and you find the Z that minimizes
2,501
2,531
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2501s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
the reconstruction error. And unfortunately, if Z is high-capacity, this is going to give you a flat energy surface. So the solution to this is to regularize Z: you basically add a term to the energy, λ times R(z), where R(z) basically tells you whether you are in a particular region of space that you're happy with, and so basically you pay a price for
2,531
2,555
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2531s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
making Z go outside of that region. A good example of this, which is familiar to many of you, is that R(z) could be the L1 norm of Z. So if you put the L1 norm on Z, the sum of the absolute values of the components of Z, then to make this small you have to make many of the components of Z zero, as many as possible, and so you end up with a sparse representation. And that actually
2,555
2,579
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2555s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
limits the volume of space that has low energy. Essentially, this is what you get. So this is sort of the unconditional version of it, where there is no X; you're just modeling Y. And here I give you a Y; Z is regularized as before, the regularizer is the L1 norm; Z is multiplied by a matrix, called a dictionary matrix, a decoding matrix; it produces a reconstruction; you measure
2,579
2,614
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2579s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
the squared Euclidean distance between the two, and that's your energy function. So this is classical, in the applied math community at least; you could generalize this. And what you get, when you train this on this little spiral here, is that the low-energy regions are basically piecewise linear approximations, with sort of low-dimensional linear subspaces, of the
2,614
2,644
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2614s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
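Written out, the last two passages describe the classic sparse-coding energy: a linear decoder (dictionary) W, squared reconstruction error, and an L1 penalty on the code. The notation below is assumed for illustration rather than taken from the slides:

```latex
E(y, z) \;=\; \lVert y - W z \rVert_2^2 \;+\; \lambda \lVert z \rVert_1,
\qquad
F(y) \;=\; \min_z E(y, z).
```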
A7AnCvYDQrU
entire system. This works really well in high dimension, that's the cool thing, and it's been studied a lot. One thing that is not yet studied is: what if you make the decoder nonlinear? Let's say, instead of having a matrix here that you multiply Z by, you have an entire neural net; what happens? I'll tell you about this a little bit later. Now here is the problem, though: finding, you know, finding —
2,644
2,673
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2644s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
I'm sorry, it's not what I wanted to show. Finding Z for a given pair X, Y — finding the Z that minimizes the sum of those two terms — can be expensive. You have to backpropagate gradients to do this, these can be non-smooth functions, you have to do those L1/L2 optimizations, you know, ISTA or whatever; it can be expensive. So one idea is that you actually train a neural net
2,673
2,700
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2673s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
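ISTA, mentioned above, is one of those L1 solvers used to find the minimizing code for a given input. A minimal NumPy sketch under the energy written earlier; step sizes, iteration count, and λ are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(y, W, lam=0.1, n_iter=200):
    """Minimize ||y - W z||^2 + lam * ||z||_1 over z (basic ISTA)."""
    L = 2.0 * np.linalg.norm(W, 2) ** 2    # Lipschitz constant of the gradient
    z = np.zeros(W.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * W.T @ (W @ z - y)     # gradient of the quadratic term
        z = soft_threshold(z - grad / L, lam / L)
    return z
```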
A7AnCvYDQrU
to predict the optimal solution to that optimization problem. Okay, so ignore the grayed-out part for now. I give you an X and a Y, I find the Z that minimizes the sum of this and that, and then I use this as a target to train a neural net which, from X and Y, is going to predict this guy. And then, if this guy is well trained, I don't need to run the optimization algorithm for inference
2,700
2,724
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2700s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
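A sketch of the amortization idea just described (essentially predictive sparse decomposition): run the expensive optimizer to get the optimal code, then regress an encoder onto it so inference becomes a single forward pass. The dictionary `W`, sizes, and training loop are assumptions for illustration, and `ista` refers to the sketch above.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32)) * 0.3        # fixed toy decoder / dictionary

encoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(1000):
    y = rng.standard_normal(16)
    z_star = ista(y, W, lam=0.1)               # expensive: optimal code as target
    y_t = torch.tensor(y, dtype=torch.float32)
    z_t = torch.tensor(z_star, dtype=torch.float32)
    loss = ((encoder(y_t) - z_t) ** 2).mean()  # regress the encoder onto z*
    opt.zero_grad(); loss.backward(); opt.step()

# At inference time, encoder(y) replaces running the iterative optimizer.
```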
A7AnCvYDQrU
anymore; I just need to run through the encoder. So it becomes very clear that it is very important to limit the information content of Z, because the system can cheat here: it actually has access to the answer, and it can just, you know, copy the answer to the output. And so, unless you have a way of restricting the information content of this Z, the system will completely ignore X.
2,724
2,742
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2724s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
Okay, so if you have the unconditional version of this, we don't have this part; this is called a regularized autoencoder, or sparse autoencoder. This works really well, in the sense that if you train a sparse autoencoder like this, where the decoder is linear and the encoder is a few layers of a neural net, on MNIST at least, the columns of the decoding matrix end up being little parts of characters,
2,742
2,767
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2742s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
which means you can reconstruct any character with a linear combination of a small number of those things. People call these things atoms. If you try it on natural image patches — this is the learning algorithm running — you end up with oriented edge detectors, which is great; you have to do a little bit of whitening of the images. If you do this in a convolutional mode, where the
2,767
2,789
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2767s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
decoder is actually convolutional, so Z is not a vector but a bunch of feature maps, and then you run them through convolutions and compute the sum, and that's how you decode, you get beautiful filters. So these are the basis functions in the decoder, the kernels that are used to reconstruct the outputs, and these are the weights in the first layer of the encoder; the encoder only has two layers
2,789
2,812
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2789s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
in this case, and they're basically mirror images of the decoder. And this is for 1, 2, 4, 8, 16, 32, 64 filters; you get a very high diversity of filters: center-surround, gratings, oriented edges at various frequencies. It's really nice. These are ten-year-old results; more recently we've revived this technology, because it's very interesting. So these are again filters that are
2,812
2,842
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2812s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
learned on natural image patches from the CIFAR dataset, 9x9 kernels, and those are the corresponding feature maps, which are extremely sparse and can reconstruct basically any image in CIFAR with very good accuracy. Some other work we're doing along those lines is having multi-layer decoders. So basically, here is an image, and then you take a bunch of feature maps here, run them through
2,842
2,864
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2842s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
convolutions and ReLU, and convolutions and ReLU, then reconstruct, and then you can sort of stack multiple layers of those and train this carefully, if you know how to do it — it's not easy, but it kind of works. These are reconstructions: this is the original, and these are the kinds of reconstructions you obtain with sparse representations. So if you only reconstruct from here and ignore the
2,864
2,887
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2864s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
rest, you get sort of high-frequency information; if you only reconstruct from here and ignore this, you just run through this network and you can reconstruct a low-resolution version. So you can think of this as like nonlinear wavelets, if you want, right? The system naturally learns to represent this. Let me skip this. Okay, let me talk about this really quickly. So something that's become very
2,887
2,916
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2887s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
popular in the business is something called variational autoencoders, and variational autoencoders are basically autoencoder models. They could be made conditional if you want, but I've grayed this out. And they are an example of a model where you also limit the capacity of the representation here in the middle, and the way you limit the information
2,916
2,937
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2916s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
capacity of this vector is that you add noise. Okay, so basically, here's a Y, you run it through an encoder, you produce a prediction for what the code should be, and then you add additive Gaussian noise to it and you run it through the decoder. And there's a constraint here — it's a penalty, really, used during learning — that the norm of the output of the encoder needs to be as small as
2,937
2,960
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2937s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
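A rough sketch of the mechanism just described, stripped of the usual probabilistic formulation: encode, add Gaussian noise to the code, decode, and penalize the squared norm of the code. This is a simplification of a real VAE (which also learns per-dimension noise scales); the dimensions, noise level, and penalty weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 8))
dec = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

def vae_like_step(y, beta=0.1, noise_std=1.0):
    z = enc(y)                                       # predicted code
    z_noisy = z + noise_std * torch.randn_like(z)    # additive Gaussian noise
    recon = dec(z_noisy)
    # reconstruction error + L2 penalty keeping the code norm small
    loss = ((recon - y) ** 2).mean() + beta * (z ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

vae_like_step(torch.randn(128, 2))   # one step on a toy batch
```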
A7AnCvYDQrU
possible. Okay, so it's L2 regularization, if you want, during learning. Now, how does that limit the information content of the code? Well, let's say that you train without noise, right? So your autoencoder is going to assign a code — this is in code space — it's going to assign a code vector to every training sample; these are all of the training samples. Now add noise to
2,960
2,978
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2960s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
those guys: you turn them into fuzzy balls. Okay, and those fuzzy balls might overlap. So, for example, this sample and that sample might end up being confused with each other, because when you add noise you can turn one into the other, and so the reconstruction error will probably increase. So what is the system going to do? Very easy: it's going to make those fuzzy balls fly away from each
2,978
3,002
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=2978s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
other, right? So that they don't overlap. And that really is not that interesting, you know; it just makes the norm of the output of the encoder larger, but it doesn't do anything for you. So what you do is you play a trick: you attach each of those little fuzzy balls to the origin with a spring. Okay, you tell them: you know, you can fly away, but not too far. So they kind of have to
3,002
3,025
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3002s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
overlap with each other and form some sort of data manifold, if you want, and two bubbles will overlap to the extent that the reconstruction error is not too dramatic on the output. Okay, so there's a trade-off between the strength of that spring, the size of those bubbles — which in the case of variational autoencoders is actually maximized — and things like that. And if you read all the
3,025
3,051
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3025s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
papers on variational autoencoders, it's never formulated like this; it's formulated as, you know, some variational lower bound on some probability distribution. But this mechanical analogy, I mean, it makes it completely clear: this is just a way of reducing the information content, of capping the information capacity of the code. Okay, I'm going to end with an application of all this, which is the problem of
3,051
3,074
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3051s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
predicting what the world around you is going to do, for things like avoiding bumping into other cars, for example, right? So I already talked about this idea that if you have a forward model of the world, which gives you the state of the world at time t+1 as a function of the state at time t and the action you're going to take, you can sort of roll out an action in your head
3,074
3,097
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3074s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
using this model, and then plan a sequence of actions that will minimize your cost — the cost here being: I want to stay in my lane, I don't want to bump into other cars, I don't want to get too close to any other cars. Okay, and that's a differentiable cost, so I'm not talking about reinforcement learning; everything is differentiable, everything is computable, I don't need to try
3,097
3,113
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3097s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
anything; I mean, I don't need to estimate gradients of stuff by trial and error, everything is differentiable. So the problem, of course, is that this model of what the cars around you are going to do is not deterministic, right? There are a lot of things that the cars around you are going to do that you may not predict, and so there is a latent variable in the
3,113
3,135
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3113s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
model that you're going to need to sample, which is going to parameterize the set of all stupid things that the cars around you can do — and non-stupid things as well. Okay, so you start from a state which you observe — this is your current state, this is where the cars around you are — and you sample the latent variable, you take an action, your action, and then the
3,135
3,156
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3135s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
system gives you a prediction for where the cars around you are going to be at the next time step. Okay, if you decide to turn the wheel, the world around you is going to rotate. Okay, so this is predicting what the world around you is going to look like, and then what you can do is backpropagate gradients from the cost to a network here that is supposed
3,156
3,178
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3156s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
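A compact sketch of the model-predictive idea described over the last few passages: roll the learned forward model out over sampled latents, accumulate a differentiable cost, and backpropagate it into a policy network. Everything here (dimensions, the cost, the pretrained `forward_model`) is a placeholder for illustration, not the talk's actual architecture.

```python
import torch
import torch.nn as nn

STATE, ACTION, LATENT = 32, 2, 8

# Assumed already trained on observed driving data.
forward_model = nn.Sequential(nn.Linear(STATE + ACTION + LATENT, 128),
                              nn.ReLU(), nn.Linear(128, STATE))
policy = nn.Sequential(nn.Linear(STATE, 64), nn.ReLU(),
                       nn.Linear(64, ACTION), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def cost(state):
    # Placeholder differentiable cost (stand-in for the lane/proximity terms).
    return (state ** 2).mean()

def plan_step(s0, horizon=10):
    s, total = s0, 0.0
    for _ in range(horizon):
        z = torch.randn(s.shape[0], LATENT)              # sample what other cars might do
        a = policy(s)                                    # action from the policy network
        s = forward_model(torch.cat([s, a, z], dim=1))   # predicted next state
        total = total + cost(s)
    opt.zero_grad(); total.backward(); opt.step()        # only the policy is updated
    return total.item()

plan_step(torch.randn(16, STATE))
```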
A7AnCvYDQrU
to predict the correct action from the state — so, should I turn the wheel, should I brake, should I accelerate. And by drawing multiple samples and running this on different initial conditions, you might have a car that trains itself to drive without actually driving, just by thinking about it, having trained its forward model by observing other cars driving. So the way we do this is
3,178
3,203
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3178s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
that there is a camera that looks at a highway from the top, and then you track every car and you extract a little rectangle around every car, centered on every car, and it turns with the car, and so that's the world around every car. And then you can record sequences of those little things by tracking every car, and that constitutes a training set: the set of videos centered on every car.
3,203
3,225
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3203s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
And so you give it a few frames of this thing, observed frames, and you train a system that has latent variables and all that stuff to predict the next frame. So Z represents all the stuff you can't predict that the other cars are going to do, essentially, right. Oh, I see — that's a good question. I think it's a vector of 256 dimensions, something like that. So for inference you need to
3,225
3,266
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3225s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
kind of sample Z; for training, Z is given to you by an encoder, basically, right? You train it, but you need one of those information-capacity restrictions here, which in our case is done by a combination of adding noise and what we call dropout, but it basically sets Z to zero — it forces it to be zero half the time. And so it tells the system: you know, even if you don't have a latent
3,266
3,291
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3266s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
variable, do a good job at predicting whatever you can; and then, half the time, it lets the system use Z. And the latent variable is combined additively with the representation extracted by the predictor, so that zero has kind of a special meaning, if you want. So this is what it produces: this is a recording of the real world, this is a prediction when you set Z to zero all the time, and so
3,291
3,316
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3291s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
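A sketch of the information-restriction trick just described: half the time the latent is forced to zero, and when it is used it is added on top of the predictor's internal representation (so the latent and the hidden representation must share a shape). The function name, 0.5 drop rate, and noise level are illustrative.

```python
import torch

def combine_latent(hidden, z, p_drop=0.5, noise_std=0.1, training=True):
    """Additively combine a latent code with the predictor's representation.

    With probability p_drop the latent is zeroed out entirely, so the
    predictor must do as well as it can without it; otherwise a noisy
    version of z is added on top of the hidden representation.
    """
    if training:
        keep = (torch.rand(z.shape[0], 1) > p_drop).float()   # 0 or 1 per sample
        z = keep * (z + noise_std * torch.randn_like(z))
    return hidden + z   # a zero latent leaves the deterministic prediction unchanged
```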
A7AnCvYDQrU
you get blurry predictions. And what you see here are — I'm going to restart it, if it wants to restart — so what you see here are four different predictions, you know, run kind of recursively, for different samplings of the Z variables, and you see they predict different futures, and it's indicated by the squares and the circles here, which mark cars that do different things
3,316
3,348
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3316s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
for the different samplings. The cost function for training this thing is very simple: it's, you know, whether the car is in its lane and how far it is from its neighbors. And so you can train this policy just by backpropagating the gradient of the cost through the entire system, all the way down to the policy network. If you do this, it doesn't work, because what happens is the system gets
3,348
3,373
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3348s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
into regions of the space where the forward model does a really bad job at predicting, which happen to have low cost. Okay, so the car goes off the road or something like that, and this can also be due to flaws in the cost function, but basically it doesn't do what you want. So what you have to do is regularize it by forcing the system to stay within regions
3,373
3,399
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3373s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
where the forward model is pretty sure of its predictions, so that the system doesn't try to drive in crazy ways that are not present in the training set, and where its forward model can't really predict accurately what's going to happen. And you do this by estimating the uncertainty in the prediction of the forward model: by sampling the output of
3,399
3,419
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3399s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
the forward model with these random variables — you can sample, like, the dropout in the network — computing the variance of it, and then using this as a term in the cost function. So it forces the system to stay within a region of space where predictions are fairly reliable, with low variance. And this is what the system does. So this is the car being driven; the green cars are
3,419
3,442
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3419s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
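A sketch of the uncertainty regularizer just described: run the forward model several times with different dropout masks and penalize the variance of the predictions, so the policy stays where the model is confident. The forward model passed in is assumed to contain dropout layers; the sample count and the way the term is weighted are illustrative.

```python
import torch

def prediction_variance(forward_model, state_action, n_samples=10):
    """Epistemic-uncertainty proxy: variance over stochastic dropout passes."""
    forward_model.train()            # keep dropout active at prediction time
    preds = torch.stack([forward_model(state_action) for _ in range(n_samples)])
    return preds.var(dim=0).mean()   # scalar uncertainty term for the cost

# total_cost = task_cost + lambda_u * prediction_variance(forward_model, sa)
```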
A7AnCvYDQrU
from recorded videos, and the white dot indicates whether the car wants to turn, accelerate, brake, etc. It's perhaps more visible in this example: the yellow car is the car that is in the recorded video, the blue car is the one that we are driving, and it didn't change lane. The problem is that the blue car is invisible to the other ones, and so it gets squeezed and it has to escape,
3,442
3,470
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3442s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
because the other cars are just recorded, right, so they don't see the blue car. Here's another example where, you know, there are going to be fewer issues: it's trying to stay sort of halfway between the cars in front and behind. Okay, so, last slide. I think this whole idea of self-supervised learning is the future of machine learning; you don't necessarily have to believe me, but that's where
3,470
3,499
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3470s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
everybody is going. I think we can learn complex hierarchical features for low-resource tasks, which is becoming really important, using self-supervised learning. Actually, in natural language it works; it's very important for natural language. For example, it's important for Facebook to be able to translate Burmese into English, or, more precisely, to actually train a classification system that detects hate
3,499
3,525
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3499s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
speech in Burmese, because there is an ethnic conflict in Myanmar, and so you want to be able to detect hate speech to prevent bad things from happening. But how much training data do we have in Burmese? So one way to do this is to kind of turn text into a language-independent representation and then train a hate-speech detector independently of language. It's very important for low-
3,525
3,548
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3525s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
resource languages like Burmese or whatever; I mean, there are 2,000 languages or something that people use on Facebook. The advantage of that is that we can train massive networks that can accumulate a lot of background knowledge about the world in a non-task-dependent way, and then we can use techniques that handle uncertainty to learn forward models for model-based control and
3,548
3,571
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3548s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
A7AnCvYDQrU
reinforcement learning — model-based reinforcement learning. So my money currently is on energy-based approaches; latent variable models, so that we can handle multimodality; regularized latent variable models, to prevent this collapse problem, in particular sparse latent variable models, although the precise way to make them sparse is not clear; and then latent variable prediction through a
3,571
3,592
https://www.youtube.com/watch?v=A7AnCvYDQrU&t=3571s
Yann LeCun: "Energy-Based Self-Supervised Learning"
https://i.ytimg.com/vi/A…axresdefault.jpg
lDLqrsye-rQ
Yeah, so I think, since I'm one of the organizers, I'll actually take the opportunity to thank all the speakers and all of you for attending. It's been a lot of fun to hear the wide range of perspectives and topics this week, and of course also thanks to the Simons Institute for hosting us. The date, I noticed, is wrong here, actually; obviously it's November 20th, not October 20th.
0
21
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=0s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
I haven't been sleeping much lately, so I think, in the interest of time, I'm going to skip over some of the introductory material here. We've heard a lot about ride-sharing platforms already, so let me just skip that. I think at this point in the week, between the talk that was given by Chris on Monday and also the industrial visitors day we had last week, we are all familiar
21
41
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=21s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
with Uber and Lyft and Sidecar and platforms like this. So, what's our goal in our work? Oh, and I should actually preface this by saying this is joint work with Sid Banerjee, who's an assistant professor at Cornell — he was doing a postdoc with me — and Carlos Riquelme, who's a student of mine at Stanford. Also, Sid did an internship with Lyft and with their data science
41
60
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=41s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
team, and so some of the inspiration for this work came from talking and working with them. So I guess what I'm interested in getting across to you, and what I found exciting about this problem, is that there was a combination of maybe three things that we needed to somehow include in one model, and then use that to actually say something useful about, you know, the strategy
60
83
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=60s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
that the platform takes. And that's basically that there are passengers and drivers who are strategic; the platform is setting, you know, a pricing rule, a decision for how transaction prices are set on each interaction; and then there's this kind of underlying queueing dynamic that governs, you know, the number of rides that are requested and the number of drivers that are available. And
83
104
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=83s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
in isolation, we have a lot of, you know, different models in various literatures that tell us about any one of these three problems; one of the things that makes this really interesting to me is the fact that all three kind of come together in one place. So this could just be, I think, you know, a pure theoretical exercise — it would be fun to build a model of something like that. Now,
104
121
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=104s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
of course, you want to do that with some purpose in mind. In our case, the motivation for wanting to do this in the first place is that we sort of wanted to try to understand what the advantages were of using a dynamic pricing policy over a static pricing policy, and I'll explain more about what I mean by that as we go on. So I'll skip this slide as well; I just want to briefly
121
139
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=121s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
point out that there's a wide range of literature in very different communities, actually, that touches on this. So at the same time that, you know, we've been thinking about matching markets, whether in econ or in the EC community, it's been interesting to see that in the applied probability community there's been a huge surge of interest in models of queueing systems with matching
139
160
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=139s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
behavior, and so I think, you know, for those of you that are working on matching markets, I would strongly recommend looking into some of the work that I think really starts with Adan and Weiss and goes from there. There's a lot of work on strategic queueing models, two-sided platforms, revenue management; so, like I said, you know, there's a lot
160
174
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=160s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
there, and one of the things that made this fun is sitting at an intersection of those topics. Okay, so let me tell you a bit about the model. The model is something where we need to capture three features. One is that I need to be able to say something about, you know, the platform's goals, right — how is it setting a pricing policy? I need to be able to tell
174
197
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=174s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
you what the incentives of passengers are and what the incentives of drivers are, and that's all sort of the strategic aspect of the model. I also want to be able to tell you, you know, exactly, even fixing all of this, how the system evolves, what the dynamics of drivers and passengers are. Okay, so an apology I want to interject here is that I think this is a
197
217
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=197s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
talk I personally view as sort of the tip of a very large iceberg, and in many ways I think the work that we were doing raises more questions than it answers. So I'm happy with some of the answers we got, but I also want to be very explicit with you about where I think there are important things missing. Okay, so some of them are on this slide and some of them I'll mention as we go
217
237
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=217s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
through. So one of them is that we're going to focus on just a single block of time, and what do I mean by a block? I mean something like, let's say, rush hour, or, you know, maybe, for those of you that were here for Chris's talk, a block of time might be a window of time around when bars close on a Friday or Saturday evening. So why is that important? In many ways, what I want
237
258
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=237s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
to focus on here is — I want to avoid talking about predictable changes in demand. Okay, so everybody knows there's going to be more demand around rush hour than there is in the afternoon, all right? So I want to avoid that, and indeed the platforms avoid that. So even before surge pricing became something that changes on a minute-to-minute basis, you
258
280
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=258s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
know, it was the case that they would know in advance that there's going to be greater demand, let's say, around rush hour or bars closing, and the surge multiplier would be higher in those intervals. So when I talk about static pricing, I don't mean a fixed price over the entire week; I mean static over something like, you know, a few hours, a block of time. Okay, the next thing is:
280
300
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=280s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
in the talk I'm only going to focus on a single region. Now, of course, cities are not just a single region, and I think even Chris on Monday mentioned that the pricing involves multiple neighborhoods. In the work that we've done in the paper, the main insights that we have do generalize to networks; there are some exceptions to that, but I'm not
300
321
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=300s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
going to develop that in the talk. I think one important problem that my focusing on a single region completely ignores, and even the work on networks completely ignores — I guess there are two issues that that sort of eliminates, right? So one is the notion of an estimated time of arrival, or ETA, and I think, as you heard from Chris, and as anyone who actually uses Uber or Lyft
321
342
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=321s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
would know, in addition to whether or not there's surge pricing, you're very sensitive to when you think you're actually going to get a ride — so, what the ETA actually is. So, sort of by fixing the network — and you'll see more in the model how this plays out — one of the things we don't actually have much to say about is how passengers are sensitive to ETAs, and
342
362
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=342s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
in particular, you know, one thing that might mean is that the design of the regions themselves — like, what you consider to be a region — is not a topic that we address at all, right? So you may want to make your regions more granular so that you're able to better match supply and demand on a very local scale, but as you do that, you will also want to
362
385
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=362s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
make sure that drivers who are nearby are able to move in or out. Okay, so we're not really accounting for those effects in the kinds of network models that we build, and I think — so, thinking about ETAs and how the platform actually designs, you know, its topology — I think that's a really interesting direction of work. It's
385
403
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=385s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
something Sid and I have talked about; we haven't really done much with it yet. And finally, the last thing I'll mention is that the objective function I'm going to focus on is throughput, the rate of completed rides. There are at least three objectives that you might care about: there's throughput, there's profit, and then there's welfare. So we have results for throughput that have no
403
422
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=403s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
qualification whatsoever; we have results for profit when the system is supply-limited, in a sense I'll make precise; and then there are similar numerical results for welfare, but the theory there is actually a bit more challenging, and so I'm not going to claim anything for that in the talk. Okay, so let me start by modeling the strategic side of the problem, and this is another point
422
443
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=422s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
which, I'd say, we say very little about — we don't say anything in our paper about it, and I think it's an interesting issue — which is that we're going to just assert that the platform takes a fixed fraction of every dollar that's spent. And, you know, I think that's fairly consistent with how the platforms work today, but it's kind of interesting; I mean, as a mechanism designer, you might actually think this
443
461
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=443s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
is one of the very first things you would want to design, you'd want to optimize over. So our work does not optimize over this; it holds that as an exogenous constant. I think one of the most interesting things to do — a lot of questions during Chris's talk on Monday, I think, alluded to this — is to think about whether the platform should be
461
478
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=461s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
varying the share that it takes, perhaps, you know, based on the state of the system. There are good reasons why most platforms don't really change this on a very fast timescale; I mean, this is the type of thing that would be updated over months or something like that, if at all, and it's really, I think, more of a sort of cultural issue,
478
498
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=478s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
right? I mean, I think drivers would not be very happy if they were subject to a rapidly varying share of earnings; it becomes very unpredictable from their perspective. Of course, the platform needs both drivers and passengers, and it uses pricing to align the two sides. One note on terminology: in the platforms literature, you know, in
498
519
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=498s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
economics, if you use the word pricing, you have to be a little clearer about what you mean. So pricing might mean the fee structure, which is the gamma here, for example, or pricing might mean actually setting the transaction price. And one of the reasons that ride-sharing platforms were fun to think about, in contrast to other kinds of platforms that I find interesting, is because they
519
535
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=519s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
actually directly set the transaction price, so that's why this is an interesting question. You know, if you compare that with something like, let's say, Upwork or Airbnb, they have a lot of influence on what the transaction prices might be, but they don't actually directly set them. Okay, so here, when I use the word pricing, I actually mean the platform is setting the
535
552
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=535s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
transaction price, and so they're able to use their mechanism for setting the transaction price to align the two sides. So the first bit of notation is that the way I'm going to model the platform setting the transaction price is just as a function of the number of available drivers. Remember, I'm focused on only one region; there's some number of available drivers right now in that region. I'll
552
572
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=552s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
imagine there's some function that the platform uses to map the number of available drivers to the price that you're going to get charged if you take a ride right now. Okay, all right, next qualifier: so when I say price, you know, I'm saying you're setting the transaction price, and again, it's not directly the transaction price that's being set in the ride-sharing
572
591
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=572s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
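The pricing object in this model is just a map from the number of available drivers to a price (here, a multiplier on the base price). A toy illustration of a static policy versus a dynamic one; the functional forms are made up for illustration and are not taken from the paper.

```python
def static_price(available_drivers, p=1.0):
    """Static pricing: one multiplier for the whole block of time."""
    return p

def dynamic_price(available_drivers, base=1.0, k=20.0):
    """Dynamic (surge-like) pricing: the multiplier rises as supply thins out."""
    return base * (1.0 + k / max(available_drivers, 1))
```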
lDLqrsye-rQ
platforms; it's a multiplier on a base price. Okay, so the way these platforms work, they'll have a published formula that's time- and distance-dependent — that's on their website, and if you use a fare estimator, you can actually directly calculate this. So that's what they call the base price, and any price manipulation that's happening on a faster time scale is
591
610
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=591s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
happening through a multiplier. Lyft calls it prime time pricing, Uber calls it surge pricing, and so, you know, what's happening is that they will tell you that there's some percentage that's going to get added on top of the base price because of the current state of the market. So when I use the word price in the talk, what I'm really talking about is this
610
627
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=610s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
multiplier. Okay, that raises yet another interesting question, which is, of course, Los Angeles and San Francisco are very different markets, and in Los Angeles in particular, there may be a very large cost to pulling drivers that look available from further away to come pick up a ride, and, you know, that cost is partly distance-dependent. So these kinds of
627
648
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=627s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
things — you know, it's often a question that comes up here: well, why is this also not a matter of manipulation — not just a multiplier on top of a formula, but why not actually directly manipulate prices in a way where you're not just picking a single multiplier regardless of what the time and distance are going to be? And again, that's something I'm not touching. I think this
648
668
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=648s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
is another one of those things — just thinking about it from a regulatory perspective, you know, ride-sharing platforms are fighting against the taxi industry, and the taxi industry has published fare schedules like this. It's already challenging enough to convince the public that surge pricing or prime time pricing is palatable — I find
668
685
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=668s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
that shocking, I have to confess, but that's the way it is. So given that that's hard enough, I think varying this is also going to be, you know, politically even more difficult. But that said, from a market design standpoint, I think it's a reasonable question to ask. Okay, so that's the platform. Yeah — the multiplier is a function of the number of available drivers? Yes, I'm going to have a model where the
685
713
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=685s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
state of the system is the number of available drivers — and this doesn't capture that, that's right. Yeah, I think there are a lot of different things that are being left out here, so that's part of it, and as you're going to see in a second, I'm taking in my talk an extreme view where drivers are making entry decisions over longer time scales. Now, if you think about some of the tools that platforms
713
740
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=713s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
are using — and, you know, Chris talked about some of these on Monday — another thing that's left out here is not just the number of available drivers right now, but, let's say, a forecast that I have of how many available drivers there will be; all that kind of stuff is just left out of it. I think there are a lot of interesting things to do there in terms of the
740
758
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=740s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
richness of the pricing policy. Okay, so what do passengers do? It's a fairly simple model of passengers: every passenger is one ride, so passenger equals one ride request; I don't model any sort of longitudinal behavior of the passenger, it's just simple. And basically I'm just going to model the passengers as entering if their reservation value exceeds the current
758
778
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=758s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
price in the system. Okay, so to model that, I think of every ride request as being drawn i.i.d. from some valuation distribution, and if the value that's drawn is bigger than the current price, they enter. There's some exogenous rate of what I'll call app opens — Chris talked about the same thing — and that's basically like: I open the app and look, do I want to request a ride? And then I
778
800
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=778s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
look at the price and determine whether I actually request a ride. So from that it's pretty easy to work out, based on the pricing formula, what the rate of ride requests is: it's the exogenous rate times the tail probability that the valuation exceeds the current price. Okay, so this F-bar is the tail CDF of the valuation distribution. Right, so that's the passengers; it's relatively
800
823
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=800s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
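The passenger side reduces to one formula: the request rate is the exogenous app-open rate times the tail probability that a rider's valuation exceeds the current price. A small sketch, assuming exponentially distributed valuations (an assumption for illustration, not the paper's choice):

```python
import math

def request_rate(price, lambda_0=10.0, mean_valuation=2.0):
    """Ride-request rate = app-open rate * P(valuation > price).

    F_bar is the tail CDF of the valuation distribution; here valuations
    are assumed Exponential(mean_valuation) purely for illustration.
    """
    f_bar = math.exp(-price / mean_valuation)
    return lambda_0 * f_bar
```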
lDLqrsye-rQ
simple. The drivers, I think, are where it gets a little bit more interesting. This is maybe the one place where our motivation for choosing this model came from what seemed to us to be a natural distinction between drivers and passengers. I would say that I think of this as a kind of stylized extreme point, and, you know, especially as the platforms change the
823
842
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=823s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
technology that they use to induce drivers to enter or exit, I think it can get a lot more interesting. But let me tell you what this is. Basically, the point that we make here is that we think of drivers as making decisions on just a substantively different time scale than passengers. Okay, so if a driver is thinking about whether to drive — for example, in the
842
861
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=842s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
early days of Lyft, you know, you had a booking calendar where you would essentially say when you wanted to be on or off the platform — the time interval over which you were choosing to drive or not is probably something on the order of hours. And so essentially what we do is we say: well, from a driver's perspective, they're not responding to the
861
879
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=861s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
instantaneous state of the system; instead, what they're thinking is: if I enter, what are the expected earnings that I'll receive if I'm part of the platform? And what they do is compare the expected earnings they'll make while they're in the system to, essentially, you know, the kind of reservation earnings that they want — a reservation earnings rate. Okay,
879
895
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=879s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
this is also really interesting. So this is kind of a target-earning model of the driver: basically, they have some fixed mental model of how much they want to be able to make, and they enter if expected earnings exceed that. You know, I think Chris pointed to examples of a lot of different driver behaviors in their data, and you would certainly see that across all the platforms. So I think
895
913
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=895s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
that's a really interesting direction as well — first of all, just: do you have the right kind of utility model for drivers? I think, when I say that we're taking an extreme point, what I mean is that this time-scale separation between drivers and passengers is a fundamental part of the model. It would be interesting to think about what
913
930
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=913s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
starts happening if drivers are responding more directly to the instantaneous state, and in particular, I think the way this comes together with the networks comment earlier is: if I'm able to provide signals to drivers that say this is a place where I think demand is locally higher than supply, and what I'm doing is I want to induce drivers to move in that direction, then
930
950
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=930s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
the effects I'm trading off are that it takes drivers time to move to a new area, and it takes drivers away from the area that they were in, you know. So I think that's really where the network modeling gets especially interesting, and that's kind of one of the things that we want to keep doing with this work. As with the passengers, I'm
950
968
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=950s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
lDLqrsye-rQ
going to model this reservation earnings rate as i.i.d. across the drivers, so similarly you can work out the actual rate at which drivers enter: there's some exogenous rate, and drivers will enter if their desired reservation earnings rate is lower than what they think they're going to make, you know, per unit time — and so expected earnings divided by expected
968
988
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=968s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg
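The driver side mirrors the passenger formula: drivers arrive at an exogenous rate, thinned by the probability that their reservation earnings rate is below the earnings rate they expect on the platform (expected earnings divided by expected time). A sketch with an assumed exponential distribution for reservation rates; the distribution and constants are illustrative, not the paper's.

```python
import math

def driver_entry_rate(expected_earnings, expected_time,
                      mu_0=5.0, mean_reservation_rate=15.0):
    """Driver entry rate = exogenous rate * P(reservation rate < expected rate).

    Drivers compare expected earnings per unit time on the platform to an
    i.i.d. reservation earnings rate; the exponential CDF is an assumption
    made here for illustration only.
    """
    earnings_rate = expected_earnings / expected_time
    g = 1.0 - math.exp(-earnings_rate / mean_reservation_rate)  # CDF at that rate
    return mu_0 * g
```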
lDLqrsye-rQ
time. Okay, you don't have to worry too much about the fact that the expectations are in both the numerator and the denominator here; there's a Wald's identity argument that lets you deal with that. The passenger model? Yeah — oh yeah, sorry, that's a great point. So you're right, this is not the number of rides that actually get served; this is the number of ride requests, and so what will happen is
988
1,021
https://www.youtube.com/watch?v=lDLqrsye-rQ&t=988s
Dynamic Pricing in Ride-Sharing Platforms
https://i.ytimg.com/vi/l…axresdefault.jpg