video_id | text | start_second | end_second | url | title | thumbnail
---|---|---|---|---|---|---|
ByeRnmHJ-uk | will also discuss a little bit about extrapolation and generalization as it perhaps pertains to this great so before we actually start going about evaluating meta-learning algorithms or going about designing meta-learning algorithms we need to figure out how to actually evaluate a meta-learning algorithm once we have one and so it's worth mentioning that a lot of | 1,491 | 1,510 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1491s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | the modern meta-learning advances and techniques were motivated by some work done by Brenden Lake in 2015 and Brenden introduced the Omniglot dataset which is much more simple than the mini-ImageNet dataset that Sergey was showing on the previous slides but allows us to really study some of the basics of meta-learning so the Omniglot dataset has sixteen hundred | 1,510 | 1,534 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1510s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | twenty-three characters from 50 different alphabets so it has many classes and few examples per class and what I find really appealing about these kinds of datasets is that they're more reflective of the statistics of the real world in the real world we have tremendous diversity in terms of the number of objects and | 1,534 | 1,577 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1534s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | number of items and people that we encounter and we don't encounter them over and over again we often encounter many new things constantly throughout our lifetime okay so it proposes both discriminative and generative problems and initial approaches for this data set and for other few-shot learning data sets were based off of Bayesian models and nonparametrics | 1,577 | 1,604 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1577s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | and similar to what Sergey was mentioning before in addition to this dataset which in many ways methods are actually doing quite well on these days people have also been using things like mini-ImageNet CIFAR CUB CelebA and other data sets for evaluating meta-learning algorithms and many of these were not necessarily initially purposed for meta-learning but we're able to | 1,604 | 1,626 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1604s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | kind of put them to the purpose that we would like okay so this is similar to what Sergey was discussing earlier where we have some N-way K-shot classification problem such as image classification where we want to be able to perform learning from very small data sets so we might want to be able to learn from one example of five different classes to classify new examples or new images as being among | 1,626 | 1,654 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1626s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | one of the five classes shown on the left and the way that we can do this is we can take data from other image classes and structure it in the same way as what we're gonna be seeing at test time for example taking images of mushrooms and dogs and so on and structuring it likewise into these five-way one-shot classification problems doing this for many different | 1,654 | 1,675 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1654s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | other image classes training a neural network in order to perform these types of tasks across these training classes such that at evaluation it is able to solve the problem at the top with held-out classes and this is an example that's specific to image classification and we're gonna be coming back to this example a number of times because it's useful for comparing different | 1,675 | 1,697 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1675s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | approaches but the same sorts of ideas are also applicable to things like regression to language generation and prediction to skill learning really any machine learning problem you can construct in this way where you're training on a number of machine learning problems and you want to be able to generalize to learning a new problem with a small amount of data ok | 1,697 | 1,721 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1697s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | so now that we know how to evaluate a meta-learning algorithm let's actually dig into how we design these meta-learning algorithms so the general recipe and the general principle behind these algorithms is that we need to choose some form of inferring the parameters of a model phi given our training data set and our meta-parameters theta and then once we choose the form of this we can then | 1,721 | 1,742 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1721s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | optimize the meta-parameters theta with respect to a maximum likelihood objective using our meta-training data okay and many of the different algorithms that we're gonna be looking at today really only differ in step one choosing how we want to represent this inference problem essentially and so you can ask well can we just treat this as an inference | 1,742 | 1,763 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1742s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | problem and it's pretty clear that neural networks are actually quite good at inference so maybe we can just use a neural network to represent this function itself and that's exactly what the first approach will be so this is what we'll refer to as black-box adaptation approaches and the key idea is for a neural network to represent this function that outputs a set of | 1,763 | 1,783 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1763s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | parameters given a data set and a set of meta-parameters and for now we're going to be using a deterministic or point estimate of this function which we'll denote as f theta and of course we'll get back to Bayesian methods later so we'll see Bayes a bit later okay so how do you actually design a neural network to do this well one thing you could do is you could use a recurrent | 1,783 | 1,809 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1783s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | neural network that takes in data points sequentially and produces a set of parameters Phi and so this recurrent neural network in this case will be representing F theta and then we'll take the outputted parameters use those parameters for another neural network that's gonna make predictions about test data points and so these are gonna be the data points from D test okay and | 1,809 | 1,832 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1809s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | then once we have this model we can train it with standard supervised learning this is just a standard recurrent neural network so we can train it to maximize the log probability of the labels of the test data points given the test inputs and we can do this optimization across all of the tasks in our meta-training data set we can rewrite this loss function for | 1,832 | 1,854 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1832s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | evaluating the predictions of a model as simply a loss function operating over the parameters phi and the test data points so we're gonna write this right here and this will be used mostly for convenience later on and then with this form we can write the full optimization problem as an optimization over the parameters outputted by the neural | 1,854 | 1,876 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1854s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | network f theta and the test data set okay so now that we have this optimization objective what is the algorithm that's used to optimize it so what the algorithm looks like is we first sample a task from our meta-training data set or a mini-batch of tasks then for that task we have a data set D i and we'll sample disjoint data sets D train i and D | 1,876 | 1,903 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1876s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | test i from that data set so I guess what this looks like is say these are the images corresponding to task i we want to be able to partition this or basically sample D train and sample D test from this data set and so we'll assign like so and then once we have D train and D test we'll compute the parameters using D train and then evaluate those | 1,903 | 1,928 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1903s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | predicted parameters using the test data points and it's quite important that D train and D test are disjoint so that we're not training for memorization of the labels but instead training for generalization and then once we update our meta-parameters we're of course going to repeat this process for new tasks and if we use a mini-batch of tasks the gradient in step four is gonna | 1,928 | 1,948 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1928s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | be averaged across that mini-batch great okay so there's the algorithm now how do we actually represent the form of f theta so the form that I have written here is a recurrent neural network you could use something like an LSTM you could also use another memory-augmented neural network like a neural Turing machine which has been done in past work you could also | 1,948 | 1,967 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1948s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | use something like self-attention or 1D convolutions or even really just a feed-forward network that then averages in some embedding space the key thing is that you want these networks to be able to take in sets of data points and oftentimes you want them to be able to take in variable numbers of data points and so that's why we'll be using these types of | 1,967 | 1,986 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1967s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | architectures that have the capability to take in sets and variable numbers of data points okay great so now that we've gone over this type of approach and how it works and the different architectures what are some of the challenges that come up so one thing that you might ask is well if our neural network is outputting all of the parameters of another neural network | 1,986 | 2,009 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1986s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | this doesn't really seem scalable because if the neural network that's making inferences about test data points has millions of parameters then we need a neural network that outputs a million-dimensional output and one idea that we can use to remedy this is we don't actually need to output all of the parameters of another neural network we really just need to output | 2,009 | 2,031 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2009s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | the sufficient statistics of that data set of the training task that allow us to make predictions about new data points and so what we can do is we can take this architecture and instead of outputting phi we can output something like h where h is a low-dimensional vector and this will essentially be representing information about the task everything | 2,031 | 2,051 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2031s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | that's needed about the task in order to make predictions and then you can combine this sufficient statistic h with another set of parameters theta g that are also meta-learned along with the parameters of f such that with both h and theta g we can make predictions about new data points and then theta g can be something very high dimensional and with the combination of | 2,051 | 2,072 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2051s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | the two we'll be able to make predictions for a new task okay so the general form of what this looks like is you can kind of abstract away this notion of h and just write out the ability to make predictions given a training data set and a new test input outputting the corresponding label okay before I move on to the next type of approach are there any questions so the | 2,072 | 2,097 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2072s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | one question from the audience was does the network need to be recurrent or are there other architectures that could work well absolutely so as I mentioned on the previous slide this could be something that's recurrent like LSTMs and neural Turing machines you could use something like self-attention or recursive models but you also don't need to have something that | 2,097 | 2,117 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2097s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | actually is like a sequence model it could also simply be something that has a feed-forward model and then averages if you assume that your training set has a fixed length then you could of course also concatenate and use a fully connected network although the approaches on this slide tend to be a bit more scalable I think it's on hello hi um so this | 2,117 | 2,139 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2117s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | might be related to the few previous slides so I'm trying to understand the difference between meta-learning and the learning-to-learn framework by Jonathan Baxter from around the 1990s so I think if I understand correctly the basic idea of learning to learn is that you're trying to learn the inductive bias of the learning problem which in this case is theta | 2,139 | 2,169 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2139s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | in your notation could you elaborate a little bit more on the similarity or the difference thanks yeah so I'm not sure if I'm familiar with the particular work that you mentioned but in general many of the ideas that we're presenting are inspired by work that was done initially in the late 80s and early 90s with older types of neural network approaches many | 2,169 | 2,193 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2169s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | of those approaches didn't specifically look at the few-shot learning setting which we're focusing on in this tutorial but looked at learning from relatively large data sets in general and it's worth mentioning that this particular approach up here was actually one of the approaches that was used in the 90s and also by Hochreiter et al. in 2001 but some of the | 2,193 | 2,212 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2193s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | approaches that we'll be discussing in later parts of the tutorial are newer another question from the audience was can we use meta-learning approaches to solve classical supervised learning problems and are there any benefits to doing so so I think that we'll get to this question a bit at the end there are situations where I guess the main type of problem that you want | 2,212 | 2,238 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2212s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | to be able to address with meta-learning techniques is settings where you want to be able to take in information about a task that takes on the form of some data whether it be fully supervised or weakly supervised and so if your supervised problem setting doesn't have that sort of structure where you want to learn from data to solve new problems then I think it would be quite | 2,238 | 2,261 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2238s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | challenging to apply some of these techniques but in other situations maybe your supervised learning problem does have that structure and these approaches could definitely do well ok so let's move on to the optimization-based approaches so now that we've just talked about one way to make this approach more scalable is there a | 2,261 | 2,281 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2261s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | way to infer all of the parameters of the neural network in a way that's scalable without having to train a neural network to output all of the parameters in particular as Sergey mentioned before you can view the problem of supervised learning as an inference problem where one infers a set of parameters using data and the way that we solve supervised learning | 2,281 | 2,299 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2281s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | problems is through optimization and these optimization procedures are quite scalable so what if we treat the problem of inferring a set of parameters from data and meta-parameters as exactly an optimization problem and this is what optimization-based approaches do and so the key idea here is that we're going to acquire our task-specific parameters phi i through an | 2,299 | 2,320 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2299s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | optimization procedure that depends both on the training data and our meta-parameters theta and the optimization will look something like this where we are optimizing an objective that looks like the likelihood of the data given our task parameters as well as the likelihood of our task parameters given our meta-parameters where essentially the meta-parameters are serving as a prior now | 2,320 | 2,339 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2320s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | you might ask well what should the form of the prior be for our meta-parameters well there's a lot of different things that we could do here but one very successful form of prior knowledge that we've seen in deep learning for example is training from an initialization provided by another data set and in particular what we've seen is if | 2,339 | 2,362 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2339s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | we train on things like ImageNet and then fine-tune that model on other data sets we're able to capture a lot of the rich information and supervision that exists in the ImageNet data set and use it for new tasks so this is a very successful form of prior knowledge and of course the way that it works is you have a set of pre-trained parameters theta and you run gradient descent using | 2,362 | 2,381 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2362s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | training data for your new task okay so this works really well in a number of different situations but what if your training data for your new task has only a few data points like the six data points that I showed in the example at the beginning well in this case things like fine-tuning are gonna break down a bit because the parameters weren't actually trained for the ability to adapt very | 2,381 | 2,401 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2381s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | quickly and as a result you'll either overfit to those six examples or you won't be able to adapt quickly enough and move far enough from your initialization it would be quite nice if we could just run fine-tuning on our six examples and get some answer some function so this is what we wanna be able to do at test time | 2,401 | 2,422 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2401s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | the key idea behind this approach is to explicitly optimize for a set of pre-trained parameters such that fine-tuning with a very small data set works very well and so what this looks like is we're going to take the fine-tuning process written here this is just one step of gradient descent but you could also use a few steps or up to like ten steps of gradient descent for example then | 2,422 | 2,443 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2422s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | we are going to take where we end up after fine-tuning this can be phi i for example and evaluate how well that generalizes to new data points for that task this is measuring how successful fine-tuning was and then we can optimize this objective with regard to the initial set of parameters so we're gonna optimize for a set of pre-trained parameters such that fine-tuning gives us a | 2,443 | 2,464 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2443s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | generalizable function for that task and of course we don't want to just do this over one task but we'll do this over all of the tasks in our meta-training data set so that we can learn an initialization that's amenable to fine-tuning for many different types of tasks ok so the key idea is to learn this parameter vector that transfers effectively via fine-tuning ok so what | 2,464 | 2,485 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2464s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | does this look like at a somewhat more intuitive level say theta is the parameter vector that we're meta-learning and phi i star is the optimal parameter vector for task i then you can view the meta-learning process as the thick black line where when we're at this point during meta-training and we take a gradient step with respect to task 3 we're quite far from the optimum | 2,485 | 2,505 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2485s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | for task 3 whereas at the end of meta-learning if we take a gradient step with respect to task 3 we're quite close to the optimum and likewise for a number of other tasks we refer to this procedure as model-agnostic meta-learning in the sense that it is agnostic to the model that you use and the loss function that you use as long as both of them are amenable to gradient-based adaptation okay so now | 2,505 | 2,527 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2505s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | that we've gone through the objective let's go through what the algorithm actually looks like so we can take the algorithm that we showed before for the black-box adaptation approach and instead we want to derive the algorithm for an optimization-based approach and what we can do is we can simply replace step 3 that's inferring the parameters with the | 2,527 | 2,546 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2527s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | neural network with a step that's actually optimizing for the parameters in this case through gradient descent so we'll sample a task sample disjoint data sets for it infer parameters with gradient descent on the training data and then update our meta-parameters using the test data points note that this does bring up a second-order optimization problem because when you | 2,546 | 2,565 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2546s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | compute the gradient in step four you have to differentiate through the gradient step taken in step three in practice a number of standard auto-differentiation libraries like TensorFlow and PyTorch can handle this quite gracefully and really you don't have to worry about it too much at all and it also isn't particularly computationally expensive but we will talk a bit more | 2,565 | 2,584 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2565s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | about ways to mitigate this in a few slides okay so now how does this approach compare to the black-box adaptation approaches that we mentioned before so let's bring up the general form that we talked about before where you have some neural network that's taking a training data set and a test data point and producing a prediction for the test data point it | 2,584 | 2,605 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2584s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | turns out that you can view the optimization-based approach in the same general form so before we were using a recurrent neural network to represent this function but now we're using what I'll denote as f MAML to represent this function and that is the function with parameters phi that takes in x test and produces a prediction where phi is defined as the initial meta | 2,605 | 2,627 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2605s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | parameters plus gradient descent on the training data and so essentially you can view the MAML algorithm as a computation graph just with this funny embedded gradient operator within it and so really you can just view it as a very similar approach but one that has a lot more structure namely the structure of optimization within it and also with this view that means | 2,627 | 2,651 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2627s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | that we can very naturally mix and match components of these different approaches so for example one of the things we could do is learn the initialization as MAML is doing but also learn how to make gradient updates to the initialization and that's exactly what Ravi and Larochelle did in 2017 which actually preceded the MAML work and this computation graph view will | 2,651 | 2,675 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2651s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | come back again as we discuss the third type of approach great so questions how is theta g learned this is coming back to the black-box adaptation yeah so great question so theta g this is going back to the black box and in that case we had sufficient statistics h that are produced by the neural network and we also had theta g that was used | 2,675 | 2,703 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2675s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | to make predictions about the test data points and in that case theta g is optimized with all the other meta-parameters of f so it's optimized just like all the other meta-parameters another question about the black-box adaptation so will it be trivial meaning like why wouldn't it just learn to recognize which task is fed in and just output sort of like a one | 2,703 | 2,724 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2703s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | hot indicator for one of the N tasks that's a good question I think that in practice it doesn't do you have an answer for that yeah maybe one way to think about this is it's kind of the same problem as memorizing labels for regular supervised learning so in the same way that supervised learning can overfit meta-learning can also overfit so if you start seeing that | 2,724 | 2,749 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2724s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | your model just outputs a task indicator that can happen if you have a very small number of meta-training tasks that's just an instance of overfitting we'll talk about meta-overfitting towards the end of the tutorial great oh one more another one I don't know if you know this but what's theta g for memory-augmented neural networks yeah so that's a great question | 2,749 | 2,766 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2749s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | so the memory-augmented neural networks paper by Santoro et al. basically just used a standard RNN to take in data points including the test data point and so in that case theta g was actually the same exact parameters as the theta in f so h the sufficient statistic is simply the hidden state of the RNN and there is weight sharing across theta g and the theta in f where it's | 2,766 | 2,790 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2766s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | represented by the weights that are shared across time in a recurrent neural network so this one is a crowd favorite apparently how is all this related to hypernetworks where we're interested in giving parameters of a model as output great yeah so the first black-box adaptation approach is also what is done in hypernetworks I think that I'm not completely sure about | 2,790 | 2,812 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2790s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | this but I think that the hypernetworks paper wasn't particularly focused on meta-learning problems and was looking at other problems where you're gonna be outputting parameters of neural networks but the approach the algorithm used is exactly the same as the black-box adaptation approaches that I mentioned before this question I can answer myself can we adopt MAML in a | 2,812 | 2,829 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2812s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | framework where we don't have a batch of tasks ahead of time but instead we get them sequentially to hear the answer to this question you'll have to wait until the very end of the tutorial on the last slide you'll see we'll answer it okay one quick audience question okay so this is a question about MAML so if I understand it correctly MAML essentially learns a base model which | 2,829 | 2,851 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2829s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | is kind of in the middle of like the optimal parameters for different tasks but it is working under an assumption that those optimal parameters for those different tasks are not too far away so if you are choosing a model space where the optimal parameters are far away then maybe the middle point of the base model although it is not too far away | 2,851 | 2,879 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2851s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | from each of the optimal parameters it might be not optimal for any of those I noticed in the original paper that you are basically using a pretty simple convolutional neural net which has fewer neurons I'm just wondering will MAML keep its performance if you are using a more complex model where its parameter space is more complex and the speaker yeah | 2,879 | 2,906 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2879s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | so first it's worth mentioning that this diagram is one we like to use for illustration purposes in terms of understanding the algorithm but it can also be a bit misleading which is that in many cases particularly with heavily overparameterized neural networks there isn't just a single optimum for the correct solution there's actually an entire space of optima and with | 2,906 | 2,924 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2906s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | those types of problems we see that it is actually a little bit easier to find something where you're simply one or a few gradient steps away and in fact in a minute I'll talk about the expressive power of the MAML algorithm and its ability to adapt even when your tasks are extremely different with regard to architectures I'll talk a bit about that but in practice we do | 2,924 | 2,943 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2924s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | find that it scales well to larger architectures but it may require a bit more tuning than other meta-learning methods okay thank you okay great so picking up where we left off MAML exhibits this kind of structure unlike the black-box adaptation approaches which is that it has this gradient operator inside of it and so it's actually performing an optimization | 2,943 | 2,967 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2943s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | problem both within the meta-training process as well as at meta-test time and so one thing that might be quite natural to ask is using that structure does that mean that we can generalize better to tasks that are slightly out of distribution and this is of course an empirical question that we're gonna study more so than a theoretical question and so what we're gonna do is | 2,967 | 2,986 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2967s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | we're going to compare MAML with black-box adaptation approaches such as SNAIL and meta networks and we looked at an Omniglot image classification problem and we tried to plot the task variability versus performance and what we found consistently across the board is as we move away from the meta-training tasks with either zero shear or a scale of 1 we see that | 2,986 | 3,010 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=2986s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | of course performance drops for all approaches but MAML is able to perform better because it has the structure of being able to run gradient descent at test time and at the very least you are still running gradient descent so you won't be doing significantly worse than what you might be doing with a neural network that you really can't | 3,010 | 3,030 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3010s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | say anything about if it's just outputting parameters for example okay so you might say well we get this nice structure but does this come at a cost like as the question alluded to before do you need to assume that the task parameters are very close to each other for different tasks and so we studied this question by studying the expressive power of a single gradient | 3,030 | 3,051 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3030s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | step basically the update that's used in the MAML function and what we can say is that actually for a sufficiently deep neural network function f the MAML function on the right can represent anything that the recurrent neural network on the left can represent which is that it can represent any function of the training data and the test input and we can show this under a few assumptions | 3,051 | 3,071 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3051s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | that are relatively mild such as a non-zero learning rate as well as unique data points in the training data set and the reason why this is interesting is that it means that MAML has the inductive bias of optimization procedures embedded within it but without losing the expressive power of deep | 3,071 | 3,090 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3071s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | recurrent neural networks ok great so let's go back to some of the motivation that we talked about a little bit with optimization-based approaches where we're saying that the meta-parameters serve as a prior and we talked about how one form of prior knowledge is the initialization for fine-tuning can we make this a bit more formal and actually better characterize what sort of | 3,090 | 3,113 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3090s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | prior it is that MAML is imposing on the learning process and to do so we're going to look at Bayesian meta-learning approaches that use graphical models so this is a graphical model similar to the one that Sergey showed before where phi i is the task-specific parameters and theta is the meta-parameters so if you want to do meta-learning or learn a prior theta in this | 3,113 | 3,135 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3113s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | graphical model it's gonna look like the following optimization where we're optimizing the log likelihood of the data given the parameters you can write this out similar to the equations that Sergey showed earlier as an integration over the task-specific parameters phi which are not observed and this is simply empirical Bayes and | 3,135 | 3,158 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3135s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | it's quite intractable and so one thing we could do is approximate this integral with the maximum a posteriori estimate of the task-specific parameters phi i and this is a fairly crude approximation but one thing interestingly that we can show is that if you compute the MAP estimate basically gradient descent with early stopping corresponds to MAP inference | 3,158 | 3,181 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3158s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | under a Gaussian prior with mean theta and a variance that's a function of the number of gradient descent steps and the learning rate and this is exact in the linear case and approximate in the non-linear case and so what we can see through this approximate equivalence as well as the approximation of the integral with the MAP estimate is | 3,181 | 3,202 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3181s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | that MAML is approximating inference in this hierarchical Bayesian model which I think is useful for providing some intuition for the types of priors that we're learning in the meta-learning process okay so MAML imposes a form of implicit prior are there other forms of priors that we can impose on the optimization procedure one thing we could do is gradient descent with an | 3,202 | 3,223 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3202s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | explicit Gaussian prior the log likelihood of a Gaussian shown here and this is what was done by Rajeswaran et al. in the implicit MAML paper we could also use the prior from Bayesian linear regression in this case we can't impose a prior on all weights of the neural network that would be intractable but we can impose it on the last layer of the neural network and do | 3,223 | 3,244 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3223s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | that on top of meta-learned features this was done in ALPaCA and we can also this is moving more away from the Bayesian view but we can also do other forms of optimization on the last layer of the neural network such as doing ridge regression logistic regression or support vector machines and the prior here is essentially that | 3,244 | 3,263 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3244s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | we want features that are useful for linear classification that can be performed with these methods and to my knowledge this last approach MetaOptNet is the current state of the art on few-shot image recognition benchmarks okay so now that we've talked about optimization-based approaches let's go through a couple challenges with them so one challenge is how do you choose an | 3,263 | 3,284 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3263s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | architecture that's effective for this embedded gradient descent procedure and one way to do this is to do architecture search and the interesting thing that was found in this paper is that highly non-standard architectures that were very deep and very narrow were quite effective for use with MAML and this is a bit different from standard | 3,284 | 3,303 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3284s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | architectures that work well for standard supervised learning problems and in particular on the mini-ImageNet five-way five-shot classification benchmark MAML with the basic architecture achieves around sixty three percent accuracy while MAML with the architecture search is able to achieve seventy four percent accuracy a pretty substantial boost by actually | 3,303 | 3,323 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3303s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | tuning the architecture so that it works well lastly one other challenge that comes up with the MAML algorithm is that you run into a second-order optimization procedure and this can exhibit different instabilities one idea for trying to mitigate this really the dumbest idea you can come up with is to assume that the Jacobian of phi with respect to | 3,323 | 3,346 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3323s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | theta is the identity and simply copy the gradient with respect to phi to be the gradient with respect to theta and this actually works somewhat surprisingly well oddly enough on relatively simple problems although anecdotally we found it not to work well as you try to move towards more complex problems like imitation learning and reinforcement learning another thing you can do is | 3,346 | 3,365 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3346s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | automatically learn the inner and outer learning rates you can also optimize only a subset of parameters in the inner loop such as the last layer or affine transformations at each layer you could also try to decouple the learning rate and the batch norm statistics at each gradient step to avoid coupled parameters that might cause instabilities and finally you could also | 3,365 | 3,383 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3365s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | introduce additional context variables into the architecture to allow for multiplicative interactions between parameters and allow for a more expressive gradient and so my takeaway here is that there are a range of simple tricks that can help the optimization significantly great so before we move on to nonparametric methods let's take one question from the | 3,383 | 3,402 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3383s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | audience and potentially some submitted questions do you know how MAML compares to other first-order meta-learning algorithms particularly Reptile so the question is how does it compare to first-order algorithms like I mean can it get over some of these problems you just presented on this slide and you're asking about first-order MAML no like if you use Reptile which is a | 3,402 | 3,429 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3402s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | first-order method for meta-learning instead of MAML can it get over some of the second-order gradient problems you just presented here yeah so as I mentioned on the first idea I listed here both first-order MAML and Reptile use this crude approximation so you can have a faster optimization procedure and it potentially can be less stable but the | 3,429 | 3,453 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3429s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | main benefit that you get from it is that it is faster and lower in memory but we have found that there are a number of problems where these types of first-order methods don't work at all and you need to use the second-order methods in order to optimize them well thank you why is theta minus phi in the Gaussian prior this is a few slides ago the | 3,453 | 3,491 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3453s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | answer to this question though is that the interpretation as a Gaussian prior basically says that the prior is on phi and phi is normally distributed with a mean of theta the variance of that prior depends on the number of gradient steps you take which is actually a very natural thing so the more gradient steps you take the further away you get from theta and that | 3,491 | 3,510 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3491s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | corresponds to a prior with a wider variance next question is there any guarantee or test that phi is not multimodal as MAP will assume unimodality yeah so certainly this distribution could be multimodal and this approximation is ignoring that in practice we have found that MAML can work quite well on multimodal problems where you have | 3,510 | 3,537 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3510s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | quite different functions that are represented in the task variables although you do need a deeper neural network for that and there are also approaches such as multimodal MAML I believe that try to tackle this problem head-on and enable you to get more efficient use of your neural network parameters by allowing them to represent multimodal distributions | 3,537 | 3,557 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3537s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | over phi a couple more quick ones in the case of orthogonal tasks will MAML just memorize so if a single function can represent both tasks without relying on the data then it will just memorize the function and ignore the data in many cases can we do gradient descent for multiple steps to get phi in MAML yeah absolutely so you can use a variable number of gradient | 3,557 | 3,592 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3557s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | steps in practice we found that up to five gradient steps works well but you can use more than that if you find it helpful for your algorithm it does not introduce higher-order terms beyond a second-order optimization it still remains a second-order optimization if you go through the math okay great so let's move on to nonparametric | 3,592 | 3,616 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3592s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | approaches so far we've talked about parametric methods in which we're gonna be learning a model that's parameterized by phi so what about using methods that don't have parameters phi and the motivation here is that in low data regimes nonparametric methods are quite simple and work quite well and during meta-test time we're in a few-shot learning setting and so we are in a | 3,616 | 3,642 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3616s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | low data regime however during meta-training we still want to be parametric because we have large amounts of data across all of the meta-training tasks so the key idea behind these approaches is can we use a parametric meta-learner in order to produce a nonparametric learner okay and note that some of the methods that I'll be presenting do precede the parametric approaches but we're | 3,642 | 3,668 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3642s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | presenting them in this grouping to aid in understanding okay so here's a few-shot learning problem and one very simple thing you could do in this approach is just take your test data point and compare it to each of the data points in your training data basically do nearest | 3,668 | 3,686 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3668s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | neighbors by comparing to each of the images this is quite a simple approach and quite valid for these types of problems the key question is in what space do you compare these images and with what distance metric for example you could use pixel space and L2 distance but that probably wouldn't give you an effective metric over the similarity between these | 3,686 | 3,708 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3686s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | images and so the key idea behind these nonparametric methods is to learn a metric space that leads to effective comparisons a more semantic metric space that leads to effective predictions on the test data points where we learn how to compare these images to make effective predictions and so the first very simple approach for doing this is to train a Siamese network | 3,708 | 3,731 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=3708s | Learning to learn: An Introduction to Meta Learning
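The transcript above describes a black-box adaptation model as a set encoder f_theta that produces a task statistic h, combined with meta-learned predictor parameters theta_g. The sketch below is an illustrative reconstruction of that idea, not the speakers' code: the layer sizes, the mean-pooling encoder, and the choice to inject h by concatenation are all assumptions.

```python
# Hypothetical sketch of a black-box adaptation learner (feed-forward + averaging variant).
import torch
import torch.nn as nn

class BlackBoxLearner(nn.Module):
    def __init__(self, x_dim, y_dim, h_dim=64):
        super().__init__()
        # f_theta: embeds each (x, y) training pair, then averages over the support set.
        self.encoder = nn.Sequential(nn.Linear(x_dim + y_dim, 128), nn.ReLU(), nn.Linear(128, h_dim))
        # theta_g: predicts labels for test inputs given the task statistic h.
        self.predictor = nn.Sequential(nn.Linear(x_dim + h_dim, 128), nn.ReLU(), nn.Linear(128, y_dim))

    def forward(self, x_train, y_train, x_test):
        # h is a low-dimensional statistic of the task, as discussed in the transcript.
        h = self.encoder(torch.cat([x_train, y_train], dim=-1)).mean(dim=0)       # [h_dim]
        h = h.unsqueeze(0).expand(x_test.shape[0], -1)                             # broadcast to test points
        return self.predictor(torch.cat([x_test, h], dim=-1))

# Meta-training is then ordinary supervised learning: sample a task, split it into
# disjoint D_train / D_test, and minimize the loss of forward(D_train, x_test) on y_test.
```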
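The MAML inner/outer loop described in the transcript (infer phi by gradient descent on D_train, evaluate on the disjoint D_test, and differentiate through the inner step to update theta) might look roughly like the following in PyTorch. The sine-regression task sampler, the tiny functional MLP, and all hyperparameters are illustrative assumptions; the second-order term mentioned in the talk comes from `create_graph=True`.

```python
# Hypothetical sketch of the MAML meta-training loop on a toy regression problem.
import torch

def forward(params, x):
    # A small functional MLP so we can evaluate it at arbitrary parameters phi.
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1 + b1) @ w2 + b2

def init_params(in_dim=1, hidden=40, out_dim=1):
    def p(*shape):
        return (0.1 * torch.randn(*shape)).requires_grad_()
    return [p(in_dim, hidden), p(hidden), p(hidden, out_dim), p(out_dim)]

def sample_sine_task(k=10):
    # Illustrative task distribution: 1D sine regression with random amplitude and phase.
    amp, phase = 4.9 * torch.rand(1) + 0.1, 3.14 * torch.rand(1)
    def split(n):
        x = 10 * torch.rand(n, 1) - 5
        return x, amp * torch.sin(x + phase)
    return split(k), split(k)  # disjoint D_train_i and D_test_i

def maml_train(meta_iters=1000, tasks_per_batch=4, inner_lr=0.01, outer_lr=1e-3, inner_steps=1):
    theta = init_params()
    meta_opt = torch.optim.Adam(theta, lr=outer_lr)
    mse = torch.nn.functional.mse_loss
    for _ in range(meta_iters):
        meta_loss = 0.0
        for _ in range(tasks_per_batch):
            (x_tr, y_tr), (x_te, y_te) = sample_sine_task()
            # Inner loop: infer task parameters phi with gradient descent on D_train.
            phi = theta
            for _ in range(inner_steps):
                inner_loss = mse(forward(phi, x_tr), y_tr)
                grads = torch.autograd.grad(inner_loss, phi, create_graph=True)
                phi = [p - inner_lr * g for p, g in zip(phi, grads)]
            # Outer objective: evaluate the adapted parameters on the held-out D_test.
            meta_loss = meta_loss + mse(forward(phi, x_te), y_te)
        meta_opt.zero_grad()
        (meta_loss / tasks_per_batch).backward()  # differentiates through the inner update
        meta_opt.step()
    return theta
```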
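Finally, the nonparametric idea the transcript ends on, classifying a test image by comparing it to the few labelled examples in a learned embedding space, can be sketched as below. The embedding network `embed` and the use of cosine similarity are assumptions for illustration; the specific Siamese-network training procedure mentioned in the talk is not shown.

```python
# Hypothetical sketch of nearest-neighbor prediction in a meta-learned embedding space.
import torch

def nearest_neighbor_predict(embed, x_support, y_support, x_query):
    """embed: a meta-learned network mapping images to embedding vectors."""
    z_s = torch.nn.functional.normalize(embed(x_support), dim=-1)   # [k, d] support embeddings
    z_q = torch.nn.functional.normalize(embed(x_query), dim=-1)     # [n, d] query embeddings
    sims = z_q @ z_s.t()                                            # cosine similarities [n, k]
    return y_support[sims.argmax(dim=-1)]                           # label of the closest support example
```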