video_id | text | start_second | end_second | url | title | thumbnail |
---|---|---|---|---|---|---|
nv6oFDp6rNQ | landscape will actually make it such that it will neither if it starts somewhere it will neither converge to here nor to here it will actually converge to somewhere in the middle okay into the mean of the stored patterns and if we take that to the extreme what could be is it could be that the softmax distribution looks completely uniform okay which would basically mean that | 3,056 | 3,083 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3056s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | you know i don't care where my information comes from just average and this has its applications so if you for example want to make a sentiment classifier a very cheap way to do that is to simply take pre-trained word embeddings like GloVe or word2vec you know assign each word a word embedding and then just average the word embeddings okay and you count on the fact if there | 3,083 | 3,105 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3083s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | are a lot of kind of negative words in there like bad sad angry the word embedding kind of will you know reflect that and the average word embedding will point more into the bad direction and if there's a lot of happy words the average will point into the happy direction okay so there are applications of averaging information not caring particularly where it comes | 3,105 | 3,128 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3105s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | from and um in that case what we'd expect is that this number and we'll call that so we'll call that the number k in this case it equals one but in this case k equals i guess n the number of inputs okay because we need well not maybe n but you know approximately we need almost all of them to uh to reach the 90 percent okay and there there is an in between and these are called these meta stable | 3,128 | 3,162 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3128s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | states where and the in between is something like you'd have a couple of patterns here a couple here and a couple maybe here it's almost like a clustering like and these overlap and these overlap and these overlap but they don't overlap with each other which means that if you start somewhere here you would converge to the mean but not to the mean of all the patterns but | 3,162 | 3,188 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3162s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | just to the mean of these patterns and here here and here here so this this is like a clustering in latent space right so you can interpret these hopfield update rules as somehow you know getting not going to a particular pattern but going to sort of a cluster and this is if you ask something like hey is there any adjective around right and all of these patterns they | 3,188 | 3,211 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3188s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | kind of overlap in that space in that query space of adjective they overlap and therefore the update rule would converge to sort of the mean which would basically say yes there is an adjective here right and the information would not be routed so that the distribution if we start here right and we converge to this the distribution would look something like small small small and then you'd have a | 3,211 | 3,235 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3211s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | couple of large ones right you'd have like maybe two or three or four of large ones and these would exactly correspond to the patterns here so the information will be routed from all of those in that cluster to this particular node that asks the query okay these are these are what's called these meta stable states and what they do is they calculate over the entire data set this number | 3,235 | 3,263 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3235s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | k and here they show you the distribution so in these plots what you'll see is over the entire data set um k goes into that direction so i guess let's go to tis here this this seems pretty easy so k uh is in this direction and this is simply the amount of like how so in each you you let a data point run through it you measure k for that particular layer one you see this is layer one head | 3,263 | 3,295 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3263s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | four okay this is one layer one attention head and then you can see that the number k is distributed like this okay so contrast this to this head right here where it's a lot of weight on the number one or like very few numbers okay so these blue ones would be these are your typical like when you retrieve one particular pattern so this attention head we can sort of | 3,295 | 3,326 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3295s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | conclude in this particular attention head this is very specific it looks at its input it looks at its token and it decides what information do i want and it retrieves one particular thing from the other nodes okay whereas here it's more like kind of an averaging it's more like i want this kind of information and on average i don't even know what the sequence length is here | 3,326 | 3,355 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3326s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | i guess it's maybe 512. uh so of the 512 the median this number is always the median in median it collects information from 231 of them okay so you can see that this corresponds this green and orange ones correspond to these meta-stable states where uh there's kind of an implicit clustering done in the in this space of attention whereas the blue ones they correspond to | 3,355 | 3,386 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3355s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | attention heads that ask for particular information retrieve one particular maybe a few patterns and um happy with that and the red ones here you can see that they often just average they just you know because k is so high means that i need all of the i need all of these bars to get to the 90 or i need almost all of them which basically means it's a uniform distribution right so | 3,386 | 3,413 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3386s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | it's like i don't care where information comes from just average whatever average i just want the average you know some particular uh space and as we said that also has its uses interesting how this translates through so this here is as we go down the BERT model on the bottom you have layer one you see there are a lot of these averaging operations going on so a lot | 3,413 | 3,437 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3413s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | of the heads are simply doing averaging and as you go up the layers the heads get more and more specific in the types of information they seek but then again in the last layers interestingly you get into a lot of these meta stable states again which i guess again interpret this as you as you want i'm going to leave this up to you but it sort of says like here you want | 3,437 | 3,463 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3437s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | kind of general patterns at the bottom and then the middle layers are kind of the logical workhorses so you look for very specific things in the input this is i guess this is where i guess this is where the thinking happens um so this is sort of pre-processing i'm just making stuff up here by the way this is this must be in no way true this is maybe thinking and | 3,463 | 3,492 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3463s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | this this here this might already be output again because you know after that you have language modeling or classification so this might already be like aggregating uh types of information this is how i sort of interpret it okay uh yeah so so this these these experiments are pretty pretty pretty interesting and here they have they do these are the last experiments for this paper | 3,492 | 3,521 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3492s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | um they do an interesting experiment where they actually replace the attention heads by simply an average mechanism and later they actually replace them by gaussians but in this case they simply average and they show that look if i replace layer one with just averaging the perplexity doesn't rise that much right so it's pretty good um even if i replace an entire layer here with | 3,521 | 3,547 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3521s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | averaging uh it it perplexity goes more up and you can see the corresponds if you remember the previous plot the correspondence is pretty one to one with how much blue and green uh heads there are as in contrast to how much red uh and orange ones there are so here you have lots of blue ones and you can see that the error kind of goes up and interestingly here you | 3,547 | 3,576 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3547s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | have more meta-stable states at the end but still the perplexity goes up uh more so i guess you can only really replace the red ones with the averaging so this is always averaging in one particular layer and they go into more detail here where they say look this is this is layer six and this is layer 12. so this is one particular attention head from layer 6 and | 3,576 | 3,603 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3576s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | layer 12 and the updates don't be confused it goes in this direction okay i was confused at first and you can see right here this number k at first you know it's kind of spread out but then it pretty quickly converges to a very small number and there is this kind of point right here i don't know if the learning rate's decreased i don't think so i think that's just kind of a | 3,603 | 3,626 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3603s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | a phase transition right here this is the blue line by the way the blue training line a phase transition where all of a sudden these just these attention heads they somehow decide okay this is the thing i want to specialize in this is the type of task i want like a sub task a linguistic subtask i want to specialize in and then they concentrate on one particular pattern per input so | 3,626 | 3,649 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3626s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | they are really specializing whereas in the last layer you see here that even during training they are sort of continuously learning so first they also do this averaging then they go into this meta-stable region right this is this meta-stable region k isn't one but also k isn't a very high number um so they continuously learn and it's even indicative of this | 3,649 | 3,678 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3649s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | training might not be done here first of all and second of all it would be really interesting to see how this works out with you know sizes of transformers and like especially these these huge transformers just the fact that they can keep learning the more we train them might be you know be interpreted in the light of what kind of states they converge to and the fact that there are attention | 3,678 | 3,703 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3678s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | heads i don't know how does this go on do they stay in the meta stable states because it makes sense to have metastable states as i said it makes sense to kind of cluster things or are they simply is this simply an intermediate step and if you go really far down they would actually also converge to the k equals one where they really specialize or maybe do we need | 3,703 | 3,727 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3703s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | more attention heads for this i don't know it it's just i think this is just the the beginning of kind of research in this direction i think just this kind of number k um how it's how it's made it's pretty simple and apparently it's pretty pretty revealing so you know that's pretty cool so that was the paper uh and its experiments it's it's a pretty sizable paper as i said even the paper | 3,727 | 3,755 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3727s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | itself is uh 10 pages and then there is this immune repertoire classification which uh i will like spend one minute looking at it so you have you have these set classifications so for each human you obtain a set of immune receptors and you simply obtain one label whether that human is immune to a particular disease or not and your task is kind and then a different | 3,755 | 3,780 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3755s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | human has a different set you have no idea which one of these things is responsible for it being for the human being um for the human being immune or not in fact there is a it you can't even decide based on these you can only decide based on like sub sequences uh of these and they might be in combination with each other so there might not be a single one responsible | 3,780 | 3,805 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3780s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | but like a combination but you don't have labels for the individual ones and you have different ones per human and they are different lengths all of this is just a giant um giant task and you have many of them you have tens of thousands per human right so they build a system here where first they do these 1d convolutions to process the inside sequences and then they do this | 3,805 | 3,830 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3805s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | hopfield um attention mechanism all with with learned queries uh over these things and then they train on the output label and surprisingly uh that actually works even with tens of thousands of inside sequences and only one label for all of them and so they they achieve i guess uh favorable results compared to other baselines on this task using these hopfield network which is pretty | 3,830 | 3,860 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3830s | Hopfield Networks is All You Need (Paper Explained) | |
nv6oFDp6rNQ | interesting but i'll let you look at that paper yourself so i hope this somehow uh made it a bit clear what happens here and it would actually be pretty interesting um if we you know to see what happens if we just do maybe two rounds of these updates is this even desirable right uh is it desirable to run this to convergence is there something good about not running into convergence or does it | 3,860 | 3,890 | https://www.youtube.com/watch?v=nv6oFDp6rNQ&t=3860s | Hopfield Networks is All You Need (Paper Explained) | |
ByeRnmHJ-uk | thank you for the introduction Riya we're excited to be giving this tutorial on meta learning throughout the tutorial we encourage you to be thinking about questions that you might have about the content that we're presenting and as you go through if you have a question you may come up to one of the four microphones but also if you want to also ask a question from from your own seat | 0 | 25 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=0s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | without having to get up we have a link for that will allow you to post questions and to also upload other questions and we'll be monitoring that and also asking questions throughout from that link so that's slide okama sauce meta but we'll also have the link on the future slides also we posted all of a pdf version of these slides at tinyurl.com sasha ICML - meta slides and | 25 | 48 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=25s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | so if you don't want to be putting your phone up to take pictures of the slides and you can look at the slides there as well great okay I mean additionally we'll also be taking questions at the break and at the end of the tutorial so let's get started so a lot of the motivation for meta learning comes from being able to learn from small amounts of data and in particular what we've | 48 | 72 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=48s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | seen is that meta learning thrives with large data sets if there's one thing to take away from the last few years of machine learning research I think it's that large diverse data sets plus large models leads to broad generalization we've seen this time and time again from systems trained on imagenet to transformer models trained on large machine translation systems to GPT-2 trained | 72 | 94 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=72s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | for large-scale language modeling and all this falls under the paradigm of deep supervised learning but what if you don't have a large data set what if you're in domains such as medical imaging or robotics or translation of rare languages or recommendation systems in each of these situations we don't have a large data set for every possible task every possible switch situation or | 94 | 116 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=94s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | every possible person we want to personalize our machine learning system to or what if you want a more general-purpose AI system that can do many different things that you want to be able to continuously adapt and learn on the job if so it's impractical to learn each and everything from scratch for doing this and so instead we want to be able to very quickly learn new things based on | 116 | 135 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=116s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | our previous experience and finally what if your data has a long tail for example what if the number of data points starts going down significantly as you encounter more objects or as you interact with new people here new words and encountering new driving situations in these situations your standard machine learning systems will do well in in the kind of the big data regime but | 135 | 157 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=135s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | as as you move towards having fewer examples these systems will start to break down they're prettier it not only in the long tail something but in all three of these situations these settings start to break the standard supervised learning paradigm ok so what I'd like to try out next is is it actually give you guys a test so supervised learning breaks down here but actually humans are | 157 | 181 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=157s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | pretty good at these situations and I want to give you a few-shot learning test and your goal is I'll give you six training data points which are shown on the left the three on the the first column are from the painter Braque and the the middle three are from Cézanne and your goal is to be able to classify the painter for the for the paintings shown on the right who painted that | 181 | 202 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=181s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | painting and this is a few-shot learning problem because you only get six data points in order to do this so you get six labeled data points for this binary classification problem okay so raise your hand if you think that the painting on the right was painted by Cézanne okay and raise your hand if you think that the painter the painting was drawn by Braque okay great | 202 | 227 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=202s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | so most of you got the right answer so this is indeed by Braque and so and in the way that you could recognize this is that there's kind of some some more straight lines and more kind of high contrast lines in the painting and so how did you accomplish this so this sort of thing learning from only six examples would have to be extremely hard for a lot of modern machine | 227 | 248 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=227s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | learning systems and yet all of you eight guys were able to do it or most of you guys were able to do it quite well so the way that you were able to accomplish this was because you have previous experience you weren't trying to learn from these six examples from scratch and many of you probably haven't seen these particular paintings before or maybe you haven't even | 248 | 265 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=248s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | seen paintings from these particular artists before but you have experienced different shapes different textures you've probably seen other paintings before and due to that previous experience you're able to figure out how to solve this task from only six examples okay so now how about we get a machine learning system to solve this task depending on what era you're in you | 265 | 285 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=265s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | would probably answer it differently you might try to model the image formation process you might try to model the geometry of different objects in the image if you were using slightly more sophisticated techniques you might use something like histah features or Haga features with a support vector machine or more recently maybe you try to fine tune from imagenet features or try to do | 285 | 304 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=285s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | domain adaptation from other painters for example and and maybe in the future we'll be doing something even more sophisticated so these different approaches may seem very distinct in that kind of the approach that they're taking but they all share one thing in common which is all of them are different ways to inject previous knowledge or previous experience into | 304 | 323 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=304s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | the system and as you move down these these prior knowledge you get few engineered human engineered priors and more data-driven priors and also as you move down you get systems that work undoubtedly better and so in this tutorial we want to try to take this one step further and in particular we want to be able to learn priors explicitly from previous experience that lead to | 323 | 348 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=323s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | efficient downstream learning an entirely data Duren approach to acquiring these priors that is can we how these systems learn how to learn to solve tasks and this is what is known as meta learning in the rest of this tutorial Sergey will first talk about the problem statement and overview the general meta learning problem then we'll be talking about different meta learning algorithms | 348 | 368 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=348s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | ranging from black box out updation approaches to autumn tape optimization based approaches to nonparametric methods then we'll discuss how we can develop bayesian variants of each of these methods then we'll talk about how meta learning has a plot been applied to different application areas we'll take a short five-minute break and also allows additional questions Sergey will then | 368 | 389 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=368s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | talk about meta reinforcement learning and we'll conclude by discussing challenges on frontiers ok next circuit will be talking about the problem statement and overview chelsey and those of you that were trying to find the slides we did actually there was somebody who actually posted the link again if you go to the thing on the slide here the slide Oh / meta the first question is actually a | 389 | 410 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=389s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | link to the slide deck so if you want the slide deck please check that out there alright so let's start with a discussion of how we can actually formulate the meta learning problem and there are really kind of two distinct viewpoints on meta learning there's kind of a mechanistic view and a probabilistic view let me explain what I mean by these the mechanistic view looks | 410 | 431 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=410s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | at meta learning as a setting where there's a deep neural network model that can read in an entire data set and then make predictions for new data points training this network use a metadata set which itself consists of many data sets each for a different task and this view of meta learning makes it easier to implement meta learning algorithms so if you're actually coding something up in | 431 | 452 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=431s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | tensorflow or PI torch the mechanistic view is probably the one that makes it clearest the probabilistic view treats meta learning as the problem of extracting prior information from a set of meta training tasks that allows for efficient learning of new tasks this view says that learning a new task basically used this prior plus a small amount of training data to infer the | 452 | 475 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=452s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | most likely posterior parameters that will allow you to solve this task and this view of meta learning makes it easier to understand meta learning algorithms these are not two views that result in different methods they're actually two viewpoints that can be taken to understand the same methods so in this part of the tutorial I'll actually focus on the second view on the | 475 | 493 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=475s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | probabilistic view because our aim is really to help everybody to understand meta learning algorithms but we'll see the more mechanistic view emerge when we talk about particular practical instantiation of these methods okay so just to work towards a problem definition for meta learning let's first start with a problem definition for regular supervised learning and cast it | 493 | 512 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=493s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | in a probabilistic framework so a lot of what I'm gonna say some of you might have already seen may be in a course on machine learning or a textbook but I just want to walk through it step by step because the meta learning problem definition will build on this so if we're doing supervised learning what we're really doing is we're finding the most likely parameters Phi given our | 512 | 530 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=512s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | data D so Phi denotes the parameters of your model so if your training for example a deep neural network model Phi literally refers to the weights the D refers to your training data so it's a set of tuples of input-output pairs where the input might be something like an image and the output is maybe the label corresponding to the class of the object in that image now when we actually want | 530 | 554 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=530s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | to do this kind of maximum likelihood estimation problem we typically apply Bayes rule and rewrite it as the sum of log P of D given Phi plus log P of Phi and the first term is typically referred to as the likelihood of your data and the second term is the prior or the regularizer so if you're using weight decay that corresponds to a Gaussian prior for example and if we factorize | 554 | 576 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=554s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | the likelihood if we assume independent and identically distributed data points then we get the familiar form shown here it's a sum over all of your data points of the log probability of the label Y I given the input X I and your parameters Phi so this is essentially supervised learning now of course there are some things that are a little bit problematic about this as Chelsea alluded to in the | 576 | 597 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=576s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | motivation the models that will work best the most powerful models will typically require a large amounts of data so if your data is very limited it might be very difficult to get a very accurate posterior or very accurate estimate of Phi so the problem at its core that we're going to be dealing with a meta learning is how do you do a good job of estimating Phi when your data is | 597 | 617 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=597s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | limited and the way you're going to do that is by incorporating more data that is not exactly for the tasks that you want but somehow structurally related so the question is how can we incorporate additional data and I should say as an aside this is very much the same kind of challenge that things like semi supervised learning and unsupervised learning deal with so in semi-supervised | 617 | 636 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=617s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | learning you incorporate additional data that doesn't have y's and so forth in meta learning you incorporate additional data that we're going to call D meta-train which is labeled data it's just labeled data for different tasks so D meta-train is actually a data set of data sets so it's a set of data sets d1 through DN where each of those data sets di itself consists of a | 636 | 660 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=636s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | set of tuples X i and Y i where those Y's are labels for a different task so we assume the tasks are somehow structurally similar but not actually the same so you can't just directly incorporate D meta-train as training data for supervised learning let me give a little example this is based on a popular benchmark for meta learning called the mini-ImageNet data set | 660 | 681 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=660s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | let's say that your few shot classification task requires you to do five-way classification between cats dogs lions worms and stacks of bowls I don't know why you would want to do this task but let's say this is a few shot tasks and you have only a few examples of each image now those few examples are not enough to solve the tasks by themselves so we're gonna use | 681 | 699 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=681s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | the meta train which is a collection of data sets for other five-way classification problems so maybe one of them classifies you know birds mushrooms dogs singers and pianos for instance different tasks but some structural similarity because all the more visual recognition tasks another example maybe the tasks you want to solve is a few shot regression problem you have a few | 699 | 719 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=699s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | examples of input-output pairs but then your meta training tasks consist of other curve fitting problems so other curves with a few sample input-output pairs or maybe you have some kind of speech recognition task or some kind of language translation task and so on so in all these cases you can formulate a set of meta training tasks that are not the same as the tasks you want to solve | 719 | 740 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=719s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | but somehow structurally similar and typically these would come from your prior data now we could simply stop right there and treat meta learning as a nonparametric learning problem so you want to basically use D and D meta train together but oftentimes we want to use high capacity models we don't want to store all of our data and keep it around forever we'd like to somehow distill it | 740 | 761 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=740s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | into a model into a parametric model with learn model parameters so in meta learning we don't want to keep D meta train around forever what we'd like to do instead is learn some meta parameters theta so we're going to use D meta train to learn theta and theta will basically contain all the information that we need for solving new tasks that we've extracted from D meta train so whatever | 761 | 782 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=761s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | we need to know about D meta train is going to be baked into theta via a meta learning process and that's essentially the essence of the meta learning problem now if we want to treat this probabilistically what we can do now is we can say well we can write out our p of phi given d common Mehta trained as an equation where we're integrating out the these sufficient | 782 | 802 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=782s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | statistics of our meta training data called theta and this implies the assumption that Phi is conditionally independent of D meta train given theta which is very reasonable because we just said the theta should be whatever extracts all the necessary sufficient statistics from D meta train now in reality integrating out theta is computationally very very expensive so | 802 | 823 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=802s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | we wouldn't want to do this what we would want to do in practice typically is use a maximum a posteriori estimate which is which means that we're going to approximate this integral with just a point estimate for theta star where theta star is whatever actually maximizes the log probability of theta given D meta train which again is a very standard thing to do in machine learning | 823 | 843 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=823s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | so this Arg max that I have written on the right side here this is the meta learning problem the meta learning problem is to pull out the right theta from your meta training data so that that theta contains everything you need to know to efficiently solve new tasks and efficiently solve new tasks means figure out Phi so once you have theta the problem of getting Phi can be | 843 | 866 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=843s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | written as the Arg max of log P Phi given D comma theta star because you don't need the D mail train anymore that's all been baked into theta star okay so that's the basic problem formulation if anybody has any questions feel free to come up to the microphones and ask me otherwise I'm going to move on to a simple example yeah that's an excellent question so meta learning is | 866 | 893 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=866s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | conceptually quite related to a number of other problem settings including transfer learning multitask learning you know even things like semi-supervised learning in that all of these problem settings deal with incorporating additional data that is not quite from your task but is going to help you solve your task more efficiently the main difference is that meta | 893 | 911 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=893s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | learning deals with a setting where you still have to do some amount of adaptation on your new task transfer learning formulated in a certain way can be viewed as a type of meta learning as well I'll describe related problem settings a little bit more at the end of this section and maybe then things will be a little clearer okay let's continue so let's work through a little example | 911 | 935 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=911s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | of how we can design a cartoony version of a meta learning algorithm Chelsea will talk about much more practical algorithms this is just meant to be an illustration so first let's talk about the the adaptation let's say that we already have this theta star we don't care how we learn it and now we just want to classify new data points using a small data set D so classifying new data | 935 | 954 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=935s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | points means your test data point X test goes in your label Y test comes out and this function that does this is somehow parameterized by theta star so theta star determines the mapping between X test and Y test where does theta star come from well it comes from using your data set D which might be a small data set for your new task together with your theta star | 954 | 976 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=954s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | so D is going to be read in by some function and that function that reads in D is parametrized by theta star sorry about that so that function is parametrized by theta star now you would also like to of course be able to learn this theta star using large amounts of meta training data which I'll come to in a second but if you can somehow use that meta training data to get theta star then | 976 | 1,003 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=976s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | that will process your data set and output Phi star and allow you to turn your test inputs into test labels so theta star is what parametrizes this function okay so now how do we actually train this thing well as I alluded to before it's going to involve this meta training data and the key idea behind setting up meta learning algorithms I think is best summarized by this sentence from a paper | 1,003 | 1,024 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1003s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | by Vinyals et al. called matching networks which says that our training procedure is based on a simple machine learning principle which is that test and train conditions must match now let's unpack this a little bit what are the test conditions well test here refers to meta test right so meta test time is adaptation the test condition is that a model parametrized by theta star | 1,024 | 1,046 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1024s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | reads in d outputs Phi star and Phi star is then used to classify new data points for your task so the training time conditions need to match so at meta training time you also need to have a model that reads in a data set which data set well a data set di from your meta training set it is going to be parametrized by theta it's going to output Phi star and that Phi star needs | 1,046 | 1,071 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1046s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | to be good for classifying points which points well that's maybe the puzzle so what is it that we're actually going to classify here what we need to do in order to complete the meta learning problem definition is we need to reserve a little test set for each task so it's not enough to just have a training set the training set is what the model needs to read in but then needs to be trained | 1,071 | 1,093 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1071s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | on something and what it's actually going to be trained on is a little test set for each task so for every one of our few shot tasks we're gonna assume that we have K training points but then also some number L of little test points and those test points are what's going to supervise the meta learning they are not used for adaptation they're just used for meta learning so D test is | 1,093 | 1,115 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1093s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | where X tests and Y tests will be sampled from so the game that you're playing then is read in D train output Phi and make sure that Phi is good for classifying points from D test for that same task so now we can actually complete the meta learning problem definition so the adaptation step we can write more compactly as some function f theta star of D train so f theta star | 1,115 | 1,141 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1115s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | reads in D train and outputs Phi star so now all we have to do is learn theta such that Phi equals F theta D train is good for D test so for every task I you want to read in D train I and be good for D test I which means that we can write down the meta learning problem formulation like this theta star is the Arg max for the sum over all of your tasks of log P Phi I given D test I where Phi I | 1,141 | 1,170 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1141s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | is equal to F theta applied to D train so notice that we get Phi from D train but we meta train on D test we can also represent this with a graphical model so if you're into graphical models here's a graphical model that represents this relationship so you have theta which are your global meta learned parameters for every task you have a Phi i and X train together with Phi i | 1,141 | 1,170 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1141s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | determines Y train and X test together with Phi i determines Y test and Y test is observed during meta training but not observed during meta testing that's why it's kind of half shaded there okay so this basically defines the meta learning problem but let's kind of round out this explanation with a little bit of an overview of terminology because we'll see a bunch of this terminology | 1,170 | 1,195 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1170s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | pop up again again during the tutorial so we make a distinction between meta training meta testing training and testing so you're learning the parameters theta during a meta training phase that meta training phase trains on a collection of data sets each of which is separate into a training set and a test set so when we say training set we mean that small few shots set for a | 1,219 | 1,244 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1219s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | particular task when we say test set we mean the corresponding test images when we say meta training we mean the whole set of tasks so meta training consists of multiple tasks each one with a training set of a test set and meta testing is what happens once you're done meta training and you want to adapt to a training set for a new task so the set of day of data sets is called D meta | 1,244 | 1,267 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1244s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | train that's what we refer to it as these are meta training tasks we're gonna use Ti to refer to meta training tasks so these are all of our Ti's this is our meta test tasks I'm sorry and then sometimes you hear people say support set and query set so support is basically synonymous with training set and sometimes people use support set just to avoid the | 1,267 | 1,296 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1267s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | confusion between meta training and training so if someone says support they mean the inner training set and when someone says query they're referring to the test the the inner test not the meta test just the test so the query is the thing that you actually want to classify correctly after reading in the support and if someone says like oh I have a k-shot classification problem what | 1,296 | 1,320 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1296s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | they're referring to is the number of examples if someone says I have a five-way classification problem they're referring to the number of classes so if you say have a five-shot five-way classification problem that means I have five classes each of which has five examples there's a little bit of confusion about the word shot sometimes it means the number of images per class and sometimes it means | 1,320 | 1,339 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1320s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | the total number of images usually we'll use it to mean the number of images per class so five shot five way means twenty-five data points okay now just to wrap up a few closely related problem settings that are good to be aware of and this is coming back to that question about transfer learning from before so meta learning is closely related to a few other things that we can actually | 1,339 | 1,359 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1339s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | cast as you know within the same terminology so multitask learning deals with the problem of learning a model with parameters theta star that immediately solve multiple tasks so you can think of multitask learning as sort of zero shot meta learning so that corresponds to defining parameters that immediately solve all the tasks at the same time this is usually not possible | 1,359 | 1,379 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1359s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | in meta learning problems you can't have one model that classifies you know that does the five way classification with dogs and lions and also with you know the pianos and the cats but you can view multitask learning as a special case where Phi is just equal to theta another very closely related problem setting is hyperparameter optimization and AutoML these can be | 1,379 | 1,401 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1379s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | cast as meta learning they're actually they are essentially meta learning problems they're outside of the scope of this tutorial but I'll just mention briefly how they can be related so in hyper parameter optimization you can say that theta refers to your hyper parameters that's what you're going to get out of your meta training set and Phi is the network weights so you'll | 1,401 | 1,416 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1401s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | learn your hyper parameters from D meta-train and then you'll use them to get Phi architecture search same deal theta refers to the parameters of your architecture and Phi is the actual weights in the model this is a very active area of research unfortunately outside the scope of this tutorial but hopefully this will tell you a little bit about how they relate | 1,416 | 1,433 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1416s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | okay and next Chelsea will discuss a number of actual meta learning algorithms that we can use based on this problem setting oh yes and we'd be happy to take any questions right now - yeah so one question from can you elaborate more on the structural similarity that's required between the meta training tasks yeah so in regard to the structural similarity between the meta training | 1,433 | 1,455 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1433s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | tasks this is we can actually make that notion formal the way we make it formal is we say there's a distribution over tasks there's a distribution p of task and you assume that all of your meta training tasks are drawn from that distribution and you assume that all of your meta test tasks are drawn from the same distribution so this is the meta learning analog of the standard | 1,455 | 1,473 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1455s | Learning to learn: An Introduction to Meta Learning | |
ByeRnmHJ-uk | supervised learning assumption now what does a distribution over tasks really mean well that's sometimes ends up being a much more subjective notion if you have a piece of code that generates your tasks and you can say well these it needs to be generated by the same code but of course in reality those tasks are probably produced by nature and there it becomes a much fuzzier line so Chelsea | 1,473 | 1,491 | https://www.youtube.com/watch?v=ByeRnmHJ-uk&t=1473s | Learning to learn: An Introduction to Meta Learning |