Columns: video_id (string, length 11) · text (string, length 361–490) · start_second (int64, 0–11.3k) · end_second (int64, 18–11.3k) · url (string, length 48–52) · title (string, length 0–100) · thumbnail (string, length 0–52)
BnpB3GrpsfM
that don't, you know, have anywhere near the robustness or generality of humans. You're going to need a lot of data covering everything, but at least it'll be unsupervised and so we have it available, and you're going to need, unfortunately, at least to grind the state of the art up a little more, a fair amount of compute with which to learn them. But again, that may produce a model
7,921
7,938
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7921s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
that's actually quite small and efficient to run at test time, and I think that's one of the hopeful directions this is going in: you, you know, train these big models, and Google or Facebook or OpenAI, you know, burns the GPU-years to get that model, but then you're able to distill it and prune it and release it, and then it can still run on your own laptop or
7,938
7,959
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7938s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
on, you know, a single GPU. And that means that the downstream tasks that you may want to investigate or, you know, build models on are much more efficient, because you've amortized all this compute that went into pre-training and now you're able to, you know, use that during the fine-tuning. So it may actually be that, like, BERT, you know, though it took a ton of compute
7,959
7,976
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7959s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
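The distill-and-prune deployment idea mentioned above can be sketched with a standard knowledge-distillation loss: a small student is trained to match the softened output distribution of a frozen large teacher. This is a minimal PyTorch sketch of the general technique, not the pipeline used for any particular released model; `teacher`, `student`, and the training loop are placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Match the student's softened distribution to the teacher's with KL divergence."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2, as in common distillation recipes, to keep gradient magnitudes comparable.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t * t)

# Hypothetical training step: `teacher` is the large pre-trained model (kept frozen),
# `student` is the small model you actually want to ship.
# with torch.no_grad():
#     teacher_logits = teacher(inputs)
# loss = distillation_loss(student(inputs), teacher_logits)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```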
BnpB3GrpsfM
to train, BERT and RoBERTa may actually have reduced the overall volume of compute needed to achieve a given level of result, and may actually widen the range of useful tasks that can be tackled in the field, because it can transfer and, you know, be beneficial to everything downstream. And, you know, I think it's very reasonable that some people in the field kind of
7,976
7,996
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7976s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
look at all this coming together and are like, well, you know, I don't find that satisfying, and I think that's a valid view. And so, you know, maybe backing up and working towards, you know, more grounded learning — there's lots of really interesting work in this space now of, you know, moving towards reinforcement learning and grounded learning with, you know, multimodal agents
7,996
8,015
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7996s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and all this kind of stuff that connects to, you know, more of what feels like, you know, true learning about the world, instead of just seeing abstract bits of text. I think that's a very valid approach, but right now, you know, we've just been seeing that it's been driving a good chunk of empirical progress over the last few years. You know, there's a whole other set of
8,015
8,033
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8015s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
methods here, and that's multitask learning, and I think that's actually been showing a lot of promise. When I made this slide last year I think I was a little bit more pessimistic on it, and there's actually been a lot of good work, like MT-DNN and others, that's been making progress on this set of methods, but it still kind of relies on us building a dataset
8,033
8,050
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8033s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
so for multitask learning you train on a bunch of different tasks together and you kind of hope that you get transfer naturally between them, but often they're all supervised tasks. And T5 is a good paper that actually really talks through the nuances of multitask learning versus generative pre-training, and one of the surprising things they show is that when you do it well and you
8,050
8,067
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8050s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
kind of exactly emulate the pre-train-and-fine-tune framework — if you do multitask pre-training followed by supervised fine-tuning, you still need the unsupervised objective, like masked language modeling and so on — but you can get rid of, or at least find very similar performance in many cases compared to having to do, the giant pre-training on
8,067
8,087
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8067s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
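The "masked language modeling" objective referred to above amounts to hiding a fraction of tokens and training the model to reconstruct them. The sketch below follows the commonly used BERT-style recipe (15% masking rate, 80/10/10 replacement split); those numbers are the usual defaults, not figures from this talk.

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """Return (corrupted inputs, labels) for a masked language modeling step."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    masked = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~masked] = -100  # loss is only computed on the masked positions
    # Of the masked positions: 80% become [MASK], 10% a random token, 10% stay unchanged.
    replace = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked
    input_ids[replace] = mask_token_id
    randomize = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & masked & ~replace
    input_ids[randomize] = torch.randint(vocab_size, input_ids.shape)[randomize]
    return input_ids, labels
```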
BnpB3GrpsfM
you know, the full internet, for instance. So there's still room left in actually improving these methods. And the final one here, just to chat a little bit about, is some of the follow-up work we did at OpenAI on GPT-2, and, you know, a lot of what I've been chatting through here is the motivation that went into this project. So we collected more data
8,087
8,105
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8087s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
compared to GPT-1, and we collected much more diverse and heterogeneous data, so we were hoping that we'd have models that would generalize better and see a much broader set of tasks. So it's 40 gigabytes of text, 10 billion tokens, 8 million webpages. We scaled up the models just because we kind of saw those trend lines, and, you know, I think there's a lot of reasonable arguments for why
8,105
8,124
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8105s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
you just need bigger models to handle complex tasks. And it's just a language model which predicts everything — so admittedly it's still a left-to-right autoregressive model, so it has some drawbacks compared to things like BERT — but it's just a language model. And so what we focused on in this case was purely how well this model could do across, you know, many different tasks in
8,124
8,142
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8124s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
a zero-shot setting. So we never fine-tuned it, because, you know, supervised learning is tricky and it learns to exploit spurious correlations and dependencies. So we're only ever saying, well, you did all your pre-training work and we had you predict a bunch of words — how well can you handle this new data distribution? Not one you've never seen before, I mean, really, you
8,142
8,159
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8142s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
know, we trained on a lot of data, so we actually see a bit of a lot of data distributions. You're not letting it, like, specifically learn specific tasks from specific labels; we're just saying, run it and see what it can do. And we show that it actually begins to do something, particularly as you scale the model, across a wide range of canonical NLP tasks. So it's purely
8,159
8,176
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8159s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
unsupervised — there's no, you know, direct human labeling or supervision going on here — but this model can actually, you know, you can feed it a paragraph and then ask it a question, and the language model can give the right answer sometimes. Often it's just matching kind of old baselines, and it still has a huge gap to, you know, human
8,176
8,193
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8176s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
performance, but I feel like this is a much better measure of what the, like, underlying performance of these systems might be, because we're not doing supervised training here. And, you know, unsurprisingly, our models are still worse than people, so that kind of shows up here, but it also shows a promising trend line, where in some cases — like, there are very domain-specific
8,193
8,211
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8193s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
algorithms for unsupervised translation — admittedly it's been a year, so that point should be up here now; there's, like, some great follow-up work from FAIR that's pushing unsupervised NMT farther — but this is just a language model with no real customization, and we're just seeing that it begins to do translation between, you know, English and French. You know, you can tack a TL;DR on
8,211
8,231
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8211s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the end of a document and get something like summarization. It's pretty garbage on the official metrics, because it's only barely matching "read three random sentences from the article", but qualitatively, if you ask people which they prefer, it looks a lot better than these numbers show, because this is a kind of very coarse evaluation metric. And then the final
8,231
8,252
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8231s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
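The "tack a TL;DR on the end" trick described above is just zero-shot prompting of a plain language model. Here is an illustrative sketch using the publicly available GPT-2 checkpoint via Hugging Face `transformers`; the prompt formats mirror the summarization and translation setups the talk describes, while the decoding settings and the toy article text are arbitrary choices for the example.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def complete(prompt, max_new_tokens=60):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=True,
                         top_k=40, pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

article = "The city council voted on Tuesday to expand the bike lane network downtown..."

# Zero-shot summarization: append the induction token and let the model continue.
print(complete(article + "\nTL;DR:"))

# Zero-shot translation: show a couple of english = french pairs, then a new source sentence.
print(complete("english: hello, how are you? french: bonjour, comment allez-vous ?\n"
               "english: the cat sat on the mat. french:"))
```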
BnpB3GrpsfM
thing here is, like, question answering. So this kind of shows domain knowledge and kind of world knowledge and potentially a lot of factoid information, and on this one we see a really strong scaling curve with model capacity. So, like, how is this working? How does this kind of unsupervised system — it's just a language model — begin to do translation, question answering,
8,252
8,272
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8252s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and reading comprehension? Well, if we go through and inspect the dataset, it turns out there are actually just, like, natural occurrences of tasks, and you're training the model to predict the next words. So there's, you know, an English sentence, and then it happens that, you know, inside the middle of this article that someone wrote there's a training example of
8,272
8,288
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8272s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
English to French. So it's a much more natural way of learning, and when you have very large datasets you just actually begin to have a non-trivial amount of data. And so you see, for translation, for summarization — like, if we just grep through the dataset, how many times does TL;DR appear? Well, there's something like a thousand "training examples", in quotes, here. And how many times does someone ask a
8,288
8,309
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8288s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
who/what/where/when/how/why question? Well, there are six million of those. So we're kind of seeing that these kinds of systems, which, you know, don't make assumptions about any specific task and kind of try to predict everything, really begin to make some progress. I mean, again, one of the areas where we saw this the most is question answering, and open-domain
8,309
8,326
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8309s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
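The "grep through the dataset" check mentioned above is easy to reproduce on any text dump. A rough sketch, where the corpus file name and the exact patterns are placeholders:

```python
import re

patterns = {
    "tl;dr": re.compile(r"tl;dr", re.IGNORECASE),
    "wh_question": re.compile(r"\b(?:who|what|where|when|why|how)\b[^.?!\n]{0,80}\?", re.IGNORECASE),
}

counts = {name: 0 for name in patterns}
with open("webtext_shard.txt", encoding="utf-8") as corpus:  # placeholder corpus file
    for line in corpus:
        for name, pattern in patterns.items():
            counts[name] += len(pattern.findall(line))

print(counts)  # how often naturally occurring "training examples" show up in the raw text
```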
BnpB3GrpsfM
question answering, where you're just asking, like, what is the capital of France, or in what year was Star Wars released. And, you know, I think this kind of gives you a very clear picture of why supervised learning with, like, the standard approach is just never really going to be able to solve this kind of task. So on the x-axis we have the number of training examples seen,
8,326
8,345
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8326s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and again this is log scale. And yeah, if you start with a randomly initialized model, there's no way it's going to be able to do open-domain question answering — you know, there's no way it can have the information for, you know, what is the capital of France until it's seen that training example, and there's very little generalization there. You just need so much data to approach this
8,345
8,364
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8345s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
from a naive supervised learning approach. Whereas when we have bigger models that have more capacity, you know, in the limit they very quickly begin to do non-trivially well on these datasets, and then they kind of fine-tune and learn how to better extract the information that's somehow contained within the weights to various degrees. So again, this red baseline here is completely randomly
8,364
8,382
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8364s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
initialized, and these are basically random-guessing numbers the entire way through, you know, that dataset's whole 20,000 labeled examples. But as we try bigger and bigger language models we see that they really begin to make progress, and T5, I think, has continued pushing this quite a lot farther, to where they're actually sometimes competitive with only a neural model that's never looking at
8,382
8,400
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8382s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
documents with, like, the actual factoids in them — it, just from its parameters, is able to answer quite competitively on some of these tasks. Yeah, we're pretty much into a conversational period at this point, but, um, you know, some of the takeaways I would kind of draw from this, and from really pushing on language models for a few years: performance is, you know, not usually
8,400
8,421
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8400s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
limited by something a single paper fixes. This is a very long history — you know, I think we probably talked about 25 papers during the trajectory of research here — and usually it's always someone chipping away on one specific axis. You know, diminishing returns basically mean there's always some other bottleneck: so if you scale the compute but not the data, you'll get bottlenecked
8,421
8,439
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8421s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
there; if you scale, you know, the parameter count, you'll just need more compute; or if you scale, you know, the model capacity but don't increase the dataset, it'll just overfit; or you could try to scale via, like, you know, human intuition and use fancier models, but maybe that's just more difficult to train. So I'd tell you that, you know, particularly if you have a
8,439
8,458
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8439s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
little bit more of an engineering mindset here, kind of the pragmatic approach of pushing on all these axes together may allow a larger effect size to show up than pushing on any one in isolation. This is an unfortunate tension, I think, in research and science, where you often want to, you know, microscopically measure effect sizes and run controlled
8,458
8,475
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8458s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
ablations and experiments in isolation, but, you know, if you change a few things together you might actually see more of an outsized effect. Because that's, like, one of the things we typically do: we get more data, get a bigger model, throw everything in together, and try to see if that really pushes toward qualitatively different behavior. Maybe — yeah, I mean, I
8,475
8,494
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8475s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
really could transition into a question period at any point now. You know, there's a little bit more advice at the end, just saying that you don't have to work on large-scale models — particularly, you know, as things like ELECTRA show, you can work on the smaller models and see the same effects showing up. They're not going to have the same accuracy curves, but, you know, we know from scaling laws
8,494
8,512
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8494s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and kind of all those trend lines that if you start seeing an effect that's robust at small scale, probably — fingers crossed — it'll also hold at larger scale. So you can do a lot more rapid development, and, you know, this I think works quite well: you should try ten times as many models that are just ten times smaller each, and, you know, that way
8,512
8,533
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8512s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
you can run 10 times as many experiments in parallel. This is still, you know, a large research field, so there are a lot of things you've got to try. And, you know, basically all the behaviors in a paper like GPT-2 — which I feel like gets pointed to as the canonical big-compute, big-data kind of thing — still show up on models you can train on a single desktop. It, you know, takes a week or two to
8,533
8,553
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8533s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
see the hints of that, admittedly, but, you know, GPT-2 small you can train quite well in about a week on, like, a four-GPU setup. And then after you get proofs of concept on, like, your algorithm or your idea, then you can scale up, if you have the compute resources or, you know, you can get allotted enough time on the cluster or the like. And it's kind of the same strategy we used back in
8,553
8,573
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8553s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the day with, like, the sentiment unit, where the initial proofs of concept were 512-dimensional LSTMs that took a day or two on standard hardware, and then, you know, for the final version we kicked off a big run with a model that took 16 times the compute. And, you know, how do you not go insane when you wait for a model to train for a month? Well, we like to do this thing at OpenAI
8,573
8,593
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8573s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
where you boot your big model up before you go on vacation. So you kick it off before winter break and you just let it train over the break, and fortunately you're away from the machine the whole time. But don't stare at that graph every day — you won't make nearly as much progress if you're just staring every day at that number — but often models surprise you when you
8,593
8,611
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8593s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
give them more time to learn. So, you know, when you're really trying to push that result at the end, it's a really good idea to try that if it's available as an option. Yeah, and again, one of the other surprising things about this field has been how far we've gotten, where often the developers of one paper or model architecture may have just pushed on log probability or
8,611
8,630
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8611s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the, you know, type 1 evals, and then someone else coming along in another paper showed, oh, this thing's actually great at type 2 evals. So I think that's, you know, really reassuring, and, you know, I'd say that you could work on one or the other in isolation, and often you see things that robustly scale or contribute on both sides. You know, there are some gotchas, as always, with
8,630
8,649
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8630s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
scaling — you know, things break; at some point you can't extrapolate too far and things just change, so you've got to watch out a bit for that. You know, for a model like GPT-2, Rewon, one of my collaborators — we were originally trying to train these deeper, bigger models and they just weren't working better, and we had to fix an initialization technique, and Rewon came
8,649
8,671
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8649s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
up with this and it helped, you know, continue scaling. So when you see your scaling not happening in the way you'd expect, or the way the trend lines kind of suggest, it's also a sign that something's wrong and you need to, like, tweak it or fine-tune it or, you know, actually do the clever work — I don't do much of that myself — to fix it up and try to keep making progress. Yeah, and then the other
8,671
8,692
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8671s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
thing is just, like, writing efficient and smart code. These days, luckily, hardware is improving at the same price point, so with things like FP16 half-precision compute, if you switch over to that — an example being GPT-1: the original version took 25 days in FP32 on one-generation-older hardware, and then on the next-generation hardware, where, you know, a lot of people did a
8,692
8,715
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8692s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
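The FP16 half-precision speedup described here is roughly what automatic mixed precision gives you in modern frameworks. A minimal PyTorch sketch of the technique, not the actual training code behind GPT-1 or GPT-2; the `model(**batch).loss` convention is an assumption about the model's interface.

```python
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, batch, optimizer):
    optimizer.zero_grad()
    # Run the forward pass in half precision where it is numerically safe.
    with torch.cuda.amp.autocast():
        loss = model(**batch).loss
    # Scale the loss so FP16 gradients don't underflow, then unscale before the optimizer step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```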
BnpB3GrpsfM
great job optimizing this — Scott Gray in particular, who's an amazing GPU engineer and researcher at OpenAI; we work with block-sparse kernels, and the block-sparse work is basically his — and he was able to optimize these down by almost an order of magnitude on just the next generation's hardware, from a lot of, you know, great improvements across the field. So often, if you write efficient
8,715
8,735
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8715s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
code and use all the right tricks in terms of accelerating your models, you can wring a lot out of the same level of hardware just by being efficient about that. We have a library, the block-sparse library, that can help with that and provides a lot of these ops, and honestly other libraries are also doing a great job of merging these in and providing their own ops, kind of
8,735
8,755
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8735s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
more integrated into these kinds of wrappers, so that's, I think, exciting for the field as well. Yeah, you know, in terms of sweet spots for compute: you know, four-2080 Ti desktops can still do a lot in this space, they just cost a fair amount of money, and then, you know, your standard eight-V100 box on a cloud provider is a medium-scale compute platform. You know, papers like
8,755
8,778
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8755s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
ELECTRA can do a lot with a single V100, and, I mean, a 2080 Ti is basically a cheap V100 for four or five times less. Oh yeah, that's about it, honestly. I think we have about 15 minutes left for questions, and, you know, I have a few more random slides. This is really great, Alec, thank you so much; let's see if people have some questions. Hey Alec, yeah, quick question, so I was
8,778
8,817
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8778s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
wondering if you could give your views on, like, do you see zero-shot language modeling as something that could be production-quality performance over time, or do you think it's always going to be lower than collecting supervised data and fine-tuning some big pre-trained model — just trying to understand, like, the space between GPT and, like, BERT-like models. Yeah, oh yeah,
8,817
8,849
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8817s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
so, you know, right now it is absolutely garbage from a production perspective, like GPT-2 — I mean, well, okay, there are hints of life there; you know, for reading comprehension it's matching some of the original neural supervised baselines, so I'd say there are hints of life there. We're still talking about, you know, needing to do a lot more research, and if you looked at kind of those scaling laws
8,849
8,869
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8849s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
for, like, what, you know, GPT-2 looked like — if you draw those out, there are still quite a lot of orders of magnitude left to go. So from a pragmatic or practical perspective it's not really there right now, and that might be the scary answer, which is, you know, our models do rely on exploiting scale, and, you know, I wouldn't overweight this view, but, you know, it may just be that to
8,869
8,891
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8869s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
actually do these tasks correctly, you do just need, you know, much more compute in something like the zero-shot setting. So I think I see it kind of as, like, working out with, you know, weighted shoes or something — like resistance training. I think it's a fascinating research area to push on, because it does have some of these exciting qualities of, like, maybe
8,891
8,908
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8891s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
representing the, you know, much more difficult and hopefully much truer measure of task performance. But yeah, it still has a long way to go, so I think it's a fascinating research direction, but there's a lot of pushing to be done on that. Thank you. And yeah, I think from a pragmatic perspective, like you said, you know, you really should fine-tune on some supervised data, and you
8,908
8,927
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8908s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
know, like I mentioned, BERT models are still showing quite good robustness out of distribution there. I don't think there's been any good work comparing pure zero-shot learning of a task to, like, supervised fine-tuning of a pre-trained model, but I think we're talking about something that's, like, a few years out at least. Thank you. Thank you. I saw a question here earlier: you motivated
8,927
8,950
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8927s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
LMs by comparing probabilities of pairs of strings against exact knowledge, such as "cat sat" versus "cat sats" — has this intuition of comparing sentences, I guess, with exact knowledge been used for training generative models of text, or anything like that? So maybe this is about a comparative or contrastive method for training generative models, where you compare sentences and know
8,950
8,968
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8950s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
that, like, one should have higher probability than the other. There was one paper from a representation-learning perspective — which is not quite the generative-model side, but representation learning — CPC; you know, that whole family of contrastive methods is dominating, you know, unsupervised learning for image representations. So it's somewhat of a contrast, where in NLP
8,968
8,991
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8968s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
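The "compare probabilities of pairs of strings" idea raised in the question can be done directly by scoring each sentence's total log-probability under a language model. An illustrative sketch with the public GPT-2 checkpoint via Hugging Face `transformers`; the example sentence pair is made up for the demo.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean per-token NLL;
        # multiply by the number of predicted tokens to get the total log-probability.
        mean_nll = model(ids, labels=ids).loss
    return -mean_nll.item() * (ids.shape[1] - 1)

print(sentence_logprob("The cat sat on the mat."))
print(sentence_logprob("The cat sats on the mat."))  # the ungrammatical variant should score lower
```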
BnpB3GrpsfM
we haven't seen it really take off yet. So I think it's a very exciting research direction; the original CPC paper actually had some results that were promising on natural language, but, you know, like the original CPC paper in general, they were exciting but nowhere near state of the art, and a lot of the refinements in the last year or two on the image side really pushed that quite
8,991
9,009
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=8991s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
far — I think you might have had a lecture just on that, or are about to — so it would be very cool to see if someone could do that kind of thing similarly for natural language. But if the question was about kind of exploiting more structured knowledge about, like, differences, and encoding that into the generative model, there is some pretty interesting work on this, particularly from some of the more, like,
9,009
9,031
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9009s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
linguistics-heavy folks in the field, on combining kind of hybrid systems of, you know, neural models with, like, something like grammar constraints or the like. And, you know, I'd say it's primarily focused a little bit more on, you know, the settings where you might expect encoding that inductive bias to help, which is, like, smaller datasets. But, you know, at least personally, my kind of
9,031
9,055
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9031s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
take is that, at least from a pragmatic perspective, a lot of current language modeling benchmarks are, I think, quite artificial, because they work with such small amounts of data, which from a pragmatic perspective just doesn't make sense given all of what could be out there — it's so easy to just write a scraper yourself or download a shard of Common Crawl, and
9,055
9,074
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9055s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
that's more data than you're basically ever going to need to work with or, you know, be able to process. And so I think, at least from a pragmatic perspective, we should really be figuring out how to use the large volumes of data we have. You know, I think it's a very valid other approach to push on data efficiency in isolation — you know, how data-efficient can we get with a limited set of
9,074
9,094
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9074s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
data — but I think it's probably just too far in the extreme when you have, you know, only a million words of training data and benchmarks like that. So, Alec, a follow-up question on that: it seems like one way to learn language is to read the entire internet, right? Another way to learn language is the way I think most people learn language, which is — absolutely — you kind of, I don't know how
9,094
9,125
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9094s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
many words, or how large the dataset would be, that somebody encounters by the time maybe they're six years old and they can speak pretty well — do you have any notion of kind of how much data is required in that context compared to how much data is required here? Oh, it's awful, at least for, you know, for neural models. I think it's, um, yeah, for, like, a six-year-old child I
9,125
9,148
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9125s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
think it's maybe — you know, I just bashed on one million words being unrealistic, but I think it's about one to ten million. So, you know, compared to GPT-2 being ten billion tokens, there are three orders of magnitude at least of headroom there, potentially, and I think that again understandably motivates why a lot of people do work on that setting. But my guess
9,148
9,167
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9148s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
would be that to really make progress in that setting, a lot of it is because of transfer between modalities and, you know, actually interacting with very high-quality sources of supervision like other people, and, you know, being a grounded agent that interacts with, you know, video and audio. And, like, I think that research is very interesting longer-term, and, you know, we're probably
9,167
9,188
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9167s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
going to saturate kind of what we can do with these ungrounded giant systems in the next few years — or maybe it's even already starting in the last year — so that's, I think, a very exciting next round of work, and clearly, like, the numbers just show there's a huge amount of room to go. Got it, thank you. Does this kind of approach work well when you apply it to, like, other modalities, like video
9,188
9,215
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9188s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
or genetic data? Okay, yeah, so genetic data is actually a great example there. There's — I think Joshua Meier and collaborators, between, I think, an NYU team and FAIR — I think Rob Fergus is now working a lot on this — so they took BERT and they applied it to protein sequences, or, I think, sorry, amino acid sequences, and — I don't have a
9,215
9,241
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9215s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
strong bio background, or much of a bio background — but they were showing that the same methods are, you know, learning a lot of the structure in those different domains. So, like, kind of the sentiment unit analysis, or the sentiment, you know, example I gave for pure language — there was also another paper from, I believe, the Church lab at Harvard, where they took, like, literally
9,241
9,263
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9241s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
my code and ran it over amino acid sequences, and they were showing that, instead of a sentiment unit, there was, like, a beta-sheet unit or so, kind of corresponding to finding, like, secondary or tertiary structure of proteins. The models were having units that were, like, understanding that — you know, even though these are, like, very non-parametric, kind of abstract models that just, you
9,263
9,285
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9263s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
know, have a bunch of parameters that just factorize a probability distribution — they're somehow learning the structure of the domain, or hints of that. So I think that's very exciting, and that's another line of work: I think, given how exciting this stuff has been for NLP and how much of an impact it's made over the last few years, whether it could work in other domains would be
9,285
9,304
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9285s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
quite interesting. You know, there are definitely differences. So for video, I think video just needs so much compute that it's, like, still maybe quite a few years off, just because of the volume of data and, you know, the amount of compute that might be necessary — but maybe I'm just being cynical there. Whereas for images, you know, there's a weird contrast, which is, like I mentioned, the contrastive
9,304
9,325
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9304s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
methods are doing quite well, and if you just run a generative model — well, you know, actually, okay, that's not quite right: there's one paper from DeepMind called BigBiGAN where they took, admittedly, you know, a pretty different generative model, and they were showing that those are starting to learn quite good representations of images, at least by the standards of unsupervised
9,325
9,342
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9325s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
learning — still being crushed by the latest MoCos or SimCLRs, but they're, you know, quite promising and, you know, showing kind of a foothold for this generative-model kind of approach in other domains. And maybe, you know, one more piece of context to shine on there: I think there's somewhat of a nicety to language, because it's produced by, you know, people —
9,342
9,364
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9342s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
it kind of is naturally designed to be very clean and very high-level, and, yeah, it removes all the noise. So when, I think, we run and try to train the same generative models or approaches in domains like images or video, it may just be that, like, when you're dealing with raw natural signals — audio or, you know, the like — they have so much noise that, like, particularly a
9,364
9,385
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9364s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
likelihood-based generative model is just, like, spending so much effort and capacity trying to predict all that noise, and the, you know, noise-to-signal ratio is just a lot higher — that just, like, makes it a much more difficult task right now. Yeah, you know, I think it's a very interesting research question. So, Alec, we're about out of time here, if you want to
9,385
9,424
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9385s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
give any closing thoughts. Oh yeah, let's wrap it up, we're mostly there. I guess, you know, one thing again is, like, you know, one of the things that I really enjoyed about having the opportunity to give this talk was kind of going through and showing that full history here, and, you know, I think it's a great example of how there are so many
9,424
9,445
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9424s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
pieces that built on top of each other, and, you know, there are so many different authors and so many different institutions that really contributed to this — and, you know, even within OpenAI there have been a lot of collaborators who have pushed on this stuff over the last few years. And, you know, you really see it just evolve: so many different pieces of the research, with all the
9,445
9,466
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9445s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
different, you know, things being brought to bear — new models, new datasets, you know, new approaches. So I think it's a really great and, you know, exciting example of, like, a very rapidly evolving research field that managed, you know, to do some exciting things in the last little bit. Yeah, well, thank you — fantastic lecture. Thank you, Alec, we'll stop the recording here, right.
9,466
9,492
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9466s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
v2GRWzIhaqQ
hi there take a look at the following problem on the left right here so you have this quadruped and the goal is to have it walk forward or in any direction as far as possible now usually this is the domain of sort of reinforcement learning so you have inputs which is the sensors of the joints of the quadruped and you have outputs which is how much force you want to put
0
25
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=0s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
on each of the legs and you have to somehow learn a policy to make it walk forward reinforcement learning does that by sort of trial and error using an environment to learn the policy directly however this paper does something different what it does is it learns a policy that is adaptive during training which basically means that at the beginning of each episode
25
52
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=25s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
the policy — it is initialized randomly and by policy here we mean a policy network uh policy neural network which you can see at the bottom so that's initialized randomly and then during the episode depending on the input uh this network is changed and adapted in order to achieve high performance so even at test time the network is started randomly and then
52
81
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=52s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
adapted during the episode so this paper deals with this problem and tries to implement this sort of more biologically plausible way of learning a policy adapting to the environment and achieve ultimately good performance in this task and it has some nice property namely that it can deal with these things as you can see here front right leg damage front left leg
81
109
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=81s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
damage but we'll get to that later but just so you know what's coming so the paper is called meta-learning through hebbian plasticity in random networks by elias najarro and sebastian risi so we'll go through the paper what it does what evolutionary methods are really briefly which they use what hebbian plasticity is and the difference to classic reinforcement learning and then we'll
109
136
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=109s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
look at the experiments and that's going to be it if you like content like this as always don't hesitate to subscribe and share it out and tell me what you think in the comments i still read all the comments so i am very interested in what you think about works like this and about the video itself okay so they say lifelong learning and adaptability are two defining aspects of biological agents
136
163
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=136s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
modern reinforcement learning approaches have shown significant progress in solving complex tasks however once training is concluded the found solutions are typically static and incapable of adapting to new information or perturbations so they contrast the two things here reinforcement learning as you know is very powerful in these domains but its goal is to
163
188
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=163s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
learn a policy and then that policy is fixed and it's specific to that particular problem however biological agents you know humans uh animals and so on they're able to adapt usually very very quickly they give some sort of examples right here like if a if an animal is born it almost immediately knows how to walk um so even if it has some sort of injury even if it has some sort of disability
188
217
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=188s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
um usually the animal can walk uh pretty much instantly and that means it sort of adapts to the body that it is in sort of reconfigures itself on the fly and that's what we're going to explore here so this isn't going to out compete uh rl anytime soon it's just a different way and a biologically more plausible way in order to do that so again they say we still don't know completely how
217
247
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=217s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
biological brains learn and adapt so efficiently from experience it is believed that synaptic plasticity plays a prominent role in this process and that's why they are using these hebbian learning rules in order to configure the network so let's contrast the two things for a second in reinforcement learning what you have is a policy network now the policy network
247
272
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=247s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
is a neural network that maps sensory inputs to actions okay so you have the observation goes in and out comes an action this is your policy network now during training in reinforcement learning what you do is you have some sort of environment okay this is the environment and you play this back and forth game with the environment and you try to improve this policy network right here as
272
300
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=272s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
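A policy network of the kind described here is just a small feed-forward map from observations to actions. A minimal PyTorch sketch; the observation and action sizes are placeholders standing in for the quadruped's joint sensors and torque outputs.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Maps joint-sensor observations to one torque value per actuated joint."""
    def __init__(self, obs_dim=28, act_dim=8, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # squash torques to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

policy = PolicyNet()
action = policy(torch.randn(1, 28))  # observation goes in, an action vector comes out
```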
v2GRWzIhaqQ
best as you can in order to achieve a high reward then during testing so this is train then during testing you freeze you freeze this network right here so you freeze the network and then you simply play that game and you see how well it does okay so this gives you some sort of reward and that's going to be your testing reward and you know that can be generalization it can be to different
300
329
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=300s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
environments and so on but the crucial part is that in train you learn and then you freeze during test in this particular paper right here they do something different so let's call that the hebbian plasticity world in the hebbian plasticity world again you have your environment and you play this game but you play the game in episodes and at the beginning of each episode you
329
364
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=329s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
initialize this using some sort of distribution here a normal distribution you initialize the network and then you learn you adapt during the episode you adapt the network to have good performance okay so this thing right here these are the hebbian rules so you update the network during the episode and then at the end of the episode you go back you initialize the network
364
395
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=364s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
again you start a new episode and you again adapt that randomly initialized network so what's actually learned here aren't the weights of the network what's learned during training is these rules that transform any randomly initialized network into a high performing network now of course you might just object and say hey wait a minute i can just basically hard code the
395
424
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=395s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
you know the optimal weights here into these hebbian rules like my rules can simply you know not care about the input and simply output whatever good weights there are and ultimately that would lead back to rl but as you will be able to see in the experiments they also have some videos provided that i invite you to watch you can really see that the network reconfigures itself
424
449
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=424s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
first of all at the beginning it reconfigures itself to a good state but then also as the episode is progressing it continuously reconfigures itself depending on the input so this is the real power of these hebbian rules in that during the episode the network can continuously reconfigure itself in order to achieve high rewards so it's not just that i can go from the random initialization
449
472
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=449s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
to a good performing policy i can adapt that policy depending on what the input is so at test time in this hebbian world what we're going to do is again we are going to freeze the learning rules so you have to kind of rethink we're going to freeze the hebbian rules but still we're going to randomly initialize our policy in each episode and then we're going to change that during the episode okay and
472
505
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=472s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
then that's ultimately going to give us our reward so the thing that's learned is just something different here you learn the weights directly in the rl setting and in the hebbian plasticity setting you learn the rules to update the weights dynamically depending on the input this is a form of meta learning right it's not exactly but it is a form of meta learning
505
532
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=505s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
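A schematic of the contrast just described: in the Hebbian setting the weights are re-drawn at the start of every episode and only the update rule is learned across episodes. This is a simplified sketch, not the paper's code; the environment API, the layer sizes, and the simple pre-times-post rule (a stand-in for the fuller rule discussed next) are all assumptions for illustration.

```python
import numpy as np

def hebbian_update(eta, pre, post):
    """Simplest Hebbian step: weight change proportional to pre * post activity."""
    return eta * np.outer(pre, post)

def run_episode(env, etas, layer_sizes=(28, 64, 8), steps=500, seed=None):
    """Weights are randomly re-initialized every episode, then adapted online."""
    rng = np.random.default_rng(seed)
    weights = [rng.normal(0.0, 0.1, size=(m, n))
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    obs, total_reward = env.reset(), 0.0             # placeholder env API
    for _ in range(steps):
        acts = [np.asarray(obs)]
        for w in weights:                            # forward pass through the policy
            acts.append(np.tanh(acts[-1] @ w))
        obs, reward, done = env.step(acts[-1])
        total_reward += reward
        for i, w in enumerate(weights):              # adapt the weights during the episode
            weights[i] = w + hebbian_update(etas[i], acts[i], acts[i + 1])
        if done:
            break
    # Only `etas` (the rule parameters) persist across episodes; an outer loop,
    # e.g. an evolutionary strategy, optimizes them on the episode return.
    return total_reward
```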
v2GRWzIhaqQ
so let's see what those hebbian rules are and you can as again you can see this right here during training so this is one episode and it always starts with these random networks at the beginning and then you can see as you progress there is structure emerging and again i'll link to the videos and you can see that during the episode even this is changing and this is
532
557
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=532s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
especially visible on their other example that they have here like this car example so in this car example during the video you'll see that now there's a curve like this and then as imagine you're a driver like there is a kind of a left curve coming and you adjust your mental state let's say to say okay i don't know what's around the curve i need to be ready to brake and so
557
582
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=557s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
on and then there is a straight piece coming and you'll be like well i see everything you know i can focus on different things you can now reconfigure your state in order to adapt to the observation and that's exactly what you'll see in that video is that the weights are continuously updating not so much in these quadrupeds to which we'll get later so
582
604
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=582s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
these hebbian rules what do they look like these are biologically inspired rules and they say the following so this here is the delta w i j and our perspective of policy networks is going to be that this is a neural network as we said and we'll just pick up one layer right here and there are going to be weights right here you know weights from all to all these are going to be fully
604
632
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=604s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg
v2GRWzIhaqQ
connected networks and like this and there's going to be neuron i somewhere here and neuron j somewhere here okay so neuron i and neuron j are going to have a connection together this thing right here and the question is going to be how do we update that weight from one time step to the next remembering the weights here are changed in each
632
660
https://www.youtube.com/watch?v=v2GRWzIhaqQ&t=632s
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
https://i.ytimg.com/vi/v…axresdefault.jpg