Columns: video_id (string, 11 chars) · text (string, 361-490 chars) · start_second (int64, 0-11.3k) · end_second (int64, 18-11.3k) · url (string, 48-52 chars) · title (string, 0-100 chars) · thumbnail (string, 0-52 chars)
BnpB3GrpsfM
know, when we say bidirectional context, that corresponds to training basically the same self-attention transformer, except you no longer have the masking that prevents a position i from looking at and attending to a future position j > i. So that's the architectural detail that corresponds to this change (a mask sketch follows this segment), and it kind of makes sense that having that
5,969
5,991
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5969s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
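To make that architectural detail concrete, here is a minimal sketch (not from the lecture; the function name and shapes are illustrative) of the additive attention mask a causal, left-to-right transformer uses versus the all-zeros mask a bidirectional model such as BERT uses:

import numpy as np

def attention_mask(seq_len: int, causal: bool) -> np.ndarray:
    """Additive attention mask: 0 where attention is allowed, -inf where blocked.
    A causal (left-to-right) model blocks every future position j > i;
    a bidirectional (BERT-style) model blocks nothing."""
    if causal:
        # -inf on the strict upper triangle kills attention to future positions
        return np.triu(np.full((seq_len, seq_len), -np.inf), k=1)
    return np.zeros((seq_len, seq_len))

print(attention_mask(4, causal=True))    # row i can only see columns <= i
print(attention_mask(4, causal=False))   # every position sees every other

Adding the causal matrix to the attention logits before the softmax zeroes out the probability of attending to any future position; dropping it is the whole architectural change being described.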
BnpB3GrpsfM
ability to look at both sides of the context helps with disambiguation; it helps with information processing and information flow through the model, because the model can query back. For things like question answering, for instance, if you have the question after the context, you can't update the representations of the context in a left-to-right
5,991
6,009
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5991s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
autoregressive model after you've seen the question, because those positions are masked and hidden from it, so the model isn't doing any right-context-dependent processing. But in BERT it can actually attend bidirectionally and quickly pass information forward and backward. And this is just what you see: anyone who actually builds a self-attention architecture from scratch
6,009
6,026
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6009s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
on a supervised task almost always uses bidirectional rather than masked self-attention matrices. And this turns out to give a huge boost on GLUE. That bump between GPT-1 and BERT, I believe, was... GPT-1 had an average of 78 or something, or, sorry, excuse me, I think this got reworked when we excluded WNLI; it was a bump of
6,026
6,054
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6026s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
like five-plus percent, so it basically doubled the headroom over GPT-1, and they show with very careful controls that for the exact same model in the exact same setting it does look quite a bit better. So bidirectionality makes sense also for sentence-comparison tasks like entailment, where you have two sentences you're comparing, and you really want the model to
6,054
6,072
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6054s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
be able to compare them and attend back and forth between them, look at one and then the other; that just seems like the correct behavior, whereas GPT-1 would just go left to right and then you'd be done. So yeah, BERT ended up being kind of the thing after ELMo. ELMo kicked it off, especially on the research side, and got a lot of people to start investing in this space; BERT is kind of
6,072
6,093
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6072s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the thing that moved this to the point where suddenly it was ready for more commercialization, basically production-ready. So this is now deployed in Google Search, and it's really showing up everywhere: if you go to basically any leaderboard, BERT variants are often very near the top now on pretty much most NLP tasks. And just like
6,093
6,116
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6093s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
GPT-1, they use the same architecture everywhere and remove the need for task-specific modules on top, so this was another incredibly strong step. So that was BERT. I guess there's one more point to make, which is that because it's predicting these masked tokens, it's only predicting... you have to set that mask percentage, and by default it's
6,116
6,137
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6116s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
often set to 15 percent. So you should understand that your left-to-right model actually predicts a lot more words, because it'll predict the full sequence within a single forward pass, whereas by default you'd have to run a BERT model something like six or seven times to, on average, see and predict every token (a masking sketch follows this segment). So it turns out they often learn a bit slower early, but then they just keep
6,137
6,156
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6137s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
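As a rough illustration of that mask percentage (a sketch only; BERT's actual recipe also sometimes keeps or randomizes the selected token instead of always masking), the corruption step looks roughly like this:

import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    """Masked-LM corruption sketch: hide ~15% of positions and return the
    corrupted sequence plus {position: original token} targets."""
    corrupted, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            corrupted[i] = mask_token
            targets[i] = tok
    return corrupted, targets

corrupted, targets = mask_tokens("the movie was a great success".split())
print(corrupted, targets)   # only ~1 in 6-7 tokens contributes to the loss

With a 15% rate you expect to need on the order of 1/0.15, roughly six to seven, passes over a sequence before every position has been predicted once, which is the inefficiency being described.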
BnpB3GrpsfM
training, and they begin to learn how to use the bidirectional representations to their benefit, and then they continue to outperform left-to-right language models. Now, the problem is you can't sample from it, and you can't compute a correctly normalized probability over the sequence without a lot of work;
6,156
6,174
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6156s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
there's some research into figuring out how to do this with clever methods, but it removes some of the elegance of sampling and having easy density or probability estimates, in exchange for this representation capability. So RoBERTa, if we go back to this leaderboard, is the next big jump, up from 80.5 to 88.1, and, as a
6,174
6,196
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6174s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
benchmark or important event, it's solidly above the supposed human-average baselines here. So what is RoBERTa? RoBERTa is a very well executed engineering refinement of BERT. It's a good example of how, so often in this field, the second pass at an approach, with the same or a very similar model architecture and algorithm, can, just by careful
6,196
6,221
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6196s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
engineering, tuning, and tweaking, still find tons of extra headroom. So they better tune the hyperparameters and remove a few hacks that the original BERT had. For instance, the original, for computational reasons, did most of its training on a relatively short context length, I believe 128 tokens, and then right at the end of training doubled that a couple of
6,221
6,241
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6221s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
times, up to 512 tokens, for prediction. They instead just train at 512 the whole way through. It's the same model capacity and it has the same runtime per sequence length; they just spend the extra pre-training compute to buy that. And when you're thinking about deploying a system, one of the important criteria to realize, especially when you're
6,241
6,261
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6241s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
talking about a system that might get deployed broadly and used across the world and across many different companies once it's released, is that most of the compute is going into inference time, not training time (a toy calculation follows this segment). So that means if you have a method of getting further performance improvements by spending more FLOPs at pre-training time, it can often be quite
6,261
6,279
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6261s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
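A toy back-of-the-envelope version of that ecosystem argument, with entirely made-up numbers chosen only to show the shape of the trade-off:

# Hypothetical figures, for illustration only.
pretrain_flops = 1e21            # one-time pre-training cost
flops_per_query = 1e11           # cost of a single inference call
queries_per_day = 1e9            # usage once the model is widely deployed
days_in_service = 365

lifetime_inference_flops = flops_per_query * queries_per_day * days_in_service
print(lifetime_inference_flops / pretrain_flops)  # ~36x: inference dominates

Under assumptions like these, lifetime inference compute dwarfs the one-time pre-training cost, which is why paying extra pre-training FLOPs for a better fixed-size model can be worth it.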
BnpB3GrpsfM
worth it from a full-ecosystem view of where the compute is being spent. This is one of the counterintuitive things, I think, about how you think about these systems. They also do better data generation: it turned out that the original BERT, from a simplicity perspective, cached the masking, so they only actually masked the sequences once and always trained on the same
6,279
6,297
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6279s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
mask locations. You can simply change that to an online setting where you keep re-sampling the masks, and that helps with overfitting. They also use a more flexible vocab scheme, a byte-level BPE scheme that works over full UTF-8 byte sequences, so you can handle any string, at least with the standard byte-sequence representation (a byte-level sketch follows this segment). And then they just train longer with more compute,
6,297
6,318
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6297s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
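On the vocabulary point, the underlying trick is that any string decomposes into UTF-8 bytes, so the base vocabulary is never missing a symbol; a minimal sketch (the actual RoBERTa/GPT-2 tokenizer then learns BPE merges on top of these bytes):

def to_byte_tokens(text: str) -> list[int]:
    """Byte-level fallback: every string maps to values 0-255, so the base
    vocabulary covers any input, including non-Latin scripts and emoji."""
    return list(text.encode("utf-8"))

print(to_byte_tokens("héllo"))   # [104, 195, 169, 108, 108, 111]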
BnpB3GrpsfM
since, as we mentioned before, BERT is only predicting about one in six tokens on average, which means it's undertrained for an equivalent amount of compute, and you can actually just keep training it longer with more GPUs and continue to see higher and higher performance. So I mentioned BERT was on the leaderboards everywhere; well, now, about eight months later, it's RoBERTa
6,318
6,337
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6318s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
everywhere on the leaderboards, and that's still largely true today, except for a few targeted things: I think if you go to a general NLP leaderboard, you're going to find that the model in first place is some variant of RoBERTa or something. So that's an example, again, where there's no super clever new algorithm or approach, and
6,337
6,357
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6337s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
even for BERT it's a pretty precise refinement of previous work like GPT-1, but it can have a huge impact when it's just well executed. And it's, I think, somewhat exciting from one view, where it's like, okay, we're really finding that there's a lot of fertile ground here, and with the right tweaks and clever insights we
6,357
6,381
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6357s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
can continue to make further progress. So this is where ELECTRA comes in, and this is, I think, one of the first papers that shows another interesting potential algorithmic improvement and, somewhat excitingly, shows that it's much more efficient. So we mentioned the masking for BERT; there's actually this gap here, which is the problem that when you
6,381
6,399
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6381s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
are training you're masking all these input sequences, and you sample the mask rates, so you're kind of corrupting your inputs. But then when you want to run it at test time, or when you want to transfer to some downstream task, it doesn't make sense to corrupt the inputs, right? Because if you were doing some analysis and you masked a token, say "this was a [MASK] movie",
6,399
6,417
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6399s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
you don't know if it's going to be a great movie or a terrible movie in that masked location. So BERT has a few tricks to minimize this impact, but at the end of the day there's this train/test gap, where you trained it on one distribution, with masked inputs, and then you want to test it and predict with it on a different one. And it turns out that gap
6,417
6,437
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6417s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
actually does seem to contribute to some performance issues. The other gap is, again, that it's only predicting 15% of tokens, so it may also be learning slower than it could, because you'd have to do a forward pass six or seven times to see the same predicted segment of data. So what ELECTRA does is a very clever hybrid system: they have a BERT, basically a mini-BERT, inside of it, so it's the standard masked-language-
6,437
6,460
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6437s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
modeling technique, and then you sample from it: you say, well, for that first word that we've masked, what do you think is the right word? So you sample from its distribution over tokens at that position, and then you feed the result into this discriminator, which is the actual ELECTRA model, and its job is to predict whether the token at any given location is the
6,460
6,480
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6460s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
original token or a replaced token. So if the generator gets it wrong, it might still sample something reasonable and nearly correct, like "cooked" versus "ate"; the job of the discriminator, the ELECTRA system, is just to estimate: is this the original token or not? So it's just a binary classification task, but it's done at every location (a small sketch of this setup follows this segment); it's basically asking, was
6,480
6,498
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6480s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
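A small sketch of that replaced-token-detection setup; `generator_sample` here is a hypothetical stand-in for sampling a masked position from the small generator, not ELECTRA's actual API:

import random

def replaced_token_detection(tokens, generator_sample, mask_rate=0.15):
    """ELECTRA-style corruption sketch. Returns the corrupted sequence and a
    0/1 label per position (1 = replaced, 0 = original) for the discriminator."""
    corrupted, labels = list(tokens), []
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            proposal = generator_sample(tokens, i)   # generator fills the blank
            corrupted[i] = proposal
            labels.append(int(proposal != tok))      # a lucky sample counts as original
        else:
            labels.append(0)
    return corrupted, labels

# toy generator that always guesses "ate"
corrupted, labels = replaced_token_detection("the chef cooked the meal".split(),
                                             lambda toks, i: "ate")
print(corrupted, labels)

Note that the discriminator gets a training signal at every position, not just at the ~15% that were corrupted.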
BnpB3GrpsfM
this input corrupted? And that allows, one, the input to have a natural distribution, because this masked language model can be quite good, so the corrupted input is a lot closer to the real input distribution and you don't have this shock when you transfer to your downstream tasks; and two, it speeds things up, because you're taking a loss and propagating a gradient for every
6,498
6,516
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6498s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
location, because you're always estimating whether it's the original token or a replacement, and that can still be a difficult task at every location, rather than the degenerate case for the other roughly 85 percent of tokens, which for BERT is basically just the identity function. And so when we look across the board here, we see the standard models: you get
6,516
6,533
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6516s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
GloVe or ELMo, and they're all kind of smashed up right here near zero, and then GPT-1 starts to move over, and then RoBERTa scales with more and more compute, and the graph keeps going. And you can see that ELECTRA is, kind of across the board, quite a lot more efficient, often by factors of five, for equivalent
6,533
6,551
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6533s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
performance on a dataset. So that's quite exciting. And in the limit they show, for instance, that ELECTRA-Small, a model quite a lot smaller than even GPT-1, by exploiting bidirectionality and this dense training objective, can actually outperform GPT-1 in two days on a single V100, whereas GPT-1 took 25 days on 8 P6000s. Partially this is because of FP
6,551
6,573
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6551s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
16 versus FP32, but it really shows how... I think, unfortunately, some people have, and it kind of makes sense because I've talked about the importance of scale and whatnot, some people have written this whole subfield off as "whoever has the most GPUs is going to win" and "oh, it's all just training bigger models", and maybe, as a new
6,573
6,592
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6573s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
grad student or as a hobbyist, you feel you don't have access to the resources to do interesting work in this space. But a paper like ELECTRA is really exciting because it shows that a single commercial GPU can actually still produce very interesting results in this space. Admittedly they still run the full version of the model on a TPU pod, but here you're already
6,592
6,611
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6592s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
having last year's model beaten in a day or two on a single GPU the next year, so I think that's a very exciting point. This is from Clark et al., working with Google; Kevin, I think, sorry, is his first name. So it's really exciting work. Then there's this final one, kind of the deluxe result coming out of this space, from Colin Raffel and
6,611
6,640
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6611s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
collaborators at Google, and this comes after the first crazy year of, well, there's BERT, and now there's RoBERTa and others, all these things coming out one after the other every few months, bumping up the leaderboard. This is the paper that took a step back and more systematically studied the space; they analyzed it and used a lot of compute to do
6,640
6,659
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6640s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
it, but they really brought a lot of things together and very carefully curated it. It's a treasure trove of information for this space: it's 50 pages long, there are pages and pages of tables with hundreds of numbers in them, so it can take a while to work through, but I really recommend it as one of the best ways to get up to speed on this whole area and all the
6,659
6,677
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6659s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
techniques and all the different approaches. So they systematically study this: there are standard language modeling objectives, there's BERT-style masking, there are their own contributions like span-based extensions of BERT (a span-corruption sketch follows this segment), and then they also look at differences in the architecture, so there's your standard left-to-right language model, there are encoder-decoders,
6,677
6,695
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6677s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
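A rough sketch of the span-corruption idea mentioned above, simplified relative to T5's actual sampling of span lengths and sentinel vocabulary (block-aligned spans are an assumption made here just to keep the example short):

import random

def corrupt_spans(tokens, n_spans=2, span_len=2):
    """Span-corruption sketch: drop a few contiguous spans, replacing each with
    a sentinel; the target is the sentinels followed by the dropped text."""
    tokens = list(tokens)
    # pick non-overlapping, block-aligned spans for simplicity
    blocks = sorted(random.sample(range(len(tokens) // span_len), n_spans))
    starts = [b * span_len for b in blocks]
    source, target, prev = [], [], 0
    for k, s in enumerate(starts):
        sentinel = f"<extra_id_{k}>"
        source += tokens[prev:s] + [sentinel]
        target += [sentinel] + tokens[s:s + span_len]
        prev = s + span_len
    source += tokens[prev:]
    return source, target

src, tgt = corrupt_spans("thank you for inviting me to your party last week".split())
print(src)
print(tgt)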
BnpB3GrpsfM
which could have a bidirectional encoder that processes, say, the previous sentence, kind of skip-thought style, and then an autoregressive decoder. And then there's a kind of hybrid called a prefix LM, which is a single model (well, you could have untied weights) that you can think of as a partial relaxation of the masking in the self-attention matrix, where you allow some part of the sequence to do
6,695
6,718
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6695s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
bidirectional attention, over the past context, and then at some point you switch over to doing autoregressive language modeling. So you can potentially get the benefits of bidirectional representations for the past context, or, in the limit, if your downstream task is always just going to be bidirectional, you can run it in purely bidirectional mode (an attention-mask sketch follows this segment). So it's kind
6,718
6,734
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6718s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
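A minimal sketch of the prefix-LM attention pattern being described, using a boolean allowed-to-attend matrix (names and shapes are illustrative only):

import numpy as np

def prefix_lm_mask(seq_len: int, prefix_len: int) -> np.ndarray:
    """Boolean attention mask sketch for a prefix LM: True = may attend.
    The prefix attends bidirectionally to itself; every later position
    attends to the whole prefix plus earlier positions only (causal)."""
    allowed = np.tril(np.ones((seq_len, seq_len), dtype=bool))  # causal base
    allowed[:, :prefix_len] = True  # everyone, prefix included, sees the full prefix
    return allowed

print(prefix_lm_mask(6, prefix_len=3).astype(int))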
BnpB3GrpsfM
of training a hybrid system, and I think that was also a quite clever improvement they had. The other thing I really like about this paper is that it goes even further in terms of the elegance of this shared framework for doing all tasks and all predictions. One of the trends has been moving away from custom architectures to shared pre-trained models that are a
6,734
6,754
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6734s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
little bit more monolithic and can be used across a wide range of tasks with high performance. So with T5... typically, with GPT-1 and BERT, the only task-specific difference was that we still slotted in a linear classifier at the end to predict which of the classes is correct. What T5 says instead, and this is actually something that Bryan McCann and
6,754
6,773
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6754s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
collaborators at Salesforce introduced about two years ago, is that they basically phrase everything as pure natural language, pure question answering or something like it. So we give the model a command or a prompt as the prefix, like "translate this English sentence to German", and then give it the English, "That is good", and then T5 just, through natural
6,773
6,791
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6773s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
language, responds "Das ist gut" or whatever the German is. And for all of these it does this, basically: for CoLA sentences it'll predict the literal phrase "not acceptable", and for STS-B, and here's a kind of almost silly version, it's a continuous-valued sentence-similarity prediction task, and they just have it output the discrete token "3.8" (a formatting sketch follows this segment). So it has
6,791
6,813
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6791s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
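Here is a small sketch of that text-to-text framing; the exact prefix strings and rounding scheme in the released T5 code may differ, so treat these as illustrative:

def to_text_to_text(task: str, **f) -> tuple[str, str]:
    """Text-to-text framing sketch: every task becomes (input text, target text)."""
    if task == "translate_en_de":
        return f"translate English to German: {f['text']}", f["target"]
    if task == "cola":
        return f"cola sentence: {f['text']}", f["target"]  # "acceptable" / "not acceptable"
    if task == "stsb":
        # regression reframed as predicting a rounded score string like "3.8"
        score = round(f["score"] * 5) / 5
        return f"stsb sentence1: {f['s1']} sentence2: {f['s2']}", f"{score:.1f}"
    raise ValueError(f"unknown task: {task}")

print(to_text_to_text("translate_en_de", text="That is good.", target="Das ist gut."))
print(to_text_to_text("stsb", s1="A man is singing.", s2="A man sings.", score=3.76))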
BnpB3GrpsfM
learned, because it's pre-trained, the continuum of numbers and the similarities between them, but it's kind of funny to see a regression task reframed as discrete token prediction. And again it's quite general: you can do summarization and everything. We saw a little bit of this earlier when we were probing
6,813
6,829
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6813s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
with things like Schwartz et al.'s work using probabilities from a language model, or some of those zero-shot transfer results from GPT-1. And so T5 really goes through and shows that, yeah, you can actually exploit the natural language it has learned, and that helps with transfer and helps with the fine-tuning tasks, potentially.
6,829
6,844
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6829s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
So yeah, T5 is a really good overview of all the work in this space, and then they also just threw a big model at it at the end, and that gets you another bump on those leaderboards we were talking about. It's in fourth place now, though, since others have done some more things on top of it. So that's, I think, the core set of literature I wanted to cover here
6,844
6,869
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6844s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and ideas. At this point we've gone through the history of language models and how they've been adapted and used, the winding history of how NLP really took off with these unsupervised and self-supervised methods and figured out how to use them, and all these different papers that found pieces of the puzzle
6,869
6,889
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6869s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and proposed different methods that did or didn't work and combined well with other modeling improvements and everything. I think it's a really cool story, and I'm excited that I was able to chat through it with y'all today. The last bit here (we still have about fifteen minutes left, but we should maybe leave a little for questions at the end too) is just a bit of
6,889
6,907
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6889s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
more high-level thoughts. This is an unsupervised learning course: why do we need it, and what's wrong with the current paradigm of supervised learning? I'm sure you've seen the motivation, and there's been great discussion on this topic already, but I'd like to share a bit of my own thoughts and opinions here. So I think a motivating thing here, again, kind
6,907
6,928
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6907s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
of a thread we've had running through a lot of the discussion in this talk, has been: how well does supervised learning work, and what should we expect of it? Concurrent with some of this stuff taking off in the last few years was a lot of work that started critically evaluating deep learning for supervised NLP. So, you
6,928
6,948
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6928s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
know, for natural language inference, for instance, this is a three-way classification task, and even before pre-training really took off, ESIM, using just word vectors and a very well designed architecture, nominally got to the average human accuracy of, I believe, a single Turk worker; it may have been on SNLI. So it's like, whoa, is this done, did
6,948
6,968
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6948s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
we already hit human accuracy? And I think everyone kind of knew, well, no, because clearly these models are still making weird mistakes. This is where, in the last few years, there's been a lot of great work starting to really quantify these concepts of how robust the models are and how well they work out of distribution, pressuring and challenging the
6,968
6,987
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6968s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
standard supervised learning paradigm of training on an i.i.d. training set and evaluating on another i.i.d. held-out split, and basically showing that that's no longer sufficient, and that something's going wrong somewhere in supervised learning: this setup is being too generous to algorithms and not generous enough to humans. So this is a great paper from
6,987
7,008
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6987s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
Gururangan and collaborators, I believe, called "Annotation Artifacts in Natural Language Inference Data". When you hear people talking about how these models are exploiting statistical artifacts and biases of the training distribution, this is a paper that really nailed that down and showed it quite conclusively. They start from a high level: well, how were
7,008
7,027
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7008s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
these datasets created? These supervised datasets, admittedly, are kind of artificial: you're paying people to label these tasks, they're not natural instances of the task, it's kind of what people can come up with off the top of their head. They can have very good experimental methodology, and datasets like SNLI and MultiNLI are some of the best we've got in terms of
7,027
7,045
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7027s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
very good setups, curated by people who really know what they're doing, but you still run into the issue of, well, you've got to have a human generate an example, and maybe they're less creative than you think, so the data is actually drawn from a much narrower distribution than it should be. And so this paper went through critically and showed a lot of
7,045
7,063
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7045s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
these artifacts actually showing up. So a worker would be told, produce a negative or contradictory label, and they would just be like, oh, I'll just slap a "not" on top of a copy of the sentence. It's not quite this bad, but it gives you the idea of what's going on: they'll copy the premise sentence as the hypothesis and just put a "not" in it, or,
7,063
7,083
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7063s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
to get entailment, they'll just restate the sentence in a more generic or abstract way, so you might go from "a dog is playing" to "an animal is playing" or "a pet is playing", or stuff like that. Or they'll add some kind of superfluous information, like "tall" or "sad" or "popular", to hint at the neutral class, which is like, well, it might be true or it might not
7,083
7,105
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7083s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
be, but you can't tell either way. And what they showed, somewhat disturbingly, is that if you train a model only on the hypothesis sentence, the second sentence (and again, semantically this task is defined as the logical relation between two sentences), if you train a model only on the second sentence to predict which class it would be, it actually
7,105
7,126
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7105s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
got half of them right: it went from 33 to 66 percent or so, which is a large bump. And by definition we know that model can't be doing the true task, because it's predicting given only the second sentence (a toy hypothesis-only rule is sketched after this segment). So this is a great example of where you can see that standard supervised learning might be picking up on spurious correlations or
7,126
7,144
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7126s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
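A toy illustration of that hypothesis-only effect; this keyword rule is not the paper's actual baseline (which trains a real classifier on the hypotheses), it just mimics how giveaway words alone can beat chance without ever seeing the premise:

def hypothesis_only_guess(hypothesis: str) -> str:
    """Guess an NLI label from the hypothesis alone, keying on the kinds of
    annotation artifacts discussed above (negation words hint at contradiction,
    vague modifiers hint at neutral)."""
    words = set(hypothesis.lower().split())
    if words & {"not", "no", "nobody", "never"}:
        return "contradiction"
    if words & {"tall", "sad", "popular", "first"}:
        return "neutral"
    return "entailment"

print(hypothesis_only_guess("A dog is not playing."))   # "contradiction", premise unseen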
BnpB3GrpsfM
artifacts. And when you evaluate on the "hard" subset, which excludes the examples that the hypothesis-only model can get right, accuracy drops sharply, something like 16 percentage points, from around 88 to 72 percent. And this shows up across the board: there are now probably a dozen papers in this space, if not more, showing that these analyzed systems,
7,144
7,165
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7144s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
which nominally were supposed to have human-level accuracy, actually are not consistent, not robust, and not systematically generalizing. This is another one, from Glockner et al., that very carefully constructs these probe sets, permuting objects in the sentences or swapping in synonyms or antonyms, and on these probes they show
7,165
7,185
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7165s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
that accuracy again drops quite a lot. And then a final point here is on distributional robustness. This is a paper from DeepMind called "Learning and Evaluating General Linguistic Intelligence", and what they showed is that those near-state-of-the-art question-answering models (again, on SQuAD you take a Wikipedia passage, and you take BERT, which we've already talked about,
7,185
7,201
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7185s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and how much of an improvement it's had, how it's improved scores a ton), so you take that question-answering model that's trained on Wikipedia and you just run it on a different dataset. It's still question answering, except maybe we run it on trivia factoids that are sourced from Google search results, or maybe
7,201
7,220
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7201s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
we run it in a more conversational framework, with two people asking questions back and forth, and we see that accuracies can crater, or F1, which is actually the metric here. It's the same task, and we know that when you ask people a question in one of these settings versus another, they're going to do about the same; maybe
7,220
7,238
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7220s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
one task is a little bit harder than the other, but you don't see them suddenly lose half their accuracy. This again just hints at some of the distribution-mismatch issues and brittleness we're seeing. And again, this is still some of the best stuff we've got, combining supervised learning and unsupervised learning, but there are hints, as we're going to go
7,238
7,255
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7238s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
through here, that these self-supervised methods and unsupervised pre-training are really helping with robustness. We're still not there yet, but we're making progress, and a lot of that is being driven by moving away from a purely supervised learning framework toward these hybrid pre-training and fine-tuning methods. So, as I mentioned, there are a lot of things that could be
7,255
7,273
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7255s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
going on: current techniques are brittle, they're memorizing instead of generalizing, they're exploiting spurious correlations, and they also stop learning once they get to the point of memorizing the training set; learning just turns off because the gradient dies as the training loss goes to zero. So it just feels incorrect. There are a lot of different routes we could go down to
7,273
7,292
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7273s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
make progress: we can build better models and architectures, we can use more data, or we can go down different paths altogether. Obviously, since I'm talking about unsupervised learning in an unsupervised learning class, I'm going to argue that's a very exciting one. But we could always keep working in the supervised learning paradigm and just say, well, we're going to have better
7,292
7,307
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7292s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
models and we're going to get more data and keep pursuing the problems in the same way. And this is what I'd say a lot of early deep learning was really highlighting: we were working on supervised learning datasets, and we were seeing these new architectures that exploit priors and inductive biases of the data
7,307
7,326
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7307s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
domain really helping a ton. So on images, this is the grand story of: we added convnets, and they're a great fit for the domain; they cleverly encode all this structure, equivariance to translation, shared weights, and that helps a ton with accuracy, and then we just use a large supervised
7,326
7,345
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7326s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
dataset and let SGD figure it all out for us. And this led, I think, to a mindset of heavily emphasizing architecture engineering. There's a very large design space here (somewhat cynically, it allows for a lot of different papers to be written), and you can really combine and contrast all these building blocks; we really like playing with these
7,345
7,364
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7345s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
blocks. And a lot of really good work has been done that does empirically push the state of the art by exploiting properties of domains. An example of that is this diagram on the left. Does anyone want to guess the name of this model? Well, sorry, it's kind of a rhetorical question: it's called a "simple" model. It's got six different kinds of
7,364
7,387
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7364s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
embeddings, and there are GRUs and character models and bi-attention and MLPs, and it starts to get quite complex, when really all you've got is inductive biases and the standard supervised learning datasets. So it's a heroic effort, but you're exploiting more and more details and getting more and more complex to make progress, when you've kind
7,387
7,407
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7387s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
of locked in these other constraints, like the dataset size and the paradigm of training. And on the right is another one that, I think, almost looks like some kind of pentagram. They look like very cool architectures, and they're quite fun to look at, and to look through all the work that's been
7,407
7,424
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7407s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
done on creating these systems. And again, like we said, there are all these different methods of adding inductive bias, and it can really help a lot, so they're all important and very impactful; please don't take this as criticizing the standard approach of iterating and hill-climbing on supervised learning with better and better architectures. But I think it's a
7,424
7,444
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7424s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
bit like this, where you really treat a dataset in isolation. If we come back to how people learn and experience the world, it's so varied, it's so diverse; there's so much experience and information and knowledge you're leveraging before you ever saw this dataset. And machine learning models, when they're started in isolation on a supervised dataset by itself,
7,444
7,462
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7444s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
are kind of... that supervised dataset is like a peak in a very big space, a small peak, and we can add more and more data and make that peak taller and wider, and that might help with robustness and generalization, but at the end of the day it's a little bit futile, I think. The real way to solve these tasks, or at least the way that people do it, is
7,462
7,483
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7462s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
that they don't sit down and memorize a million different examples; they somehow learn a much more general set of task behaviors and transfer knowledge and information, instead of just becoming a master of a very specific, isolated domain. We're amazing because of our generality, not because of our... or, well, we're amazing for both, because we can do
7,483
7,503
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7483s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
incredible things in specific domains too, but machine learning, at least, is starting to see that on very targeted supervised datasets you can build models that do a bit better. And then there are papers, even on architecture engineering, showing somewhat critically that some of these fancy new architectures we saw don't quite improve things as much as you'd
7,503
7,521
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7503s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
think, or, with more careful ablations, don't show much of a benefit. One of the famous examples here is that they took a baseline LSTM and gave it some love (this is kind of a common story for language modeling) and showed that it was outperforming a lot of recent state-of-the-art models if you just do careful comparisons and careful tuning. So,
7,521
7,540
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7521s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
maybe we need to back off and rethink beyond just pure supervised learning on task-specific datasets. I think one way to frame this: the largest supervised dataset in the world that I'm aware of publicly is JFT-300M (actually there's a Facebook one I haven't talked about, their Instagram pre-training, but this claim was
7,540
7,559
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7540s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
true at least a little while ago). So there are 300 million images and 18,000 classes. If you do a very simple, loose bound on how much information content you get, you have log 18,000 bits per image, and you have 300 million of them, so that ends up giving about 530 megabytes of constraint on the function you can learn (the arithmetic is sketched after this segment). So this is the world's biggest dataset,
7,559
7,582
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7559s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
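The arithmetic behind that bound, as I understand the lecture's claim:

import math

# Back-of-the-envelope bound: each label carries at most log2(num_classes) bits
# of information about the target function.
num_images = 300_000_000
num_classes = 18_000
bits_per_label = math.log2(num_classes)                    # ~14.1 bits
total_megabytes = num_images * bits_per_label / 8 / 1e6
print(f"{bits_per_label:.1f} bits/label -> ~{total_megabytes:.0f} MB")  # ~530 MB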
BnpB3GrpsfM
and in terms of the correct function we're trying to approximate with supervised learning, we're only able to pump, on this slightly naive and toyish view, about 530 megabytes of information into the system from the supervision here. But, trying to connect this back to everything we've been talking about today, there are terabytes and
7,582
7,602
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7582s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
petabytes of actual raw natural language on the internet, so if we figure out how to exploit all that information in some reasonable way, there's a hell of a lot more there that we should hopefully be able to use. And again, we're going to be a lot less efficient (gold-labeled supervised data, per bit, probably helps far more to specify and learn a task), but we only
7,602
7,621
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7602s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
have a little bit of it. It's like Yann LeCun's cake analogy: the cherry on top, versus everything else we need to be able to do. And I kind of tried to take the supervised learning approach for language: I spent most of 2015 myself building what I hoped would be an ImageNet for text. It was a very large weakly supervised dataset where we basically
7,621
7,640
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7621s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
did classification over Reddit communities, and we built something like 50 million training examples over a thousand communities. We trained RNNs to predict everything, hoping they would learn useful features and representations, kind of skip-thought style (it was pretty concurrent work at the time), except we were going the supervised route instead of the
7,640
7,656
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7640s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
unsupervised route. And the sad thing was the unsupervised model beat us: skip-thought vectors, just by training on an unsupervised objective, were beating this system we had built with nominally weakly supervised data. But we had thought, oh, these are the gold labels, these are the right things to predict, they're semantically aligned with classification,
7,656
7,674
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7656s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and this really made me quite confused and kind of skeptical about what's going on in this space, at least for supervised learning, and got me much more excited about the generative and unsupervised side. And somewhat excited, because we just weren't seeing supervised learning pull through here, because it's just, I think, a little too weak a supervision source and a little bit
7,674
7,694
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7674s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
too specific. So again, the big question, I think, in terms of novel research frontiers, is: how do we go from these isolated peaks of competency, which you can very quickly fall down from if you change the problem just a little bit, quickly collapsing in terms of task mastery, to systems that perform in a much more
7,694
7,714
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7694s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
general, robust way? Maybe they're not nearly as good in terms of competency on any given specific task, but they perform much more broadly across the board. And again, this is an example of the classic architecture-engineering approach, one of the incredibly well done versions here, that's exploiting a lot of extra information
7,714
7,733
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7714s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
via inductive biases: it uses WordNet, which is that great hand-curated dataset, and we see that it gains because it's able to exploit all this side information, helping with learning, oh, these are synonyms or antonyms, or this is more abstract or less abstract, a child or a parent in the
7,733
7,753
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7733s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
semantic hierarchy of a different word. So you can see how that's bringing in information that should help with generalization, and it actually does better on those kinds of systematic evals. So this is one way of widening that peak. Somewhat excitingly, though, if we just slot GPT-1 in as well, it performs just as well in the more robust transfer setting,
7,753
7,772
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7753s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and there we didn't have to manually curate the relations between words or build WordNet; we just let a language model figure it out. So I think this, again, is one of the proof points that unsupervised pre-training is really figuring out the same relations, connecting the same concepts, and helping with
7,772
7,791
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7772s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
robustness and generalization. And there's some new work from Dan Hendrycks this week, which I haven't put in these slides, showing that BERT and its follow-ups are much more robust out of distribution than classic, purely supervised models like LSTMs or CNNs. So I think that's starting to get much better empirically founded than me spouting off one or two numbers
7,791
7,808
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7791s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
from the models I know. So the high-level takeaway here, and this is just a hurrah message for everyone taking this course, is that I really think one of the most promising ways of moving forward, in terms of really solving tasks with robust systems that actually do the things we want them to, is that we need to move away from standard
7,808
7,829
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7808s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
supervised learning. Instead of manually specifying what to predict through the creation of large supervised datasets, we need to figure out how to learn from and predict everything out there. One way to think of this is that every time we build a dataset, we're setting the importance of everything in that dataset to one, and everything else in the world, and all the
7,829
7,846
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7829s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
other useful information that may be out there, to zero. So when you start a model from scratch, you should really get inside that supervised learner's head and realize, oh, it's almost a hopeless task: they know so little, and we've hidden so much from them when we only give them this one canonical gold-standard dataset. And of course they're
7,846
7,863
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7846s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
going to cheat however they can, because they're great at optimizing the objectives we give them, but if they don't have the foundations to truly build off of, all they can do is exploit clever spurious correlations. So yeah, I think this comes together, with all the work we've been chatting about, into a potential recipe, and I
7,863
7,883
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7863s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
think this is getting proved out with T5 and all the future work here, of how to combine a bunch of pieces together: we need high-capacity and flexible model classes so they can handle a lot of different tasks, and we need algorithms for extracting information and learning the structure across many different domains. And, of the things we talked about, it turned
7,883
7,901
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7883s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
out that language modeling actually just works really well as one of these. It's an incredibly old idea, but that algorithm, or method, just worked quite well. There are a lot of different clever approaches to specifying proxy tasks, but this simple one has gotten quite far. And, unfortunately, you're still going to need, because these are dumb models,
7,901
7,921
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7901s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg