Columns: video_id (string, 11 chars), text (string, 361-490 chars), start_second (int64, 0-11.3k), end_second (int64, 18-11.3k), url (string, 48-52 chars), title (string, 0-100 chars), thumbnail (string, 0-52 chars)
BnpB3GrpsfM
while keeping the state size high. It's character aware, with some improvements that let it process the character-level inputs, so you kind of see on the right that this is starting to get to be a kind of complex system, and then they throw, you know, a large vocabulary at it, they throw 32 K40s at it, so 32 GPUs for three weeks, and they kind of really got a huge improvement over the previous
3,925
3,945
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3925s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
results. And at this point those old n-gram language models, the old statistical methods, were in the mid 40s, or even in the 50s and 60s for the hybrid systems, and suddenly you're at like 23.7. So you basically have this metric, you know, again it's exponentiated, so it's actually like a 20 percent reduction in just actual log loss (a quick check of that arithmetic is below), but, you know, you're starting to see a lot of
3,945
3,968
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3945s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
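A quick check of that arithmetic, with illustrative numbers only (roughly 50 for the older n-gram perplexities, 23.7 for the new result quoted above):

```python
import math

# Perplexity is the exponentiated average log loss: ppl = exp(cross-entropy per word).
old_ppl, new_ppl = 50.0, 23.7
old_loss, new_loss = math.log(old_ppl), math.log(new_ppl)
print(f"log loss per word: {old_loss:.2f} -> {new_loss:.2f} nats")
print(f"relative reduction: {1 - new_loss / old_loss:.0%}")  # on the order of 20%
```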
BnpB3GrpsfM
significant progress in this space just throwing scale at it. And this has ended up being, you know, something that was really developed just to push it and say how far can we get, you know, sentence quality, can we start to get something that looks like coherence, and one of the surprising results is it turned out that this actually paved the way for further methods even though it was just designed
3,968
3,988
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3968s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
to be a really good language model and just better predict the next word. It ends up laying the foundations for something we'll talk about in a little bit called ELMo, that really was the first one to crack how do we use these LMs all over the place and start seeing it working for question answering and, you know, summarization and all these different domains. So there's kind of a bit of a
3,988
4,007
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3988s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
tidbit here. We're at an hour, should we stop for a little bit? Or, let me check out a stopping point, I'm gonna go a bit farther, we could go to about an hour thirty and stop for a little bit longer, that is my kind of preference. Yeah, so, you know, I've motivated scale a little bit, so like I mentioned there's a whole internet out there, there's so much information, and that perfect language
4,007
4,029
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4007s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
model would, you know, basically from one view need to fit the Internet into its parameters. Given how big it is, it's not surprising that we're going to need a big model to do that, we're going to need a lot of compute potentially to do it, to get as close as possible. And for many of these tasks we're talking about, where you want to learn long-term dependencies,
4,029
4,043
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4029s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
we want to learn complicated tasks, you know, they might be quite rare, they also are quite difficult. So, you know, the closer you get, the better you are at maybe learning real interesting behaviors versus kind of a basic system that just, like, is locally plugging a few words together. So another, you know, just vivid way of pointing this out is, a small character RNN is basically
4,043
4,065
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4043s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
gibberish. You know, this is what happens, you know, this can be a very good architecture, but if you don't give it capacity it just can't really learn language. You know, there's so many words, there's so many objects, there's so many relations, you really need a lot of expressivity to handle all that complexity. And, you know, another way of pointing this out is, classic
4,065
4,082
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4065s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
resources that were built by humans trying to map out, kind of, like, the relations between all words in natural language, you know, build hierarchies over them. So there's really heroic efforts here like WordNet, and they were larger than many of the language models we were still training, especially a few years ago, so it might have like five point five million relational features
4,082
4,099
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4082s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
in this package, and, you know, when you have it zipped on disk or unzipped on disk it's like already 55 megabytes, and, you know, a lot of common language models, especially early on, were only a few megabytes of parameters. And so we know this is probably going to be very inefficient, and, you know, we're probably going to need quite large models, and right now, you know, the answer we have so
4,099
4,120
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4099s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
far is to kind of address this fact with scale, and, you know, hopefully we do find out more efficient approaches, and we'll talk a bit about that later too, but right now, you know, kind of the first dumb thing you try is brute-force scale. And, you know, another reason why this is worth investing in is it's now a very well validated empirical trend, so across the bottom here is, for
4,120
4,141
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4120s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
both language modeling and for, like, computer vision, kind of the performance of models laid out on log-scale plots, where you see you have a log-scale x-axis, which might be the amount of words you train on, so every new tick is a doubling of the dataset size. You know, log scale is not great because it quickly gets inefficient, but these trends are incredibly linear, they're
4,141
4,164
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4141s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
very predictable. So, like, it's almost like the natural kind of domain to think about is, like, how does this look on a log scale (a rough illustration of that is below), and you see that again for language models on the left, and for, like, the performance of, like, captioning, or sorry, image classification systems on ImageNet in the middle. So these are quite consistent trends, and they span now quite a few
4,164
4,186
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4164s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
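For intuition, here is a toy illustration of what "linear on a log-scale plot" means; the power-law form and the constants are made up for illustration, not taken from the slide:

```python
import numpy as np

# A power law, loss = a * n^(-b), is a straight line on log-log axes, which is the
# kind of trend these scaling plots show.
a, b = 12.0, 0.08
n = np.array([1e6, 1e7, 1e8, 1e9, 1e10])        # e.g. number of training words
loss = a * n ** (-b)
slopes = np.diff(np.log(loss)) / np.diff(np.log(n))
print(slopes)                                    # constant slope of -b everywhere
```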
BnpB3GrpsfM
orders of magnitude. So, so far they've continued to improve from 6 million parameters up to 600 million on, like, ImageNet, and, you know, dataset sizes spanning probably over two orders of magnitude there. Yeah, and also compute is becoming available, as investment of more resources in machine learning and AI and improvements in hardware and
4,186
4,208
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4186s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
distributed training have kind of allowed for, you know, even though there's this logarithmic, or this heavy, demand for additional compute to see kind of finite-sized improvements, at least as of yet kind of the industry as a whole has been developing techniques and systems to keep providing that additional compute to keep these trend lines going. So that was kind of just a
4,208
4,227
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4208s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
quick digression on, like, why scale might be important, and it really intimately plays into, like, where these language models came from and how they kind of had their success. So here's, like, kind of a cute example, looking at kind of starting to get away from just learning these kind of feature representations that could then be reused by downstream tasks, towards maybe we can learn the
4,227
4,248
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4227s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
tasks themselves without having to have standard human label feedback, and kind of shared that intuition with, like, the earlier discussion about, you know, computing the probability of the string 'I rate this one star out of five' after seeing the prefix of the product review. So this is a paper I did in 2017 which was, like, kind of a very targeted experiment here, and one of the
4,248
4,270
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4248s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
hypotheses I was working on was that maybe just data was the bottleneck, you know, our models are so inefficient that if we were able to just tile, kind of in an unsupervised fashion, the landscape of one domain we might care about, like product reviews, we could maybe do quite well. So we made a much larger dataset, or rather we used an existing dataset from, I think, UCSD and
4,270
4,294
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4270s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
Amazon in partnership, which had 40 gigabytes of text, so that was way bigger than that billion word benchmark, and it's all in just one domain. And we trained a byte-level language model on this for, you know, a reasonable amount of compute, a month on four Titan Xs. The model ended up underfitting a lot, but, you know, one of the most interesting things about this is, if we go and poke
4,294
4,313
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4294s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
into that model and say, well, you have this hidden state that summarizes everything you've seen, and we do probes over that, we found that actually there was a single unit within this language model which very vividly and directly just computes a running estimate of what is the sentiment of the characters I've seen so far in the review. So you can see that, you know, as it turns on, this is one
4,313
4,333
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4313s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
of Michael Crichton's best books, and so we have green colored as positive and red colored as negative. So again, there's no supervised learning going on here, this is all just unsupervised prediction of a byte stream, it just sees a stream of bytes, 40 billion in a row, and they're all just, you know, numbers 0 to 255, and it somehow figures out, in order to better predict this text, you know, it
4,333
4,352
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4333s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
recovers this useful feature, which is, well, is this review gonna be excited or, you know, dismissive. And, you know, it can handle complexity where, you know, it can switch from a great start, you know, it's something where, like, you know, here in the middle, 'seriously, the screenplay and the directing were horrendous,' and then it suddenly drops off as its, you know, running
4,352
4,373
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4352s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
analysis starts going negative, you know, 'I can't fault the actors, I know good novels especially are hard, but this may be the absolute worst disparity in quality between a novel and screen adaptation ever.' So it really does it, and it turns out that if we just threshold on this unit, so we're not even fitting parameters, we're fitting one parameter, it actually was matching these old word2vec
4,373
4,394
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4373s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
or bigram baselines, and even things like skip-thought vectors, and it's just a single unit in the model, and we're just running it over the document and, you know, thresholding the value at zero (a minimal sketch of that probe is below), and so this is a histogram, for positive reviews and negative reviews, of what this system does. So this is kind of showing, I think, in a very clean and pure way how you can really do
4,394
4,414
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4394s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
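A minimal sketch of that probe, assuming you already have the model's final hidden state for each review; the array and the unit index here are placeholders, not the paper's actual code:

```python
import numpy as np

def classify_by_sentiment_unit(final_states: np.ndarray, unit: int) -> np.ndarray:
    """Zero-shot sentiment: threshold a single hidden unit at zero (1 = positive)."""
    return (final_states[:, unit] > 0).astype(int)

# Stand-in for the final hidden states you would get by running the byte-level LM
# over each review; in the real setup these come from the trained model.
fake_states = np.random.default_rng(0).normal(size=(4, 4096))
print(classify_by_sentiment_unit(fake_states, unit=2388))  # unit index is illustrative
```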
BnpB3GrpsfM
some unsupervised representation learning here and start to learn something that really helps potentially with downstream tasks. It's very hand-engineered, it was very targeted, we knew that, like, you know, for product reviews sentiment is a very important feature, so we were kind of really hoping that something like this would happen and it would learn a really good representation, but it, you know,
4,414
4,433
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4414s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
still, like, kind of shows a proof point that with limited scale but lots of data you can get something done here. A follow-up work we did with Scott Gray was pushing on kind of model size again, so we said maybe hidden state size is the bottleneck. So again, these standard LSTMs and RNNs summarize the entire past context as a fixed-length feature vector, and so that might be, for
4,433
4,454
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4433s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
a standard model or a big model, like 4096 units, or Rafal's model was 8K units, and, you know, if we had like three-hundred-dimensional word vectors, if you naively just concatenated them into that state representation you could only handle like 30 in a row with, like, you know, an 8K or 9K state size, that's only about a sentence or two. So we thought that, you know, maybe it just
4,454
4,475
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4454s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
turned out that models were really limited by their state size, and so we pushed on these kind of block-sparse methods that kind of allowed us to train with much larger state sizes, where we would factorize the weight matrices such that they would be represented kind of as this two-layered system of having a dense sub-block and a lot of sparse blocks that are pruned away (a rough sketch of the idea is below), and
4,475
4,496
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4475s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
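A rough sketch of the block-sparse idea: dense tiles placed on a coarse random pattern, everything else pruned away. The real work used custom GPU kernels so the zero blocks are never stored or multiplied; this just shows the structure:

```python
import numpy as np

def block_sparse_weights(n: int, block: int, density: float, seed: int = 0) -> np.ndarray:
    """n x n weight matrix where only a fraction `density` of (block x block) tiles is dense."""
    rng = np.random.default_rng(seed)
    per_side = n // block
    keep = rng.random((per_side, per_side)) < density      # coarse connectivity pattern
    W = np.zeros((n, n), dtype=np.float32)
    for i, j in np.argwhere(keep):
        W[i*block:(i+1)*block, j*block:(j+1)*block] = rng.normal(size=(block, block))
    return W

W = block_sparse_weights(n=512, block=32, density=0.1)
print("nonzero fraction:", (W != 0).mean())  # ~0.1, while the state size stays 512
```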
BnpB3GrpsfM
we saw that these were slightly more efficient in terms of parameters, and they also worked better on things like sentiment analysis when evaluated by these linear models, which is like a standard probe for how good of a feature representation have I learned. That's partially because when your model just has, like, lots of features, and that's where their
4,496
4,512
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4496s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
expressiveness comes from, you know, linear separability is easier in high-dimensional spaces. And, yeah, this was kind of, like, explaining some of the history of what I was pushing on, trying to get these things to work and figure out how do I, like, really, you know, push their performance. And so this is, like, showing again that performance analysis of these units
4,512
4,530
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4512s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
learned by these models, so this is how that kind of representation evolves, and we show kind of data efficiency on the x-axis here. So in the limit we know there's that zero-shot performance of fitting a threshold with zero examples, and that actually turned out to be about here on this graph if you use all the data to probe and find it, but if you just fit kind of naively, as you saw more
4,530
4,551
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4530s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and more data, you, you know, could start with, like, in the limit, only needing 10 labeled examples to beat some of the original supervised learning baselines which just train systems from scratch. There's this Recursive Neural Tensor Network paper from Socher et al., very early deep learning work here with a, you know, really cool complex model, and we were able to match it with just ten
4,551
4,569
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4551s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
labeled examples, whereas it was trained on all 8,000 in this case. And then as we kind of keep adding more and more data, we see that the representations learned by these language models can be quite powerful and you're kind of able to, like, quickly sweep through, kind of, in the limit, you know, if you don't have any pre-training you start getting into these increasingly complex and disparate,
4,569
4,586
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4569s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
disparate is maybe a judgy word, ensembles of 30 different models to hit SOTAs, and then we're able to just use this model that exploits this unsupervised learning on a lot more data to push significantly higher, and then that later improvement with block sparse had another large jump above that. And so this is kind of one of the precursors that kind of heralds what's about to happen
4,586
4,607
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4586s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
on every task over the next few years. This is 2017, as, like, this field really starts taking off. So we mentioned this kind of cool and interesting thing of learning a single feature within one of these networks that kind of really shows some representation learning going on. So there's another really great paper I love here, from Roy Schwartz and collaborators in 2017, that I think again
4,607
4,631
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4607s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
starts to speak to, hey, these language models that are, you know, recurrent networks or more expressive neural networks are really actually learning something interesting and beginning to be useful for downstream tasks that might be difficult. So this is a dataset called the Story Cloze Task, so what you do is you have a paragraph of context, in this case 'Karen was assigned a roommate
4,631
4,652
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4631s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
for her first year in college,' yeah, they go to a music show together and it goes really well, and then you're trying to train a system to predict which is the right ending and which is the wrong ending. And so this fits very cleanly, or, this is what Roy was quite clever about, was realizing that this fits very cleanly into the generative modeling framework, you could say, well,
4,652
4,670
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4652s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
what is the probability of the right ending versus what is the probability of the wrong ending and again as we get better language models they should start to learn to exploit context and assign correct like you know the correct probabilities to these different strings and so very early work kind of took the classic supervised learning approach of just throwing you know a you know a
4,670
4,689
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4670s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
model, maybe even with word vectors pre-trained, at the system and treating it as, like, a binary classification task. But in this case, the Story Cloze Task, it's difficult to generate this data, they only had 2,000 labeled examples, so a purely supervised discriminative system really couldn't get that far, and they actually were basically not performing much better
4,689
4,705
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4689s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
than random. And so what Roy was able to show is that, well, you exploit tons more additional data which was available, of, like, training on small short stories, and then you use this model to score the endings. So it just produces a single scalar, which is, like, the ratio of the probabilities, the same kind of trick that we talked about before but computed with a language model, where you say, well, what
4,705
4,727
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4705s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
is the probability of the ending given the story, and you normalize by the probability of the ending in isolation (a small sketch of that scoring rule is below). And this trick just helps a bit compared to just computing only the probability of the ending given the story, that actually still works quite well, but you get a fair amount more. And so they were able to significantly improve the performance on this dataset, again in
4,727
4,744
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4727s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
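Schematically, the score is log p(ending | story) - log p(ending). A small sketch, where `lm_log_prob` is a hypothetical helper that returns the language model's log-probability of a piece of text given some context (the toy scorer at the end exists only so the example runs):

```python
def score_ending(story: str, ending: str, lm_log_prob) -> float:
    """Higher when the ending is much more likely given the story than on its own
    (the ratio of probabilities, taken in log space)."""
    return lm_log_prob(ending, context=story) - lm_log_prob(ending, context="")

def pick_ending(story: str, endings: list[str], lm_log_prob) -> str:
    return max(endings, key=lambda e: score_ending(story, e, lm_log_prob))

# Toy scorer: pretend endings sharing words with the context are likelier.
toy = lambda text, context="": float(sum(w in context.split() for w in text.split()))
print(pick_ending("karen and her roommate loved the show",
                  ["karen loved the show", "karen went home early"], toy))
```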
BnpB3GrpsfM
the limit just using that single feature, the RNN LM features here, they got an almost 10% jump in performance just by using the generative model off the shelf. There's no discriminative training, it's not exploiting, you know, spurious statistical correlations here, because it doesn't see any labels, it's just fitting a threshold on what it already thinks is the right ending versus wrong ending.
4,744
4,766
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4744s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
You know, another quick inner loop of scaling, so these kind of all are happening nestled together, and I think this gives kind of a sense of how, you know, research fields often evolve, where you see these different authors and different people pushing down different lines of work and then kind of things come together in exciting ways. So this is work from Noam Shazeer here, really just
4,766
4,782
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4766s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
pushing on maybe parameter count is the bottleneck, you know, maybe that's what's been holding back language models, and so they really went crazy here and they train models that have these, what they call sparsely-gated mixture of experts layers. So you have your standard LSTMs in pink on top and bottom of this model, and then in the middle you sandwich in what's
4,782
4,799
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4782s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
called this mixture of experts layer, and what this has is a gating network that decides to pick basically a two-layer fully connected network, and it says which one gets slotted in for this given word. So you think that, you know, maybe you want to memorize a lot of information, and when you see, you know, 'they went to the city blank' or something, the mixture network and the gating
4,799
4,819
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4799s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
network will say, oh, I should load up, like, you know, the expert that handles, you know, where cities are in the world, or, this is kind of just a hand-wavy high-level intuition (a toy sketch of such a gated layer is below). And particularly when you train this thing at large scale, because it's sparse, only one expert is being evaluated for any given location at a time, so you can group these and you can have many of these experts being
4,819
4,837
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4819s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
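A toy version of such a gated layer, heavily simplified: top-1 routing, tiny dense experts, and no load balancing, whereas the actual paper used noisy top-k gating and far larger experts. The shapes and names are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TinyMoELayer:
    """Toy sparsely-gated mixture-of-experts layer: a gating network picks one
    small two-layer MLP ("expert") per input vector, so only that expert runs."""

    def __init__(self, d_model, d_hidden, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        self.gate = rng.normal(scale=0.02, size=(d_model, n_experts))
        self.w1 = rng.normal(scale=0.02, size=(n_experts, d_model, d_hidden))
        self.w2 = rng.normal(scale=0.02, size=(n_experts, d_hidden, d_model))

    def __call__(self, x):
        gate_probs = softmax(x @ self.gate)     # which expert to "load up"
        k = int(gate_probs.argmax())            # top-1 routing: only expert k runs
        h = np.maximum(x @ self.w1[k], 0.0)     # expert = 2-layer MLP with ReLU
        return gate_probs[k] * (h @ self.w2[k])

layer = TinyMoELayer(d_model=16, d_hidden=32, n_experts=4)
print(layer(np.ones(16)).shape)  # (16,)
```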
BnpB3GrpsfM
trained in parallel. And so they're able to push to, like, you know, an eye-popping 137 billion parameters in this language model, it's all in this very specific sub-module, but it ends up being more compute efficient, and it has, like, a lot of clever and very impressive systems engineering work to handle how do you run this thing at scale and, you know, have it be efficient when handling so
4,837
4,858
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4837s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
many parameters there. So now we come back to type-two evals and kind of the standard slot-it-in-and-see-how-it-does, and this is, like, really the paper that kind of set this field off. It's called ELMo, from Peters et al., this is AI2 work again, and ELMo is the name of the model, but it's really about deep contextualized word representations, and this is kind of where there's the clean
4,858
4,882
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4858s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
mark between the word vector era and the, like, language model era. And so the way they do this is they kind of cleverly say, well, what do word vectors do, they slot in kind of as inputs, and they replace, you know, this discrete identity or ID, identifier token, of, like, you know, the word 'cat' being ID 256, with a distributed representation, as we discussed before.
4,882
4,909
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4882s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
Things like context are missing in this case. So this paper talks about how to use a language model to do the same thing, they're substituting the input representation, but instead what they're using is a deep bi-directional language model. So this is kind of the schematic here, where they have a forward LSTM that will first take in its own learned word representations, and it
4,909
4,929
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4909s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
runs over the sentence in a left-to-right fashion, and then they want to, you know, have context not just for words that happened in the past but words that might be about to happen, so they also run a backwards LSTM in the other direction, from the right to the left, and then they have this bi-, or sorry, excuse me, a deep model with multiple layers, so they run two
4,929
4,948
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4929s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
layers of LSTM, and then what they do is they learn weighted averages of the layers (a small sketch of that weighted average is below), so maybe for some low-level tasks you only want those input representations, but maybe for some tasks you really want that kind of long-range context and so you might want to use the higher-level layers. And so then they replace, instead of feeding in that kind of, like, one-to-one lookup in a
4,948
4,965
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4948s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
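The layer-mixing step fits in a few lines. This follows the softmax-weighted-sum-plus-scale form described in the ELMo paper, but the shapes and numbers here are toy values:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def elmo_style_mixture(layer_reps, s, gamma):
    """Collapse per-layer representations into one vector per token.

    layer_reps: (n_layers, seq_len, dim) hidden states from the biLM
    s:          (n_layers,) task-specific mixing logits, learned downstream
    gamma:      task-specific scalar scale, learned downstream
    """
    weights = softmax(s)                                       # normalized layer weights
    return gamma * np.tensordot(weights, layer_reps, axes=1)   # (seq_len, dim)

# Toy usage: 3 "layers" (embedding + 2 LSTM layers), 5 tokens, 8-dim states.
reps = np.random.default_rng(0).normal(size=(3, 5, 8))
print(elmo_style_mixture(reps, s=np.zeros(3), gamma=1.0).shape)  # (5, 8)
```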
BnpB3GrpsfM
table of what the word vector is, they have this RNN language model that processes the sentence or a piece of text in both directions, and it learns to, you know, reuse its hidden state representation as the input to the model instead of the word vector representation. So kind of like all those early results, like skip-thought vectors, showing that, well, you could learn a distributed
4,965
4,984
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4965s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
representation of the sentence, this one does it but it does it at a word level, and it just cleanly slots in where word vectors used to go. And so what this is quite nice for is it allows you to have very direct comparisons with prior work, and across the board they basically show that, like, simple baseline models which were substituted to use these representations instead of word vector
4,984
5,003
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4984s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
representations were outperforming very well-engineered, very tuned state-of-the-art systems that were, like, squeezing as much performance as they could out of word vectors, and they're getting, you know, quite large numbers here, where you see, you know, 10-20% relative error improvements, and importantly they kind of have that clean sweep of very many
5,003
5,021
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5003s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
different tasks, like question answering, entailment, coreference, NER, so even classical tasks like, you know, part-of-speech tagging, and, you know, this kind of really just swept everything and it was very clean, it kind of, like, made clear that, okay, you know, word vectors were great, but, you know, here comes the new thing. And, you know, the other very important and
5,021
5,042
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5021s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
fascinating thing I find about this is, the language model they used for this system is that language model that Rafal developed in 2016 at Google, along with co-authors like Oriol, where they really were just pushing on perplexities, they're just pushing on how well can we get a generative model
5,042
5,062
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5042s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
to model this text. And then, you know, two years later someone just was like, wait a second, this thing is learning amazing representations, and, you know, those two works are separated by two years and completely different research labs, and they just discovered that, you know, these language models are really doing something here. Yeah, so that's kind of
5,062
5,082
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5062s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
like really where things turned and you see you know again looking at data efficiency that when you're at very low amounts of data you get huge improvements like 10 plus percent absolute improvements so that really feels like you know as you get more and more supervised data you can begin to overcome the limitations of you know training from scratch but in the limit
5,082
5,101
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5082s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
you know, you want to use as little data as possible, you want to learn as quickly as possible, so this is, like, very exciting, and it kind of, like, really got everyone to start stirring and paying attention to this field. Yeah, so the final one before the break, you could think of it as pretty much in the same vein as ELMo, and what we did instead is we took a better language
5,101
5,124
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5101s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
model again. So transformers came out and we were really excited by their ability to handle longer-range dependencies, and they were also very compute efficient, so you could train them quite well and quite fast. So we swap out the recurrent network, or the LSTM, in the language model for a transformer-based language model, and if we want we could talk a bit about self-attention and
5,124
5,144
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5124s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
transformer-based architectures in a bit, but for now just think of it as, like, we subbed in a different, better architecture, and it's slightly larger. We use a similar dataset of books, it's the same dataset that skip-thought vectors introduced slash trained on, and we just fine-tune it the same way that Dai et al. did, and the exciting thing here is we saw that we no longer
5,144
5,166
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5144s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
needed these task-specific architectures for each task. So, you know, a lot of the cleanliness of ELMo was that, because it was just substituting the input representation, you could reuse all those engineered architectures, and often they would, you know, compensate for the issues of handling long-term dependencies in an RNN with, like, an attention layer or the
5,166
5,184
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5166s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
like, but they still require, you know, that engineering of these tasks for each of these different architectures, which means that you're still leaving performance headroom, you know, it's kind of like where you're initializing the middle-layer features of a CNN instead of, like, just the edge detectors of the lower layers, but then we still are sticking new layers on top. So we were trying to,
5,184
5,201
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5184s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
like, really kind of move towards a general-purpose framework that kind of is the same architecture everywhere, and not have to have as much of this task-specific engineering, which requires a lot of effort and time and grad student hours to, like, push those systems. So we have this transformer-based language model, and we kind of showed that for a fair variety
5,201
5,219
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5201s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
of tasks, primarily classification, we kind of take the same model and, without having to modify it or introduce new layers, we could just fine-tune it with only a linear classifier stacked on top (a minimal sketch is below), and we could across the board do quite well, and in many cases we were outperforming ensembles the same way that ELMo was doing before, and using basically the
5,219
5,241
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5219s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
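A minimal PyTorch-style sketch of that setup. `pretrained_lm` stands in for the transformer and is only assumed to map token ids to hidden states; the real GPT-1 recipe also kept an auxiliary language modeling loss and task-specific input formatting, which are omitted here:

```python
import torch
import torch.nn as nn

class ClassifierOnLM(nn.Module):
    """Pretrained transformer LM with a single linear classifier read off the
    hidden state at the last position; the whole stack is fine-tuned end to end."""

    def __init__(self, pretrained_lm: nn.Module, d_model: int, n_classes: int):
        super().__init__()
        self.lm = pretrained_lm
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.lm(token_ids)             # (batch, seq, d_model)
        return self.classifier(hidden[:, -1])   # logits from the final position

# Toy stand-in for the pretrained LM so the sketch runs end to end.
toy_lm = nn.Embedding(100, 32)                  # ids -> (batch, seq, 32)
model = ClassifierOnLM(toy_lm, d_model=32, n_classes=2)
print(model(torch.randint(0, 100, (4, 16))).shape)  # torch.Size([4, 2])
# Fine-tuning then minimizes cross-entropy on task labels, updating all parameters.
```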
BnpB3GrpsfM
same unified architecture to perform quite a lot of different tasks. And the GLUE benchmark had recently come out as, like, kind of a standard multi-task benchmark, and this is kind of one of the first major ones to bump up accuracy there and reduce the complexity of it. And, you know, there's two particular things that I'd like to focus on for discussing some of the results from that
5,241
5,258
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5241s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
paper, which is, if we ablate the number of layers transferred, we really see, this is a 12-layer transformer, a 12 self-attention block model, we really see that you need all those layers, and the random initialization of higher layers was not working well at the time. It may be, you know, as always, that you figure out better initialization methods and you can close that gap, but you see kind of
5,258
5,278
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5258s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
cleanly that we're transferring a deep, you know, a deep distributed representation, and, you know, the deeper it was the better it was generalizing, and that seemed to hold true across multiple datasets and was a very clean kind of performance increase as you just transfer more and more of those blocks. So ELMo is a 2-layer model and now we're going to, like, a 12-layer
5,278
5,297
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5278s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
model. And then this rightmost graph is really the one that I want to focus on, and this kind of links together some of the hints and pieces we've been seeing so far through, like, many of the different papers, which is kind of this interesting behavior, sometimes the language model is learning a supervised task, or a task we kind of thought needed supervision, to be classically trained in
5,297
5,316
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5297s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the machine learning framework, without any direct explicit labeling or supervision of it. So what we did here is we took this transformer language model and we kind of designed these heuristic ways of having it compute probabilities, the same way that Roy Schwartz was doing, and we kind of started to extend that beyond just, you know, the very specific thing like which of these two sentences is
5,316
5,335
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5316s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
most likely. So, like, for instance, we could take a language model and do exactly that example at the beginning and ask it, well, you just saw a movie review sentence, do you think the words 'very positive' or 'very negative' are more likely after seeing this sentence (a tiny sketch of that probe is below). So this would be this probe here, which is sentiment analysis in blue, and so we show, over the course of training this
5,335
5,352
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5335s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
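The probe itself is just a comparison of two continuation probabilities. A tiny sketch, where `continuation_log_prob` is a hypothetical wrapper around the language model (the toy scorer exists only so the example runs):

```python
def zero_shot_sentiment(review: str, continuation_log_prob) -> str:
    """Ask the LM which continuation it finds more likely after the review text;
    no parameters are updated, so this is a zero-shot probe."""
    pos = continuation_log_prob(review, " very positive")
    neg = continuation_log_prob(review, " very negative")
    return "positive" if pos > neg else "negative"

# Toy scorer just to make the sketch executable.
toy = lambda ctx, cont: (1.0 if "great" in ctx else 0.0) if "positive" in cont else 0.5
print(zero_shot_sentiment("this movie was great", toy))  # -> positive
```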
BnpB3GrpsfM
language model, we evaluate this kind of zero-shot performance probe, and we call it zero-shot because, and this is a broader term, you know, we didn't invent zero-shot by any means, but it just means evaluating on a task or dataset or a class that we've never seen before, and we haven't done standard supervised learning to update the representations or to train the model to do this. And so
5,352
5,372
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5352s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
we see that kind of as you train you steadily improve performance. We've normalized test performance so that zero is random guessing and one was the overall state of the art. You do still see across the board that these models are, you know, nowhere near SOTA, and often they're less than 50% of the way between random guessing and SOTA, but they're showing clear and steady improvements, and
5,372
5,389
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5372s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
they're showing that even on tasks like question answering, you could actually, you know, take a paragraph of, like, a question answering task and ask it, well, which of these answers do you think is more likely, and, you know, there's no supervised training here, it was trained to predict books, and then you ask it, like, a 5th grade science question and it starts to, sorry, I shouldn't
5,389
5,406
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5389s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
anthropomorphize it so much, but you just probe it, you know, you can compute some conditional probabilities from it, and you start to see progress being made on, you know, some potentially quite far afield tasks. The final point to make here, too, is self-attention and transformers really seem to help a lot here, where we did the same exact model, or, you know, its equivalent size and similar compute, with
5,406
5,426
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5406s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
an LSTM, and we were seeing that, especially on the zero-shot tasks, sometimes it could do relatively well, but on some of them, especially ones that involve long-range dependencies, you really need these self-attention layers to handle long-term dependencies. Cool, so I think we're at about half time for the lecture and I think that's probably a good time for a break then.
5,426
5,446
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5426s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
Fantastic, Alec, thank you. Let's take a break till 6:50 Pacific time, well, about eight minutes, does that sound good? Yeah, okay, great, and I'll pause the recording for a moment here. I'm sure if you have certain, like, limitations on how large your model can actually be, like everything needs to run on, like, a particular device and you can't, like, train a large model or a giant
5,446
5,476
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5446s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
model like this one, are there any strategies? Yeah, I mean, so I admittedly have been emphasizing the need for scale, but it's kind of a continuous spectrum thing, and there's some work we'll be talking about later that kind of focuses on efficiency and kind of how far you can push models of a given capacity and size. You know, probably the answer here, I think, from a pragmatic perspective, is to
5,476
5,497
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5476s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
kind of use whatever is the largest thing you can fit into the given device, framework, or, kind of, you know, resource specifications you have, but then kind of really pushing on how far you can take that thing, and some of the methods and techniques that have been developed, especially in the last year or two, have kind of increased efficiencies by factors of maybe five to ten. So there's,
5,497
5,517
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5497s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
I think, a lot of promise there from, you know, really pushing even with a fixed size, and many of those still fit on single GPUs. Yeah, thanks. Cool, yeah, so, yeah, I guess given it seems like the class has gone over transformers a few times, I won't do the super detailed version here, so yeah, I guess we'll just kind of look through that real quickly. So we've kind of talked right now so far
5,517
5,544
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5517s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
about standard, mostly standard, language models and kind of using different architectures, you know, character-level RNNs and LSTMs or bi-LSTMs and transformer-based language models, and they're always kind of trained with the standard autoregressive left-to-right, or in the case of ELMo adding a backwards right-to-left, language model. And, you know, that's nice,
5,544
5,566
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5544s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
it's a clear framework, it allows you to compute probabilities easily, it allows you to sample kind of in just an iterated fashion, it's not the fastest but it's quite simple to do, you just sample from the distribution over the next word and then you feed that in as a new input and condition on it and then resample (a minimal sampling loop is sketched below), and so it's a very clean and, like, general framework, but it
5,566
5,588
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5566s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
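That sampling loop written out; `next_token_probs` is a stand-in for the model and only needs to return a distribution over the vocabulary given the tokens so far:

```python
import numpy as np

def sample_sequence(next_token_probs, prompt, steps, seed=0):
    """Iterated ancestral sampling: sample the next token from the model's
    distribution, append it, condition on it, and repeat."""
    rng = np.random.default_rng(seed)
    tokens = list(prompt)
    for _ in range(steps):
        probs = next_token_probs(tokens)
        tokens.append(int(rng.choice(len(probs), p=probs)))
    return tokens

# Toy "model" over a 4-token vocabulary, just to show the loop running.
uniform = lambda tokens: np.full(4, 0.25)
print(sample_sequence(uniform, prompt=[0], steps=5))
```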
BnpB3GrpsfM
may actually not be all that optimal. So it's cool and exciting to see some of the things that these language models are doing, and some of the work, as I was just mentioning, has really pushed farther by walking away from that very explicit, like, left-to-right autoregressive language modeling strategy. So this is a common leaderboard, it's called the GLUE benchmark, and it combines a set of, like,
5,588
5,611
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5588s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
nine tasks together, and this was pretty important for the field, to kind of standardize on the set of tasks people reported on. As you can imagine, especially early on when the research is kind of scattered and not all that standardized, you kind of see, you know, people picking their favorite benchmarks. I was totally guilty of this myself, I really cared about sentiment classification
5,611
5,629
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5611s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
just because I happened to, you know, find that to be an interesting task and worked on it a lot, and so, you know, my favorite to report is sentiment classification, someone else is going to report on, you know, entailment, and someone else reports on question answering, so you've got this lack of commonality and comparison points. So the GLUE benchmark came in and said, we're going to standardize, we're
5,629
5,646
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5629s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
gonna focus on sentence-level tasks primarily, and we're going to kind of have a suite of them, and we're basically gonna say, hey, you should report on all of them so you can't hide your bad results on one, and this helped drive a lot of progress too. So this is a screenshot of kind of where this leaderboard has gone, showing all these new improvements and methods. So there's
5,646
5,664
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5646s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the BiLSTM ELMo baseline at the bottom that we mentioned, and GPT-1 would have slotted in slightly above that, but then there's BERT, now ranked 20, from Jacob Devlin and crew, and then there's Facebook AI's RoBERTa as another big jump. And so we saw, like, on the average metric here, which kind of just averages the performance across all these different tasks, it went from 70 to 80 between the BiLSTM
5,664
5,689
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5664s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
ELMo baseline to BERT, so that was a big jump there, and then an almost equally sized jump happened with BERT to RoBERTa, which we'll talk about in a bit, and then there's newer things like ELECTRA and T5. So this kind of whole area has really, you know, really exploded in the last year, two years, in terms of the amount of teams, and basically every major research lab,
5,689
5,713
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5689s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
Microsoft, like, Stanford and NYU, like, pretty much, you know, AI2, you see a huge amount of people, you know, everyone everywhere has been kind of pushing what they can do on this kind of benchmark and really seeing a lot of progress. So we're going to go through kind of some of these improvements, these are highlighted, a select few, there's many others. So,
5,713
5,735
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5713s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
sorry if I dropped out briefly there, the recording's back on. So SST-2 is, like, sentiment analysis, like we mentioned before, so it's kind of, you know, again a diverse suite of tasks here. So how do we, kind of, what are these big improvements we're seeing beyond the standard left-to-right language models, and there's one more point to make, which is there is a human baseline here and it's
5,735
5,786
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5735s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
slotted in actually in the middle, it's in 12th place now. So what does it mean, like, are these models actually better than people? And, you know, the answer really is no, and it's complicated and confusing, and we'll chat about this a bit more later, and supervised learning is always playing tricks on you, but, you know, now these models have, like, really, you know, come a long way in the
5,786
5,806
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5786s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
last two or three years because of leveraging unsupervised pre-training and kind of scalable methods to really make quite a lot of progress in this space very quickly. So this is BERT. So what BERT does is it basically finds a very great way to hybridize a language model objective with kind of the importance of, like, bidirectionality. So again, you know, by default we have
5,806
5,829
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5806s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
this, like, left-to-right autoregressive factorization, where we say, given the previous words, predict the next word in the language model, that's, like, what GPT-1 does. And so what we see with that is you're not able to exploit context from the right, you're not able to see, you know, by masking the model you have to prevent it from being able to just look at the next word and say, well, I see in
5,829
5,850
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5829s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
my sequence that it's 'cat', so I'll just learn to copy it over and predict 'cat'. So that has a major limitation, and when we released GPT-1 we actually, like, weren't able to do well on, or, you know, we found that on some of the question-answering datasets we just couldn't do as well, because we weren't able to exploit bidirectional context, whereas ELMo was the old bi-directional
5,850
5,870
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5850s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
language model, and with an LSTM, and, you know, because they trained a forward one and a backward one and average the representations, that works quite well for shallow models, and that gets you that bidirectional context, and that can help a ton, and, you know, they outperformed us, or still outperformed this, on some datasets because of that. And then BERT basically figures out how to have
5,870
5,886
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5870s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
bi-directional context within a self-attention model, and the way they do this is they change the objective, so they're no longer doing this, like, standard, you know, maximum likelihood training on, like, just the data distribution, they use this kind of proxy task called masked language modeling. So again, you know, at the bottom here we could see left-to-right LM is, like, 'the cat
5,886
5,905
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5886s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
sat on the' and then you blank out a word and it's supposed to predict 'mat'; right-to-left language modeling would be we'd go the other way around, and, same thing, you mask a word and predict, you know, what's there. And so what masked LM does is it just takes your input sequence and it corrupts a few locations, 15% in the case of BERT, and it trains the model to predict what's at those masked locations (a toy sketch of that corruption is below). So you
5,905
5,927
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5905s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
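A simplified version of that corruption step. Real BERT also sometimes swaps in random tokens or keeps the original token at a selected position; that detail is dropped here, and the mask id is arbitrary:

```python
import numpy as np

def mask_for_mlm(token_ids, mask_id, rate=0.15, seed=0):
    """Pick ~`rate` of positions at random, replace them with a [MASK] id, and keep
    the original tokens at those positions as the prediction targets."""
    rng = np.random.default_rng(seed)
    corrupted = token_ids.copy()
    is_masked = rng.random(token_ids.shape) < rate
    corrupted[is_masked] = mask_id
    targets = np.where(is_masked, token_ids, -100)   # -100 = ignore in the loss
    return corrupted, targets

tokens = np.arange(20)
corrupted, targets = mask_for_mlm(tokens, mask_id=103)
print(corrupted)
print(targets)
```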
BnpB3GrpsfM
know, in this case there's no, like, left-to-right requirement, it just randomly selects 15%, and this allows you to have bi-directional, like, representations, you can't leak the word because it's masked in the inputs, whereas for a standard left-to-right or right-to-left LM you kind of hide that. This is probably one detail of self-attention models, that
5,927
5,947
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5927s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
you have the self-attention matrix, and that kind of defines the connectivity pattern between different locations in your sequence of inputs that you're processing, and so you use a masked self-attention matrix for standard left-to-right language modeling, or right-to-left likewise, where you mask the upper triangle, and that prevents that future leaking (a tiny example of such a causal mask is below), and so you
5,947
5,969
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5947s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
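For contrast with the masked-LM objective, this is the causal mask a left-to-right model applies inside self-attention; blocking the upper triangle is what prevents peeking at future tokens:

```python
import numpy as np

def causal_attention_mask(seq_len: int) -> np.ndarray:
    """Position i may attend only to positions <= i (True = allowed)."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

print(causal_attention_mask(5).astype(int))
# In attention, scores at blocked positions are typically set to -inf before the softmax.
```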