Columns: video_id (string, length 11); text (string, length 361-490); start_second (int64, 0-11.3k); end_second (int64, 18-11.3k); url (string, length 48-52); title (string, length 0-100); thumbnail (string, length 0-52)
BnpB3GrpsfM
just reads like a full news article and it keeps characters and names persistent and you know pulls information from the source sentence over you know multiple paragraphs and this is all a lot of progress being driven in the last few years so kind of now that we have just like a look at the cool samples let's like get into the details here so this is going to be about statistical or
1,893
1,915
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1893s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
probabilistic language modeling, and the way we formulate this is we interpret language as a high-dimensional discrete data distribution that we want to model. The setup, since this is a statistical method, is that we're going to observe a bunch of strings of language, and the framing here with a probabilistic language model is we want to learn a function that can just
1,915
1,932
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1915s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
compute the probability or density of new sentences so we want to be able to compute Oh what is the probability of the sentence is it going to rain today and we're just going to give it a bunch of strings and somehow we're going to design a model that can compute this quantity so what does it mean to compute the probability of a string you know what what should the probability of that
1,932
1,951
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1932s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
string 'the cat sat on the mat' be? Well, you know, there are some people who think that this might not be the most well-defined concept, or there's a lot of reason for skepticism potentially. Noam Chomsky in 1969 has a very famous quote: "but it must be recognized that the notion of 'probability of a sentence' is an entirely useless one, under any known interpretation of
1,951
1,971
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1951s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
this term." So some people were quite skeptical; to be fair, this is well before any of the modern renaissance, and he goes on to explain a bit more that it's quite likely that statistical methods can work. But it's a good example of where we're coming from and some of the contrasts in the field. So let's instead talk about why this concept
1,971
1,991
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1971s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
might be useful, like why do we want to know what this probability is. And this is where I think you begin to see the connection between what the generative model does and how we end up using it, or why it might actually learn useful functionality for downstream tasks or for transfer. And so, you know, we could compare, for instance, the probability of the sentence 'the cat sat on the mat' to
1,991
2,011
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1991s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the probability of the sentence the cat sets on the mat and you know we would expect that let's say we somehow have the true function here we don't know how to learn it yet but we just assume we have like the ground truth of the probabilities of these two sentences well it should assign more probability to the grammatically correct one and that gives you something like grammar
2,011
2,029
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2011s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and that's you know an important part of language is understanding its structure and what are the valid sentences or not but you know should the probability of the sentence the cat sets on the mat be zero well no because sometimes people fudge their keyboard or miss type it should be much lower but it shouldn't be all the way to zero for instance and then you can kind of get to more
2,029
2,048
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2029s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
interesting sentences that you could query. You could say, you know, what's the probability of the sentence 'the hyena sat on the mat', and compare that to the sentence 'the cat sat on the mat'. And as a human being asked this, you would say, well, hyenas are wild animals, they don't often sit on mats unless they're at the zoo or something. So this kind of shows how, to
2,048
2,067
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2048s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
do this, to compute this probability correctly, you would need to start to have world knowledge: what is a common, you know, what is a common environment for a hyena, what does sitting on a mat even mean. And then you can ask other questions; again, you can start to get to conditional probabilities too, depending on how you set up this generative model: you
2,067
2,086
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2067s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
might be able to query, you know, given that the prefix is the substring 'two plus two equals', what should the probability of the completion 'four' be. It probably shouldn't be one, because people sometimes joke that two plus two equals five, but maybe if you had a bit more context you would be able to disambiguate which of those two you might predict. And then finally, kind of
2,086
2,106
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2086s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
coming back to some of the datasets or tasks we've already mentioned: if you have the prefix 'that movie was terrible, I'd rate it one star out of five', you really should know that that is a likely completion. And so to do that completion, and to generate that sentence, and to know that string is likely, that language model basically has to have somehow learned what sentiment analysis is
2,106
2,127
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2106s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and what is a likely relation between the concept of one star or five stars and the, you know, reception of the movie or the description of the movie before that. And so with that one it becomes clear that, in the limit, these functions that these language models learn and compute should be quite useful. Traditionally we approached that as a
2,127
2,146
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2127s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
supervised learning problem, right: we were gonna go build a dataset, go collect a bunch of crowd labelers and have them assign ratings to a bunch of different movie reviews; that's what the Stanford Sentiment Treebank is. But in the limit, this kind of unsupervised, scalable method of just fitting a probability distribution to strings of language
2,146
2,163
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2146s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
should eventually maybe be able to handle a task like this without any of the classic supervised learning framework being used. And so that extends much more broadly; those were some canonical or toy examples, but this actually can be quite useful, and this is where language models got their start in many cases 30 or 40 years ago,
2,163
2,186
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2163s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
or 20 years ago, in machine learning. So they're often used for speech recognition and machine translation, which again are traditionally approached as supervised tasks with pairs of transcripts that are somehow aligned, and a major promise is that we can somehow leverage these language models to really help with these problems. And for speech
2,186
2,206
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2186s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
recognition, for instance, you could prune the space of possible transcriptions from an acoustic model. There's a classic example from Geoffrey Hinton of how to tell the difference between the sentences 'wreck a nice beach' and 'recognise speech': they're very similar from a raw audio perspective, but if you have context you know that these can be quite different
2,206
2,226
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2206s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
things, and 'wreck a nice beach' is just also a much less likely string than 'recognise speech'. And for translation, for instance, you could rerank possible translations based on a monolingual language model: if you have an English-to-French translation system and you have some proposed French translation, you could say, well, hey, language model that I've trained already,
2,226
2,244
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2226s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
how likely do you think this sentence is in French? And there's a lot of work on integrating this directly into decoders and using them as rescoring mechanisms. So statistical language models really got their start in these tasks. So let's move towards actually having a computational model of language. First, maybe we'll do some preprocessing, like lowercasing:
2,244
2,266
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2244s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
we'll take some maybe messed-up text and turn it into all lowercase to simplify it. Then maybe we'll set a vocabulary size, just to make the distribution easier to handle, to something like a million tokens, so we might substitute a rare word like 'countertop' with an unknown token just so we don't have to deal with this potentially
2,266
2,289
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2266s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
open-ended probability of observing a novel word we've never seen before. And then finally we'll use something like a tokenizer, which will take an input string and return a sequence of tokens, so it'll chunk it into a sequence somehow with some rules or logic; this is another example of classic NLP work on designing tokenizers. So we might take, you know,
2,289
2,312
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2289s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
'the cat sat on the mat' and chunk it into just the words, and, you know, throw that punctuation on the end there. And then, because this is machine learning, we basically end up representing these words as unique identifiers or indices (a minimal sketch of this pipeline follows this segment), and that's again a way to get a window into how a machine learning model really sees natural language. You know, we come in as
2,312
2,330
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2312s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
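A minimal Python sketch of the preprocessing pipeline described in the segment above (lowercasing, a capped vocabulary with an unknown token, whitespace tokenization, and mapping words to integer ids). The function names and toy corpus are illustrative assumptions, not the lecture's actual code.

```python
from collections import Counter

def build_vocab(corpus, max_size=100_000):
    # count lowercased whitespace tokens and keep the most frequent ones
    counts = Counter(tok for line in corpus for tok in line.lower().split())
    vocab = {"<unk>": 0}
    for word, _ in counts.most_common(max_size - 1):
        vocab[word] = len(vocab)
    return vocab

def encode(line, vocab):
    # words the model has never seen are collapsed onto the <unk> id
    return [vocab.get(tok, vocab["<unk>"]) for tok in line.lower().split()]

corpus = ["The cat sat on the mat .", "The dog sat on the countertop ."]
vocab = build_vocab(corpus, max_size=8)
print(encode("the cat sat on the countertop .", vocab))  # rare word -> <unk> id
```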
BnpB3GrpsfM
humans with so much understanding and context from, like, lived experience, but if you try to train a naive supervised learning model and you start from random initialization, it's a lot harder to understand what 223, 1924, 742, followed by 10123, etc. is. And I think this helps you get into the mindset of when people talk about machine learning models being spurious
2,330
2,350
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2330s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
pattern matchers or just learning weird correlations that aren't true: if you've tried to do natural language processing tasks as a human where your inputs are represented in this format, you'd probably be a lot worse than current machine learning models already are, and it'd be understandable if you made mistakes as an algorithm trying to
2,350
2,369
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2350s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
figure out how to make sense of any of this, especially once you get to some more complicated task like 'do these two sentences logically follow each other'. You could even just do a simpler thing like splitting on spaces, so there's a huge design space here and I'm just providing a few examples right now. Okay, so there's character level, there's byte level, which would be kind of
2,369
2,389
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2369s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
working on, you know... if you just work on characters, how do you deal with non-ASCII text, or text in, you know, non-Roman, sorry, non-standard lettering systems? So you could work on a standard encoding scheme like a UTF-8 byte stream, you could also work on Unicode symbols or code points. And then there are these middle grounds between word level and
2,389
2,413
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2389s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
character level, which would be something like byte pair encoding, and this one actually turns out to be super important, so I'm covering it as part of general NLP methods; it's used by quite a lot of methods in the space now. What this does is it starts with a character-level vocabulary and it just merges the most common pair of characters, one merge at a time. So you might
2,413
2,433
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2413s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
have 't' and 'h' be the most common pair of characters, and then you'll combine them into a new token, 'th', and you'll merge that and resubstitute it in all of your words, and then you'll run this loop again. And so if you run this and just keep merging and merging and merging, it learns basically a tree of merges that quickly pops out full words like
2,433
2,451
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2433s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
'the', and common endings like 'ing' and 'ed' (a sketch of this merge loop follows this segment), and this learns something that lets us handle potentially the full distribution of language while also having the efficiency of representing semantic chunks like words, instead of operating on these characters, which might result in strings that are five or ten times longer and require much more
2,451
2,474
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2451s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
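A hedged sketch of the byte pair encoding merge loop described above, in the spirit of Sennrich-style BPE: count adjacent symbol pairs within words, merge the most frequent pair into a new token, and repeat. The toy word-frequency table and the merge count are made up for illustration.

```python
import re
from collections import Counter

def get_pair_counts(vocab):
    # vocab maps space-separated symbol sequences to word frequencies
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    # replace every occurrence of the pair (as whole symbols) with the merged token
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# words are pre-split into characters; merges never cross word boundaries
vocab = {"t h e": 20, "t h a t": 10, "s a t": 5, "m a t": 5}
for _ in range(10):
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)   # most frequent adjacent pair, e.g. ('t', 'h')
    vocab = merge_pair(best, vocab)
print(vocab)  # frequent chunks like 'th' and eventually 'the' pop out as single symbols
```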
BnpB3GrpsfM
compute and have much longer-term dependencies that are difficult to handle than standard word models. So byte pair encoding, from Rico Sennrich, is all over the place and is a very common middle ground: you back off to character level if you see something rare you don't know, or to handle all these different languages, while still having some sort of sensible
2,474
2,494
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2474s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
handling of common words and frequencies. Hey Alec, is there a common number of byte pairs that you want to end up with? Because it sounds like you start with byte level, which is just 256 possibilities, and then you could imagine that you can have many, many byte pairs, and sometimes it goes beyond pairs, I think, right, where you recombine an existing pair with an initial, just a
2,494
2,521
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2494s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
byte, like the 'in' and the 'g'? Yeah, when do you stop? So yeah, that's a good question. Usually you have a heuristic for merging only within words, so you won't merge across word boundaries with whitespace or things like that, and that just helps with efficiency, because otherwise you'll start wasting merges on things like, you know, common
2,521
2,543
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2521s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
pairs of, you know, maybe filler words or stop words. And the other thing is you could just, in the limit, run this all the way out to a full vocab, but we often work in this kind of middle ground, where to get good coverage of natural language you often need a hundred thousand plus words, and in the limit, if you want to start having common names and
2,543
2,562
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2543s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
places, you need really like million-sized vocabularies, and that can just be incredibly computationally expensive. So you'll often stick this in a middle ground of like 32K BPE merges, and you're absolutely right that it'll merge all the way up to a full word: you will get things like, you know, 'neurobiology' in the limit, that would be merged all the way with BPE just by doing merges over
2,562
2,581
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2562s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and over again. Yep, thank you. Cool. So how do we compute the probability of a string? Well, the dumbest model is we can just assume a uniform prior over tokens and assume full independence: we just multiply their independent probabilities together to compute it for any arbitrary sequence. Dumbest model, but we'll start somewhere. All right, so let's get rid of some of
2,581
2,602
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2581s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
these dumb assumptions. So we could say, well, we know some words are more common than others, and that word co-occurrence matrix has that diagonal term, which is just the frequencies or counts of words, so we could use that instead. And that would just allow us to say, well, the word 'the' is really common, so we're going to send more probability mass to it, and, you know,
2,602
2,621
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2602s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the word 'supercalifragilisticexpialidocious' is just pretty rare. So this would be called a unigram model, where all we do is multiply together the probabilities of the tokens from the empirical distribution, and again we can estimate that just by counting. We can then go a bit farther and start to exploit context. So again, we've talked before about how important
2,621
2,639
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2621s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
context might be, and this is where you can start to see language models begin to handle it, potentially. So you can say, instead of estimating just the diagonal of that matrix, we can use the full matrix basically and say, well, given that we just saw the word 'the', how often is the word 'cat' after the word 'the'? And so we can condition
2,639
2,660
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2639s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
on that previous token and use a modified version of that, like look at our count table, and start to handle a little bit of context. So that's a bigram language model (a count-based sketch of the unigram and bigram models follows this segment). But there's a problem of generalization here, and this is where counting methods eventually hit their limit: yeah, we can brute-force them with all the data on the
2,660
2,678
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2660s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
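A count-based sketch of the unigram and bigram models just described, assuming a toy whitespace-tokenized corpus. It also shows the generalization problem: any bigram that never occurred gets probability zero.

```python
from collections import Counter

tokens = "the cat sat on the mat . the dog sat on the mat .".split()

unigram = Counter(tokens)
bigram = Counter(zip(tokens, tokens[1:]))
total = sum(unigram.values())

def p_unigram(w):
    return unigram[w] / total

def p_bigram(w, prev):
    # P(w | prev) straight from the count table: count(prev, w) / count(prev)
    return bigram[(prev, w)] / unigram[prev] if unigram[prev] else 0.0

print(p_unigram("the"))        # 'the' gets a lot of probability mass
print(p_bigram("cat", "the"))  # how often 'cat' follows 'the'
print(p_bigram("cat", "mat"))  # 0.0 -- unseen pairs are the generalization problem
```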
BnpB3GrpsfM
internet, but at the end of the day they're not flexible enough. So let's say you've never seen a word before, like 'self-attention': you can't assign zero probability to that if you're trying to optimize for log loss or something, because you'd just get an infinite loss. And if we start going to longer and longer strings, this count method explodes and the occurrences of
2,678
2,699
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2678s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
every substring get rarer and rarer, and this just hits a wall. So in the, like, 80s and 90s, the way we handled this is we accepted that we couldn't handle the longer-term dependencies here, and we used clever smoothing methods and mixture models, where you might put a lot of your probability on, you know, the bigram or trigram model, which
2,699
2,720
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2699s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
is more expressive, but you'll smear probability, backing off if you don't see a word, for instance, or don't have a match, to that unigram model or uniform model in the limit (an interpolation sketch follows this segment). And so this was what you saw language models in the 80s and 90s spend a lot of their time on: they were still basically statistical count tables, but they
2,720
2,739
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2720s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
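A hedged sketch of the interpolation idea described above: mix bigram, unigram, and uniform estimates so unseen events never get exactly zero probability. The lambda weights here are arbitrary placeholders; classical systems tuned them (or used back-off schemes like Kneser-Ney) on held-out data.

```python
from collections import Counter

tokens = "the cat sat on the mat . the dog sat on the mat .".split()
unigram = Counter(tokens)
bigram = Counter(zip(tokens, tokens[1:]))
total = sum(unigram.values())
V = len(unigram)  # toy vocabulary size

def p_interp(w, prev, lambdas=(0.6, 0.3, 0.1)):
    l2, l1, l0 = lambdas
    p_bi = bigram[(prev, w)] / unigram[prev] if unigram[prev] else 0.0  # expressive but sparse
    p_uni = unigram[w] / total                                          # back off to frequency
    return l2 * p_bi + l1 * p_uni + l0 / V                              # uniform floor, never zero

print(p_interp("cat", "mat"))  # unseen bigram, but the probability is not zero
```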
BnpB3GrpsfM
optimized for kind of achieving something looking more like generalization of a simple form of kind of combining these mixture models and so this is a good review paper if you ever want to kind of go back through the history of this and all the different methods develop there you start to get things that look more like representation learning and even multi-layer models so they'll start
2,739
2,757
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2739s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
doing things like clustering over parts of speech or substituting for that so it's a very hand engineered way of potentially adding expressiveness but it's a good history of kind of where these methods came from so you know since we're talking about NLP and language models is one of the core workhorses here kind of how do you evaluate and interpret a language model
2,757
2,776
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2757s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
well, probabilities are often within rounding error of 0, since language is a huge discrete space and a sentence or a document might just be very long. And so the most common way of evaluating these models and saying how well they do is we use a quantity that's not dependent on the length: we use the average negative log probability per token,
2,776
2,796
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2776s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and, you know, this token definition again might be arbitrary: it might be character level or might be word level. And so if we're using character level, we might convert from base e to base two and report bits per character or bits per byte (a small sketch of these quantities follows this segment); you see a lot of common language modeling benchmarks work in this setting. And word-level language models often exponentiate that quantity and report
2,796
2,816
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2796s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
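A small sketch of the evaluation quantities being described: average negative log probability per token, its base-2 form (bits per character or byte), and the exponentiated word-level form (perplexity). The per-token log probabilities here are dummy values.

```python
import math

token_logprobs = [-2.3, -0.7, -4.1, -1.2]   # natural-log probabilities from some model

avg_nll = -sum(token_logprobs) / len(token_logprobs)   # nats per token
bits_per_token = avg_nll / math.log(2)                 # convert base e to base 2
perplexity = math.exp(avg_nll)                         # what word-level models usually report

print(avg_nll, bits_per_token, perplexity)
# Random guessing over 256 bytes gives log2(256) = 8 bits per byte;
# random guessing over a 50k-word vocabulary gives a perplexity of 50,000.
```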
BnpB3GrpsfM
what they call the perplexity instead. So yeah, it's just giving you bigger numbers and bigger improvements because you're working on an exponentiated rather than a log scale. So how do we ground these numbers? They're kind of abstract quantities: what is the difference between 1.23 bits per character and 1.2 bits per character, especially if you just spent pretty much your life
2,816
2,834
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2816s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
working on a paper and that's the number you got out. So it's important to understand these quantities are dataset dependent: it's really easy to guess all zeros, it's really hard to guess arXiv. And you can start calibrating the scales by saying, well, random guessing would get you, you know, log two of one over 256, so eight bits per character, and human
2,834
2,855
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2834s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
estimates, from not the best studies but the only ones we've got, have kind of pegged people in the range of like 0.6 to 1.3 bits per character, and the best models now are often a little bit lower than one bit per character. So that range probably is lower for humans, and we're somewhere, you know, getting okay but not matching humans on these kinds of
2,855
2,875
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2855s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
quantities. And a way of grounding perplexities is to use the same random baseline, so it ends up just matching the vocabulary size for a standard model: random guessing would be a perplexity of 50k. And one way of thinking about perplexity is as like a branching factor of language, so perplexity to the N is like the space of possible generations of length N that your
2,875
2,896
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2875s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
model might assign. So if you have a perplexity of 10 for a language model and you generate, you know, a two-word sequence, there might be a hundred kind of high-probability events in that space, and human-level estimates again are pegged between five and ten. An example, though, again, is this is always dataset dependent, always problem dependent: if you have a lot of well-
2,896
2,915
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2896s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
constrained context, like in translation, these numbers can be a lot lower, and the best models are often at like three perplexity on translation. So you're picking between maybe three possible likely words, and that kind of agrees with, like, maybe there's a few different ways for a human to translate a sentence, but it's not a huge space by any means. So evaluation type 2
2,915
2,936
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2915s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
is kind of what we talked about. So evaluation type 1 is very much the generative model perspective of, well, how good of a probabilistic model is this, and type 2 is instead kind of transfer, and the things we're really talking about and caring about more. There's a lot of ways we could use these language models, so you could say, how well does a better language model
2,936
2,953
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2936s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
potentially improve the word error rate for your speech recognition system, or the BLEU score for your translation system, or the, you know, accuracy for your document classification. And this is where NLP has really taken off, leveraging these language models, and the history of the last five years has been discovering more and more ways we could use smarter and smarter
2,953
2,969
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2953s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
language models or better and better language models to do more and more things so let's go through kind of the history here of kind of the sequence of developing real context models models that can generalize better than these kind of count based methods we've so far been kind of using it for all of our discussion so the first one here is surprisingly you know like honestly this
2,969
2,991
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2969s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
paper is amazing if you go back and read it. It's from Yoshua Bengio and from 2003, and it has a ton of very modern things in it: it has skip connections like you see in things like ResNets, in 2003; it's learning distributed representations of words, and this is that core concept we mentioned right at the beginning, of representing a word by a vector with
2,991
3,011
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2991s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
learned values for each location; this is the paper that really introduced this in the neural setting. And they were doing large-scale distributed asynchronous SGD on a cluster even back then in 2003; they had to do it because single machines were so slow. So this is, I think, a 64 or 128 CPU cluster, and it would take them I think a month to train a model with like
3,011
3,036
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3011s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
three layers and, you know, sixty hidden units. So this is still an n-gram model, but we're using a multi-layer perceptron to compute the conditioning on the context. So instead of this count-based method, we have an MLP that looks at the index for word t minus one, the index for word t minus two, and, you know, t minus three for, let's say, just a three-word
3,036
3,061
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3036s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
context. So these three vectors get concatenated together, of, you know, the last three words seen; we then run it through a hidden layer and then we feed it through a softmax to try to predict what the next word would be (a rough sketch of this model follows this segment). So this is a trigram language model still, but we've changed the model from a count-based method to a distributed setting with an MLP. And this was kind of the first
3,061
3,081
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3061s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
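A rough PyTorch sketch of the neural n-gram idea from the Bengio-style model described above: embed the last few word indices, concatenate the vectors, pass them through a hidden layer, and softmax over the vocabulary. The sizes are illustrative, not the 2003 paper's exact configuration.

```python
import torch
import torch.nn as nn

class NeuralNGram(nn.Module):
    def __init__(self, vocab_size=10_000, context=3, d_embed=60, d_hidden=60):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_embed)      # learned word vectors
        self.hidden = nn.Linear(context * d_embed, d_hidden)
        self.out = nn.Linear(d_hidden, vocab_size)

    def forward(self, context_ids):                          # (batch, context)
        e = self.embed(context_ids).flatten(1)               # concatenate the context vectors
        h = torch.tanh(self.hidden(e))
        return self.out(h)                                   # logits over the next word

model = NeuralNGram()
logits = model(torch.randint(0, 10_000, (2, 3)))             # predict word t from t-3..t-1
print(logits.shape)                                          # torch.Size([2, 10000])
```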
BnpB3GrpsfM
paper that heroically showed that they could match the performance of some of those super-optimized n-gram models, but again it took like ten days or a month and it was on a giant cluster. So neural language models really had some catch-up to play compared to these smart, quick count methods, and a lot of what took this so long was just, unfortunately, they do need a lot
3,081
3,100
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3081s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
more compute. So then the next major step here was moving away from these fixed context windows, which are kind of unsatisfying: we know that as humans we can look back in pieces of text and condition on multiple sentences, but these methods so far have had fixed context windows and have only been able to process or condition on just the last few words. So this is kind
3,100
3,121
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3100s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
of where RNNs come in, and Tomas Mikolov's 2010 paper is kind of the first modern deep learning version of this that started working quite well. So we replace that MLP with a recurrent neural network, and that allows for handling potentially unbounded context. Now, it handles that context in a learned fashion: you'll get an input word vector at one time step and you'll have
3,121
3,143
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3121s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
this context buffer, which is a learned memory state that the RNN modifies and updates, and you'll use that to represent a running summary of everything you've seen that's important for predicting the next potential word. This has potentially unbounded context, but in practice we'll train it with methods like truncated backprop, where we only update
3,143
3,163
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3143s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and compute, you know, how to modify the transition function of the state for up to like maybe 32 words or 64 words (a truncated-backprop sketch follows this segment). So it might be biased in that way, but it still can potentially learn to encode a lot of information into a learned memory system, instead of using hard-coded methods of just keeping the explicit input representations. So here we
3,163
3,189
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3163s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
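A hedged PyTorch sketch of training an RNN language model with truncated backpropagation, as described above: the hidden state is carried across chunks as a running summary but detached so gradients only flow back a fixed number of steps. Data, sizes, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

vocab_size, d, bptt = 1000, 128, 32
embed = nn.Embedding(vocab_size, d)
rnn = nn.RNN(d, d, batch_first=True)
head = nn.Linear(d, vocab_size)
opt = torch.optim.SGD(list(embed.parameters()) + list(rnn.parameters()) + list(head.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

data = torch.randint(0, vocab_size, (1, 10 * bptt + 1))   # one long token stream
state = None
for i in range(0, data.size(1) - 1 - bptt, bptt):
    x = data[:, i:i + bptt]                 # current chunk
    y = data[:, i + 1:i + 1 + bptt]         # next-token targets
    out, state = rnn(embed(x), state)
    loss = loss_fn(head(out).reshape(-1, vocab_size), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = state.detach()  # keep the running summary, cut the gradient at the chunk boundary
```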
BnpB3GrpsfM
get, again, probably one of the first real language model applications: on that previous one we were still using a type 1 evaluation, just how well can you predict the next word, but here with Mikolov's paper they showed that if you ran this for a speech recognition system you can actually get a much lower word error rate. You not only predict better, you start really
3,189
3,210
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3189s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
improving over the traditional, like, n-gram based language models. But if you look at this word error rate table here, you actually see that it improves the speech recognition system, so your transcriber will potentially make, you know, 1 to 2 points fewer errors. So the KN baseline here is at 13.5 percent word error rate, so you messed up 13.5 percent of words, and using all these RNNs
3,210
3,233
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3210s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
together you could actually reduce that by over two points and so you're talking about like a 20 percent error reduction which is quite significant yeah this is like a lot of early language models were actually published in speech conferences because this was such a important and exciting application of them to start with and again you don't need to collect a bunch of speech transcription data
3,233
3,251
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3233s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
here in the limit you could just run this thing over like New York Times articles and then use it to help potentially with your speech transcriber and that's where a lot of the power comes in from an unsupervised scaleable method and transfer capabilities so we already showed samples from this one but it's kind of a slightly different version where all these models so far
3,251
3,270
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3251s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
have been operating on words and kind of pre-built tokenizers to split it up and chunk it, and kind of fixed vocabularies. The exciting thing with character-level models: so it's the same kind of architecture, a recurrent network, except it approximates a richer transition function where you might have a different set of weights with multiplicative interactions. This was
3,270
3,290
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3270s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
back when we thought optimization was hard, so it's using second-order optimizers, because RNNs are scary and we still hadn't gotten used to just first-order methods working well. And it begins to handle much longer-term dependencies: when you work on character level, you're suddenly talking about sequences that are five times longer, so you start having
3,290
3,310
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3290s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
models that maybe handle hundreds of time steps, and that starts, you know, abstractly meaning maybe you could have a model that could actually parse a paragraph or parse a page. And it wasn't a lot better than n-gram models in terms of its perplexities, but it was very easy to sample from, and this was one of the first, I think, demos that people might have seen online
3,310
3,326
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3310s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
of a language model, back on someone's University of Toronto static website from like 2011. So the next model... Oh, quick question: when you look at character-level models versus word-level models, can you directly compare the perplexity? Uh, if you're careful, yes. So, you know, in the limit these are both just predicting a sequence, and if you set it up correctly
3,326
3,358
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3326s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
you could just, like, you know, here would be the simplest way to do it with a character-level model. Sorry, I should clarify: you can go one way, so you can compute, for a character or byte-level model, the perplexities that it would assign to a word-level model's test set, but some word-level models might have limitations, like using unknown tokens for out-of-vocabulary words, which means they can't
3,358
3,378
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3358s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
actually compute probabilities of arbitrary sentences, whereas that's one of the expressive benefits of a character-level model. So the simplest way to do this would be: you take the word-level model's token sequence as processed, let's just say it's split on spaces, you just rejoin on spaces and then compute the probabilities the character-
3,378
3,397
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3378s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
level model assigns, and you'll have an adjustment factor: you could just sum the log probabilities over the full sequence and then renormalize by the relevant count (a small sketch of this conversion follows this segment). And we'll actually be using that later to talk about how to compare different language models more appropriately. But again, you need to have the expressivity to handle an arbitrary string to be able to compute this, and, you know, old models,
3,397
3,416
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3397s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
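A small sketch of the adjustment being described: to compare a character- or byte-level model against word-level perplexities, sum its log probability over the whole string and renormalize by the number of words rather than characters. The bits-per-character value is an assumed example.

```python
import math

text = "the cat sat on the mat"
n_chars = len(text)                   # what the character model was scored per-token on
n_words = len(text.split())

bits_per_char = 1.2                   # assumed character-level result on this text
total_bits = bits_per_char * n_chars  # total negative log-probability of the string, in bits
word_perplexity = 2 ** (total_bits / n_words)
print(word_perplexity)                # the number a word-level model would be compared on
```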
BnpB3GrpsfM
because their computation often worked with small vocabularies, wouldn't truly be computing the probability of arbitrary strings, because they might normalize them in various ways. Got it, thank you. Yep. So the next step is kind of going to multi-layer LSTMs, and also introducing the LSTM, which, again, even though it came out back in 1997, got popularized primarily by,
3,416
3,438
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3416s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
you know, Alex Graves, one of the major people that popularized it, kind of 2013-ish. So this is again a character-level model, except we now have a gated RNN, which uses these multiplicative gates and more complicated transition dynamics to better store state, and to help, compared to like a multiplicative RNN, with credit assignment and just
3,438
3,459
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3438s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
trainability. And you start to get things that can handle kind of arbitrary strings of text, so you get something that's learning how to parse Wikipedia markdown or XML, and Andrej Karpathy really popularized these models with a blog post in 2015 showing that they work for LaTeX, they work for XML, they
3,459
3,479
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3459s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
work for Python programs. You know, they're not generating valid things, but they kind of can handle this; they really have this flexibility of, you know, feeling exciting from an unsupervised learning perspective: you give them some data distribution, of like Python programs or something, you just, you know, train over that, and then you get something that looks like it's
3,479
3,495
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3479s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
really drawn from that distribution. So we've kind of talked through a lot of the early work here, and although there was one example, with the Tomas Mikolov paper, of actually having an application, a lot of this was kind of just competing for competing's sake on type 1 evals, or, like, look at the funny samples. So again, one of the very fascinating things about
3,495
3,518
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3495s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the last few years of NLP has been how we figured out how to really use these things much more broadly across the board, and this is where I think it really starts to get exciting. So one of the first papers to do this was the skip-thought vectors paper from Jamie Kiros and collaborators in 2015, and what they did is they proposed learning an RNN sequence encoder to provide
3,518
3,539
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3518s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
context to a language model and basically to learn a sentence-level feature extractor. So what I mean by that is, let's say we have a sentence, 'I could see the cat on the steps'. What this model is trying to do is it first ingests this context sentence in the middle, and they call it skip-thought vectors because you can think of this as basically that skip-gram model
3,539
3,561
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3539s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
that was, again: you take a word in the center and then you predict the word before and the word after. This is generalizing it to sentences, and it's using an RNN to learn to summarize the context of the long sequence and handle predicting complex dependencies between multiple words. So we encode that center sentence with an RNN, we iterate over it in a
3,561
3,581
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3561s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
left-to-right fashion, and then we have a language model that predicts the previous sentence, so what might have happened before the sentence, and then a language model that also predicts the suffix sentence that comes after it (a rough sketch of this encoder-decoder setup follows this segment). So what they then do is they say, well, a model that does this task very well should learn to summarize this sentence in the middle,
3,581
3,600
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3581s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
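A very rough PyTorch sketch of the skip-thought setup described above: a GRU encodes the middle sentence into a vector, and two decoders are trained to generate the previous and next sentences conditioned on it. The layer sizes and module names are assumptions (the real model used conditioned GRU decoders with larger dimensions); this only shows the shape of the idea.

```python
import torch
import torch.nn as nn

class SkipThought(nn.Module):
    def __init__(self, vocab=20_000, d=256, h=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.encoder = nn.GRU(d, h, batch_first=True)
        self.dec_prev = nn.GRU(d, h, batch_first=True)
        self.dec_next = nn.GRU(d, h, batch_first=True)
        self.out = nn.Linear(h, vocab)

    def forward(self, middle, prev_in, next_in):
        _, z = self.encoder(self.embed(middle))           # z: sentence vector, shape (1, B, h)
        prev_logits = self.out(self.dec_prev(self.embed(prev_in), z)[0])
        next_logits = self.out(self.dec_next(self.embed(next_in), z)[0])
        return z.squeeze(0), prev_logits, next_logits     # the vector is the reusable feature

model = SkipThought()
ids = lambda n: torch.randint(0, 20_000, (2, n))
sent_vec, _, _ = model(ids(12), ids(10), ids(10))          # sent_vec: (2, 512) sentence features
```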
BnpB3GrpsfM
and for RNNs it's, you know, still these distributed representations, so you have this state vector that's representing, in kind of a learned fashion, all of the previous words you've seen. So importantly, that's now generalized from representations of single words to representations of sequences that can exploit context and potentially handle more complex properties than just, you know, the
3,600
3,621
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3600s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
meanings of words. And they showed across the board that these models healthily outperformed classic methods: so CBOW with word2vec would be the simplest version. So, you know, how do you represent a document with word2vec? Well, one of the feature representations you could take is to just average the embeddings of each of the words in the document, and
3,621
3,643
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3621s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
that would be what this CBOW baseline here is on a bunch of different datasets. And so you could instead say, well, we somehow learned this sequence extractor, we could run it, take its feature representation for each sentence, and use that instead. And you kind of see that these models, you know, if you use the combined skip-thoughts with the
3,643
3,664
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3643s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
bi-directional models, using the forward and backward versions, you can actually get these to start to outperform the word2vec models kind of across the board. And this paper was kind of exciting, especially from the breadth of things they do: they have things like image captioning representations that they learn, and they kind of show, you know,
3,664
3,681
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3664s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
with analysis methods like t-SNE that you see clustering according to classes. You know, it was kind of on the edge, where it showed some pretty exciting promise, and it was a lot stronger than, potentially, a simpler baseline, but there were still other discriminative methods for training models from scratch that were still matching it, with, like, you know,
3,681
3,704
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3681s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
well-designed convnet architectures or things like this. So although this had a very exciting, kind of, 'oh, it's a learned feature extractor that's able to handle long-term contexts and dependencies', it kind of worked, but it wasn't like sweeping the SOTAs away. It was, you know, exciting, and honestly I think a lot of people ended up using it more as a language model, where they saw
3,704
3,722
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3704s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
some cool demos of having it generate multiple sentences, but it never really quite, you know, blew everyone away with its quality. So this is like a good early hit, but it wasn't a home run by any means. And so this is where Andrew Dai's paper from 2015, Semi-supervised Sequence Learning, kind of comes at it from a slightly different angle. So again, for skip-thought vectors,
3,722
3,745
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3722s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
we just used this vector representation as an input to a model, and we fixed the model itself and just trained another model on top of this vector representation, and it's a single vector representation summarizing the whole sentence. So maybe that's kind of a difficult task, to summarize all the complexities of long sentences, short sentences. So what Dai et al. did instead
3,745
3,766
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3745s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
was they said, we'll take this language model that we've learned and we're just going to fine-tune it directly. We're not going to, like, precache the features, kind of word2vec style; we're just going to take whatever parameters that language model learned predicting the sequence and use that as an initialization point for training a supervised model for a downstream task (a rough sketch of this recipe follows this segment),
3,766
3,785
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3766s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
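A hedged sketch of the Dai-and-Le-style recipe described above: pretrain an LSTM as a language model on unlabeled text, then reuse the same embedding and LSTM weights to initialize a classifier that is fine-tuned end to end on the labeled task. Module names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

vocab, d = 20_000, 256

# Stage 1: language-model pretraining (objective: predict the next token).
embed = nn.Embedding(vocab, d)
lstm = nn.LSTM(d, d, batch_first=True)
lm_head = nn.Linear(d, vocab)
# ... train embed / lstm / lm_head with cross-entropy on next-token prediction over unlabeled text ...

# Stage 2: same embed + lstm, new classification head, fine-tune everything on labels.
class Classifier(nn.Module):
    def __init__(self, embed, lstm, n_classes=2):
        super().__init__()
        self.embed, self.lstm = embed, lstm           # initialized from the LM, not from random
        self.head = nn.Linear(d, n_classes)

    def forward(self, tokens):
        out, _ = self.lstm(self.embed(tokens))
        return self.head(out[:, -1])                  # classify from the final hidden state

clf = Classifier(embed, lstm)
logits = clf(torch.randint(0, vocab, (4, 50)))        # e.g. movie review tokens -> sentiment logits
print(logits.shape)                                   # torch.Size([4, 2])
```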
BnpB3GrpsfM
and this is the one that started to get good results and they were showing compared to standard supervised learning on you know datasets with like 20,000 labeled examples and stuff like that that these models could get quite far and so you see in the limit that you know if you you kind of have all of your different baselines here of you know word vectors feeding as inputs but then
3,785
3,806
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3785s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
we could use like a sequence autoencoder or a sequence language model and fine-tune that, and you start getting quite large drops in error here. And what's kind of cool here is these two different rows: one of these is pre-training only on the IMDB movie reviews, so basically the same dataset, it's a two-stage algorithm, and then this third row here is using a bunch
3,806
3,828
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3806s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
of unlabeled Amazon reviews and that's you know starting to get towards transfer learning starting to get towards well we can run this thing over a lot of data and as we get more compute we can just get more data from the internet we can feed in more and we see that that actually improves things significantly over only using like the small standard supervised learning
3,828
3,844
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3828s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
dataset in isolation. Some of this might have just been that, at the time, it was difficult to train language models and train RNNs back in the day, but as we'll see for the rest of this lecture, the methods have kind of continued to build on top of this and continued to make progress. This is the first one that got a strong SOTA, and, you know, there were
3,844
3,863
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3844s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
strong baselines before, and people started, like, really... I mean, well, to be fair, it came out and not much work happened in the space for the next two years, and a lot of that was because it really just killed it on these sentiment datasets and not as much elsewhere. And this really took some further work to figure out: how do we make this a generalizable
3,863
3,883
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3863s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
approach that works kind of everywhere, the same way that plugging in word vectors does. So moving back for one moment to a type 1 eval, there was a follow-up paper in the next year that really started to push on the scale and compute used for training language models; as we mentioned before, they've kind of always been compute limited. So this was that Google paper
3,883
3,902
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3883s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
that showed kind of the first language model that could generate something like a coherent sentence, and a lot of it was using a larger dataset: the Billion Word Benchmark was a big dataset at the time, and they used an 8K hidden unit projection LSTM, which is basically a low-rank factorization of the transition matrix, just to keep the parameter count down
3,902
3,925
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3902s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg