Columns: video_id (string, length 11), text (string, 361-490 chars), start_second (int64, 0-11.3k), end_second (int64, 18-11.3k), url (string, 48-52 chars), title (string, 0-100 chars), thumbnail (string, 0-52 chars)
5QaOt56cWhg
what evidence you have so now I have to introduce another piece of jargon the belief only exists in a big network of other beliefs and other mental states and one part of the network such as my belief we're in the United States that only makes sense in relation to the whole network I'd have to believe the United States is a country that it's on the surface of the earth and so on so
166
188
https://www.youtube.com/watch?v=5QaOt56cWhg&t=166s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
belief is not belief it looks like it's pretty simple on the surface truth I got this belief I believe I'm an American but in fact it's part of a vast network of intentionality and you can really only understand it by seeing how the network works and how it's constrained by rationality and by perception are there different categories of beliefs such that the belief that you saw your dog in
188
213
https://www.youtube.com/watch?v=5QaOt56cWhg&t=188s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
your living room or the belief that you do not believe in God yeah those are two things I'd use the word believe but one is kind of a direct perception and the other is kind of an analysis of reality yeah but they're both beliefs yeah I think that you're right to say that we have to make a categorization of our beliefs into, so to speak, different degrees of centrality but in
213
240
https://www.youtube.com/watch?v=5QaOt56cWhg&t=213s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
fact there's some of my beliefs I think it is misleading to describe as beliefs and I think they are presuppositions that enable me to cope with the world do I believe that there is a real world out there independently of my representation see I'm gonna get on an airplane now when I call up the airline when I get on the computer to find out is the plane on time I don't then have to
240
266
https://www.youtube.com/watch?v=5QaOt56cWhg&t=240s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
ask oh and by the way does reality exist that's not something I can find out by even looking on the net because all of these activities presuppose the existence of reality so there are some beliefs that are so fundamental that it is probably not a good idea to construe them as beliefs and I mentioned earlier that network these are part of something in addition to the network these are
266
292
https://www.youtube.com/watch?v=5QaOt56cWhg&t=266s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
what I call the background the whole system works against a background of what we take for granted we take for granted that entities are related to each other by cause and effect relation so we want to know what's the cause of cancer and it won't do to say well cancer is just one of those things it doesn't have any causes we won't accept that because our background
292
315
https://www.youtube.com/watch?v=5QaOt56cWhg&t=292s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
presupposition is things need a causal explanation and the background presupposition that makes sense of true belief is the idea that there's a way that things are that's independent of how we represent how they are now sometimes that's not the case sometimes our beliefs are so inchoate they're so ill formed we don't really know but for beliefs that really matter to us we
315
338
https://www.youtube.com/watch?v=5QaOt56cWhg&t=315s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
assume that there is a reality that corresponds to the belief but that belief in that reality is not just another belief it's a presupposition of making sense of the first belief some people say that when they believe in God that that is the most sure thing that they know yet many people know I think a lot of people for them the belief in God is the kind of background presupposition
338
361
https://www.youtube.com/watch?v=5QaOt56cWhg&t=338s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
they make sense of their lives only on the presupposition that there is a divine force and there was a period in my life a rather long time ago when I accepted something like that when I was a small child but later on it came to seem to me there's no rational ground for that whatever it's sad that there's no rational ground for it and a lot of people think well who the hell needs a
361
385
https://www.youtube.com/watch?v=5QaOt56cWhg&t=361s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
rational ground I have it on faith well okay but faith is not a reason faith is not a ground for accepting something so I think you're absolutely right that there are a lot of people for whom a certain metaphysical vision the existence of God or the existence of spirituality or the existence of a certain spiritual nature of the universe that all of those are
385
406
https://www.youtube.com/watch?v=5QaOt56cWhg&t=385s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
background presuppositions of their whole being and a whole mode of life but I don't share any of that I think it's almost all hot air that they don't have any ground for these and many of them would admit they don't have any ground but for me that's a reason for not accepting it whereas my acceptance that there is a world that exists independently of me that
406
428
https://www.youtube.com/watch?v=5QaOt56cWhg&t=406s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
seems to me not at all like the belief in God it's not specific to this or that view it just says when you investigate how things are there's a way that they are that enables you to investigate but to understand the nature of belief your feeling about the reality of the external world and the person who really believes in God as a fundamental basic belief in terms of just
428
452
https://www.youtube.com/watch?v=5QaOt56cWhg&t=428s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
5QaOt56cWhg
understanding belief not understanding reality it's kind of the same thing yeah no I don't think it is and I'll tell you why the belief in God presupposes the belief in an external reality because if external reality should turn out such that there is a god that's a feature of external reality if it should turn out that external reality is such that there is no god that's a feature of external
452
474
https://www.youtube.com/watch?v=5QaOt56cWhg&t=452s
John Searle - What is Belief?
https://i.ytimg.com/vi/5…axresdefault.jpg
BnpB3GrpsfM
to be able to introduce Alec Radford Alec Radford is a research scientist at OpenAI Alec has pioneered many of the latest advances in AI for natural language processing you might be familiar already with GPT and GPT-2 which Alec led those efforts at OpenAI and of course earlier in the semester we covered DCGAN which was the first GAN incarnation that could start generating
0
30
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=0s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
realistic looking images and that was also led by Alec it's a real honor to have Alec with us today and yeah now Alec please take it away from here yeah totally I'm super excited to be here and present because this course is like my favorite research topic unsupervised learning and yeah just really excited to chat with you all today so today I'm gonna focus on the
30
51
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=30s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
NLP and text side and I'm just gonna start the timer and today I'll be talking about just kind of generally learning from text in a scalable unsupervised kind of fashion kind of give a history of the field and some of the you know main techniques and approaches and kind of walk through the methods and kind of where we are today as well as providing some commentary on
51
73
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=51s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
kind of supervised learning versus unsupervised learning in NLP and why I think you know unsupervised methods are so important in this space yeah so let's I guess get started so learning from text you know one of the I think prerequisites to kind of start with is standard supervised learning requires kind of you know what we'd say is machine learning grade data and what I
73
96
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=73s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
mean by that is kind of your canonical machine learning data set is something at least in an academic context is something like you go use a crowd worker pipeline and you very carefully curate gold standard labels for some data you're trying to annotate and this is a pretty involved expensive process and you often are emphasizing kind of quality and specificity and preciseness
96
119
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=96s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
to the thing you care about the task you're trying to predict and maybe a very specific targeted data distribution and what this often means is you get a small amount of very high quality data and even some of the largest efforts in this space just because you have paid human feedback often involved and sometimes you're ensembling predictions of three five or more labelers it's often a few
119
144
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=119s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
hundred thousand examples is like a big data set especially for NLP in computer vision you sometimes see you know things like ImageNet where they push that to a million or ten million but those are kind of far outliers and you know very many canonical NLP datasets might only have five or 10,000 labeled examples so there's not really a lot of machine learning grade data out there at least
144
167
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=144s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
compared to what kind of the current learning complexities and efficiencies of current models are you know one of the primary criticisms of modern supervised learning deep learning in particular is how data intensive it is so we really have to get that number down and this lecture is basically going to be discussing all the variety of methods that have been developed for
167
186
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=167s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
using natural language that kind of is available beyond kind of the machine learning grade data and unsupervised or scalable self supervised methods for hoping to somehow pre-train do some auxiliary objective or tasks or you know hand design some method that allows you to improve performance once you flip the switch and go to supervised learning on the standard machine learning grade
186
209
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=186s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
data or in the limit as we'll talk later get rid of the need entirely to have a classic supervised learning data set and potentially begin to learn tasks in a purely unsupervised way and evaluate them in a like zero shot setting so there's a variety of methods this lecture is going to focus primarily on autoregressive maximum-likelihood language models they're kind of the core
209
232
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=209s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and I think they're the most common uniting thread that kind of carries the early days of this field through to kind of the current modern methods but I want to you know make clear at the front that there's many proxy objectives and tasks that have been designed in natural language processing to somehow you know do something before the thing you care about in order to do better on the thing
232
251
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=232s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
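As a reference point for the transcript above, here is a minimal sketch (not from the lecture) of what an autoregressive maximum-likelihood language model optimizes: the sequence probability factors as a product of next-token probabilities and training minimizes the summed negative log-likelihood. The `model` callable and the toy uniform predictor are illustrative assumptions only.

```python
# Hedged sketch: p(x) = prod_t p(x_t | x_<t); training minimizes the summed
# negative log-likelihood. `model` is any callable returning a next-token
# distribution (an RNN or Transformer would slot in here).
import math

def sequence_nll(model, tokens):
    """Negative log-likelihood of a token sequence under an autoregressive model."""
    nll = 0.0
    for t in range(1, len(tokens)):
        probs = model(tokens[:t])          # distribution over the vocabulary given the prefix
        nll -= math.log(probs[tokens[t]])  # log-prob of the observed next token
    return nll

# toy usage: a "model" that ignores context and predicts uniformly over 4 tokens
uniform = lambda context: [0.25, 0.25, 0.25, 0.25]
print(sequence_nll(uniform, [0, 1, 2, 3]))  # 3 * log(4)
```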
BnpB3GrpsfM
you care about and there's quite a lot and in particular in the last year or two we've now seen that area really kind of grow dramatically and in many cases they now outperform the standard language model based methods that I kind of will present as the core of the presentation and we'll talk more about the details of the differences as we get to those parts so some more motivation and intro as we've
251
276
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=251s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
kind of been going so I think one of the ways to think about this is like what do we do with the Internet so you know the wild Internet appears and you can either have your glowing brain esque representation on the left we can laugh at or we can emphasize you know how messy and random and weird and difficult it might be for algorithms to learn from it on the right so that's good old
276
296
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=276s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
Geocities and so you know there's a lot of skepticism I think about kind of these approaches that might kind of at the highest level look kind of silly or kind of whimsical to be like let's just throw an algorithm at the internet and see what comes out the other end but I think that's actually kind of one of the like one-sentence summaries of basically what modern NLP has been seeing a lot of
296
319
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=296s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
success from and you know I think one of the reasons why is just the Internet is so big there's so much data on it and we're starting to see some very exciting methods of learning from this kind of messy large-scale uncurated data and so there's a great tweet from an NLP researcher just kind of showing just how bizarrely big and you know kind of just massive the Internet is where you
319
343
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=319s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
can go and find an article about how to open doors and you know there's often a lot of arguments saying that oh you know we're not going to you know and it feels wrong in the limit to be like yes let's just throw algorithms at the internet and see what happens like that doesn't match human experience that doesn't match kind of the grounded embodied agents that you know we think of you
343
362
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=343s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
know intelligent systems and instead is this kind of just like processing bits or abstract tokens and so there's a lot of skepticism about this approach but I think that just quantities of scale and other methods play very well with current techniques and you know you see lots of arguments about things like oh there's this long tail and we're never going to be able to deal with
362
381
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=362s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
composition and really it's just maybe brute force can get us surprisingly far in the near term not saying that these methods or techniques are the end-all be-all but at least today there's I think strong evidence that we shouldn't dismiss this somewhat silly approach at a high level so let's start with kind of I think what would be the like simplest starting
381
405
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=381s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
point that we can convert from this kind of high-level idea into something that looks like a machine learning algorithm so we process a bunch of text on the internet let's say and we're going to build this matrix called the word-word co-occurrence matrix and so what we can kind of think of is it's a square matrix where the i,j entry corresponds to for a given word like water the count of
405
430
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=405s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
another word and whether they co-occur with each other so you might have to define what a co-occurrence is so that just means that the two happened to be present together and you might define a window for this for instance they both occur in the same sentence or within five words of each other or in the limit you can go quite far with like just happened to occur in the same document on
430
449
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=430s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the internet and so you're just gonna brute force count this it's just counting that's all it is we're just going over you know tons and tons of text and we're just building up this table basically so just a lookup table and it just tells you oh the word steam and water co-occur 250 times or you know the word steam is just in the data set 3 to 24 times total or you know
449
469
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=449s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the words hot and water you know 19,540 times so that's all we're doing and this is a way you know one this is incredibly scalable you can just run a Spark job over the entire internet with this kind of system you can quickly get this giant table and it's you know not computationally intensive it's just counting and processing and tokenization this thing can be run on a
469
492
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=469s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
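A minimal sketch of the word-word co-occurrence counting described in the preceding chunks; the window size of 5 and the whitespace tokenizer are illustrative assumptions, not the lecture's exact setup.

```python
# Hedged sketch: count how often pairs of words appear within a fixed window
# of each other, exactly the "it's just counting" table described above.
from collections import Counter, defaultdict

def cooccurrence_counts(docs, window=5):
    """Count co-occurrences of word pairs within `window` tokens of each other."""
    counts = defaultdict(Counter)
    for doc in docs:
        tokens = doc.lower().split()          # trivially simple tokenization
        for i, w in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    counts[w][tokens[j]] += 1
    return counts

counts = cooccurrence_counts(["hot water turns to steam", "steam rises from hot water"])
print(counts["water"]["steam"], counts["water"]["hot"])
```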
BnpB3GrpsfM
common desktop and get very far and it's simple it's just counting so how good is counting a bunch of stuff like we're we're talking about something incredibly basic it's just kind of how often do these two things occur together and I think you know one of one of the big takeaways that I'm gonna have a lot of during this presentation is just how far these simple methods that are scalable
492
513
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=492s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and with large amounts of data can get so this is a great example of a paper called Combining Retrieval, Statistics, and Inference to Answer Elementary Science Questions it's from Clark et al. at AI2 from 2016 and what they do is they take the same data structure this word-word co-occurrence matrix, actually let me start with the task so the task is elementary science questions
513
539
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=513s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
so it's just I believe through 5th grade kind of you know elementary school kind of simple settings questions so they're multiple choice there are four possible answers and they're these kind of simple things like a student crumpled up a flat sheet of paper into a ball what property of the paper changed hardness color mass or shape or you know what property of a mirror makes it possible
539
561
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=539s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
for a student to see an image in it is it volume magnetism reflectiveness or conductivity so this is the kind of thing that like you know again it's pretty basic in terms of like the high level they're you know relatively simple facts and they don't require all that much in the form of reasoning or comprehension but they're still the kind of thing that we do give to you know kids learning
561
581
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=561s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
about the world and so you might think that like oh you know this is the kind of thing where to understand a mirror you really need to you know exist in the world and to you know learn about all these properties or to have a teacher and how are we gonna get there we're just kind of this brute force thing that just counts a bunch of words and puts them into a table and then starts
581
598
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=581s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
looking them up and you know the takeaway here is that it can work surprisingly well so you can't quite pass these examples so the specific solver that we're gonna talk about in a second is the PMI solver and that gets to about 60% but random guess is 25% so we basically you know almost halve the error rate and get to 60 with just this very dumb brute
598
622
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=598s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
force approach so what actually is this solver they call it the point-wise mutual information solver and what you can think of it as is it just scores all of these possible answers so we have this sentence of context of you know the question and then we have you know four possible answers so what we do is we loop over basically the sentence and we just look for the word-to-word
622
645
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=622s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
co-occurrences and we just keep counting them up and we use this scoring formula which is the log of a ratio between two probabilities the first the P of XY is the joint which is basically the co-occurrence and so that gets you that count that's basically looking it up directly from that table the IJ entry for XY and then you normalize by this kind of baseline assumption
645
669
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=645s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
which is that the words should not co-occur more than by chance so that would be just their independent probabilities multiplied together as you can imagine those may be quite small and multiplying them together makes them even smaller but some words do co-occur together so a mirror occurs with reflective or you know electricity occurs with lightning or you know crumpled up might
669
694
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=669s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
co-occur with like hardness and so that's all this method does is it kind of just says these basic associations between words and that can get you surprisingly far it doesn't feel like real learning you know maybe and it does it's definitely not very human-like but it's just an example of kind of the power of basic methods and how something that you know doesn't involve any you
694
717
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=694s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
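A hedged sketch of the PMI-style scoring just described; Clark et al.'s actual solver has more moving parts (retrieval, filtering, combination with other solvers), so this only shows the log-ratio scoring idea, and `counts`, `unigram`, and `total` are assumed to come from a co-occurrence table like the one sketched earlier.

```python
# Hedged sketch: PMI(x, y) = log p(x, y) / (p(x) p(y)); an answer option is
# scored by summing PMI between it and the words of the question.
import math

def pmi(x, y, counts, unigram, total):
    """Pointwise mutual information from raw co-occurrence and unigram counts."""
    p_xy = counts[x][y] / total
    p_x, p_y = unigram[x] / total, unigram[y] / total
    return math.log(p_xy / (p_x * p_y)) if p_xy > 0 else float("-inf")

def score_answer(question_words, answer, counts, unigram, total):
    """Sum PMI between each question word and the candidate answer word."""
    return sum(pmi(w, answer, counts, unigram, total) for w in question_words)

# usage idea: pick the candidate whose words co-occur most with the question, e.g.
# max(["hardness", "color", "mass", "shape"],
#     key=lambda a: score_answer(question_words, a, counts, unigram, total))
```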
BnpB3GrpsfM
know you know intelligence or hand waving that we might make about you know complicated systems it's just a big lookup table you know a Spark job you might run on the internet and it can get you surprisingly far so there's a problem with working with these word-word co-occurrence matrices they're huge so let's say we have a million word vocabulary so we have a million words by
717
736
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=717s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
a million words just to have the full version naively and then you might store it with int32 hopefully you don't need int64 so that's four bytes so storing this whole matrix in memory in a dense representation is four terabytes you know that's still huge for today most machines don't have that much memory in them so and you know if we were to kind of like start working with
736
761
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=736s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
like how do we use this system or how do we kind of make it more general you know we just have this matrix and there's you can definitely design hand-coded algorithms to let go look up entries and query on it and we see that they can get quite far but you know we'd like to do more and how does this slot into NLP more broadly so we want to come up with a more compact but faithful
761
784
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=761s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
representation of the relations between the words and the information they represent and we could just say that we really just want to find a way of representing this giant co-occurrence matrix as something more like what we know from deep learning and machine learning in general so here's the algorithm called GloVe from Pennington et al. at Stanford NLP in 2014 so we take that matrix of word-word
784
805
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=784s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
co-occurrences like I mentioned it's cheap so you can run this thing on like a trillion tokens and each entry X_ij would be the count of word i co-occurring with context word j and what we're going to do instead is we're going to you know learn an approximation of this full matrix and the way we're going to do it is we're going to say we're going to redefine a word as a low
805
825
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=805s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
dimensional or at least compared to you know a million by a million matrix much more low dimensional vector so we're gonna learn a dense distributed representation of a word and all we're gonna say is this very simple model such that we're trying to predict the log prob or the log co-occurrence count of the X_ij entry and the way we're going to do it is we're gonna look up
825
847
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=825s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the vector representation of word i and the vector representation of word j we're just gonna say their dot product should be proportional to the log co-occurrence count and that's all this is and so it's really simple and you can just use a weighted squared error loss so that's what this f(X_ij) is basically a weighting function to account for the fact that some words are
847
871
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=847s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
way more common and you don't want to over train this thing on like those words and you might also want to like clip because you might have like extremely long tail frequency distributions and things like that but at the end of the day you just have your w_i dot w_j and you add some bias terms and you're just trying to compare that to the log of the raw co-occurrence count so
871
893
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=871s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
this allows us to go from that giant M by M matrix which might be a million by a million to an M by N matrix where there's M words and each is an N dimensional vector and often it turns out that these can approximate that full co-occurrence matrix quite well and they're much much smaller dimensionality so they might be just 300 dimensions and you know there's a question of what does
893
912
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=893s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
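A small numpy sketch of the GloVe-style objective described above, regressing dot products of word vectors onto log co-occurrence counts with a clipped weighting function; the hyperparameters (dimension, x_max, alpha, learning rate) are illustrative guesses, not the paper's exact settings.

```python
# Hedged sketch of one SGD step on a GloVe-style loss:
#   0.5 * f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2
import numpy as np

def glove_step(W, W_ctx, b, b_ctx, i, j, x_ij, x_max=100.0, alpha=0.75, lr=0.05):
    weight = min((x_ij / x_max) ** alpha, 1.0)          # f(X_ij): cap very frequent pairs
    diff = W[i] @ W_ctx[j] + b[i] + b_ctx[j] - np.log(x_ij)
    grad = weight * diff
    wi, wj = W[i].copy(), W_ctx[j].copy()
    W[i] -= lr * grad * wj                              # gradients of the squared-error term
    W_ctx[j] -= lr * grad * wi
    b[i] -= lr * grad
    b_ctx[j] -= lr * grad
    return 0.5 * weight * diff ** 2                     # per-pair loss

# toy usage over a 5-word vocabulary with 16-dim vectors
rng = np.random.default_rng(0)
W, W_ctx = rng.normal(0, 0.1, (5, 16)), rng.normal(0, 0.1, (5, 16))
b, b_ctx = np.zeros(5), np.zeros(5)
print(glove_step(W, W_ctx, b, b_ctx, i=0, j=1, x_ij=42.0))
```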
BnpB3GrpsfM
this thing learn and how does it approximate that but empirically it just can compress it quite well and this might make sense because you can imagine that so many many words just never co-occur with each other all that often and in fact simple sparse storage of that full matrix can get a lot smaller already but then we work mostly with dense distributed representations these days
912
932
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=912s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
in deep learning so we're gonna smash it into the framework we know there's another version of this yep the question so do you still have to first build the full matrix and then you run this or so this is a way of having had the full matrix you then run this as a way of like kind of compressing or re-representing the matrix got it thanks mm-hmm so now as an example where you
932
958
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=932s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
don't have to build that full matrix so there's another variant of very similar kind and I think usually a more well-known version of this kind of algorithm class called word2vec and so word2vec is instead a kind of predictive framework where instead of saying we've got this kind of you know abstract like co-occurrence matrix and then we're going to try to like compress it
958
978
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=958s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
and we represent it as word vectors we're gonna just work with natural sequences of text so you might have you know a five-word sentence like the cat sat on the mat and what you're gonna do is there's going to be a model that's trained to take a local context window like you know the cat sat maybe two words of past context and two words of next context we're going to do an
978
999
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=978s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
incredibly simple linear operation like summing them and then we're just going to try to predict that word in the center so this is called the continuous bag of words representation continuous because it's a distributed representation bag of words because the operation that composes the context is just sum or a bag and then we just predict the output and we can
999
1,018
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=999s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
parameterize that as like the log probability of the word in the center of the context and there's the inverse version of this which is the skip-gram model which given a central word of context tries to predict the window and so this uses kind of the more standard approach of like online training and it just streams over a bunch of examples of text you can use mini-batch training it
1,018
1,041
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1018s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
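A minimal numpy sketch of the CBOW idea just described: average the context word vectors and score every vocabulary word as the centre word. This is illustrative only and not the original word2vec C implementation, which also uses tricks like hierarchical softmax or negative sampling instead of the full softmax shown here.

```python
# Hedged sketch of continuous bag-of-words: context vectors are averaged ("bag")
# and the centre word is predicted with a softmax over the vocabulary.
import numpy as np

def cbow_probs(E_in, E_out, context_ids):
    """Probability over the vocabulary for the centre word given context ids."""
    h = E_in[context_ids].mean(axis=0)          # the 'continuous bag of words' context vector
    logits = E_out @ h                          # one logit per vocabulary word
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

vocab = ["the", "cat", "sat", "on", "mat"]
rng = np.random.default_rng(0)
E_in = rng.normal(0, 0.1, (len(vocab), 8))      # input embedding table
E_out = rng.normal(0, 0.1, (len(vocab), 8))     # output (prediction) table
# context "the cat __ on the" -> distribution over the missing centre word
print(cbow_probs(E_in, E_out, [0, 1, 3, 0]))
```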
BnpB3GrpsfM
looks like your standard algorithms now the same way I mentioned some tricks with like using the log co-occurrence or a reweighting function you need those same kind of things here again many words span many different ranges of frequencies where you might have words like 'the' be literally 7% of all your data so if you naively train a word2vec algorithm without
1,041
1,059
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1041s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
subsampling or resampling based on the frequency distribution seven percent of your compute is going to modeling the word 'the' and then you know some important word like New York City or something or phrase is just basically lost in the noise so we use a reweighting function I believe it's the inverse fifth root so it just works and that just heavily truncates the frequency distribution so
1,059
1,083
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1059s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
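A toy sketch of the frequency-based subsampling just discussed; the exact exponent and threshold used by word2vec differ (the speaker himself hedges on the root), so the formula below only illustrates the mechanism of keeping very frequent words like "the" with low probability.

```python
# Hedged sketch: drop frequent tokens with probability growing with their
# corpus frequency, so rare but informative words are not drowned out.
import random

def keep_probability(freq, t=1e-3):
    """Probability of keeping a token whose corpus frequency is `freq` (a fraction)."""
    return min(1.0, (t / freq) ** 0.5) if freq > 0 else 1.0

def subsample(tokens, frequencies, rng=random.Random(0)):
    return [w for w in tokens if rng.random() < keep_probability(frequencies.get(w, 1e-7))]

freqs = {"the": 0.07, "new": 0.001, "york": 0.0005}
print(subsample(["the", "new", "york", "the", "the"], freqs))  # most "the" tokens get dropped
```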
BnpB3GrpsfM
they're basically doing the same thing today this is a predictive framework where it takes in a sequence and it tries to predict some subset of that sequence with a very simple linear model and you just have the same word embedding table we talked about but they both do about the same thing and they're kind of the canonical first round of distributed or scalable kind of
1,083
1,105
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1083s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
unsupervised self supervised representations for NLP again there's no you know human supervision classically involved in these algorithms they just kind of have this automated procedure to just churn through large amounts of data and you know word2vec came out of Google in like 2013 and you know one of the first things that it was written to run on a big CPU
1,105
1,122
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1105s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
cluster with like a very efficient C++ implementation and shove a bunch of words through it and it works really well and so let's kind of talk about what this does so for this graph I'm gonna talk about how, I'm gonna interrupt for a moment if you go back, yep, so on the left the words are represented by vectors and then you average and you're supposed to get the
1,122
1,146
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1122s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
vector representing the middle word on the right where do the embeddings live they're the same embeddings so they're both inputs and targets so you would basically slice out some word w_t from your list you would then also pull a sequence of context to be predicted like the word before and the word after and then you would have the same prediction objective
1,146
1,173
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1146s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
for that word at that location and there's other approximations that I'm kind of just glossing over right now how to do this efficiently because computing a full normalization of the predictions over like a full million size vocabulary is very expensive so often you can use a tree structure or a sub sampling algorithm where you might normalize over
1,173
1,196
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1173s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
only a randomly selected subset and you can weight that subset and things like this so is the prediction in negative sampling some kind of inner product between w_t and w_t-2 or yeah so that would be how you'd get the logit for the log probs it's a dot product as well yeah sorry I should've been clearer about that operation thank you cool thanks Alec so
1,196
1,218
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1196s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
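A hedged numpy sketch of the negative-sampling shortcut touched on in the exchange above: the logit is a dot product between the centre-word and context-word embeddings, and instead of normalizing over the full vocabulary you apply a logistic loss to the one true pair plus a few randomly drawn "negative" words. The table sizes and sample count below are arbitrary.

```python
# Hedged sketch of the skip-gram negative-sampling loss for a single
# (centre, context) pair with k sampled negatives.
import numpy as np

def negative_sampling_loss(E_in, E_out, center, context, negatives):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    pos = sigmoid(E_out[context] @ E_in[center])        # true pair: push its score up
    negs = sigmoid(-(E_out[negatives] @ E_in[center]))  # sampled pairs: push their scores down
    return -(np.log(pos) + np.log(negs).sum())

rng = np.random.default_rng(0)
E_in, E_out = rng.normal(0, 0.1, (100, 16)), rng.normal(0, 0.1, (100, 16))
print(negative_sampling_loss(E_in, E_out, center=3, context=7,
                             negatives=rng.integers(0, 100, 5)))
```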
BnpB3GrpsfM
yeah what do we do with these things so this is where kind of a lot of the first wave of kind of modern you know modern modern is a contentious word but kind of NLP starting to leverage large-scale unsupervised data started figuring out how to use these things so these examples on the left are with GloVe and what we see is kind of a suite of tasks so there's
1,218
1,241
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1218s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
the Stanford Sentiment Treebank which is predicting for a sentence of a movie review is it a positive review that they liked the movie or is it a negative review you know like movie IMDB is another sentiment analysis data set but it's a paragraph of context TREC-6 and TREC-50 are classifying kind of types of questions like who what where when and SNLI is a much fancier thing of
1,241
1,265
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1241s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
logical entailment so it's kind of measuring the relation between two sentences a premise sentence and a hypothesis sentence and you're basically trying to say given the premise does the following sentence follow logically from it is it entailed is it kind of irrelevant or containing information that's maybe correct but maybe not which would be neutral or is it
1,265
1,290
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1265s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
actually a contradiction with the previous sentence so you know it might be the first sentence is like you know a woman is walking a dog and then the second sentence is like a man is playing with a cat and that would just be a contradiction of the first sentence so that's SNLI and it's a sensible objective and it's kind of this more complex operation because it's doing
1,290
1,310
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1290s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
logical reasoning supposedly and it's doing it over semantic concepts like you might need to know the relations between playing an instrument or you know that saxophone is an instrument so that if the premise is a man playing saxophone you need to know that the hypothesis might be you know entailed by it if it's the man is playing a musical instrument so that one has like kind of an
1,310
1,330
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1310s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
interesting relation to some more semantic content and the final example here is SQuAD which is a question answering dataset so you get a paragraph from Wikipedia and you have to predict you know given a question what the answer is from that paragraph and so for all of these data sets again this is a pretty broad suite of tasks you see multiple absolute percentage performance
1,330
1,351
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1330s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
jumps from slotting in word vectors compared to randomly initialized components of the models that were used to predict these so you can always do random initialization the kind of standard canonical thing in deep learning or you could use these pre-trained vectors and so they really do seem to help in terms of data efficiency and you can see in some cases like for question answering
1,351
1,372
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1351s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
that you can get a 10% plus absolute improvement here for GloVe GloVe plus CoVe is another thing which we'll come to in a bit and you know why might these be helping so much so that's the kind of empirical data well on the right here we kind of have some of the work that got done to kind of inspect the properties of these word vectors so they would for instance have a query vector like the
1,372
1,394
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1372s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
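A minimal PyTorch-style sketch of what "slotting in" pretrained word vectors versus random initialization looks like in a downstream classifier; the bag-of-vectors model below is a made-up minimal example, not any of the task-specific architectures from the slide.

```python
# Hedged sketch: the only difference between the two conditions compared on the
# slide is whether the embedding table starts random or is copied from a
# pretrained matrix (e.g. GloVe) before fine-tuning on the supervised task.
import torch
import torch.nn as nn

class BagOfVectorsClassifier(nn.Module):
    """Average word vectors, then a linear layer on top."""
    def __init__(self, vocab_size, dim, num_classes, pretrained=None):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        if pretrained is not None:                      # a [vocab_size, dim] pretrained matrix
            self.emb.weight.data.copy_(torch.as_tensor(pretrained, dtype=torch.float))
        self.out = nn.Linear(dim, num_classes)

    def forward(self, token_ids):                       # token_ids: [batch, seq_len]
        return self.out(self.emb(token_ids).mean(dim=1))

model = BagOfVectorsClassifier(vocab_size=10_000, dim=300, num_classes=2)
print(model(torch.randint(0, 10_000, (4, 12))).shape)   # torch.Size([4, 2])
```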
BnpB3GrpsfM
word frog and then they would show all of the different possible nearest words in terms of just cosine similarity to that first word so you can see that you know immediately it's the plural version of it frog to frogs and you know toad is very similar to frog rana is like I guess a more scientific name and then you get slightly farther to things like lizard so you can see how that can
1,394
1,416
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1394s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
simplify the problem space if we have a distributed model and we have an input that's asking a question about a frog if we don't have any knowledge of the structure of language or the relations between the word frog and toad it's you know naive or basically impossible for that model to then potentially generalize the same question asked about a toad instead but if we have this dense
1,416
1,437
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1416s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
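A small numpy sketch of the nearest-neighbour probe described above, ranking words by cosine similarity to a query vector such as "frog"; the random toy vectors only demonstrate the mechanics, and real results would require trained GloVe or word2vec vectors.

```python
# Hedged sketch: cosine-similarity nearest neighbours in an embedding table.
import numpy as np

def nearest(query, vectors, words, k=3):
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ (query / np.linalg.norm(query))
    return [(words[i], float(sims[i])) for i in np.argsort(-sims)[:k]]

rng = np.random.default_rng(0)
words = ["frog", "frogs", "toad", "lizard", "keyboard"]
vectors = rng.normal(size=(len(words), 50))             # stand-in for trained vectors
print(nearest(vectors[words.index("frog")], vectors, words, k=3))
```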
BnpB3GrpsfM
distributed representation that is bringing together these words kind of into this similar feature space then you might expect that well if the you know representation frog is very similar the representation for toad the model might just be able to generalize and handle that and you know there's even more relations and properties which is beyond just similarity in that embedding space
1,437
1,457
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1437s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
you can also get very interesting relations like the concept of like kind of you know like the CEO to a company might all be expressed in kind of the same same subspace or the same direction in the embedding space or connecting a zip code to its city and you can see kind of how you know how could this be happening well co-occurrence is kind of get you this don't they
1,457
1,484
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1457s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
you know Honolulu if you want to predict this random number you've got to eventually figure out that oh these do co-occur together because one is the zip code of the other and so yeah you get a lot of structure even though again all we were doing or you know one of the views of what all these algorithms are doing is they're just processing very local relations in a
1,484
1,505
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1484s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
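A hedged sketch of the "same direction in embedding space" observation, expressed as the classic vector-offset analogy a : b :: c : ?; with trained vectors this is how relations like CEO-to-company or zip-code-to-city can be probed, but the function below assumes you supply such a trained table yourself.

```python
# Hedged sketch: the relation direction vec(b) - vec(a) is added to vec(c) and
# the nearest remaining word is returned as the analogy completion.
import numpy as np

def analogy(a, b, c, vectors, words):
    """Return the word whose vector is closest to vec(b) - vec(a) + vec(c)."""
    idx = {w: i for i, w in enumerate(words)}
    target = vectors[idx[b]] - vectors[idx[a]] + vectors[idx[c]]
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ (target / np.linalg.norm(target))
    for i in np.argsort(-sims):                 # skip the three query words themselves
        if words[i] not in (a, b, c):
            return words[i]

# usage idea with trained vectors:
# analogy("company", "ceo", "country", vectors, words)  # hopefully something like "president"
```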
BnpB3GrpsfM
very simple fashion and it's just scalable and simple but it can work quite well as a starting point now you know these aren't the end of it obviously it's only thirty minutes into a two or three hour lecture so there's a long way to go so kind of these are all cool and whatnot and they really did drive the first few years of modern deep learning NLP and helping to move these
1,505
1,528
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1505s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
models to much higher performance but kind of what might be the issues with them so you know obviously language is a lot more than just the counts of words it has a ton of structure on top of and in addition to words and furthermore context is very important and these kind of fixed static representations of words that we're learning are just insufficient in many cases so you might
1,528
1,550
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1528s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
have for instance three different sentences I went to the river bank I made a withdrawal from the bank or I wouldn't bank on it and all of them have the word bank being used in a very different context and you know basically representing a different thing it's a noun and a verb or you know just a phrase or expression and so you really need to learn how to do more complex
1,550
1,571
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1550s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
things but if you're just counting whether two words happen to occur in the same sentence or you know in word2vec kind of looking at a very short window and just using an averaging operator you can't really model all that much complexity so we need to do more and you know there's also just kind of the design space right now you have this million by 300 dimensional matrix so
1,571
1,593
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1571s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
it's like word vectors and then the question is just what do we do with that and you know obviously we figured out quite a lot of ways to use them but there's a lot that's still up to the practitioner and this often involves a lot of task-specific models slapped on top of it and that's where a lot of the first few years of research in NLP for deep learning went was kind of designing
1,593
1,612
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1593s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
all these task-specific models a model for doing question answering a model for doing summarization a model for doing sentiment analysis and they would all kind of take this common input of the word vectors and slap them in but then there was a huge amount of design on top of that and these models got progressively more and more complex with more and more details and so you can
1,612
1,630
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1612s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
kind of think of this as like well we only really did the first step sure learning word vectors is great but they're really kind of like learning just the edge detectors in computer vision and they get us something but we know like in you know deep learning for computer vision there's a lot more that goes into a convnet than just some edge detectors at the beginning of the system and
1,630
1,647
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1630s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
that's true for NLP as well so there's a lot more going on in language beyond just these input representations so kind of how do we get there well we're going to take a little bit of a detour into the history of language models and kind of walk through how this kind of method and kind of set of generative models kind of ended up providing one of the methods for moving beyond just word
1,647
1,669
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1647s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
vectors and kind of introducing the second wave of modern NLP methods that use unsupervised or self supervised methods so fun overview real quickly is kind of seventy years of history here on one slide where we kind of are looking at a language model what is a language model well it models language and it's a generative model so hopefully depending on how nicely it's set up we can draw
1,669
1,693
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1669s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
samples from it to understand kind of what distribution it's actually learned and how well it's actually approximated the real distribution of language so without getting in the details of how you sample you can kind of see this kind of list here so very early there's this thing called a three gram model from Claude Shannon himself in the 1950s and this kind of still makes basically
1,693
1,714
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1693s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
gibberish they also point to ninety-nine point six billion dollars from two hundred four six three percent of interest rate stores as Mexico and Brazil in market conditions well that's basically gibberish but notice that there's still a bit of like local correlation and structure it says a lot of numbers and then it mentions interest rates after six point three percent or
1,714
1,733
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1714s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
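A toy sketch of a Shannon-style trigram language model like the one whose sample is quoted above: count trigrams, then sample each next word conditioned on the previous two. Real n-gram models add smoothing and backoff, which this toy version omits, and the training sentence is made up.

```python
# Hedged sketch of a trigram language model: counting, then ancestral sampling.
import random
from collections import Counter, defaultdict

def train_trigram(tokens):
    counts = defaultdict(Counter)
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
        counts[(a, b)][c] += 1                   # count of word c following bigram (a, b)
    return counts

def sample(counts, seed_bigram, length=10, rng=random.Random(0)):
    out = list(seed_bigram)
    for _ in range(length):
        nxt = counts.get((out[-2], out[-1]))
        if not nxt:                              # unseen bigram: stop (no backoff here)
            break
        out.append(rng.choices(list(nxt), weights=list(nxt.values()))[0])
    return " ".join(out)

tokens = "the cat sat on the mat and the cat sat on the hat".split()
print(sample(train_trigram(tokens), ("the", "cat")))
```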
BnpB3GrpsfM
six three percent and that's like all kind of right and you can see how there's the tiniest bit of structure in there beyond just what it would look like if you just drew words independently according to their frequencies and then there's been a lot of investment in this kind of field and area over the last few years so Ilya Sutskever in 2011 kind of
1,733
1,752
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1733s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
introduced a character RNN for the task and so here anytime there's a prompt it's highlighted in yellow which means it's a manually specified kind of prefix and then you condition on that and you sample from that so the meaning of life is the tradition of ancient human reproduction that's almost a sentence it is less favorable to the good boy for when to remove her bigger so it quickly
1,752
1,774
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1752s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
fell apart in the second part but it almost got something there and it's still gibberish but it at least shows another hint of structure and then there's the Jozefowicz et al. paper from 2016 which is basically a much bigger word level version of that RNN from 2011 and just kind of uses scale and a lot more data and here's a sample drawn from it with even more new technologies
1,774
1,797
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1774s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
coming onto the market quickly during the past three years the increasing number of companies was now Ted called the ever-changing and ever changing environmental challenges online so that's basically a sentence at this point there's a weird thing where it repeats itself with ever-changing and ever changing but we've now got a phrase you know multiple phrases or clauses and
1,797
1,815
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1797s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
kind of longer term structure there so that's a big amount of progress and again as we talked about with word vectors a lot of their failure is that they don't exploit contexts and they're kind of these isolated representations of only single words so the fact that these language models were starting to learn context as you looked at and inspected their samples is kind of a
1,815
1,832
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1815s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
clue that they're going in the right direction towards some of the functionality and behaviors we might want in natural language processing so then the next major step came in 2017 2018 with the introduction of the transformer based architecture we'll talk a little bit about that later if that's appropriate but it handles long term dependencies much better through self
1,832
1,852
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1832s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
attention mechanisms and then you start to see potentially multiple sentences that kind of flow together and then the final one here is GPT-2 which can kind of take potentially a pretty low probability or difficult to understand prompt that you know probably isn't in the training data like scientists discovering a herd of unicorns living in a you know remote previously unexplored valley in the
1,852
1,873
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1852s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
Andes Mountains and they're able to speak English and then it can write something that looks like a news article on top of that this was cherry-picked I sat there for like 20 tries till I got a good one but it's progress and most of these are cherry picked so it's cherry picks again cherry picks all the way down and yeah at that point you basically have something that
1,873
1,893
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1873s
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
https://i.ytimg.com/vi/B…axresdefault.jpg