Dataset columns: video_id (string, length 11) · text (string, 361–490 chars) · start_second (int64, 0–11.3k) · end_second (int64, 18–11.3k) · url (string, 48–52 chars) · title (string, 0–100 chars) · thumbnail (string, 0–52 chars)
o3y1w6-Xhjg
the same distribution as random initialization but rescaled to match the pre-trained weights. So here's what this concretely would look like: if you initialize with those pre-trained weights it'd look something like this; if you initialize with random initialization you of course destroy all the features and it looks something like this; and then this thing which we
2,224
2,242
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2224s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
call the Mean Var init — I mean it looks exactly like random initialization except that its scaling is different, because you've rescaled. So how does this do if we initialize with this and train instead? It turns out that it actually helps a lot with convergence speed, and we see this across different architectures and across our different tasks. And so what's really interesting
2,242
2,263
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2242s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
here — the mean and variance come from the ImageNet-transferred, pre-trained weights exactly, and it was per layer, so not across the entire architecture, that wouldn't make sense, but sort of per layer you take this and then initialize. And so what's really interesting is that this is a feature-independent property, because we're sampling i.i.d.; we've kind of destroyed all of the
2,263
2,283
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2263s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
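To make the "Mean Var init" described above concrete, here is a minimal NumPy sketch of one plausible per-layer implementation; the function name and the exact statistics-matching choice (matching each pre-trained layer's mean and standard deviation with an i.i.d. Gaussian) are assumptions for illustration, not the paper's exact code.

```python
import numpy as np

def mean_var_init(pretrained_weights, rng=None):
    """Hypothetical per-layer 'Mean Var' init: sample i.i.d. weights whose
    per-layer mean and variance match the pre-trained layer's statistics,
    discarding the pre-trained features themselves."""
    rng = np.random.default_rng() if rng is None else rng
    reinitialized = []
    for w in pretrained_weights:          # one array per layer
        mu, sigma = w.mean(), w.std()     # statistics of the pre-trained layer
        reinitialized.append(rng.normal(mu, sigma, size=w.shape))
    return reinitialized
```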
o3y1w6-Xhjg
features and we're only kind of keeping the scaling. But these are both for the two large-scale medical imaging tasks. [Question:] We already know the relative order of magnitude of the amount of data on source versus target tasks has a dramatic impact here — like, did you look at BERT? We were doing the exact same things in 2014,
2,283
2,322
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2283s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
but instead of training on the Billion Word dataset we were training on like Penn Treebank or something like this, and that difference, like the discrepancy — and I know at least internally at Amazon I have some colleagues doing some stuff that was like: how many unsupervised examples — if I were choosing it by cost, like how much
2,322
2,345
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2322s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
would I pay to get a million unsupervised examples versus a hundred extra labeled ones, or something like that — and these numbers can, you know, make a very big difference. So I wonder whether 1 million versus two hundred thousand are sort of on the same order of magnitude and that's why you see — yeah, so actually for this specific experiment we also — so, like, I'm
2,345
2,364
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2345s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
not covering this here, but we tried a bunch of things where we varied the data, and it's interesting: this is actually pretty robust to varying the data, so you see the same sort of convergence — you actually see a speed-up even when your data is much smaller. One thing you do see is that with these really large ImageNet architectures, if you have something as
2,364
2,381
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2364s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
small as like five thousand data points, that's when you see a little bit more of a gap between transfer learning versus random initialization, maybe it's like two percent, but then by the time you've gotten to fifty thousand examples that gap is almost gone, and there's really no reason why a priori you'd want to use an ImageNet-sized architecture
2,381
2,398
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2381s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
on like 5000 examples, so I think that also merits further study — we've been talking about over-parametrization, and I think that's an interesting related question. But yep, the main point here is that this is just purely a property of scaling, a purely feature-independent property, and so I
2,398
2,416
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2398s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
think there are a whole bunch of open questions here especially related to this kind of scaling part so specifically is there sort of maybe some scaling rule that explains this convergence speed up we looked at this a little bit but sort of not extensively and there are differences but I think it seems like it should be possible to maybe pin this down and then I think we
2,416
2,434
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2416s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
also did some very preliminary experiments on natural images and I think we're seeing similar effects, and there are kind of interesting questions we can ask here, like, you know, if we train and then reinitialize but just preserve the scale, do you see a difference in convergence speed? That's like one of the basic questions we could
2,434
2,450
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2434s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
try and answer, and then I think there are other questions that also came up through this process, sort of really getting at the similarity of representations at initialization versus after training, and how do things vary between large and small models, because at least, I mean, looking at the weights, certainly they're learning
2,450
2,468
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2450s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
very different filters, so understanding that better would be really interesting, and then I think there's also kind of scope here to actually formalize things a little more. So I'm sort of seeing medical imaging partially also as a way of seeing a place where transfer learning does provide some benefits, some convergence benefits, but in a regime where the
2,468
2,486
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2468s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
source and target distribution are extremely different from each other, and so just understanding better what might be happening there and maybe saying something formal there could also be very interesting — and with that, thanks for coming. [Applause] Oh yeah, absolutely, I mean, honestly if you look at how long you train from random initialization
2,486
2,526
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2486s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
versus training on ImageNet plus training on the medical data, training from random initialization is going to be way faster. But the reason this question makes sense, the reason this question is very important, is because people are just downloading their models from GitHub and then just doing the fine-tuning, so at that point you're like, oh well, you
2,526
2,542
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2526s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
know this thing is readily available, so do I want to do this or do I want to do something else? Yeah — yep, now we didn't do 200,000, so I can chat with you offline about that, but you don't need to do 200,000 to kind of get a good similarity measure, you can do something smaller than that; the trade-off there is between sort of the number of data points you're using and sort of the actual
2,542
2,571
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2542s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
number of vectors you're trying to find similarities over. This is why people — so, okay, so I think there are two parts to this. Firstly, I think there's some amount of people using transfer learning simply because they've seen other people do it, and you kind of train it and then you can do this process and it sort of works and you're like, oh great,
2,571
2,608
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2571s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
but I think in settings where people are doing extensive experimentation — like I know within Google, part of the reason transfer learning is popular is, I mean, they have resources, but you're still trying to run a lot of experiments — part of the reason transfer learning is popular is because of this speed-up you see in convergence. I think you have to be
2,608
2,626
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2608s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
a little careful about this, because part of the reason I think you also see this speed-up in convergence is because you're also committed to this ImageNet architecture. So, like, in the paper we sort of studied this further, but I think where meaningful feature reuse is happening, if it is happening, is really in the lower layers, and so one way to get similar
2,626
2,644
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2626s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
speed-ups but maybe have a better architecture is you kind of just reuse some of the weights, reinitialize the rest, and train that way, and this ties into all kinds of other interesting questions. So there's been this interesting point in deep learning about this co-adaptation problem, which is: suppose I have an initialization but
2,644
2,663
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2644s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
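A hedged sketch of the partial weight reuse idea mentioned above — copy only the lower layers from a pre-trained model and leave the rest randomly initialized. The function, layer-count parameter, and the assumption that the two Keras models share matching lower-layer shapes are all illustrative, not the speakers' actual code.

```python
import tensorflow as tf

def partial_reuse(pretrained: tf.keras.Model, fresh: tf.keras.Model, n_reused_layers: int):
    """Copy weights for the lowest n_reused_layers from a pre-trained model into a
    freshly initialized model; the remaining layers keep their random initialization.
    Assumes both models are built and the reused layers have identical shapes."""
    for src, dst in zip(pretrained.layers[:n_reused_layers], fresh.layers[:n_reused_layers]):
        dst.set_weights(src.get_weights())
    return fresh
```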
o3y1w6-Xhjg
I only keep part of it and then I kind of reset everything else — how will those work together? And it's kind of interesting because for some settings that's been a problem; for this it doesn't appear to be a problem, so that's also another interesting question to study, I think. Yeah — yep, yeah, absolutely, so I mean, I guess Zack and I — like, I guess
2,663
2,699
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2663s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
this is a discussion we mentioned briefly earlier, so if you have a very, very small amount of data you will see a bit of a difference. I think we had to get down to five thousand data points on the ImageNet architectures at least, and there we saw maybe a two percent difference instead of a fraction of a percent difference, but then by the time we got up to like fifty thousand
2,699
2,717
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2699s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
o3y1w6-Xhjg
data points that difference was really gone, and then when we tried a much smaller architecture — so bear in mind before we were training this enormous ImageNet architecture — so we tried a much smaller architecture and there it didn't really seem like there was much of a difference. So I think yes, you do see a difference, but part of this is like a
2,717
2,732
https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2717s
Towards Understanding Transfer Learning with Applications to Medical Imaging
https://i.ytimg.com/vi/o…axresdefault.jpg
njKP3FqW3Sk
hi everyone, let's get started. Good afternoon and welcome to MIT 6.S191! It's really incredible to see the turnout this year. This is the fourth year now we're teaching this course and every single year it just seems to be getting bigger and bigger. 6.S191 is a one-week intensive boot camp on everything deep learning. In the past, at this point I usually try to give you a
0
35
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=0s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
synopsis about the course and tell you all of the amazing things that you're going to be learning. You'll be gaining the fundamentals of deep learning and learning some practical knowledge about how you can implement some of the algorithms of deep learning in your own research and on some cool lab-related software projects. But this year I figured we could do something a little
35
55
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=35s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
bit different and instead of me telling you how great this class is I figured we could invite someone else from outside the class to do that instead. So let's check this out first. Hi everybody and welcome to MIT 6.S191, the official introductory course on deep learning taught here at MIT. Deep learning is revolutionizing so many fields from robotics to medicine and
55
88
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=55s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
everything in between. You'll learn the fundamentals of this field and how you can build some of these incredible algorithms. In fact, this entire speech and video are not real and were created using deep learning and artificial intelligence. And in this class you'll learn how. It has been an honor to speak with you today and I hope you enjoy the course!
88
124
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=88s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
Alright, so as you can tell deep learning is an incredibly powerful tool. This was just an example of how we use deep learning to perform voice synthesis and actually emulate someone else's voice, in this case Barack Obama, and also use video dialogue replacement to actually create that video with the help of Canny AI. And of course, as you're watching this video, you might
124
158
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=124s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
raise some ethical concerns, which we're also very concerned about, and we'll actually talk about some of those later on in the class as well. But let's start by taking a step back and actually introducing some of these terms that we've talked about so far. Let's start with the word intelligence. I like to define intelligence as the ability to process information to inform
158
181
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=158s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
future decisions. Now the field of artificial intelligence is simply the field which focuses on building algorithms, in this case artificial algorithms, that can do this as well: process information to inform future decisions. Now machine learning is just a subset of artificial intelligence that focuses specifically on teaching an algorithm how to do this without being explicitly programmed to
181
209
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=181s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
do the task at hand. Now deep learning is just a subset of machine learning which takes this idea even a step further and says: how can we automatically extract the useful pieces of information needed to inform those future predictions or make a decision? And that's what this class is all about: teaching algorithms how to learn a task directly from raw data. We want to
209
234
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=209s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
provide you with a solid foundation of how to understand these algorithms under the hood, but also provide you with the practical knowledge and practical skills to implement state-of-the-art deep learning algorithms in TensorFlow, which is a very popular deep learning toolbox. Now we have an amazing set of lectures lined up for you this year, including
234
258
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=234s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
today, which will cover neural networks and deep sequential modeling. Tomorrow we'll talk about computer vision and also a little bit about generative modeling, which is how we can generate new data, and finally I will talk about deep reinforcement learning and touch on some of the limitations and new frontiers of where this field might be going and how research might be heading
258
280
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=258s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
in the next couple of years. We'll spend the final two days hearing guest lectures from top industry researchers on some really cool and exciting projects. Every year these happen to be really exciting talks, so we really encourage you to come, especially for those talks. The class will conclude with some final project presentations, which we'll talk about in
280
300
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=280s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
a little bit, and also some awards and a quick award ceremony to celebrate all of your hard work. Also I should mention that after each day of lectures — so after today we have two lectures — and after each day of lectures we'll have a software lab which tries to focus on and build upon all of the things that you've learned that day, so you'll get the foundations during the
300
325
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=300s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
lectures and you'll get the practical knowledge during the software lab, so the two are kind of jointly coupled in that sense. For those of you taking this class for credit you have a couple of different options to fulfill your credit requirement. The first is a project proposal: you can propose a project, optionally in groups of two, three, or four people, and in these
325
352
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=325s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
groups you'll work to develop a cool new deep learning idea, and we realize that one week, which is the span of this course, is an extremely short amount of time to really not only think of an idea but move that idea past the planning stage and try to implement something, so we're not going to be judging you on your results towards this idea but rather just the novelty of the idea
352
373
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=352s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
itself. On Friday each of these teams will give a three-minute presentation on that idea, and awards will be announced for the top winners, judged by a panel of judges. The second option, in my opinion, is a bit more boring, but we like to give this option for people that don't like to give presentations, so in this option, if you don't want to work in a group or you don't want to give a presentation, you
373
400
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=373s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
can write a one-page review of a recent deep learning paper or any paper of your choice, and this will be due on the last day of class as well. Also I should mention that for the project presentations we give out all of these cool prizes, especially these three NVIDIA GPUs, which are really crucial for doing any sort of deep learning on your own, so we
400
426
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=400s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
definitely encourage everyone to enter this competition and have a chance to win these GPUs and these other cool prizes like Google Home and SSD cards as well. Also, each of the three labs will have corresponding prizes; instructions to actually enter those respective competitions will be within the labs themselves and you can enter to win these different prizes
426
451
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=426s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
depending on the different lab. Please post on Piazza if you have questions; check out the course website for slides — today's slides are already up; there was a bug in the website, we've fixed that now, so today's slides are up. Digital recordings of each of these lectures will be up a few days after each class. This course has an incredible team of TAs that you can reach out to if you
451
477
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=451s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
have any questions, especially during the software labs; they can help you answer any questions that you might have. And finally we really want to give a huge thanks to all of our sponsors, without whose help and support this class would not have been possible. Okay, so now with all of that administrative stuff out of the way, let's start with the fun stuff that we're all here for. Let's
477
498
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=477s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
start actually by asking ourselves a question: why do we care about deep learning? Well, why do you all care about deep learning, such that all of you came to this classroom today, and why specifically do we care about deep learning now? Well, to answer that question we actually have to go back and understand traditional machine learning at its core first. Now traditional machine learning
498
517
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=498s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
algorithms typically try to define a set of rules or features in the data, and these are usually hand-engineered, and because they're hand-engineered they often tend to be brittle in practice. So let's take a concrete example: if you want to perform facial detection, how might you go about doing that? Well, first you might say, to classify a face, the first thing I'm gonna do is I'm gonna try and
517
540
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=517s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
classify or recognize if I see a mouth in the image the eyes ears and nose if I see all of those things then maybe I can say that there's a face in that image but then the question is okay but how do I recognize each of those sub things like how do I recognize an eye how do I recognize a mouth and then you have to decompose that into okay to recognize a mouth I maybe have to recognize these
540
562
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=540s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
pairs of lines, oriented lines in a certain direction, a certain orientation, and then it keeps getting more complicated, and at each of these steps you kind of have to define a set of features that you're looking for in the image. Now the key idea of deep learning is that these features will be learned just from raw data, so what you're going to do is you're going to just take a
562
581
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=562s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
bunch of images of faces and then the deep learning algorithm is going to develop some hierarchical representation: first detecting lines and edges in the image, using these lines and edges to detect corners, and mid-level features like eyes, noses, mouths, ears, then composing these together to detect higher-level features like maybe jaw lines, side of the face, etc., which then
581
604
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=581s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
can be used to detect the final face structure. And actually the fundamental building blocks of deep learning have existed for decades, and their underlying algorithms for training these models have also existed for many years, so why are we studying this now? Well, for one, data has become much more pervasive: we're living in the age of big data, and these algorithms are hungry
604
630
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=604s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
for huge amounts of data to succeed. Secondly, these algorithms are massively parallelizable, which means that they can benefit tremendously from modern GPU architectures and hardware acceleration that simply did not exist when these algorithms were developed. And finally, due to open-source toolboxes like TensorFlow, which you'll get experience with in this class,
630
653
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=630s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
building and deploying these models has become extremely streamlined so much so that we can condense all this material down into one week so let's start with the fundamental building block of a neural network which is a single neuron or what's also called a perceptron the idea of a perceptron or a single neuron is very basic and I'll try and keep it as simple as possible and then we'll try
653
679
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=653s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
and work our way up from there. Let's start by talking about the forward propagation of information through a neuron. We define a set of inputs to that neuron as x1 through xm, and each of these inputs has a corresponding weight w1 through wm. Now what we can do is, with each of these inputs and each of these weights, we can multiply them correspondingly together and take a sum of all of them; then we take this single
679
707
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=679s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
number, that summation, and we pass it through what's called a nonlinear activation function, and that produces our final output y. Now this is actually not entirely correct; we also have what's called a bias term in this neuron, which you can see here in green. The purpose of the bias term is really to allow you to shift your activation function to the left and to
707
731
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=707s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
the right regardless of your inputs, right? So you can notice that the bias term is not affected by the x's; it's just a bias associated with that neuron. Now on the right side you can see this diagram illustrated mathematically as a single equation, and we can actually rewrite this using linear algebra, in terms of vectors and dot products. So instead of having a
731
755
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=731s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
summation over all of the x's, I'm going to collapse my x into a vector, capital X, which is now just a list or a vector of numbers — a vector of inputs, I should say — and you also have a vector of weights, capital W. To compute the output of a single perceptron all you have to do is take the dot product of X and W, which represents that element-wise multiplication and summation, and then
755
782
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=755s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
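A minimal NumPy sketch of the perceptron forward pass just described — dot product of inputs and weights, plus a bias, passed through a sigmoid non-linearity. The variable names are illustrative; the lecture's own code uses TensorFlow.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    """Forward propagation for a single perceptron: y = g(w . x + b)."""
    z = np.dot(w, x) + b   # weighted sum of inputs plus bias
    return sigmoid(z)      # nonlinear activation
```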
njKP3FqW3Sk
apply that non-linearity which here is denoted as G so now you might be wondering what is this nonlinear activation function I've mentioned it a couple times but I haven't really told you precisely what it is now one common example of this activation function is what's called a sigmoid function and you can see an example of a sigmoid function here on the bottom right one thing to note is that this function takes any real number
782
808
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=782s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
as input on the x-axis and it transforms that real number into a scalar output between 0 and 1; it's a bounded output between 0 and 1. So one very common use case of the sigmoid function is when you're dealing with probabilities, because probabilities also have to be bounded between 0 and 1, so sigmoids are really useful when you want to output a single number and represent
808
830
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=808s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
that number as a probability distribution in fact there are many common types of nonlinear activation functions not just the sigmoid but many others that you can use in neural networks and here are some common ones and throughout this presentation you'll find these tensorflow icons like you can see on the bottom right or sorry all across the bottom here and these are
830
851
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=830s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
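The slides show TensorFlow calls alongside these activation functions; as a small sketch of the standard TensorFlow functions one would typically use here (the specific calls shown on the original slides may differ):

```python
import tensorflow as tf

z = tf.constant([-2.0, 0.0, 2.0])
tf.math.sigmoid(z)   # squashes each value into (0, 1)
tf.math.tanh(z)      # squashes each value into (-1, 1)
tf.nn.relu(z)        # zeroes out negative values
```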
njKP3FqW3Sk
just to illustrate how one could use each of these topics in a practical setting; you'll see these kind of scattered throughout the slides. No need to really take furious notes on these code blocks — like I said, all of the slides are published online, so especially during your labs, if you want to refer back to any of the slides you can always do that from the
851
870
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=851s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
online lecture notes now why do we care about activation functions the point of an activation function is to introduce nonlinearities into the data and this is actually really important in real life because in real life almost all of our data is nonlinear and here's a concrete example if I told you to separate the green points from the red points using a linear function could you do that I
870
897
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=870s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
don't think so, right? You'd get something like this — well, you could do it, but you wouldn't do a very good job at it — and no matter how deep or how large your network is, if you're using a linear activation function you're just composing lines on top of lines and you're going to get another line, right? So this is the best you'll be able to do with a linear activation function. On
897
917
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=897s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
the other hand, nonlinearities allow you to approximate arbitrarily complex functions by kind of introducing these nonlinearities into your decision boundary, and this is what makes neural networks extremely powerful. Let's understand this with a simple example and go back to this picture that we had before. Imagine I give you a trained network with weights W on the top right, so W here is 3 and minus 2, and the
917
943
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=917s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
network only has 2 inputs, x1 and x2. If we want to get the output it's simply the same story as we had before: we multiply our inputs by those weights, we take the sum, and pass it through a non-linearity. But let's take a look at what's inside of that non-linearity before we apply it. What we get when we take this dot product — x1 times 3, x2 times minus 2, plus 1 — is simply a
943
974
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=943s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
2D line, so we can plot that. If we set that equal to 0, for example, that's a 2D line and it looks like this: on the x-axis is x1, on the y-axis is x2, and we're just illustrating when this line equals 0, so anywhere on this line is where x1 and x2 correspond to a value of 0. Now if I feed in a new input — either a test example, a training example, or whatever —
974
1,002
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=974s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
and that input has the coordinates minus 1 and 2 — so it has a value of x1 of minus 1 and a value of x2 of 2 — I can see visually where this lies with respect to that line, and in fact this idea can be generalized a little bit more. If we plug it into that line we get minus 6, right? So inside, before we apply the non-linearity, we get minus 6. When we
1,002
1,031
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1002s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
apply a sigmoid non-linearity — because sigmoid collapses everything between 0 and 1, anything greater than 0 is going to be above 0.5 and anything below zero is going to be less than 0.5 — so because minus 6 is less than zero we're going to have a very low output, about 0.002. We can actually generalize this idea to the entire feature space:
1,031
1,056
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1031s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
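To make the worked example concrete, here is the arithmetic for the point (-1, 2) with weights (3, -2); the bias of 1 is inferred from the stated result of -6, and a sigmoid output is assumed:

```python
import numpy as np

x = np.array([-1.0, 2.0])      # the example input (x1, x2)
w = np.array([3.0, -2.0])      # the trained weights from the slide
b = 1.0                        # bias, inferred from the stated z = -6
z = np.dot(w, x) + b           # 1 + 3*(-1) + (-2)*2 = -6
y = 1.0 / (1.0 + np.exp(-z))   # sigmoid(-6) ≈ 0.0025, i.e. about 0.002
```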
njKP3FqW3Sk
for any point on this plot I can tell you if it lies on the left side of the line that means that before we apply the non-linearity the Z or the state of that neuron will be negative less than zero after applying that non-linearity the sigmoid will give it a probability of less than 0.5 and on the right side if it falls on the right side of the line it's the opposite story if it falls
1,056
1,080
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1056s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
right on the line it means that Z equals zero exactly and the probability equals 0.5 now actually before I move on this is a great example of actually visualizing and understanding what's going on inside of a neural network the reason why it's hard to do this with deep neural networks is because you usually don't have only two inputs and usually don't have only two weights as
1,080
1,103
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1080s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
well, so as you scale up your problem — this is a simple two-dimensional problem, but as you scale up the size of your network you could be dealing with hundreds or thousands or millions of parameters and million-dimensional spaces — then visualizing these types of plots becomes extremely difficult, and it's just not possible in practice. So this is one of the challenges that we
1,103
1,124
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1103s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
face when we're training neural networks and really understanding their internals, but we'll talk about how we can actually tackle some of those challenges in later lectures as well. Okay, so now that we have that idea of a perceptron, a single neuron, let's start building up to neural networks — how we can use that perceptron to create full neural networks — and see how all
1,124
1,145
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1124s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
of this story comes together let's revisit this previous diagram of the perceptron if there are only a few things you remember from this class try to take away this so how a perceptron works just keep remembering this I'm going to keep drilling it in you take your inputs you apply a dot product with your weights and you apply a non-linearity it's that simple
1,145
1,166
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1145s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
oh sorry, I missed a step: you take a dot product with your weights, add a bias, and apply your non-linearity, so three steps. Now let's simplify this type of diagram a little bit. I'm gonna remove the bias just for simplicity, and I'm gonna remove all of the weight labels, so now you can assume that every line has the weight associated with it, and I'm going to denote Z as the
1,166
1,192
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1166s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
output of that dot product, so that's the element-wise multiplication of our inputs with our weights, and that's what gets fed into our activation function, so our final output y is just our activation function applied to Z. If we want to define a multi-output neural network we can simply just add another one of these perceptrons to this picture; now we have two outputs, one is a normal
1,192
1,216
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1192s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
perceptron, which is y1, and y2 is just another normal perceptron; same idea as before, they all connect to the previous layer with a different set of weights, and because all inputs are densely connected to all of the outputs, these types of layers are often called dense layers. And let's take an example of how one might actually go from this nice illustration, which is very conceptual
1,216
1,243
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1216s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
and nice and simple, to how you could actually implement one of these dense layers from scratch by yourselves using TensorFlow. What we can do is start off by first defining our two weights: we have our actual weight vector, which is W, and we also have our bias vector. Both of these parameters are governed by the output space, so depending on how many neurons you have
1,243
1,273
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1243s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
in that output layer, that will govern the size of each of those weight and bias vectors. What we can do then is simply define that forward propagation of information, so here I'm showing you this through the call function in TensorFlow. Don't get too caught up on the details of the code — again, you'll really get a walkthrough of this code inside of the labs today — but I want to just show you
1,273
1,295
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1273s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
some high-level understanding of how you could actually take what you're learning and apply the TensorFlow implementations to it. Inside the call function it's the same idea again: you can compute Z, which is the state — it's that multiplication of your inputs with the weights, you add the bias, right, so that's right there — and once you have Z you just pass it
1,295
1,315
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1295s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
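A hedged sketch of what a from-scratch dense layer like the one described could look like in TensorFlow; the class name and initializers are assumptions, and the exact code on the original slides may differ.

```python
import tensorflow as tf

class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        # weight matrix and bias vector, sized by the output space
        self.W = self.add_weight(shape=(input_dim, output_dim), initializer="random_normal")
        self.b = self.add_weight(shape=(1, output_dim), initializer="zeros")

    def call(self, inputs):
        z = tf.matmul(inputs, self.W) + self.b   # weighted sum plus bias
        return tf.math.sigmoid(z)                # nonlinear activation
```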
njKP3FqW3Sk
through your sigmoid and that's your output for that. Now TensorFlow is great because it has already implemented a lot of these layers for us, so we don't have to do what I just showed you from scratch. In fact, to implement a layer like this — a multi-output perceptron layer with two outputs — we can simply call tf.keras.layers.Dense with units equal to two, to indicate that we have two
1,315
1,345
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1315s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
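The built-in equivalent referred to here is tf.keras.layers.Dense; a minimal usage sketch (the activation keyword is one of the optional parameters mentioned):

```python
import tensorflow as tf

# A dense (fully connected) layer with two output units and a sigmoid activation
layer = tf.keras.layers.Dense(units=2, activation="sigmoid")
```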
njKP3FqW3Sk
outputs on this layer and there is a whole bunch of other parameters that you could input here such as the activation function as well as many other things to customize how this layer behaves in practice so now let's take a look at a single layered neural network so this is taking it one step beyond what we've just seen this is where we have now a single hidden layer that feeds into a
1,345
1,368
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1345s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
single output layer and I'm calling this a hidden layer because unlike our inputs and our outputs these states of the hidden layer are not directly enforced or they're not directly observable we can probe inside the network and see them but we don't actually enforce what they are these are learned as opposed to the inputs which are provided by us now since we have a transformation between
1,368
1,393
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1368s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
the inputs and the hidden layer, and the hidden layer and the output layer, each of those two transformations will have their own weight matrices, which here I call W1 and W2, corresponding to the first layer and the second layer. If we look at a single unit inside of that hidden layer — take for example z2, which I'm showing here — that's just a single perceptron like we
1,393
1,419
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1393s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
talked about before it's taking a weighted sum of all of those inputs that feed into it and it applies the non-linearity and feeds it on to the next layer same story as before this picture actually looks a little bit messy so what I want to do is actually clean things up a little bit for you and I'm gonna replace all of those lines with just this symbolic representation
1,419
1,438
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1419s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
and we'll just use this from now on to denote dense layers, or fully connected layers, between an input and an output or between an input and a hidden layer. And again, if we wanted to implement this in TensorFlow the idea is pretty simple: we can just define two of these dense layers, the first one our hidden layer with n outputs and the second one our output layer with two outputs, and we can
1,438
1,465
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1438s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
kind of just join them together, aggregate them together, into this wrapper which is called a tf.keras.Sequential model. Sequential models are just this idea of composing neural networks using a sequence of layers, so whenever you have a sequential message-passing system, or you're sequentially processing information throughout the network, you can use sequential models and just
1,465
1,487
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1465s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
define your layers as a sequence, and it's very nice to allow information to propagate through that model. Now if we want to create a deep neural network, the idea is basically the same thing, except you just keep stacking on more of these layers to create more of a hierarchical model, one where the final output is computed by going deeper and deeper into this representation, and
1,487
1,511
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1487s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
the code looks pretty similar: again we have this tf.keras.Sequential model, and inside that model we just have a list of all of the layers that we want to use, and they're just stacked on top of each other. Okay, so this is awesome — hopefully now you have an understanding of not only what a single neuron is but how you can compose neurons together and actually build complex hierarchical
1,511
1,536
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1511s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
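A small sketch of the Sequential pattern described above, stacking dense layers; the hidden-layer size n and the hidden activation are placeholders, while the two-output final layer follows the slide:

```python
import tensorflow as tf

n = 32  # placeholder number of hidden units
model = tf.keras.Sequential([
    tf.keras.layers.Dense(n, activation="relu"),   # hidden layer
    tf.keras.layers.Dense(2)                       # output layer with two outputs
])
```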
njKP3FqW3Sk
models with deep neural networks. Now let's take a look at how you can apply these neural networks in a very real and applied setting to solve some problem, and actually train them to accomplish some task. Here's a problem that I believe any AI system should be able to solve for all of you, and probably one that you care a lot about: will I pass this class? To do this let's
1,536
1,560
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1536s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
start with a very simple two-input model. One feature, or one input, we're gonna define is how many lectures you attend during this class, and the second one is the number of hours that you spend on your final project. I should say that the minimum number of hours you can spend on your final project is 50 hours — now I'm just joking. Okay, so let's take all of the data from
1,560
1,586
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1560s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
previous years and plot it on this feature space like we looked at before: green points are students that have passed the class in the past and red points are people that have failed. We can plot all of this data onto this two-dimensional grid like this, and we can also plot you — so here you are, you have attended four lectures and you've only spent five hours on your final project,
1,586
1,609
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1586s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
and the question is, are you going to pass the class? Given everyone around you and how they've done in the past, how are you going to do? So let's do it: we have two inputs, we have a single-hidden-layer neural network, we have three hidden units in that hidden layer, and we'll see that the final output probability, when we feed in those
1,609
1,633
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1609s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
two inputs of four and five, is predicted to be 0.1, or 10% — the probability of you passing this class is 10%. That's not great news; the actual answer was one, so you did pass the class. Now does anyone have an idea of why the network was so wrong in this case? Exactly — we never told this network anything; the weights are wrong, we've just initialized the weights. In fact it has no idea what
1,633
1,664
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1633s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
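A hedged sketch of the untrained pass/fail model just described — two inputs, three hidden units, one probability output — evaluated on the point (4 lectures, 5 hours). The hidden-layer activation is an assumption; with random initial weights the output is essentially arbitrary, like the 0.1 in the example.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="relu", input_shape=(2,)),  # hidden layer, 3 units
    tf.keras.layers.Dense(1, activation="sigmoid")                  # predicted P(pass)
])
x = tf.constant([[4.0, 5.0]])   # 4 lectures attended, 5 hours on the final project
print(model(x))                 # untrained output, e.g. something like 0.1
```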
njKP3FqW3Sk
it means to pass a class it has no idea of what each of these inputs mean how many lectures you've attended and the hours you've spent on your final project it's just seeing some random numbers it has no concept of how other people in the class have done so far so what we have to do to this network first is train it and we have to teach it how to perform this task until we teach it it's
1,664
1,686
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1664s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
just like a baby that doesn't know anything, that just entered the world: it has no concepts, no idea of how to solve this task, and we have to teach it that. Now how do we do that? The idea here is that first we have to tell the network when it's wrong, so we have to quantify what's called its loss, or its error, and to do that we actually just take our prediction, or what the network
1,686
1,711
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1686s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
predicts, and we compare it to what the true answer was. If there's a big discrepancy between the prediction and the true answer, we can tell the network: hey, you made a big mistake, right? This is a big error, a big loss, and you should try and fix your answer to move closer towards the true answer, which it should be. Okay, now you can imagine if you don't have
1,711
1,736
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1711s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
just one student, but now you have many students, the total loss — let's call it here the empirical risk, or the objective function, it has many different names — is just the average of all of those individual losses. So the individual loss is a loss that takes as input your prediction and the actual answer, and it tells you how wrong that single example is, and then the total
1,736
1,761
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1736s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
loss is just the average of all of those individual student losses. So if we look at the problem of binary classification, which is the case that we actually care about in this example — we're asking the question, will I pass the class, yes or no, binary classification — we can use what is called the softmax cross-entropy loss, and for those of you who aren't familiar with cross-entropy,
1,761
1,787
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1761s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
this was actually a formulation introduced by Claude Shannon here at MIT during his master's thesis, and this was about 50 years ago; it's still being used very prevalently today, and the idea is it just again compares how different these two distributions are: you have a distribution of how likely you think the student is going to pass, and you have the true distribution
1,787
1,813
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1787s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
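A minimal sketch of the cross-entropy loss described for the yes/no case, using tf.keras.losses.BinaryCrossentropy as one standard way to compute it (the example values are made up, matching the 0.1 prediction above):

```python
import tensorflow as tf

y_true = tf.constant([[1.0]])   # the student actually passed
y_pred = tf.constant([[0.1]])   # the network's predicted probability of passing
loss = tf.keras.losses.BinaryCrossentropy()(y_true, y_pred)  # large, since the prediction is far off
```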
njKP3FqW3Sk
of whether the student passed or not; you can compare the difference between those two distributions and that tells you the loss that the network incurs on that example. Now let's assume that instead of a classification problem we have a regression problem, where instead of predicting if you're going to pass or fail the class, you want to predict the final grade that you're going to get. So
1,813
1,835
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1813s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
now it's not a yes/no answer problem anymore; instead it's: what's the grade I'm going to get, what's the number? So it's a full range of numbers that are possible now, and we might want to use a different type of loss for this different type of problem, and in this case we can use what's called a mean squared error loss. We take the
1,835
1,860
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1835s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
njKP3FqW3Sk
prediction of the network, we take the actual true final grade that the student got, we subtract them, we take their squared error, and we say that that's the mean squared error — that's the loss that the network should try to optimize, try to minimize. So okay, now that we have all this information with the loss function and how to actually quantify the error of the
1,860
1,880
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1860s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg
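And the mean squared error described for the regression version, sketched with tf.keras.losses.MeanSquaredError (the grade values are hypothetical):

```python
import tensorflow as tf

y_true = tf.constant([[92.0]])  # hypothetical true final grade
y_pred = tf.constant([[75.0]])  # the network's predicted grade
loss = tf.keras.losses.MeanSquaredError()(y_true, y_pred)  # (92 - 75)^2 = 289
```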
njKP3FqW3Sk
neural network, let's take this and understand how to train our model to actually find those weights that it needs to use for its predictions. So W is what we want to find: W is the set of weights, and we want to find the optimal set of weights that minimizes this total loss over our entire test set. Our test set is this example dataset that we want to evaluate our
1,880
1,908
https://www.youtube.com/watch?v=njKP3FqW3Sk&t=1880s
MIT 6.S191 (2020): Introduction to Deep Learning
https://i.ytimg.com/vi/n…axresdefault.jpg