Columns: video_id, text, start_second, end_second, url, title, thumbnail
xtOg44r6dsE
rewards now the next parameter to consider is the type of problems that are solved using supervised unsupervised and reinforcement learning so under supervised learning we have two main categories of problems we have regression problems and we have classification problems now guys there is an important difference between classification and regression basically
313
335
https://www.youtube.com/watch?v=xtOg44r6dsE&t=313s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
classification is about predicting a label or a class whereas regression is about predicting a continuous quantity now let's say that you have to classify your emails into two different groups so here basically we'll be labeling our emails as spam and non-spam mails for this kind of problem where we have to assign our input data into different classes we make use of classification
335
358
https://www.youtube.com/watch?v=xtOg44r6dsE&t=335s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
algorithms on the other hand regression is used to predict a continuous quantity now a continuous variable is a variable that has an infinite number of possibilities for example a person's weight so someone could be 180 pounds or they could be 180.10 pounds or 180.110 pounds now the number of possibilities for weight is limitless and this is exactly what a continuous
358
384
https://www.youtube.com/watch?v=xtOg44r6dsE&t=358s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
variable is so regression is a predictive analysis used to predict continuous variables here you don't have to label data into two different classes instead you have to predict a final outcome like let's say that you want to predict the price of a stock over a period for such problems you can make use of regression algorithms
384
407
https://www.youtube.com/watch?v=xtOg44r6dsE&t=384s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
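As a quick illustration of the classification-versus-regression distinction above, here is a minimal scikit-learn sketch (not from the video); the feature values and targets are made up, and the spam/weight framing just mirrors the examples in the transcript.

```python
# Illustrative sketch: the same toy input framed as classification (predict a
# label) versus regression (predict a continuous quantity). Data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])           # a single input feature
y_class = np.array([0, 0, 1, 1])                      # discrete labels, e.g. non-spam / spam
y_reg = np.array([150.0, 165.2, 180.1, 180.11])       # continuous target, e.g. weight in pounds

clf = LogisticRegression().fit(X, y_class)            # classification: predicts a class
reg = LinearRegression().fit(X, y_reg)                # regression: predicts a continuous value

print(clf.predict([[2.5]]))                           # -> a label such as 0 or 1
print(reg.predict([[2.5]]))                           # -> a real number
```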
xtOg44r6dsE
coming to unsupervised learning this type of learning can be used to solve association problems and clustering problems association problems basically involve discovering patterns in data finding co-occurrences and so on a classic example of association rule mining is the relationship between bread and jam so people who tend to buy bread also tend to buy jam over here it's all about finding associations between items
407
431
https://www.youtube.com/watch?v=xtOg44r6dsE&t=407s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
that frequently co-occur or items that are similar to each other apart from association problems unsupervised learning also deals with clustering and anomaly detection problems clustering is used for cases that involve targeted marketing wherein you are given a list of customers and some information about them and what you have to do is you have to cluster these customers based on
431
454
https://www.youtube.com/watch?v=xtOg44r6dsE&t=431s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
their similarity now guys Google AdWords uses a clustering technique to cluster potential buyers into different categories based on their interests and their intent anomaly detection on the other hand is used for tracking unusual activities an example of this is credit card fraud wherein various unsupervised algorithms are used to detect suspicious activities
454
478
https://www.youtube.com/watch?v=xtOg44r6dsE&t=454s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
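The video doesn't name a specific algorithm for the credit card fraud example, but one common unsupervised choice for this kind of anomaly detection is an Isolation Forest; a hedged scikit-learn sketch with invented transaction features:

```python
# Isolation Forest on made-up transaction data; -1 flags a point as anomalous.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_tx = rng.normal(loc=[50, 1], scale=[10, 0.5], size=(500, 2))   # e.g. amount, time-of-day feature
odd_tx = np.array([[900.0, 3.0], [1200.0, 4.0]])                      # unusually large transactions

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_tx)
print(model.predict(odd_tx))    # -1 = anomalous, 1 = normal
```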
xtOg44r6dsE
then there is reinforcement learning now this type of learning is comparatively different in reinforcement learning the key difference is that the input itself depends on the actions we take for example in robotics we might start in a situation where the robot does not know anything about the surroundings it is in so after it performs certain actions it finds out more about the world but the world it
478
502
https://www.youtube.com/watch?v=xtOg44r6dsE&t=478s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
sees depends on whether it chooses to move right or whether it chooses to move forward or backward in this case the robot is known as the agent and its surrounding is the environment so for each action it takes it can receive a reward or it might receive a punishment now the next parameter is the type of data used to train a machine when it comes to supervised learning it's quite
502
526
https://www.youtube.com/watch?v=xtOg44r6dsE&t=502s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
clear and simple the machine will be provided with a labeled set of input and output data in the training phase itself so basically you feed the expected output of your algorithm into the system this means that in supervised learning the machine already knows the output of the algorithm before it starts working on it now an example is classifying a data set into either cats or dogs alright so if
526
550
https://www.youtube.com/watch?v=xtOg44r6dsE&t=526s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
the algorithm is fed an image of a cat the image is labeled as a cat similarly for a dog so guys this is how the model is taught it's told that this is a cat by labeling it after the algorithm is taught it is then tested using a new data set but a point to remember here is that in the training phase for a supervised learning algorithm the data is labeled alright the input is also
550
575
https://www.youtube.com/watch?v=xtOg44r6dsE&t=550s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
labeled and the output is also labeled in unsupervised learning the machine is only given the input data so here we don't tell the system where to go the system has to understand itself from the input data that we give to it so it does this by finding patterns in the data so if we try to classify images into cats and dogs in unsupervised learning the machine will be fed images of cats and
575
598
https://www.youtube.com/watch?v=xtOg44r6dsE&t=575s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
dogs and at the end it will form two groups one containing cats and the other containing dogs now the only difference here is that it won't add labels to the output okay it will just understand how cats look and cluster them into one group and similarly for dogs coming to reinforcement learning there is no predefined data the input depends on the actions taken by the
598
621
https://www.youtube.com/watch?v=xtOg44r6dsE&t=598s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
agent now these actions are then recorded in the form of matrices so that they can serve as a memory to the agent so basically as the agent explores the environment it will collect data which is then used to get the output so guys in reinforcement learning there is no predefined data set given to the machine the agent does all the work from scratch the next parameter to consider
621
645
https://www.youtube.com/watch?v=xtOg44r6dsE&t=621s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
is training in supervised learning the training phase is well defined and very explicit the machine is fed training data where both the input and output are labeled and the only thing the algorithm has to do is map the input to the output so the training data acts like a teacher or a guide over here now once the algorithm is well trained it is tested using new data when it comes to
645
669
https://www.youtube.com/watch?v=xtOg44r6dsE&t=645s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
unsupervised learning the training phase is vague because the machine is only given the input and it has to figure out the output on its own so there's no supervisor or mentor over here in reinforcement learning there is no predefined data and the whole reinforcement learning process itself is a training and testing phase since there is no predefined data given
669
691
https://www.youtube.com/watch?v=xtOg44r6dsE&t=669s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
to the machine it has to learn everything on its own and it starts by exploring and collecting data the next parameter we're going to discuss is the aim of each of these machine learning types the main aim or the end goal of a supervised learning algorithm is to forecast an outcome now obviously that is the basic aim of all these machine learning types but the whole supervised
691
713
https://www.youtube.com/watch?v=xtOg44r6dsE&t=691s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
learning process is built in such a way that it can directly give you a predicted outcome because supervised learning algorithms have a very well-defined training phase unsupervised learning is all about discovering patterns and extracting useful insights now since the algorithm is only fed the input it has to find a way to get to the output by finding trends and
713
734
https://www.youtube.com/watch?v=xtOg44r6dsE&t=713s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
associations in the data coming to reinforcement learning the agent here is a lot like a human child just like how a baby is clueless about the world initially the agent also has no idea about its environment but as it explores the environment it starts learning it learns from the mistakes it makes and it basically learns from its experience now let's look at the approach followed when
734
759
https://www.youtube.com/watch?v=xtOg44r6dsE&t=734s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
it comes to supervised learning it's quite simple like I mentioned earlier all that the algorithm has to do is map the known input to the known output in unsupervised learning the algorithm has to find patterns in data trends in data and keep exploring the data until it reaches the output the approach followed by reinforcement learning is a trial and error method the trial and error method
759
783
https://www.youtube.com/watch?v=xtOg44r6dsE&t=759s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
best explains reinforcement learning because the agent has to try out all possible actions to learn about its environment and to get maximum rewards the next parameter is feedback now in supervised learning there is a direct feedback mechanism since the machine is trained with labeled input and output for unsupervised learning there is no feedback mechanism because the machine is unaware of the
783
806
https://www.youtube.com/watch?v=xtOg44r6dsE&t=783s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
output during the training phase now in reinforcement learning the feedback is in the form of rewards or punishments from the environment so when an agent takes a suitable action it will get a corresponding reward for that action but if the action is wrong then it gets a punishment so rewards and punishments can be thought of with respect to a game now in a game when you win a stage you
806
828
https://www.youtube.com/watch?v=xtOg44r6dsE&t=806s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
get extra coins but when you fail you have to go back to the same stage and try again now let's look at some of the popular algorithms supervised learning has algorithms like linear regression which is mainly used for regression problems it also has algorithms like support vector machines decision trees and so on and these can also be used for classification problems coming to
828
851
https://www.youtube.com/watch?v=xtOg44r6dsE&t=828s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
unsupervised learning we have algorithms like K-means and C-means for clustering analysis and algorithms like Apriori and association rule mining to deal with association problems now reinforcement learning is just being explored recently a few algorithms include Q-learning and the State-Action-Reward-State-Action (SARSA) algorithm next up we have applications so guys supervised learning is widely
851
877
https://www.youtube.com/watch?v=xtOg44r6dsE&t=851s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
used in the business sector for forecasting risks risk analysis predicting sales profit and so on coming to unsupervised learning so guys the recommendations you see when you shop online like for example if you buy a book on Amazon right you get a list of recommendations now these are all done by unsupervised learning algorithms other applications include anomaly
877
902
https://www.youtube.com/watch?v=xtOg44r6dsE&t=877s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
detection credit card fraud detection and so on now reinforcement learning is used in self-driving cars in building games and all of that one famous example is the AlphaGo game I'm sure all of you have heard of that so guys those were the major differences between supervised unsupervised and reinforcement learning so now let me give you a few examples of problems that can be solved using
902
924
https://www.youtube.com/watch?v=xtOg44r6dsE&t=902s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
supervised unsupervised and reinforcement learning algorithms all right so our first use case is to study a bank credit data set and make a decision about whether to approve the loan of an applicant based on his profile so here we are going to be given a bank credit data set now the information that you see over here is for each of the customers so every customer's account balance purpose
924
948
https://www.youtube.com/watch?v=xtOg44r6dsE&t=924s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
credit amount value savings everything is given in the data set and you have to predict whether you can approve the loan of an applicant based on his bank account balance based on his purpose his credit amount and his savings so for this problem you can make use of the supervised learning algorithm known as the KNN algorithm or K-nearest neighbor algorithm
948
971
https://www.youtube.com/watch?v=xtOg44r6dsE&t=948s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
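A minimal sketch of the K-nearest-neighbor idea for this loan-approval use case, assuming scikit-learn; the feature matrix (account balance, credit amount, savings) and labels below are invented, since the actual bank credit data set isn't shown in the transcript. In practice you would also scale the features before fitting KNN.

```python
# KNN on a made-up bank credit table: 1 = loan approved, 0 = rejected.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([
    [2500, 1000, 500],    # account balance, credit amount, savings
    [100,  5000, 50],
    [4000, 1200, 900],
    [150,  7000, 20],
])
y = np.array([1, 0, 1, 0])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[3000, 1500, 700]]))   # predicted approval decision for a new applicant
```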
xtOg44r6dsE
now let's look at our next use case now here we have to establish a mathematical equation for distance as a function of speed so basically over here you're going to predict the distance that a car can travel based on its speed so guys the best algorithm to use for such a problem is the linear regression algorithm
971
993
https://www.youtube.com/watch?v=xtOg44r6dsE&t=971s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
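A small sketch of fitting distance as a function of speed with linear regression; the speed/distance pairs are invented for illustration (the transcript doesn't show the actual data).

```python
# Fit distance = a * speed + b on made-up measurements.
import numpy as np
from sklearn.linear_model import LinearRegression

speed = np.array([[4], [7], [10], [15], [20], [25]])        # mph
distance = np.array([2.0, 10.0, 18.0, 26.0, 38.0, 54.0])    # feet

model = LinearRegression().fit(speed, distance)
print(model.coef_, model.intercept_)      # slope and intercept of the fitted line
print(model.predict([[12]]))              # predicted distance at 12 mph
```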
xtOg44r6dsE
so the linear regression algorithm is basically used to predict continuous quantities and in this case we have to predict the distance which is a continuous quantity and like I mentioned earlier linear regression is a type of supervised learning algorithm okay moving on to our next use case now the problem here is to cluster a set of movies as either good or average based on their social media outreach all right now if you read the problem statement
993
1,017
https://www.youtube.com/watch?v=xtOg44r6dsE&t=993s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
properly you can see the word cluster alright this clearly means that this is a clustering problem and clustering problems fall under unsupervised learning so here we're going to make use of an algorithm known as the k-means algorithm to form two clusters okay one cluster is going to contain popular movies and the other is going to contain non-popular movies based on their likes on social media
1,017
1,038
https://www.youtube.com/watch?v=xtOg44r6dsE&t=1017s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
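A hedged k-means sketch for the movie-clustering use case; the likes/shares numbers are made up, and k=2 mirrors the popular/non-popular split described above.

```python
# Cluster movies by social media outreach into two groups with k-means.
import numpy as np
from sklearn.cluster import KMeans

# each row: [likes, shares] for one movie (invented values)
movies = np.array([
    [120000, 30000],
    [150000, 45000],
    [3000,   400],
    [5000,   800],
    [110000, 25000],
    [2000,   300],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(movies)
print(kmeans.labels_)   # cluster index (0 or 1) assigned to each movie
```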
xtOg44r6dsE
now moving ahead our next problem statement is to perform market basket analysis by finding associations between items bought at the grocery store again over here you can see the keyword association this means that this is an association problem now association problems fall under unsupervised learning algorithms and here we can make use of the Apriori
1,038
1,062
https://www.youtube.com/watch?v=xtOg44r6dsE&t=1038s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
algorithm to do this so here what you have to do is basically find associations between different items so if a person bought bread and butter together it means that there is an association between these two items so in this problem you're just going to find the association between different items and you're going to make use of the unsupervised learning algorithm called the Apriori algorithm
1,062
1,082
https://www.youtube.com/watch?v=xtOg44r6dsE&t=1062s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
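A short Apriori sketch for the market basket use case, using the mlxtend library (an assumption; the video doesn't name an implementation) on a few made-up grocery baskets.

```python
# Mine frequent itemsets and association rules from toy baskets with Apriori.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

baskets = [
    ["bread", "butter", "jam"],
    ["bread", "butter"],
    ["bread", "jam"],
    ["milk", "bread", "butter"],
    ["milk", "eggs"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(baskets).transform(baskets), columns=te.columns_)
frequent = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```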
xtOg44r6dsE
so guys this is the last use case and over here the problem statement says that you're going to place an agent in any one of the rooms and basically the rooms are represented as 0 1 2 3 4 and 5 and the goal here is to reach the outside of the building now this is clearly a reinforcement learning problem all right to solve this you can make use of the
1,082
1,106
https://www.youtube.com/watch?v=xtOg44r6dsE&t=1082s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
xtOg44r6dsE
Q-learning algorithm and your end goal is to reach room number 5 so guys here you can see that there is no data set because the data set is going to be developed by the agent itself so guys over here the agent is responsible for collecting the data all right it's going to explore the environment collect useful information and then it's going to use this information to get to room number 5
1,106
1,128
https://www.youtube.com/watch?v=xtOg44r6dsE&t=1106s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
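A compact Q-learning sketch for the six-room example (rooms 0 to 5, goal = room 5); the reward matrix follows the classic version of this tutorial problem, so treat it as an illustration rather than the exact setup shown in the video.

```python
# Tabular Q-learning on the six-room problem; -1 marks room pairs with no door.
import numpy as np

R = np.array([
    [-1, -1, -1, -1,  0,  -1],
    [-1, -1, -1,  0, -1, 100],
    [-1, -1, -1,  0, -1,  -1],
    [-1,  0,  0, -1,  0,  -1],
    [ 0, -1, -1,  0, -1, 100],
    [-1,  0, -1, -1,  0, 100],
])
Q = np.zeros_like(R, dtype=float)
gamma = 0.8
rng = np.random.default_rng(0)

for _ in range(1000):                                  # episodes of random exploration
    state = rng.integers(0, 6)
    while state != 5:
        actions = np.where(R[state] >= 0)[0]           # rooms reachable from here
        action = rng.choice(actions)
        Q[state, action] = R[state, action] + gamma * Q[action].max()
        state = action

state, path = 2, [2]                                   # start in room 2, follow learned Q-values
while state != 5:
    state = int(np.argmax(Q[state]))
    path.append(state)
print(path)                                            # e.g. [2, 3, 1, 5]
```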
xtOg44r6dsE
so guys that was it for our use cases and with this we come to the end of today's video I hope all of you enjoyed it if you have any doubts or any queries regarding the session please leave them in the comment section and we'll get back to you at the earliest so guys thank you so much for watching this video have a great day I hope you have enjoyed listening to this video please
1,128
1,148
https://www.youtube.com/watch?v=xtOg44r6dsE&t=1128s
Supervised vs Unsupervised vs Reinforcement Learning | Data Science Certification Training | Edureka
https://i.ytimg.com/vi/x…axresdefault.jpg
yhItocvAaq0
supervised learning updates the parameters of a neural network to match predicted class labels with the ground truth labels the construction of these ground truth class vectors is typically done with one hot encoding but other techniques such as label smoothing and knowledge distillation have been developed to overcome the limitations of one hot encoded ground truth class
0
18
https://www.youtube.com/watch?v=yhItocvAaq0&t=0s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
labels Meta Pseudo Labels is a method that uses the meta-learning framework to dynamically adapt the target distribution or ground truth class labels throughout training of a student classification network to maximize its resulting validation set accuracy this is done by training the classification model or student network on pseudo labels produced by a teacher network the teacher network is then
18
39
https://www.youtube.com/watch?v=yhItocvAaq0&t=18s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
updated to maximize the classification model's accuracy on the validation set after it trains and updates itself through back propagation supervised learning on the pseudo labels from the teacher network this involves an interesting gradient-through-a-gradient operation to train the teacher network Meta Pseudo Labels achieves 86.9% top-1 ImageNet accuracy through
39
58
https://www.youtube.com/watch?v=yhItocvAaq0&t=39s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
semi-supervised learning with additional data and also impressive performances in the limited data settings the authors also introduce a reduced MPL framework to avoid the memory bottleneck of having two high capacity models in memory for the meta-learning framework this video will explain Meta Pseudo Labels from researchers at Google AI
58
82
https://www.youtube.com/watch?v=yhItocvAaq0&t=58s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
Meta Pseudo Labels is a new way to use meta-learning to adapt the ground truth class labels during training by using a teacher network to label data and then a student network that learns from those labels a quick overview of the Meta Pseudo Labels algorithm is that a teacher model is trained along with a student model to set the student's target distributions
82
101
https://www.youtube.com/watch?v=yhItocvAaq0&t=82s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
and adapt to the student's learning state so typically these target distributions are these one-hot encoded vectors where you might have like zero cat one dog and then zero for all of the other classes in the case of say CIFAR-10 so the idea here is to have the teacher network produce the way of labeling the data points to say 0.03 cat 0.7 dog 0.04 horse
101
122
https://www.youtube.com/watch?v=yhItocvAaq0&t=101s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
these kinds of distributions are going to be assigned by the teacher network rather than heuristically encoded with something like one-hot encoding label smoothing or even the knowledge distillation pipeline with temperature tuning so then the idea is to adapt these target distributions to the student's learning state so the way this pipeline works is that the
122
139
https://www.youtube.com/watch?v=yhItocvAaq0&t=122s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
teacher network parameterized by phi is again taking the same training data set and then produces a pseudo label distribution and then the student network is going to try to fit this label distribution that was produced by the teacher network so it's going to do back propagation using the cross entropy loss function between the predictions from the student network y-prime and
139
156
https://www.youtube.com/watch?v=yhItocvAaq0&t=139s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
then these pseudo labels Q Phi of X so it's going to back prop this and then update the parameters to theta T plus 1 so now these new parameters that have been updated by training on the pseudo labels from the teacher network are then going to be evaluated to provide a reward signal for the teacher network by taking those parameters and then having them classify a held-out validation set
156
176
https://www.youtube.com/watch?v=yhItocvAaq0&t=156s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
so the performance on the validation set is the reward signal that goes back through the teacher by taking a gradient through a gradient or something we'll get into more later on in the video the idea is that the teacher model is going to be changing this distribution of class labels to maximize the performance of the student network on the held-out validation set these
176
194
https://www.youtube.com/watch?v=yhItocvAaq0&t=176s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
are some examples of the most common target distributions or ground truth class label vectors Y that are used in machine learning the most common of which is one-hot encoded vectors this is how datasets like CIFAR-10 are labeled in the case of a dog image the class label corresponding to that image would have a one in the position of the dog index and then zero
194
211
https://www.youtube.com/watch?v=yhItocvAaq0&t=194s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
everywhere else for the other class labels so say one dog zero cat zero truck zero ship this is how the CIFAR-10 data set is labeled so one problem with labeling data sets with one-hot encoded vectors is that the model is going to have these overconfident or overfitted predictions to this kind of a class label distribution because if it applies any probability density to
211
229
https://www.youtube.com/watch?v=yhItocvAaq0&t=211s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
another class it's gonna have a penalty from the cross entropy loss for doing so so if it sees this dog image and tries to label it as 0.75 dog and 0.2 cat because it's unsure whether it's a cat or a dog it's gonna be penalized for that as if the cat is just as different from a dog as a truck or a ship or a frog or the other CIFAR-10 classes so one solution to the overconfident
229
251
https://www.youtube.com/watch?v=yhItocvAaq0&t=229s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
predictions or overfitting on the one-hot encoded vectors is label smoothing so label smoothing is where you apply this uniform weight to all the other class labels in the class label vector and then another solution to assigning these target distributions is knowledge distillation
251
269
https://www.youtube.com/watch?v=yhItocvAaq0&t=251s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
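A quick PyTorch sketch of the three kinds of target distributions being contrasted here: one-hot, label-smoothed, and a soft teacher distribution as in distillation; the class order, smoothing factor, teacher logits, and temperature are all made up for illustration.

```python
# Three target distributions for the same "dog" example, assuming class order [cat, dog, horse, truck].
import torch

num_classes = 4
dog_index = 1

one_hot = torch.nn.functional.one_hot(torch.tensor(dog_index), num_classes).float()
# tensor([0., 1., 0., 0.])

eps = 0.1
smoothed = one_hot * (1 - eps) + eps / num_classes
# tensor([0.0250, 0.9250, 0.0250, 0.0250])

teacher_logits = torch.tensor([1.2, 3.5, 0.8, -1.0])     # hypothetical teacher outputs
distilled = torch.softmax(teacher_logits / 2.0, dim=0)    # temperature T = 2 softens the distribution
```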
yhItocvAaq0
knowledge distillation in the form of self-training with noisy student currently has the state of the art for ImageNet classification and it's also a really powerful technique for model compression such as say DistilBERT where you have these high capacity models then you use the high capacity model to produce a new class label distribution that is better than the one-hot encoding for training the student network and then the student network learns from a combination of this distillation class distribution as
269
289
https://www.youtube.com/watch?v=yhItocvAaq0&t=269s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
well as the ground truth one-hot encoded vectors so these are some examples of different target distributions that have been explored in machine learning and are commonly used to prevent overfitting and then to you know train these models with supervised learning so a quote from the paper is that from the success of these heuristic tricks it is clear that how to construct
289
305
https://www.youtube.com/watch?v=yhItocvAaq0&t=289s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
the target distribution plays an important role in the algorithm design and a proper method could lead to a sizeable gain motivating this exploration of meta-learning the target distributions during training so again we have this problem of what should this target distribution be should we have one-hot encoded class label vectors should we smooth out the labels by
305
322
https://www.youtube.com/watch?v=yhItocvAaq0&t=305s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
putting uniform weight on the other class labels or should we use this teacher-to-student pipeline as in knowledge distillation but the solution explored in this paper is to meta-learn the pseudo label distribution or the targets that the student network is trying to fit during training there are two phases of learning in the Meta Pseudo Labels framework in phase one the
322
338
https://www.youtube.com/watch?v=yhItocvAaq0&t=322s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
student learns from the teacher the parameters theta of the student classification network are updated by taking the cross entropy loss between the predictions p sub theta of X and then the pseudo label distribution that is produced when you pass these X data points through the teacher network parameterized by phi denoted Q sub phi of X to denote this new pseudo label
338
357
https://www.youtube.com/watch?v=yhItocvAaq0&t=338s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
distribution that comes out of the teacher phase 2 is the teacher learns from the student's validation loss this is a much more complex way of structuring this loss this gradient-through-a-gradient meta-learning idea of training the teacher network so the teacher network is evaluated on the validation set performance of the student network after it updated its parameters to theta t plus 1
357
374
https://www.youtube.com/watch?v=yhItocvAaq0&t=357s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
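A stripped-down PyTorch sketch of the two phases just described; it only shows the data flow (teacher pseudo labels, student gradient step, validation loss of the updated student). The actual Meta Pseudo Labels update then backpropagates that validation loss through the student's gradient step into the teacher, which is omitted here, and the tiny linear networks are stand-ins for the real models.

```python
# Phase 1: student fits the teacher's pseudo labels. Phase 2: the updated
# student is scored on validation data, which is the teacher's feedback signal.
import torch
import torch.nn.functional as F

student = torch.nn.Linear(32, 10)   # stand-ins for the real classification networks
teacher = torch.nn.Linear(32, 10)
opt_s = torch.optim.SGD(student.parameters(), lr=0.1)

x_unlabeled = torch.randn(16, 32)
x_val, y_val = torch.randn(8, 32), torch.randint(0, 10, (8,))

# Phase 1: student learns from the teacher's pseudo label distribution q_phi(x)
# (soft-target cross entropy needs PyTorch >= 1.10).
pseudo = torch.softmax(teacher(x_unlabeled), dim=1)
student_loss = F.cross_entropy(student(x_unlabeled), pseudo.detach())
opt_s.zero_grad()
student_loss.backward()
opt_s.step()                                  # theta_t -> theta_{t+1}

# Phase 2: the updated student is evaluated on held-out validation data; in the
# real method this loss is differentiated through the student update to train phi.
val_loss = F.cross_entropy(student(x_val), y_val)
print(float(val_loss))
```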
yhItocvAaq0
so the way that this reward is propagated back into the teacher's parameters phi is complex because you have to take the derivative of how much each of these phi parameters and their labels and data points impacts the gradient of the student network to change it in the direction of this validation loss it's difficult to completely derive this idea of gradient
374
394
https://www.youtube.com/watch?v=yhItocvAaq0&t=374s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
through a gradient and if you're using frameworks like TensorFlow or PyTorch you can utilize automatic differentiation to automatically implement this for you and you don't have to know exactly the math of how say this parameter from the x1 input to the hidden state a in the teacher network gets this loss signal from the student network that is then updated with the gradient of this newly labeled data
394
413
https://www.youtube.com/watch?v=yhItocvAaq0&t=394s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
set and then moved in the direction of the validation loss but the high-level idea and I think maybe visualizing these two networks helps even though in practice the teacher network is gonna be a multi-layer perceptron like this but the student network is one of these high capacity classification models like ResNet Wide ResNet or EfficientNet so the idea is you want
413
429
https://www.youtube.com/watch?v=yhItocvAaq0&t=413s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
to say update this weight from the input to a hidden state in the teacher network and you're gonna try to take the partial derivative with respect to this weight in the teacher network with respect to the validation loss of the student network at theta t plus 1 on that validation set so this is trained with policy gradients on this reward signal because this isn't like Y prime minus y
429
450
https://www.youtube.com/watch?v=yhItocvAaq0&t=429s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
there's no ground truth with respect to the validation loss that the student achieves so you're just taking that validation loss and treating that as a reward like in say Pac-Man or Atari and using policy gradients to update the parameters but basically say like if this parameter contributed a lot to the output and then you get a high reward do more of this like increase the
450
474
https://www.youtube.com/watch?v=yhItocvAaq0&t=450s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
weight from this connection to do more of it to get more of this reward that's kind of a high-level idea of policy gradients but the idea is that in order to get this derivative to find out like how much of this weight contributed to the validation loss you have to take a gradient through a gradient which is a pretty complex idea that's maybe better explained in the next equation from the
474
491
https://www.youtube.com/watch?v=yhItocvAaq0&t=474s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
paper hopefully this equation from the paper will further explain the idea of taking a gradient through a gradient to update the parameters phi of the teacher network with respect to the parameters theta t plus 1 that are evaluated on the validation set
491
508
https://www.youtube.com/watch?v=yhItocvAaq0&t=491s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
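One schematic way to write the two coupled updates being described, using theta for the student, phi for the teacher, and eta as a learning rate (my notation, not the paper's); the point is that theta_{t+1} depends on phi, so the validation-loss gradient with respect to phi has to pass through the student's gradient step:

```latex
\theta_{t+1} \;=\; \theta_t \;-\; \eta \,\nabla_{\theta}\,
  \mathrm{CE}\!\bigl(p_{\theta_t}(x),\, q_{\phi}(x)\bigr),
\qquad
\phi \;\leftarrow\; \phi \;-\; \eta_{\phi}\,
  \nabla_{\phi}\, \mathcal{L}_{\mathrm{val}}\!\bigl(\theta_{t+1}(\phi)\bigr)
```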
yhItocvAaq0
so the idea is we're taking a gradient through a gradient the parameters theta t plus 1 that are responsible for this validation loss reward signal we're trying to update the teacher network with are updated by taking the parameters theta t and then updating them with a gradient so we want to know how much each parameter in phi is responsible for the gradient that updates this so you're taking the derivative with respect to phi of this validation loss where phi you know contributes to this validation
508
528
https://www.youtube.com/watch?v=yhItocvAaq0&t=508s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
loss through the gradient so you have to find out you know how much each of these parameters in the phi network you know as in something like this how much does this parameter in the phi network or this parameter contribute to the gradient and then the gradient is what updates the parameters and gives you this new validation loss so you're taking the partial derivative of phi with respect
528
547
https://www.youtube.com/watch?v=yhItocvAaq0&t=528s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
to this gradient-through-a-gradient idea which is a little complicated but you know really an interesting idea with the meta-learning in this Meta Pseudo Labels algorithm the next idea introduced in Meta Pseudo Labels is to avoid the memory requirement of having two high capacity classification models in memory because say you have an EfficientNet as the teacher network as well as the
547
564
https://www.youtube.com/watch?v=yhItocvAaq0&t=547s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
student network now you have to keep both these models in memory especially when you're doing the gradient update for the teacher network the idea to avoid that is to first train a large teacher network on the labeled dataset and then use it to produce this new distribution on the unlabeled data and then you use a smaller teacher network so say you originally trained the
564
580
https://www.youtube.com/watch?v=yhItocvAaq0&t=564s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
teacher network with like an EfficientNet and then you move to a multi-layer perceptron because all it's doing now is adjusting the original distribution that was produced by this high capacity model so this high capacity model is already producing a pretty useful target distribution as in knowledge distillation and then the smaller teacher has enough capacity to
580
597
https://www.youtube.com/watch?v=yhItocvAaq0&t=580s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
be adapting it during training as in the meta learning framework the authors are going to test the performance of meta pseudo labels in the limited data setting as well as the semi supervised learning setting semi-supervised learning is responsible for most of the image net state of the arts like self training with noisy student where they have the labeled image net and they also
597
613
https://www.youtube.com/watch?v=yhItocvAaq0&t=597s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
leverage this unlabeled JFT-300M data set to get more performance out of the model also the billion-scale weakly semi-supervised learning framework from Facebook uses the labeled ImageNet data set and the unlabeled Instagram images that are weakly labeled with their hashtags to take advantage of the semi-supervised learning framework which is probably going to be the paradigm
613
632
https://www.youtube.com/watch?v=yhItocvAaq0&t=613s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
that leads forward since it's so easy to get this unlabeled data compared to labeled data so they're experimenting with Meta Pseudo Labels on the EfficientNet architecture for the student network on the full CIFAR-10 ImageNet and Street View House Numbers data sets plus extra unlabeled data so in the case of CIFAR-10 this is Tiny Images in ImageNet it's YFCC-100M and in Street View House
632
650
https://www.youtube.com/watch?v=yhItocvAaq0&t=632s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
Numbers there's an additional like 500,000 data points that come with the data set to use optionally if you want to test out these kinds of algorithms so they achieve 98.6% CIFAR-10 accuracy and then 86.9% top-1 on ImageNet then here are some other papers to check out if you're interested in semi-supervised learning that have also come out recently and are
650
668
https://www.youtube.com/watch?v=yhItocvAaq0&t=650s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
really successful in this kind of space these are the results of Meta Pseudo Labels in the semi-supervised learning framework compared with supervised learning and then the self-training with noisy student pipeline you see gains on the CIFAR-10 dataset small gains on Street View House Numbers and then big gains on the ImageNet dataset one reason that the authors point towards
668
683
https://www.youtube.com/watch?v=yhItocvAaq0&t=668s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
these small gains for Street View House Numbers is that the extra unlabeled data in Street View House Numbers is in-distribution so there is this distinction between out-of-distribution data and in-distribution data so in the case of ImageNet where you're trying to take in this new data from this YFCC-100M data you would call that out-of-distribution data
683
701
https://www.youtube.com/watch?v=yhItocvAaq0&t=683s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
because it's not like the ImageNet data whereas the Street View House Numbers extra data is like really the same exact data that the training set has in terms of this kind of underlying distribution idea so the idea here is that the Meta Pseudo Labels adaptive adjustment of changing the labels during training is more crucial when the extra unlabeled data is more
701
722
https://www.youtube.com/watch?v=yhItocvAaq0&t=701s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
out of distribution so if you're dealing with a computer vision problem and you're curious whether this algorithm is gonna work for your problem it's interesting to ask you know of the unlabeled data you have how out of distribution is it how noisy is it and this MPL Meta Pseudo Labels framework is likely to have a bigger gain if this is noisier data compared to in-distribution
722
739
https://www.youtube.com/watch?v=yhItocvAaq0&t=722s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
data the authors also test the Meta Pseudo Labels algorithm in the limited data setting where you have say only 4,000 labeled images in CIFAR-10 1,000 in Street View House Numbers or 10% of the labeled data in ImageNet in this case you see the performance of Meta Pseudo Labels compared to supervised learning with all the labels SimCLR FixMatch or unsupervised data augmentation
739
758
https://www.youtube.com/watch?v=yhItocvAaq0&t=739s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
which are all other algorithms that are successful at doing this kind of learning with limited data this plot shows the performance of these models with respect to the different limited data settings changes as you increase the percentage of labelled data points the interesting part about this plot is this top left area where you have the smaller percentage of labeled data and
758
775
https://www.youtube.com/watch?v=yhItocvAaq0&t=758s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
you see a huge gain of the unsupervised data augmentation plus the Meta Pseudo Labels algorithm compared to supervised learning or the RandAugment data augmentation algorithm this table shows the gains of Meta Pseudo Labels with respect to the CIFAR-10 limited data setting and in the Street View House Numbers limited data setting you see the performance of different algorithms like
775
792
https://www.youtube.com/watch?v=yhItocvAaq0&t=775s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
just training with supervised learning on limited data using that label smoothing way of putting uniform weights on the other class labels then using supervised learning plus Meta Pseudo Labels and then stacking Meta Pseudo Labels with the RandAugment and unsupervised data augmentation algorithms these are some of the algorithms that are explored to stack on top of Meta Pseudo Labels
792
810
https://www.youtube.com/watch?v=yhItocvAaq0&t=792s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
unsupervised data augmentation enforces predictions Y given X to be consistent with the same X data point after it's gone through a data augmentation so in some cases data augmentation might mean like rotating an image translating it or horizontally flipping it in the case of natural language processing it might mean translating the sentence to German and then translating it back to English which is known as back translation
810
829
https://www.youtube.com/watch?v=yhItocvAaq0&t=810s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
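A tiny PyTorch sketch of the consistency idea behind unsupervised data augmentation: the prediction on an augmented copy is pushed toward the prediction on the original; the model and the horizontal-flip "augmentation" are placeholders for illustration.

```python
# Consistency loss between predictions on an image and its augmented copy.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

x = torch.randn(8, 3, 32, 32)            # unlabeled images
x_aug = torch.flip(x, dims=[3])          # horizontal flip as the augmentation

with torch.no_grad():
    p_orig = torch.softmax(model(x), dim=1)          # treated as a fixed target
log_p_aug = torch.log_softmax(model(x_aug), dim=1)

consistency_loss = F.kl_div(log_p_aug, p_orig, reduction="batchmean")
consistency_loss.backward()              # gradients flow only through the augmented branch
```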
yhItocvAaq0
these are the different ways of augmenting data points and then enforcing this consistency to have similar predictions on the data point before and after it's been augmented another algorithm that's stacked on top of Meta Pseudo Labels is RandAugment RandAugment is this automated data augmentation algorithm similar to like
829
847
https://www.youtube.com/watch?v=yhItocvAaq0&t=829s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
auto augment or population-based augmentation but the idea here is to have this simpler parameterization of the space that actually is shown to work better with respect to constructing these automated data augmentation pipelines and the next sort of algorithm to be looking at as well and comparing this with is self-training with noisy student which is this really popular way
847
866
https://www.youtube.com/watch?v=yhItocvAaq0&t=847s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
of doing knowledge distillation which is where you take the pseudo labels and you apply a lot of noise with respect to training the student model on that teacher target distribution which relates to the whole idea of Meta Pseudo Labels of looking at the ways to structure this target distribution to train these neural networks on the authors explore the behavior of the teacher network in
866
884
https://www.youtube.com/watch?v=yhItocvAaq0&t=866s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
the Meta Pseudo Labels framework so they've learned the Meta Pseudo Labels teacher fits the validation gradient it's not just label correction and it's not only a regularization or preventing-overfitting strategy the authors explore this idea that the teacher encourages the student's training gradient to be similar to the student's validation gradient on this two moons
884
901
https://www.youtube.com/watch?v=yhItocvAaq0&t=884s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
data set because it's really difficult for them to do this kind of cosine similarity between validation and training data gradients with these larger data sets like CIFAR-10 or ImageNet so they show the cosine similarity between the training and validation data's gradient with respect to the training progress showing that in the meta-learning framework the teacher is trying
901
920
https://www.youtube.com/watch?v=yhItocvAaq0&t=901s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
to steer the gradient in the direction of this validation data set's gradient as well the next idea is to explore whether the teacher network is performing label correction or trying to mimic the behavior of supervised learning with perfect labels so this plot is showing that if it was doing this then these accuracies of the student network should be high as well
920
937
https://www.youtube.com/watch?v=yhItocvAaq0&t=920s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
and have a similar kind of curve as supervised learning so this is showing that the teacher network isn't just trying to fit the training data it's trying to help with this regularization and preventing overfitting this visualization shows the teacher network isn't just doing preventing of overfitting with respect to how its labeling this data can you see interesting behaviors with respect to
937
955
https://www.youtube.com/watch?v=yhItocvAaq0&t=937s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
the development of how it's labeling each of these data points in the CIFAR-10 data set throughout the training you see that the label with the highest confidence doesn't change it doesn't get steeper between 50 and 75 percent it doesn't do things like flipping labels or dampening distributions in obvious ways that are typically heuristically explored with respect to you know doing
955
973
https://www.youtube.com/watch?v=yhItocvAaq0&t=955s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
regularization in the class label space another interesting algorithm in the space of meta learning these different components that make up the supervised learning problem is generative teaching networks generative teaching networks have a generator that uses this gradient through a gradient in order to generate this data set that is used to train the student network so it could be
973
990
https://www.youtube.com/watch?v=yhItocvAaq0&t=973s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
interesting to see if you could stack this meta pseudo labeling or having this adaptive labeling with the generated data set as well although that would definitely be a confusing gradient or you could maybe stack the generator and the labeler sort of similar to how AlphaGo Zero combines the policy and value network into one architecture but it's definitely an interesting kind of
990
1,009
https://www.youtube.com/watch?v=yhItocvAaq0&t=990s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
space of emerging algorithms these meta learning algorithms that are generating data generating adaptive labels during the training a lot of different areas where meta learning is being developed and producing these interesting algorithms thanks for watching this explanation of meta pseudo labels a really interesting meta learning algorithm that adapts the target
1,009
1,027
https://www.youtube.com/watch?v=yhItocvAaq0&t=1009s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
distribution for the student network as it's learning throughout training to maximize the accuracy on a held out validation set this is a really interesting use of meta learning and this gradient through gradient training to have the teacher-student paradigm where the teacher is taking apart this different component of the supervised learning framework particularly in this
1,027
1,043
https://www.youtube.com/watch?v=yhItocvAaq0&t=1027s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
yhItocvAaq0
case the target distribution and then the student is learning with the teacher network in this simultaneous dual optimization or coevolution framework of training these two models in the meta-learning idea this is responsible for really high accuracy on ImageNet with the full data set plus extra unlabeled data as well as interesting performances in the limited data setting
1,043
1,062
https://www.youtube.com/watch?v=yhItocvAaq0&t=1043s
Meta Pseudo Labels
https://i.ytimg.com/vi/y…axresdefault.jpg
a6v92P0EbJc
hi there today we're looking at neural architecture search without training by Joseph Mellor Jack Turner Amos Storkey and Elliot J Crowley on a high level this paper performs neural architecture search by looking at the correlation matrices of the Jacobian of the data when you pass it through the network and it does so at initialization so you pass the data look at the
0
28
https://www.youtube.com/watch?v=a6v92P0EbJc&t=0s
Neural Architecture Search without Training (Paper Explained)
https://i.ytimg.com/vi/a…axresdefault.jpg
a6v92P0EbJc
Jacobian and if it's very correlated then the network is bad and if it's very uncorrelated then the network is good and by simply observing that they can already achieve a very good score on a neural architecture search benchmark alright that was the high level and maybe a bit too simplified but that's sort of what's going on ok let's dive in
28
56
https://www.youtube.com/watch?v=a6v92P0EbJc&t=28s
Neural Architecture Search without Training (Paper Explained)
https://i.ytimg.com/vi/a…axresdefault.jpg
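A rough PyTorch sketch of the quantity described in this overview: per-example Jacobians of the network output with respect to the input, computed at initialization, and their correlation across examples (torch.corrcoef needs a reasonably recent PyTorch). The paper's actual score is built from this correlation structure but is defined differently, so treat this as an illustration of the idea rather than the paper's scoring function.

```python
# Score an untrained network by how correlated its per-example input Jacobians are.
import torch

net = torch.nn.Sequential(                             # an untrained stand-in architecture
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

x = torch.randn(32, 3, 32, 32, requires_grad=True)     # a mini-batch, e.g. CIFAR-10 sized
out = net(x)
out.sum().backward()                                    # gradient of summed outputs w.r.t. the input
jacobian = x.grad.reshape(32, -1)                       # one flattened Jacobian row per example

corr = torch.corrcoef(jacobian)                         # 32 x 32 correlation matrix across examples
off_diag = corr - torch.eye(32)
score = -off_diag.abs().sum()                           # less correlated -> higher (better) score
print(float(score))
```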
a6v92P0EbJc
so what's neural architecture search neural architecture search is the discipline where you are given a data set let's say here we have a data set which could be something like CIFAR-10 which is an image data set and you are given a sort of a training procedure let's say Adam or SGD for 100,000 steps or something like this with mini-batches of size 64 ok and you're given a loss function which here could be the cross
56
87
https://www.youtube.com/watch?v=a6v92P0EbJc&t=56s
Neural Architecture Search without Training (Paper Explained)
https://i.ytimg.com/vi/a…axresdefault.jpg
a6v92P0EbJc
entropy between the outputs of the network which we'll call L and the label Y and your task is now to find a neural network architecture that conforms to these specifications but gives the lowest possible loss or sorry the highest possible validation accuracy in this case so this here would be like the train and then you'd have the test accuracy or the validation accuracy okay
87
116
https://www.youtube.com/watch?v=a6v92P0EbJc&t=87s
Neural Architecture Search without Training (Paper Explained)
https://i.ytimg.com/vi/a…axresdefault.jpg
a6v92P0EbJc
so you could decide well I'm gonna go with you know first like three convolutional layers each one having like a ReLU non-linearity but you could also say well I'm going to build like a skip connection from here to here you could also say that I'm going to downsample or you could have maybe a bigger stride and so on so the kernel size of the convolution you can vary until now
116
141
https://www.youtube.com/watch?v=a6v92P0EbJc&t=116s
Neural Architecture Search without Training (Paper Explained)
https://i.ytimg.com/vi/a…axresdefault.jpg
a6v92P0EbJc
people have done this by hand right in effect we all use like the same 10 to 20 different architectures so if it's an image problem we tend to go for like a ResNet or a Wide ResNet or a VGG-style architecture someone has come up with those at some point discovered that they work well and we don't really do much exploration we simply kind of use the same things over
141
170
https://www.youtube.com/watch?v=a6v92P0EbJc&t=141s
Neural Architecture Search without Training (Paper Explained)
https://i.ytimg.com/vi/a…axresdefault.jpg