video_id | text | start_second | end_second | url | title | thumbnail
---|---|---|---|---|---|---
rk7fIhCH8Gc | So this is where we lose the high dimension and get a system that we can actually plug into a computer and analyze, and here we have that one-dimensional system. But I want to get to the results and conclude, so I will not explain this in detail; again, I was mentioning the lecture by Pierfrancesco Urbani at Les Houches where he actually derives this equation | 3,019 | 3,046 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3019s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | in detail. Peter, does that mean I should be concluding? Sorry? You have five minutes. Five minutes, great, that's fine. So I will not explain every detail, but let me just explain what this is. The claim here is that the dynamics of this classifier, for the data set that is this high-dimensional Gaussian mixture, | 3,046 | 3,079 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3046s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | behaves in the same way as the following scalar stochastic process for the variable h(t), where t is just the iteration time. The stochastic process actually has several types of noise. It has, for instance, a term that corresponds to the regularization lambda, which is just the ordinary ridge regularization, but it also has a noise term that plays exactly the same role as the ridge | 3,079 | 3,106 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3079s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | regularization but comes from the dynamics, where it is not there explicitly. So this is a kind of implicit ridge regularization that comes from the variable s(t), the variable in the stochastic gradient descent that decides which sample is in the current batch and which sample is not. That is an interesting interpretation of | 3,106 | 3,130 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3106s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | what this stochastic process actually is: an interpretation of how implicit regularization might come about in this type of problem. The second term is directly the noise coming from the stochastic gradient descent. You might say that the batches here are still extensively big, so maybe the noise doesn't matter so much, but it | 3,130 | 3,156 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3130s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | actually does: it is still explicitly there, even in this effective dynamics. Then there is a dynamical noise here that is Gaussian with some covariance kernel M_C that is self-consistently computed via a set of closed equations, and there is a memory kernel M_R that also needs to be self-consistently computed from a set of equations. I put these | 3,156 | 3,184 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3156s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | in very small print because they are hard to grasp in just one minute, but this is again machinery that comes from recent works in statistical physics and that can be directly adapted to this problem, as we did in this paper. Again, an open problem, of course a challenge: once one goes in detail through what this colored stochastic process is, can we | 3,184 | 3,209 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3184s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | actually prove that it is equivalent to what the stochastic gradient flow is doing in this problem? And once you compute all these quantities numerically from these equations, you can compute everything else, including the training loss, the test loss, the generalization error, and the corresponding accuracies; this is highlighted in this set of | 3,209 | 3,238 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3209s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | equations. With that, you can plot, for instance, a picture like this, where I plot the generalization error (and the training error in the inset) as a function of time. The points come from running the persistent stochastic gradient descent numerically on this data set, so this is just a plain simulation of | 3,238 | 3,263 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3238s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | this simple neural network, and the lines are the result I get from the dynamical mean-field theory. You can see that it describes the whole trajectory: what the generalization error is at any given time for the persistent stochastic gradient descent. You see that at large times it is going somewhere, but at intermediate times there would be | 3,263 | 3,287 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3263s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | some early stopping to do here, and depending on the batch size and on the persistence time it is not exactly the same curve. Quite interestingly, the orange points are actually the normal stochastic gradient descent, and the corresponding line plotted there is, in a sense, not quite justified by our theory; we just ad hoc discretized the theory in | 3,287 | 3,313 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3287s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | the same way we would discretize to run the canonical stochastic gradient descent. It still seems to work, which is a bit puzzling to us; we don't really know why it should. So maybe we didn't even need this persistent stochastic gradient descent for the theory to work, or maybe we did and there is some small error that doesn't show up in this comparison with numerics, | 3,313 | 3,336 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3313s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | so this we don't know yet. And you can look at other things. For instance, this case of two clusters is quite interesting, because if you do not do stochastic gradient descent but full-batch gradient descent, which would be this picture, of course just a special case of the equations I just wrote, it has this peculiar behavior: if | 3,336 | 3,359 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3336s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | you initialize (r is the variance at initialization) at zero with really small variance, then after one iteration you reach the Bayes-optimal error, and then the gradient descent actually drives you away from it while the training accuracy is growing. This is a regime where you are interpolating: the training accuracy goes to one, but the test error is getting worse. | 3,359 | 3,383 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3359s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | So after one iteration you were perfectly optimal, but then the gradient drives you away from, not perfect generalization, but the optimal generalization point. That is a peculiar property of this particular two-cluster model that we discovered in this paper. And you can look at other things, such as | 3,383 | 3,409 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3383s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | how different the dynamics is when you change the batch size, and comparing that to how the dynamics changes when you change the ridge regularization of the loss; this is what is on these two pictures. I see that as I change the batch size, the time scale where the error starts to decrease changes, because the number of iterations I need with a smaller batch size is | 3,409 | 3,434 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3409s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | bigger. This is intuitive, but otherwise the curves look comparable to the ones I would get by adding more and more regularization, in the sense that the blue one, which is just gradient descent with the full batch, is as if I regularized only a little, and regularizing a lot actually corresponds to the smallest batch size in this | 3,434 | 3,462 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3434s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | picture. But that is just observing what the curves look like; there is no formal statement here. This was the last figure I wanted to show you. Just to conclude: I was telling you about this dynamical mean-field theory and the results coming from it, which is able to track the full trajectory of the gradient descent or | 3,462 | 3,484 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3462s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | its stochastic variants, for a range of synthetic models for the data. There are of course many directions in which we would want to extend this, including all the open problems I stated. We would like to bring more math and rigor into it, but also deduce more insights just by looking at what the dynamical mean-field equations are telling us as a | 3,484 | 3,511 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3484s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | function of all the hyperparameters and of the nature of the noise I was describing in the equations. We can look at other data models, as these are not the only ones for which this can be written; we can look at networks that actually have hidden units, and at variants of gradient descent and stochastic gradient descent that have momentum, for instance, | 3,511 | 3,534 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3511s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
rk7fIhCH8Gc | etc. So this is hopefully to come. I just flash back the list of papers from the beginning that I covered in this talk, and I am open for discussion if there is still time. Thanks very much, Lenka, that was a fascinating talk. We've actually run over time, so we probably should take questions offline, but, you know, thank | 3,534 | 3,562 | https://www.youtube.com/watch?v=rk7fIhCH8Gc&t=3534s | Insights on Gradient-Based Algorithms in High-Dimensional Learning | |
0O1UXKh-Yck | I am Sapna Sharma, and I am here along with Ragini to present a very interesting paper published by six Google researchers in May 2019. The researchers are David Berthelot, Nicholas Carlini, Ian Goodfellow, Avital Oliver, Nicolas Papernot, and Colin Raffel. The title of the paper is MixMatch: A Holistic Approach to Semi-Supervised Learning. | 0 | 35 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=0s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | Next slide, please. We will be covering the background and purpose, to see why MixMatch was required; the key terms needed to understand the algorithm; a brief description of the algorithm; and the results of the experiments performed by the authors. We are all aware of the three main classes of machine learning, that is, | 35 | 79 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=35s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | supervised, unsupervised, and semi-supervised machine learning. While supervised machine learning needs ground truth, that is, labeled data, to build a model, unsupervised machine learning works on unlabeled data using clustering techniques. Now, the major concern for most data scientists is labeled data, | 79 | 107 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=79s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | as we ourselves faced the same problem while building a model for wound tissue classification: it is very difficult to get an expert to label the whole set of data. Thus the scarcity of labeled data is the major constraint for supervised machine learning. Here is where semi-supervised machine learning plays a major role, taking the advantages of both supervised and | 107 | 136 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=107s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | unsupervised machine learning to make use of unlabeled data. According to the authors, MixMatch provides a technique for labeling the unlabeled data with much better accuracy than present-day semi-supervised techniques. As per the abstract of the paper, MixMatch unifies the current dominant approaches used in semi-supervised learning | 136 | 170 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=136s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | to produce a new algorithm that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. Before going to the actual algorithm, let us have a look at the key terms needed to understand it. Consistency regularization applies data augmentation to semi-supervised | 170 | 204 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=170s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | learning by leveraging the idea that a classifier should output the same class distribution for an unlabeled example even after it has been augmented; in other words, labels should not change when noise is added (the so-called Pi-model uses this). MixMatch utilizes a form of consistency regularization through the use of standard data augmentations for images, such as random horizontal | 204 | 237 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=204s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | flips and crops. Entropy minimization basically means that we need to reduce the randomness in the predictions on the unlabeled data, or equivalently that the classifier's decision boundary should not pass through high-density regions of the marginal data distribution. This is done by outputting low-entropy predictions on the unlabeled data. MixMatch also implicitly achieves entropy | 237 | 274 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=237s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | minimization through its loss function and by using a sharpening function on the target distribution for unlabeled data. Traditional regularization is a method applied to avoid overfitting of the model. We have two common types: L1, as in lasso regression, and L2, as in ridge regression. Ridge regression adds the squared magnitude of the coefficients | 274 | 307 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=274s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | as a penalty to the loss function, and MixMatch uses the squared, or L2, loss between predictions and guessed labels. Lastly, the Mean Teacher: to overcome the inconsistency of using an exponential moving average of label predictions over training on large data, Mean Teacher, a method that averages model weights instead of label predictions, is used. Mean Teacher improves test accuracy | 307 | 345 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=307s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | and enables training with fewer labels. I will now give a brief description of the steps involved in MixMatch; the four points are data augmentation, label guessing, entropy regularization, and MixUp. Data augmentation is a common approach to compensate for the scarcity of labeled data, and it is done by applying transformations to the | 345 | 385 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=345s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | input data points such that the label remains unchanged. Data augmentation is done on both labeled and unlabeled data, and the individual augmentations are used for generating a guessed label; we will see more about this shortly. Label guessing: for each unlabeled example, MixMatch produces a guess for the example's label using the model's predictions. This guess is later used in the unsupervised | 385 | 418 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=385s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | loss term. Entropy regularization: to enforce that the classifier's decision boundary should not pass through high-density regions of the marginal data distribution, the classifier outputs low-entropy predictions on unlabeled data; this is achieved through the loss term that applies to the unlabeled data. MixMatch applies a sharpening function to reduce | 418 | 450 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=418s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | the entropy of the label distribution, and this is done by adjusting the temperature of the categorical distribution. The last step is MixUp. MixUp is a recently proposed method for training deep neural networks where additional samples are generated during training by convexly combining random pairs of images and their associated labels. By doing so, MixUp regularizes the neural network | 450 | 486 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=450s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | to favor simple linear behavior between training examples. MixUp also reduces the memorization of corrupt labels and increases robustness to adversarial examples. MixMatch uses MixUp, with a slight modification, as the final step of its algorithm. Okay, brilliant. Thank you, Sapna, for giving a brief introduction to the paper. Before I get into the algorithm, I wanted | 486 | 538 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=486s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | to stop for a bit and ask if anybody has any questions so far. Okay, I guess not, so I will start by talking more about the algorithm. This is an image taken from the paper; it shows that the algorithm augments the unlabeled image, classifies each of the augmentations, and then averages the predictions to guess | 538 | 582 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=538s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | the label of the unlabeled image. After this average prediction, or guessed label, is obtained, that label gets sharpened. What sharpening does is move the decision boundary away from the higher-density regions of the data points, because that makes it easier | 582 | 615 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=582s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | to give a consistent prediction and also increases the confidence of the predicted label. As you see, in label guessing we only guess labels for the unlabeled data; however, we do augmentations on both labeled and unlabeled data, and the label is guessed based on what is in the labeled data as well as the unlabeled data. But | 615 | 650 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=615s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | transformations are mostly done on the unlabeled data, as many times as required; according to the authors' results, K, the number of augmentations, is set to two, which gave results that outperformed the standard state-of-the-art methods. After this comes the sharpening formula, the equation they use, where they have another tuning | 650 | 685 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=650s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | parameter called T, the temperature used to sharpen the prediction, that is, the label guessed from the average prediction obtained in the previous step. They take the decision boundary away from the dense regions and make the prediction more confident, so that when the loss is calculated on the unlabeled data it does not distort the final prediction. | 685 | 723 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=685s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | After sharpening, what remains is two sets of data: the augmented unlabeled data with the sharpened predictions, and the labeled data, which already has its own labels. What is done next is a MixUp based on a tuning parameter called alpha. What MixUp basically does is: say you have an image of a tiger and an image of a cheetah, and you want to see how much of one image corresponds to | 723 | 761 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=723s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | a tiger and how much to a cheetah. What you do is mix them together, say 80/20, and create a new label saying that the image is now classified as eighty percent tiger and twenty percent cheetah. The mixing proportion is controlled by this alpha value, which is again tuned based on your data set and the number of images collected. I | 761 | 796 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=761s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | forgot to mention the sharpening temperature value that the authors took as a constant for their experiments: it's 0.5. Let me quickly go back and check; I think it's on the next slide. Yes, it's 0.5. So they tuned it to sharpen the prediction, and then they tune the | 796 | 832 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=796s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | alpha parameter to get a proper MixUp of the labeled and unlabeled data, forming one big mixed data set that contains both unlabeled and labeled examples. We have been talking a lot about labeled and unlabeled data; this is the most important aspect of a semi-supervised learning method, where your training set comprises labeled as well as unlabeled data. | 832 | 867 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=832s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | The reason is that it increases the chances of getting more labeled data, which in turn is fed to your model, giving a larger effective training set than was initially available. All this is done to ensure that your training set grows beyond what you had earlier. The last step in the algorithm is calculating the loss. Because we have a labeled set, the first | 867 | 897 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=867s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | loss that gets calculated is the cross-entropy loss for the supervised part, and the second, as Sapna mentioned, is the L2 loss for the unlabeled part. I will stop here and ask if there are any questions. Okay, I'll go forward. This slide explains what I just talked about, but in a more direct way, along with everything they have tuned. The key factors to keep in mind for | 897 | 936 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=897s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | this algorithm to perform better or worse are these hyperparameters: the number of augmentations K, the sharpening temperature T, the MixUp parameter alpha, and the weight of the unsupervised consistency loss lambda, which is one hundred in this case. The authors performed two sets of experiments. One was a normal semi-supervised | 936 | 973 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=936s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | experimental setup where they compared it to state-of-the-art semi-supervised learning models, and the other was an ablation in which they cut out the additional transformations, the sharpening, or any of the extra components they had put in to reach the reported accuracy, to show which bit of the algorithm contributed most | 973 | 1,009 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=973s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | to the desired accuracy at the end. For their semi-supervised experimental setup they used a Wide ResNet model with 28 layers and a growth factor of two. The data sets were standard ones: CIFAR-10, CIFAR-100, SVHN (the Street View House Numbers), and STL-10. The models to compare against were Mean Teacher, which, as Sapna explained | 1,009 | 1,035 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1009s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | in her brief introduction, is a semi-supervised method whose targets are based on an exponential moving average; Virtual Adversarial Training, again a semi-supervised method; and pseudo-labeling, another standard semi-supervised method. You see from the results that the first data set used was CIFAR- | 1,035 | 1,069 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1035s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | 10. A supervised baseline trained on all 50,000 labeled examples gave the quoted accuracy, but in comparison MixMatch used just 250 labels to reach the desired output. Similarly, they did it with SVHN: 73,257 examples with no unlabeled data at all for the supervised baseline, while MixMatch again reached the quoted accuracy with only 250 labels. What I mean by 250 | 1,069 | 1,110 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1069s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | labels and 73,257 examples is that the entire data set was used for training in the supervised experiment, whereas out of the 73,257, only 250 labels were used, with the remainder treated as unlabeled data, to get the performance shown in the results. And this is the ablation study setup I mentioned, where they talk about | 1,110 | 1,150 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1110s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | adding or removing the components they used, to show exactly which step in the algorithm performed better than the rest. The first row is the full MixMatch, which gave an error rate of 11% for 250 labels and an error rate of 6%, which is quite good, for 4,000 labels. They removed the | 1,150 | 1,180 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1150s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | distribution averaging for the guessed labels and set the number of transformations to one, which gave a 17% error rate; with no sharpening it gave 27%. They also tried MixUp on labeled data only and on unlabeled data only, and you see from this that MixUp is what benefits the performance of the model most. Going to the end, concluding what we have understood | 1,180 | 1,221 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1180s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | from the paper, and the basic purpose of what the authors wanted to showcase with all the components they used in creating the algorithm: it is seen that the components they used, the augmentations, the sharpening, the MixUp, and the losses they calculated, have all contributed quite well to the performance of the model, | 1,221 | 1,248 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1221s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | but out of those, as you saw in the previous slide, MixUp seems to be the most important contributing factor. MixUp again has a tuning hyperparameter, and if you tune it further you might get better results; if you don't tune it as much, you might get worse results. These were the positives we could see in the paper; however, | 1,248 | 1,280 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1248s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | there is a bit of a negative that comes out of this algorithm: the time and cost needed to generate all the transformations and the MixUp, and to set up a neural network on top, is considerable, and it also requires additional GPUs, which could perhaps be seen as extra overhead. But it still does perform a lot better. Also, | 1,280 | 1,317 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1280s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | it needs a lot less data to start training, in terms of having an expert come in and sign off on the training set, saying yes, this is an A label, this is a B label. So this method does showcase that it is one of the better ways to go forward in the semi-supervised space. For further reading, I have listed a couple of papers to go through to understand | 1,317 | 1,352 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1317s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | where the thinking behind this model came from. And I think this is the last slide; if you have any questions, I would be glad to answer them. Did you try to use the implementation that they provided online? Yes, I did try it; I tried it on our wound data set, and I could share my code base with you to give it a try. So the results were | 1,352 | 1,393 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1352s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | really good. Did it improve? Well, we don't have enough data in the wound data sets, so I would say it performed well; I'm not really sure whether it will perform better, because the data was quite limited, but I do expect it will do a better job. Gotcha, awesome, thanks. Any further questions? I don't have a question, but I was wondering if you could... Sure. | 1,393 | 1,452 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1393s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | um the network that you tried to run it would won't that that's one type you use or was it which category of one did you use all of them it's a semi-supervised learning space so labels are given for all and then i took in half of that as an unlabeled set and not have labels for it okay could you maybe share that on the channel yes appreciate it sure thank you | 1,452 | 1,494 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1452s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | so yeah i have a question uh so what are the other like i'm not quite like uh quite familiar with semi-supervised vanny algorithms so what is the like what's the state of the art between before this algorithm what what what did they compare their girlfriends to in this paper um so he didn't quite understand your question so so before so when they introduce these | 1,494 | 1,522 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1494s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | algorithms so they have they should compare it to something that how were things done before this uh mix of algorithms for semi-supervised so they did us uh if you see in here um uh they've used in a supervised method wherein they used all the 50 000 examples that were labeled so they needed labels for 50 000 items or entities and they had to train based on that but with the mix match coming in | 1,522 | 1,558 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1522s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | they reduced the number of training set or labeled set to 250. if you know what i mean so so let me go back and i'll explain a bit of a semi-supervised learning space so semi-supervised learning space needs both a bit of a label set which is a trained set and a lot of unlabeled data set because what it does it takes a little bit of the label set and a bit of | 1,558 | 1,587 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1558s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | unlabeled set and forms in a mix of labeled and unlabeled and the reason being that it tries and labels all the unlabeled um examples that it has in this mix so that the end training sample has got more no i i know i know i know like the idea behind but i i i'm not sure what i was just asking what was the state of the art before like before me this mix of algorithms | 1,587 | 1,624 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1587s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | for semi-supervised it was a simple semi-supervised approach wherein you have iterations where you take in from your unlabeled set perform some basic transformations or have a have some sharpening done for your confidence uh of your label set having say uh prediction probability or maybe have co-training methods that have two classifiers learning on the same view etc and | 1,624 | 1,654 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1624s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | then give your prediction so mix match as such introduces all of these methods together so they they didn't stop at just using simple uh transformations or simple decision boundary making algorithms or a mixed match they this is a combination of all three together so state of art didn't have this the novelty of this paper is mixing all of these uh semi-supervised learning approaches | 1,654 | 1,689 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1654s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | together in one to come up with uh their holistic approach so that's why the name okay so basically though like uh those uh pseudo label and v80 and uh was it these are also semi supervised approaches right yes okay uh i have a question so suppose we take three examples right extending what you said cheetahs tigers and we add a third one leopards um and uh so cheetah and tiger are in | 1,689 | 1,737 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1689s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | the 250 labels that are used and leopards are in the 4000. does that mean all the cheetahs and all the tigers in your data set are labeled or are some of them unlabeled it could be some of them unlabeled or they could be some of them labeled and it can be both it all depends on the percentage of the unlabeled data that you thank you hi i'm curious about one thing you say | 1,737 | 1,785 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1737s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | that you tested with the wound data set uh did you try perform it against fast ai platform yes yes with first ai yeah and how how your i mean how is their performance with this method compared to plasma eye for the training uh set i did get in about a 64 64 to 69 percent uh accuracy on the training set initially and with this it did bump up to about 70 but i wasn't sure whether it was | 1,785 | 1,822 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1785s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | uh the right thing that i was doing so that's i'm still playing around with it but uh i know where the where the tuning needs to be done so i should be able to have a better accuracy on that this is on the training set i've not yet gone on to testing the model on the test data they'd probably be higher than that yeah yeah okay thank you me any further questions | 1,822 | 1,900 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1822s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
0O1UXKh-Yck | should we say that's it then okay thank you very much uh if you have any questions um please get back to me i should be glad to answer them and i will share in the link to my notebook i am still a little not worse than not books i write it in uh prehistoric language as an editor and things like that but i will move on to notebooks soon and i'll post it there | 1,900 | 1,939 | https://www.youtube.com/watch?v=0O1UXKh-Yck&t=1900s | MixMatch: A Holistic Approach to Semi-Supervised Learning | |
R5xV9BTYXL4 | and when you're living by the hormones or stress not a time to create no out of time to open your heart not a time to learn sometimes we do a meditation we start opening our heart and start elevating the body's energy and then those emotions can drive certain thoughts of your future you have to understand that if 95% of who you are is a set of unconscious programs then the | 0 | 22 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=0s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | first step is lighting a match in a dark place this will be one of the most powerful videos you watch today right now you're about to discover the three secrets to unlock the power of your mind with dr. Joe Dispenza I hope you enjoy [Music] how do we change our energy and how do we sustain it for extended period of time how long is that time need to be until we really started to see that | 22 | 50 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=22s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | sometimes immediate okay so so our research and we've done in the last six years because we were seeing so many incredible incredible things going on in our workshops I mean people stepping out of wheelchairs and all kinds of crazy things my church look kind of like a mega church but hopefully not that based in science yeah but but but isn't it amazing that some of these churches when | 50 | 74 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=50s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | people get to believe whether they have science backing it or not they just it's the belief when they step out into the unknown step out of your body and who your size right instantaneously instantaneously and we do see that a lot and some people like yeah that's yeah can that be possible how can it be possible we we've done research now I just assembled a team of scientists | 74 | 94 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=74s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | we've done 8,500 brain scans I can tell you I can tell you when a person's about ready to change I can tell you why really don't change I can tell you what it takes to change so what's it take to change well do you change most people keep their attention always their awareness on their body it keeps their attention on everything in their environment with people and things there | 94 | 115 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=94s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | the brain is always scanning everything around us to determine what's known as safe and unsafe right and you know we do that all the time so our research shows that the moment you take your attention off your body and you go from somebody to nobody you take your attention off the people in your life and go from who you identify with from someone to no one and | 115 | 139 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=115s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | so many people spend their whole life building an identity of being someone take your attention off your cell phone your computer your car and go from something to nothing take your attention off where you're sitting where you need to be someplace you have to go go from somewhere to nowhere and take your attention off time linear thinking about the predictable future they're familiar | 139 | 159 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=139s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | past and fall into the generous present moment and go from some time to no time then all you're left with is consciousness and that's the moment are no longer playing by the same rules matter to matter and there's a very elegant moment that takes place in the brain in fact I was just showing my research to a group of researchers and Santa Cruz this past week and they were | 159 | 186 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=159s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | blown away and I said now watch this person this person is going to have a transformational moment they said how do you know I buy I've seen enough of these and the next moment the whole brain just lights up that person is switched on they'll never be the same person again they're having a transcendental moment and we could actually predict it and teach it now it's a formula hmm just | 186 | 206 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=186s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | like you doing sports if it just becomes a formula and then you change the formula and you add to it right so when you no longer are you're identifying with your body your environment and time that's the moment your pure consciousness now you're just an idea you're an awareness awareness awareness that has nothing to do with local space and time and now if you're no longer yond you're | 206 | 230 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=206s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | anything you can go beyond and that's when the brain because the brain doesn't change the brain it takes a long time it takes a long time for the personality to change the personality for the ego to change the ego the programs that change the programs takes forever matter takes a long time to change matter but when you're in this moment you're no longer playing by those rules consciousness is | 230 | 251 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=230s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | the phenomenon above matter in fact consciousness is beginning to activate or manipulate circuits in the brain people just think the brain is creating conscious no consciousness is executing the brain right so then if the brain can change then the mind doesn't change the brain mine is the brain in action is consciousness that changes it so when people begin to disengage and | 251 | 273 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=251s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | get beyond themselves you are at your absolute best when you get beyond yourself and getting the person to that point how does someone get to that point yeah so we teach them that formula we teach them to that point where all of a sudden they reach that generous present moment where they just feel connected and when they're in that place all the things they thought they wanted they | 273 | 295 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=273s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | actually no longer want because they feel like they already have them so then imagine living your life from that place you would be less judgmental you would be less frustrated us impatient reactive and so so the formula then is that it requires a clear intention which is a coherent brain and when you're living stressed out and something goes wrong and you're | 295 | 320 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=295s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | threatened or you can't predict an outcome or you have the perception that something's getting worse or you can't control it you switch on that fight-or-flight nervous system they've talked about now here's what happens when that occurs you start shifting your attention from one person to one problem the one thing to another person to another place because your brain is trying to predict the next | 320 | 339 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=320s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | moment well every one of those people and things in places has a neurological network in your brain so as you shift your attention from one to the next it's like a lightning storm in the clouds your brain starts firing very incoherent lis when your brain is incoherent you're incoherent and when you're living by the hormones of stress not a time to create no not a time to open your heart not a | 339 | 360 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=339s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | time to learn not a time to trust and it's a time to run fight or hide so people spend 70% of their time of their life living in the state Wow so think about it so miserable yes so then when you're under stress if there's if there's a cougar around the corner you're not going to sit down and meditate sit still right but but so I'm a tree you got the survival genes | 360 | 383 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=360s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | switched on and nobody is gonna believe in possibility when you're living in survival right yeah so then when you're living in stress what happens is you narrow your focus on the cause you now your focus on matter the object the thing and so people get switched on and all of their attention is on their outer world when the hormones of stress kick on the body gets an arousal now your | 383 | 406 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=383s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | attention is on the body and of course when you're under stress you're trying to predict the future based on the past and now you're literally enslaved into three-dimensional reality so then how do you get what you want you gotta try harder you force it more you got a war cardio to fight for it it's matter trying to change matters Austin people just burnout right so then we now know | 406 | 425 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=406s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | that when you go from a narrow focus on something and you begin to open your focus you create sense and awareness that the act of opening your focus causes you to stop thinking and if you stop thing can you no longer activate those circuits and you start to slow your brainwaves down hmm as you slow your brainwaves down you start connecting to that autonomic nervous system the thing | 425 | 447 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=425s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | that's giving you life and all of a sudden when you get beyond yourself it says he's gone let's step in and just clean up this mess before he gets back really and its job is to create order and balance your body will start to do that for you the innate intelligence will step right in once you've connect you got to connect so you got to know how to change your brainwaves you can't | 447 | 463 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=447s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | change your brainwaves you stay in an active state you're basically moving furniture around you're analyzing your life within some disturbing emotion and I can tell you after looking at all those brain stands if you're analyzing your life within some disturbing emotion you're going to make your brain worse in fact you are thinking in the past right so you teach people the formula how to | 463 | 483 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=463s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | open their focus change their brainwaves connect to that invisible field and all of a sudden different compartments of the brain start synchronizing the front of the brain starts talking to the back of the brain the right side starts talking to the left side and all of a sudden what sinks in the brain links in the brain all of a sudden you see this person starting to feel more like | 483 | 502 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=483s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | themselves and when you see those two hemispheres the brains start lighting up watch out because that person's gonna feel really hope they're gonna start loving life they're gonna feel like they're gonna be in love with life because the union of polarity and duality is wholeness at the exact same time coherent brain when you're resentful when you're judgmental when | 502 | 522 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=502s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | you're impatient your heart beats out of rhythm why you're stepping on the gas and you're stepping on the brake at the same time your body and its intelligence living in survival is saying t-rex is back there but you're not running because you're sitting across the table looking at somebody smiling and your body's revved up right so the heart is beating and rhythmically and when that | 522 | 542 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=522s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | happens you're you're squandering or you're using all the body's life force and turning it into chemistry right using all that energy to survive as opposed to think beyond right right so you're drawing from your vital life force that invisible field around your body and you're turning into the chemistry you actually are going to shrink your own feel the hormones of | 542 | 560 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=542s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza | |
R5xV9BTYXL4 | stress caused us to be materialists right we when one of the stress were we're using our senses to determine reality so now you feel more like matter unless like energy more separate from possibility so then to teach a person then how to regulate that heart center and we do this we've done 6000 heart scans why because if I can teach you how to get in that heart state and I can | 560 | 584 | https://www.youtube.com/watch?v=R5xV9BTYXL4&t=560s | DO THIS Everyday To Unlock The FULL POWER Of Your Mind! | Joe Dispenza |