video_id | text | start_second | end_second | url | title | thumbnail |
---|---|---|---|---|---|---|
La9oLLoI5Rc | most people when they have a thought they just think that that's the truth and I think one of my greatest Realizations in my own journey was just because you have a thought it doesn't necessarily mean it's true so if you think 60 to 70 thousand thoughts in one day and we do and 90% of those thoughts are the same thoughts as The day before and you believe that your thoughts have something to do with your destiny | 1,191 | 1,214 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1191s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | Your life's not gonna change very much, because the same thought leads to the same choice, the same choice leads to the same behavior, the same behavior creates the same experience, and the same experience produces the same emotion. And so then the act of becoming conscious of this process, to begin to become more aware of how you think, how you act and how you feel, is called metacognition | 1,214 | 1,237 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1214s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | and so then why is that important? Because the more conscious you become of those unconscious states of mind and body, the less likely you're gonna go unconscious during the day, and that thought is not gonna slip by your awareness unchecked. It means to know thyself, and the word meditation means to become familiar with. So as you become familiar with the thoughts, the behaviors and the emotions of the old self | 1,237 | 1,265 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1237s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | You're retiring that old self as you fire and wire new thoughts and condition the body into a new emotional state. If you do that enough times, it'll begin to become familiar to you. So it's so important. Just like a garden: if you're planting a garden, you've got to get rid of the weeds, you've got to take the plants from the past year and pull them out. The rocks that shift to the top, that | 1,265 | 1,289 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1265s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | are like our emotional blocks, they have to be removed, that soil has to be tenderized and broken down. We have to make room to plant the new garden. So primarily we learn the most about ourselves and others when we're uncomfortable, because the moment you move into that uncomfortable state, normally a program jumps in. When that program jumps in, it's because the person doesn't want to be in the present moment and engage it consciously | 1,289 | 1,313 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1289s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | So when you teach people how to do that with a meditative process Turns out that when they're in their life They're less likely to emotionally react they're less likely to be so rigid and believe the thoughts they were thinking they're more aware of when they go unconscious back into a habit and that is what starts the process of change and So we have to unlearn | 1,313 | 1,334 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1313s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | Before we relearn, we have to break the habit of the old self before we reinvent a new self. We have to prune synaptic connections and sprout new connections. We have to unfire and unwire, and refire and rewire. We have to unmemorize the body and recondition it to a new mind and a new emotion, like deprogram and reprogram. That's the act, and it's a two-step process. Yeah, I like the way that you call that out as an action | 1,334 | 1,358 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1334s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | There was another thing that you said that I thought was really powerful about how insights themselves are essentially inert. They don't do anything What what then do we do with an insight? How do we take a breakthrough moment and make sure that it's not just a breakthrough moment Like I guarantee people watching right now are having like a hundred aha moments for sure | 1,358 | 1,377 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1358s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | That was definitely the case for me as I was researching you, and when you said that I was like, and that's the danger, that you have the aha and then nothing. Yeah. Yeah, and it is a danger, because then people will shrink back into mediocrity and they'll use the insight to excuse them from taking a leap. They'll say yeah, you know, I have a chemical imbalance in my brain. Yeah, my father was | 1,377 | 1,400 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1377s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | really overbearing, he was a perfectionist, that's why I am the way I am. You know, people come up with stuff to excuse themselves. The insight is actually giving them permission to stay limited, and it's an amazing idea, because they'll say to you that they really want to get over their anxiety. But okay, let's take your ex-husband. Let's put him in a straitjacket | 1,400 | 1,424 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1400s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | Let's duct tape him and shoot him to the moon, you know what I mean. What are you gonna do now? You still have to make those changes. And so then the person's enemy dies, or something shifts in their life and that person's gone. They'll find another person to hate. This is just how we function as human beings. We just slide in another reason to feel those emotions. So I think when people start to understand this, you know, | 1,424 | 1,449 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1424s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | I think knowledge is power, but knowledge about yourself is self-empowerment. So how much of this is really learning to just bifurcate the world into: there's negative emotions that have negative neurochemistry associated with them, and you said that in those states, if you're living in a perpetual state of stress hormones and things like that, illness is like a step away, and | 1,449 | 1,471 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1449s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | then just the other side of that is understanding that there's this whole other side of positive energy, which is happiness, joy, empowerment, whatever that neurochemical cocktail is, but that when you're on that side, your immune system is more likely to function well. Like, is that just sort of bringing it down to a really basal level? Yeah, that's sort of one of the biggies | 1,471 | 1,494 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1471s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | Well, let's talk about it in terms of survival or creation As I said 70% of the time people live in stress and living in stress is living in survival now All organisms in nature can tolerate short-term stress, you know a deer gets chased by a pack of coyotes when it out runs the Coyotes it goes back to grazing and the event is over and The definition of stress is when your brain and body are knocked out of balance out of homeostasis | 1,494 | 1,522 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1494s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | The stress response is what the body innately does to return itself back to order. So you're driving down the road, someone cuts you off, you jam on the brakes, you may give them the finger, and then you settle back down and the event is over, and boom, now everything's back to normal. But what if it's not a predator that's waiting for you outside the cave, but what if it's your coworker? | 1,522 | 1,546 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1522s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | sitting right next to you, and all day long you're turning on those chemicals because they're pushing all your emotional buttons. When you turn on the stress response and you can't turn it off, now you're headed for a disease, because no organism in nature can live in emergency mode for that extended period of time. It's a scientific fact that the hormones of stress downregulate genes and create disease long term. | 1,546 | 1,573 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1546s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | Human beings, because of the size of the neocortex, we can turn on the stress response just by thought alone. We can think about our problems and turn on those chemicals. That means then our thoughts could make us sick. So if it's possible that our thoughts could make us sick, is it possible that our thoughts could make us well? The answer is absolutely yes. So then what are the emotions that are connected to survival? Let's name them | 1,573 | 1,598 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1573s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | anger, aggression, hostility, hatred, competition, fear, anxiety, worry, pain, suffering, guilt, shame, unworthiness, envy, jealousy. Those are all created by the hormones of stress, and psychology calls them normal human states of consciousness. I call those altered states of consciousness. So then we tend to remember those traumatic events more, because in survival, you better be ready if it happens again | 1,598 | 1,628 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1598s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | and when one's survival gene is switched on, you could have ten really great things that happen to you in your day and just one bad thing that happens, and you cannot take your attention off that bad, that unhappy thing, because the survival gene is switched on. It's really interesting. How does epigenetics come into play in all this, like what's actually happening? You've talked pretty profoundly about | 1,628 | 1,652 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1628s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | Proteins and like really at a deep level how we're signalling to our genetics to create these kinds of changes What does that actually look like? Well epigenetics epi means above the gene and Many years ago after the DNA helix was discovered by Watson and Crick They said the blueprints of life, you know, all diseases are created from genes it turns out less than 5% more like 1% of people on the planet are born with a genetic condition like type 1 diabetes or | 1,652 | 1,684 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1652s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | Tay-Sachs disease or sickle cell anemia. The other 95 to 99 percent are created by lifestyle and by choices. You can take two identical twins, exact same genome: one dies at 51, the other one dies at 85. Same genes, different environment. So all of a sudden they said, we lied, that was wrong. It's not genes that create disease, it's the environment that signals the gene that creates disease. Well, okay, but | 1,684 | 1,712 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1684s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | that's not the whole truth either, because you could have two people working side by side in the same factory. One gets cancer after being exposed to a carcinogen for 25 years, both working for 25 years, the other one has no cancer at all. So there must be some internal order that would cause one person to not get it while another one does. So is it possible then, if | 1,712 | 1,736 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1712s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | the environment signals the gene, and it does, and the end product of an experience in the environment is called an emotion, can you signal the gene ahead of the environment by embracing an elevated emotion? We've done the research on this, where we measured 7,500 different gene expressions in a group of people who came to an advanced event for four days, and we had them doing a seated meditation, a walking meditation, a lying-down meditation, a standing meditation, and at the end of four days | 1,736 | 1,767 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1736s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | just four days, the common eight genes that were upregulated: two genes to suppress cancer cells and tumor growth, two genes for neurogenesis, the growth of new neurons in response to novel experiences and learning, the gene that signals stem cells to go to damaged areas and repair them, the gene for oxidative stress was upregulated. We started seeing all these genes that are very, very healthy, that cause the body to flourish | 1,767 | 1,794 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1767s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | Imagine if people were doing that for three months. We also measured telomeres, the little shoestrings on the end of DNA that tell us our biological age. We asked people to do the work, meditation five out of seven days for 60 days, and measure their telomeres that determine their biological age. Sixty days later, seventy-four percent of the people had lengthened their telomeres: 40 percent a | 1,794 | 1,820 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1794s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | significant change, twenty percent a very remarkable change. That means that they got a little bit of their life back: if it lengthened by ten percent, they got 10% of their life back. That's incredible. Before I ask my last question, tell these guys where they can find you online. Sure. My website is just drjoedispenza.com. You can follow us on Facebook, Twitter, Instagram | 1,820 | 1,842 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1820s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | We're all over. And then my final question: what's the impact that you want to have on the world? I think that the end game for me is to empower people to such a degree that they realize that they need less things outside of them to make them happy, less things outside of them to regulate their moods and their behaviors, and that they begin to use the kind of power that we | 1,842 | 1,867 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1842s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | all have access to, to really change the world, to make a difference, so that there's more peace, there's more wholeness, there's more connection, that we support and love each other and we serve better. And I think, for the most part, if everybody's working on themselves and trying to do their best to present the greatest ideal of themselves to the world, I think the world would be a better place. And so | 1,867 | 1,896 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1867s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | That's my passion, and I'm witnessing it happening now more than I ever thought I would. That was incredible, Joe. Thank you so much for being here. Amazing having you. Guys, go watch this man's videos. They are some of the best explanations of what's going on inside the mind that I've ever come across. There were several that I literally have people in my life that I'm going to force to sit down and watch these things | 1,896 | 1,922 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1896s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | It's just incredible explanations of how you create yourself out of the things you do Habitually the way that you think creates a feeling the way that you feel creates thinking that matches that and then you get in this cycle and that coming down to that personality ultimately being a finite set of patterns in your brain I think is really really illuminating in terms of how we actually experience the world and | 1,922 | 1,946 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1922s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | I think when people understand that that it's within your control that you don't have to believe every thought that you think that you can step outside of that that you can leverage metacognition to think about your thinking and Deconstruct and decide what you want to think about and start focusing on that and create an entirely different version of yourself that has new | 1,946 | 1,963 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1946s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
La9oLLoI5Rc | Elevated feelings that's over on the side of the positivity empowering yourself I think it's really incredible and he gets deep into the mechanistic stuff Which I love you guys will not regret diving deep into this man's world. I think you will get some incredible revelations All right, if you haven't already be sure to subscribe and until next time my friends be legendary. Take care | 1,963 | 1,983 | https://www.youtube.com/watch?v=La9oLLoI5Rc&t=1963s | How To BRAINWASH Yourself For Success & Destroy NEGATIVE THOUGHTS! | Dr. Joe Dispenza | |
q7PjrmGNx5A | hi there, today we'll look at Self-training with Noisy Student improves ImageNet classification by Qizhe Xie, Minh-Thang Luong, Eduard Hovy and Quoc V. Le. So this paper takes an ImageNet classifier that's been trained on the ImageNet dataset and uses that classifier as a teacher model to label a whole bunch of unlabeled images, and then it trains a student model that | 0 | 27 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=0s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | is larger than the original teacher model on those teacher labeled images and that turns out to improve the classification on the imagenet validation set now that there is a couple of things that make this all work and today we're going to explore how this paper does it and what they say is important if you enjoy content like this as always don't hesitate to share it out or tell | 27 | 56 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=27s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | your friends about it and if you're not subscribed yet then do so um i would appreciate that and you'll get more content so win-win so this this paper is about semi-supervised learning in um in effect so it's at the intersection actually of semi-supervised learning knowledge distillation and transfer learning so what do we mean by semi-supervised learning usually in | 56 | 83 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=56s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | supervised learning you'll have some sort of data set and the data set will contain let's say it's an image net it's image data set so the data set will contain images this is an image with like some sort of cat on it and it will contain the labels according to that so cat now in semi-supervised learning you you assume that so this is supervised learning in semi-supervised | 83 | 111 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=83s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | learning you assume that only part of your data set has the labels so like only this part down here has the labels and the upper part does not have the labels so that's semi-supervised learning it's often the case when it's very expensive to get labels so you can only get labels for a couple of images in your data set but very often in semi-supervised learning you still | 111 | 135 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=111s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | assume it's the same data set there is a slightly different setup here that's called transfer learning so in transfer learning what you'll have is you'll have your data set that has the labels but it's very small so you'll notice i've drawn it smaller that means you have very little that is also the case when it's very expensive to get labels but also it's expensive to get the data | 135 | 159 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=135s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | itself this is often the case like say in medical data where not only is it expensive to get labels for like a ct scan it's actually expensive to get the ct scan so what the goal in transfer learning is is to say well i do i do have only this small data set but i do have this giant other data set over here now can't i it's not the same it's maybe they're not ct so these are ct | 159 | 188 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=159s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | scans maybe these are x-rays right they're fairly similar similar technology um if you slice the ct it will give you sort of an x-ray can i you know train my model pre-train my model on x-ray data and then fine-tune it on the ct data so that's called uh transfer learning usually now this can be done with or without labels so it can be that for the x-ray data set you do have the | 188 | 218 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=188s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | labels or you don't have the labels there are techniques for all of those now what we're going to look at today is kind of this situation right here it's the transfer learning situation where you do not have the labels for this x-ray data set but other than in this x-ray example what we're going to look at is the small data set is going to be our imagenet database so our original | 218 | 248 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=218s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | picture with label database so you'll see immediately the difference here is that in the transfer learning setting we usually assume that the data set we want to train on is fairly small here you know imagenet is already sizeable but what we have is we have a much larger database of unlabeled images that we can just get from the internet so we can scrape the internet for any | 248 | 276 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=248s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | kind of pictures and that will be our unlabeled data set and what we'll try to do is somehow incorporate this unlabeled data set here into the training process to get better on the imagenet data set okay so this is the the problem statement is you have the imagenet dataset and you have a second much larger data set of unlabeled images and you somehow want to make use of them | 276 | 299 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=276s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
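At this point the transcript has already outlined the full recipe (train a teacher on the labeled set, pseudo-label the large unlabeled pool with it, then train a larger noised student). As a rough sketch of that setup, here is a minimal PyTorch-style version; the function names, data loaders and the use of soft labels are illustrative assumptions, not the paper's exact code.

```python
# Minimal sketch of the teacher -> pseudo-label -> student recipe described in the
# transcript. Models, loaders and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def train_teacher(teacher, labeled_loader, optimizer, epochs=1):
    """Step 1: ordinary supervised training on the labeled (ImageNet-style) set."""
    teacher.train()
    for _ in range(epochs):
        for images, labels in labeled_loader:
            loss = F.cross_entropy(teacher(images), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

@torch.no_grad()
def pseudo_label(teacher, unlabeled_loader):
    """Step 2: the trained teacher labels the unlabeled images (no noise here)."""
    teacher.eval()
    labeled_pairs = []
    for images in unlabeled_loader:
        probs = F.softmax(teacher(images), dim=-1)   # teacher's predicted distribution
        labeled_pairs.append((images, probs))
    return labeled_pairs

# Step 3: train a larger student on labeled + pseudo-labeled data with noise
#         (augmentation, dropout, stochastic depth), sketched further below.
# Step 4: optionally reuse the student as the new teacher and repeat.
```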
q7PjrmGNx5A | so i hope you see how this is sort of connected to the others it's essentially sort of a transfer semi-supervised learning setting but with the exception that usually in transfer learning you assume that the the labeled data set is like super small which is not the case here and that's going to result in us being able to apply a different technique so this different technique is | 299 | 323 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=299s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | called the noisy student now usually what you might do in a transfer learning setting is you might want to start with that big data set right because that's the data set that's sizeable enough to allow you to train a really big model on it and then you fine tune and you you sort of hope that the information transfers over here on the other hand what we want to | 323 | 345 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=323s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | do is we start with the imagenet data set so first we train this in a supervised learning fashion into our model now this model is going to be called the teacher model we know how to do this we know to train imagenet models right so we can train this into a teacher model that has a reasonable accuracy on the imagenet data set step two we're going to take that big | 345 | 372 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=345s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | data set over here and use the teacher model to label the unlabeled images so for each image for each image coming in here the teacher so maybe this is again another cat the teacher will say that's a cat okay so that gives you the big data set where now you have images along with labels just the labels aren't true labels they're generated by the teacher and then in the third step you train | 372 | 408 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=372s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | this big data set you train on this big data set and that's what you call your student model and then the student model in this paper will see how can we make it such that the student is then better at the original imagenet task than the teacher ever was which seems counterintuitive at first because all of the information that the student is trained from is basically what the teacher already | 408 | 435 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=408s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | knows right all the labels here come from the teacher therefore the student shouldn't be able to outperform the teacher but in this case the student will be able to outperform the teacher and their argument here is that this is mainly due to the fact that you use noise in this training procedure so when you train the student what you'll do is you'll use noise and one of the types of noise is | 435 | 463 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=435s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | that you severely augment this data right here in order to train the student now we've known for a long time that data augmentation for example in the frameworks of self-supervised learning and so on can have a very large benefit to training and here the fact that we incorporate this at extra data and we use noise and augmentations on it is going to result in a student that can | 463 | 489 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=463s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | sort of learn more about the data than the teacher did. Okay, this is basically it, and as you can see, this is kind of their main final result, where they say on ImageNet our top-1 accuracy sort of increases right here, and even on these kind of subsets of ImageNet, or these are sort of corrupted sets of ImageNet, they make even more substantial improvements, as you can see | 489 | 522 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=489s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | here now we'll go into what these corrupted subsets are but you know just for now these here are very difficult variants of imagenet they can be severely corrupted or or distorted and so on and you can see that the model improves severely over the previous state of the art which basically means that this model is more robust and that's a direct consequence of the noise | 522 | 548 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=522s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | now one last thing i should say is that the student here is also larger than the teacher so that's also one thing that makes the student better so what you will make is the student model is larger than the teacher model as a model as the architecture so in combination with the noise right here with the noise in combination that means the student model is probably able to | 548 | 575 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=548s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | capture more of the variance of the data: it's larger, it has more parameters, it can learn more about the data. Together with the noise, it can probably be more robust, and that's what makes it generalize better, and we'll also see, as we see here, it's more robust to these transformations and it's also going to be more robust to adversarial perturbations. So the technique again is | 575 | 601 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=575s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | illustrated here as as we said it's pretty simple first so step one step one train the teacher model with label data as you would step two you infer the pseudo labels on unlabeled data step three you make a student you make sorry with step three over here train an equal or larger student model with combined data and noise injected so they don't they use the original label data here | 601 | 636 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=601s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | and the pseudo-labeled data right here in order to train the student but still this the student doesn't have more information more label information than the teacher had it simply has this teacher labeled teacher labeled unlabeled data also to train on now the crucial part here is well first of all that the student can be larger and second of all that there can be | 636 | 661 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=636s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | noise and the noise comes in three different forms so first of all you use data augmentation which we've already seen this is sort of like random cropping or mild rotations color jitter whatever they use a rand augment here which is a specific technique to apply these augmentations um they use dropout which is a fairly old technique where you in the student model that you train | 661 | 686 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=661s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | you randomly drop out connections which makes it more robust and more generalizing and then you also use stochastic depth now stochastic depth is a technique when you train a model what you'll do during training instead of always passing your data forward through the layers like this you use some sort of a drop out but with entire layers so what you'll do is you'll pass your data forward and | 686 | 712 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=686s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | then randomly you'll skip a layer and then pass it forward again now these these might seem weird first because uh yeah it might seem weird but in if you know that most models especially computer vision models nowadays are residual networks which means that their layers look like so you have the input you have some computation and then you have the output and then there is | 712 | 739 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=712s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | already a residual connection that basically adds the original signal together to the result of the computation so all you do in this stochastic layer dropout or this stochastic depth right here is you basically disable use you disable this connection right here and all the signal has to flow through here if you read the residual the resnet original resnet paper they make it | 739 | 766 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=739s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | pretty clear why the residual connection is a good idea basically they say these computations here they if you have a very deep network each layer only has to basically um do very a little bit of computation that that can be bypassed uh fairly efficiently for a lot of data points so it's not that hurtful to bypass a layer and in this case they actually use it to | 766 | 792 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=766s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | just bypass some of these small computations and inject some more robustness into the student model so with these three strategies to bring noise into the training process one is on the data and two is on the student model itself they train the student model and then fourth and this is what we didn't have before fourth or maybe we put four here make the student a new teacher so now | 792 | 821 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=792s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | you can iterate you can use the student model that you just trained to again label the unlabeled data and then you can use another student model again under the influence of noise to train from that student model and so on and you can go on and they do up to like three iterations of this where they always take the new the student as the new teacher and then use a new student model to train from | 821 | 849 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=821s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | that teacher and they get better and better as they do this of course there's like a diminishing returns but it's pretty impressive that this even works right the new students in fact aren't even larger than the old students it's just that the students are larger than the original teacher model in most of these cases so here's the algorithm written down you'll require labeled | 849 | 874 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=849s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | images right here and unlabeled images which are the ones with the tilde so first you learn the teacher model which minimizes the cross entropy on labeled images this we already know this right this is the label this is the image according to the label and you train the teacher model which is this thing here and you can see here noised so already in the teacher training process | 874 | 900 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=874s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | you want to introduce this noise you want to introduce these data augmentations these are as i said these are standard techniques to make models more robust and therefore more generalizable yeah we know from these from these self-supervised papers that these augmentations are very powerful and the way you design them basically if you one of these augmentations is a random crop which | 900 | 925 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=900s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | means if you have an image you randomly crop out like part of that image and then that's your training sample and not the entire thing so by doing this you basically teaching the model to ignore the exact location and scale of things on an image and you can do this because you as a human know that you know i can zoom in i can zoom out into something and it won't | 925 | 948 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=925s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | change what's on the picture and so that's you use these augmentations to kind of heuristically tell the model what it should be invariant to and that is that is a very powerful technique uh to regularize basically to to robustify these deep deep methods and this is used the same here so already in the teacher model we train with these noise and then step two use a normal i.e not noise | 948 | 979 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=948s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | teacher model to generate soft or hard pseudo labels for the clean i.e not distorted unlabeled images and this is important they stress this here that when you when you label the unlabeled images you want to use the model that is without the noise and you do it on the not distorted unlabeled images so when you infer the labels it's very important that you have | 979 | 1,004 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=979s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | clean accurate labels without any sort of noise in them so label noise is not something that they have found to help in this case so not label noise on the teacher that is so you can see right here on the unlabeled images we'll use that teacher model without the noise um to infer the labels now they say these can be hard model hard labels or soft labels so what does that mean if we generate | 1,004 | 1,032 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1004s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | hard pseudo labels that means that the y here is simply going to be either 0 or 1 or 2 or 3 and so on so just the index of the class whichever class is most likely that's going to be our label this is exactly how the supervised data sets come right so this is what you'll think first when you see that however soft pseudo labels means that the y will be a distribution | 1,032 | 1,057 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1032s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | so instead of being of class 0, it will be sort of, let's say, 90 percent class zero but also five percent class one and five percent class two, right, so you'll output the distribution instead of just the label, and they have found that the soft pseudo labels work slightly better than the hard pseudo labels, okay. So they use the soft pseudo labels here. | 1,057 | 1,094 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1057s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
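To make the hard-versus-soft distinction concrete, here is a small illustrative sketch in PyTorch; the random tensors simply stand in for real teacher and student outputs.

```python
# Hard vs. soft pseudo labels from teacher logits (illustrative sketch only).
import torch
import torch.nn.functional as F

teacher_logits = torch.randn(4, 1000)              # batch of 4 images, 1000 classes

hard_labels = teacher_logits.argmax(dim=-1)        # one class index per image
soft_labels = F.softmax(teacher_logits, dim=-1)    # full distribution per image

student_logits = torch.randn(4, 1000, requires_grad=True)

# Hard labels: standard cross entropy against the class index.
hard_loss = F.cross_entropy(student_logits, hard_labels)

# Soft labels: cross entropy against the teacher's distribution, so the student
# is pushed toward e.g. 90% class 0, 5% class 1, 5% class 2 rather than a single class.
soft_loss = -(soft_labels * F.log_softmax(student_logits, dim=-1)).sum(dim=-1).mean()
```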
q7PjrmGNx5A | They use them because they work slightly better, but you can do it with hard or soft labels; the important thing is that you use the teacher to generate as accurate as possible labels for your unlabeled data. Then third, we've already seen this, learn an equal or larger student model which minimizes the cross-entropy loss on labeled images and unlabeled images, with noise added to the student model. So | 1,094 | 1,120 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1094s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | as you can see labeled images and unlabeled images so we're in this semi semi supervised learning setting right now you take in both together with noise and noise here is in bold which means they stress it again this is important so you can see that the loss is composed of two different things these are the true images of your original model and you use that | 1,120 | 1,148 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1120s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | and this means you noise the student model and that noise can be on the data or in the model itself and here also the unlabeled images that you have labeled with the teacher you do the exact same thing so you train on both of these data sets and step four is if you want to do iterative training use the student as a teacher and go back to step two now they have uh some more tricks when | 1,148 | 1,175 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1148s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | they do this iterative training they are also up the batch size during the iterative training and so on so they do a lot of things to make the student learn something more something better than the teacher and i think this the whole paper it doesn't it doesn't state it explicitly but i think the whole paper everything they do here is to kind of force or allow the student | 1,175 | 1,200 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1175s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | to become better than the teacher by by giving more noise by making the student larger by making the batch size for the student larger and so on so you you want to sort of inject as much invariance as you can and that will make the student learn more so they say here noising student when the student is deliberately noised in its it is trained to be consistent | 1,200 | 1,230 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1200s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | to the teacher that is not noised when it generates the pseudo labels in our experiments we use two types of noise input noise and model noise all right first data augmentation is an important noising method in noisy student training because it forces the student to ensure prediction consistency across augmented versions of an image specifically in our method the teacher | 1,230 | 1,258 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1230s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | produces high quality pseudo labels by reading in clean images while the student is required to produce to reproduce those labels with augmented images as an input second when dropout and stochastic depth function are used as noise the teacher behaves like an ensemble at inference time when it generates pseudo labels whereas the student behaves like a single model | 1,258 | 1,284 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1258s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
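The three noise sources discussed so far (input augmentation on the data, plus dropout and stochastic depth inside the student) might look roughly like the sketch below; the block structure, survival probability and choice of augmentations are assumptions for illustration, not the paper's exact EfficientNet/RandAugment setup.

```python
# Illustrative sketch of the noise applied only to the student, not the teacher.
import torch
import torch.nn as nn
from torchvision import transforms

# 1) Input noise: augment the student's training images (the paper uses RandAugment;
#    this is a simpler stand-in).
student_augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.ToTensor(),
])

# 2) + 3) Model noise: dropout inside a residual block, plus stochastic depth,
#    i.e. the whole computation branch is sometimes skipped during training.
class NoisyResidualBlock(nn.Module):
    def __init__(self, channels, survival_prob=0.8, dropout_p=0.2):
        super().__init__()
        self.survival_prob = survival_prob
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Dropout(dropout_p),                       # dropout noise
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        if self.training:
            if torch.rand(()) > self.survival_prob:
                return x                                 # skip the branch: the signal flows
                                                         # only through the residual path
            return x + self.branch(x) / self.survival_prob  # rescale to keep expectations
        return x + self.branch(x)                        # at inference: plain residual block
```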
q7PjrmGNx5A | in other words the student is forced to mimic a more powerful ensemble model we present an ablation study so this it's a bit weird what they say here um don't be confused you use the dropout and the stochastic depth on the student model and they they say here if you do this the teacher behaves like an ensemble at inference time whereas the student behaves like a | 1,284 | 1,310 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1284s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | single model, and yeah, it's a bit of a weird formulation but it's true: the teacher will produce the same label for different pathways through the student if you use dropout and kind of stochastic depth, and therefore the student is kind of required to approximate that; each forward pass takes a different path through the layers, through | 1,310 | 1,335 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1310s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | the connections with dropout and it's forced to approximate that teacher label with all of these um different things so you see that you you put in a lot a lot of techniques so they have even other techniques um there is one additional trick and it's not and it's not one actually they have so many tricks and if you look at their experimental setup that it's crazy | 1,335 | 1,360 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1335s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | like they describe exactly we reduce the learning rate like this and the batch size like this and so on so to get state of the art on imagenet it's not enough to just have a good idea of a new thing to do what you you have to have the good idea and then execute it almost like really well um because you have to regard all of these additional tricks that people have figured out over the | 1,360 | 1,385 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1360s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | years in any case they say it works better with an additional trick data filtering and balancing specifically we filter images that the teacher model has low confidence on since they are usually out of domain images so that goes to a point where if you see we have this imagenet label data set right and we have the larger data set now the larger dataset simply contains | 1,385 | 1,410 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1385s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | images and there is no guarantee that the images are actually of the classes that we have in the imagenet data set right here we have a thousand classes here there's no guarantee that these images fit into any of those classes yet we still ask the teacher model to put them in some of these classes now you can filter out part of those images um if you can look at the teacher model and you look | 1,410 | 1,438 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1410s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | at its confidence so when it outputs a distribution if if there's just two labels let's say if it outputs a distribution like this that's wildly different than if it outputs a distribution like this both are class 1 labels but one is much more confident than the other so what you want to do is you want to filter out these low confidence labels because you know the model isn't | 1,438 | 1,462 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1438s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | really sure but it has to assign a class but that's usually an indication that it is an out of domain image so if they filter this it works better and then also to ensure that the distribution of the unlabeled images match that of the training set we also need to balance the number of unlabeled images for each class as all classes in imagenet have a similar number of labeled images | 1,462 | 1,487 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1462s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
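A rough sketch of the filtering and balancing step described here (and detailed in the next segment): drop pseudo-labeled images the teacher is not confident about, then duplicate or truncate each class so the pseudo-labeled set roughly matches the balanced class distribution of the labeled set. The confidence threshold and per-class count are illustrative assumptions.

```python
# Illustrative sketch of confidence filtering and per-class balancing of
# teacher-labeled images; the threshold and target count are assumed values.
import random
from collections import defaultdict

def filter_and_balance(pseudo_examples, min_confidence=0.3, per_class=1300):
    """pseudo_examples: list of (image, predicted_class, confidence) triples."""
    # 1) Remove low-confidence predictions, which are likely out-of-domain images.
    kept = [ex for ex in pseudo_examples if ex[2] >= min_confidence]

    # 2) Group the remaining examples by the teacher's predicted class.
    by_class = defaultdict(list)
    for image, cls, conf in kept:
        by_class[cls].append((image, cls, conf))

    # 3) Balance: keep the most confident examples if a class has too many,
    #    duplicate examples if it has too few.
    balanced = []
    for cls, examples in by_class.items():
        examples.sort(key=lambda ex: ex[2], reverse=True)
        if len(examples) >= per_class:
            balanced.extend(examples[:per_class])
        else:
            while len(examples) < per_class:
                examples.append(random.choice(examples))
            balanced.extend(examples)
    return balanced
```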
q7PjrmGNx5A | for this purpose we duplicate images in classes where there are not enough images for classes where we have too many images we take the images with the highest confidence okay so this is just another technique this has basically nothing to do with their core idea but this is just another thing uh where they say okay we can treat this big uh thing that we scrape from | 1,487 | 1,512 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1487s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | the internet, you know, we can somehow filter and balance it smartly and that will work even better. All right, so let's go into the experiments. So what they do, I think, where is the graphic, what they do is they take an ImageNet, sorry, they take an EfficientNet right here, and they first train an EfficientNet, a smaller EfficientNet, as we said | 1,512 | 1,548 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1512s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | to be the teacher, and then they train a larger EfficientNet for the student. The best model in our experiments is a result of three iterations of putting back the student as a new teacher. We first train an EfficientNet-B7 on ImageNet as the teacher model. So you can see in the table right here what the B7 achieves; the EfficientNet-B7 here, you can see, has 66 million parameters, which is | 1,548 | 1,579 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1548s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | fairly small compared to these other kind of previous state-of-the-art methods on ImageNet, right. So they first train this, and that will achieve something like an 85 percent accuracy. Now if you just train a larger model, this EfficientNet-L2 right here, that has, you can see, 480 million parameters, so a lot more parameters, but you just train it on the same data set, on ImageNet, you | 1,579 | 1,604 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1579s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | will get a 0.5 improvement and you can see that here with noisy student training with the exact same model so it has the same amount of parameters you'll actually get an 88.4 so i like a more than a three percent improvement and that's with the same model just with this different training procedure and inputting these 300 million unlabeled images that you have laying around but the | 1,604 | 1,633 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1604s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | all the information about all the label information comes from the imagenet dataset and comes from this efficientnetb7 teacher model so that's basically you can it's a testament that out of this out of this 85 you can make this 88 uh just by smartly using the information that the model that this model has learned about the data and transferring it to new data so they train | 1,633 | 1,663 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1633s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | an EfficientNet-B7, that's the small model, as a teacher model. Then, by using the B7 model as the teacher, we trained an EfficientNet-L2 model with the unlabeled batch size set to 14 times the labeled batch size, and they stress that it's important that you up the batch size, that's another thing that makes the student learn more than the teacher. Then we trained a new EfficientNet, so | 1,663 | 1,690 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1663s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | by the way these this 14 times it's also it can be done because now you have more data right so you can also up the batch size then we trained a new efficient net l2 model with the efficient net l2 model as the teacher lastly we iterated again and used an unlabeled batch size of 28 times the label batch size the detailed result of the three iterations and so on okay so you can see | 1,690 | 1,715 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1690s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | that it's a fairly complicated procedure, but you can gain and gain and gain by simply upping the, or iterating on this procedure, and I think they have it somewhere here. Yes, so as you can see, in iteration one you train the EfficientNet-L2: you start with the B7 and you train the EfficientNet-L2 with a batch size 14 times larger, and | 1,715 | 1,744 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1715s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | you gain significantly, right, this gains about two percent over the original EfficientNet. Then you iterate again with the same batch size and you get like a 0.5 improvement, and you iterate again with an even larger batch size and you get a 0.3 improvement. So there's diminishing returns, but still you can see that, you know, with the introduction of noise, with the | 1,744 | 1,768 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1744s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | introduction of the larger model with the introduction of the larger batch size these are all things that help the student basically become better than the teacher all right so they do a bunch of other experiments so their main comparison is right here where they say look if we if even if we train the same model with this noisy student training we can make you know pretty large gains over the | 1,768 | 1,798 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1768s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | model over the same model where we do not train it with this noisy student training so this really seems to help you know due to the noise due to the additional data they do a lot of ablation studies so that's pretty interesting and they also do these studies on this special imagenet data set for example imagenet c you can see that there are quite a bit of distortions right here i don't even see | 1,798 | 1,825 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1798s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) | |
q7PjrmGNx5A | if you can see it on this video but this is a swing so the swing right here is like something like this but you almost can't see it and you see that the bold on the left is always the prediction of their model while the thing on the right is the prediction of the original model so this model they claim is significantly more robust to these kinds of perturbations | 1,825 | 1,851 | https://www.youtube.com/watch?v=q7PjrmGNx5A&t=1825s | Self-training with Noisy Student improves ImageNet classification (Paper Explained) |
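Pulling the earlier pieces together, one training step of the noised student on a labeled batch plus a much larger teacher-labeled batch could look like the sketch below. The shapes, the augmentation function and the simple sum of the two loss terms are assumptions for illustration; as the transcript notes, the full schedule then repeats this, reusing each student as the next teacher, with the unlabeled batch 14x and later 28x the size of the labeled batch.

```python
# Illustrative sketch of one combined training step for the noised student.
import torch
import torch.nn.functional as F

def student_step(student, optimizer, augment,
                 labeled_images, labels,              # ground-truth labeled batch
                 unlabeled_images, soft_targets):     # teacher-labeled batch (e.g. 14x larger)
    student.train()                                   # enables dropout / stochastic depth
    images = torch.cat([augment(labeled_images), augment(unlabeled_images)])
    logits = student(images)

    n = labeled_images.shape[0]
    loss_labeled = F.cross_entropy(logits[:n], labels)                 # true labels
    log_probs = F.log_softmax(logits[n:], dim=-1)
    loss_pseudo = -(soft_targets * log_probs).sum(dim=-1).mean()       # soft pseudo labels

    loss = loss_labeled + loss_pseudo
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```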