video_id | text | start_second | end_second | url | title | thumbnail |
---|---|---|---|---|---|---|
NTz4rJS9BAI | whatever those cases are, right—and so that's the notion of what machine learning people mean by generalization. I actually think generalization is—machine learning is full of words that confuse the listener; they're just designed to confuse you. So generalization: when I say that to you, it sounds like you learn how to throw a baseball and then you can | 181 | 204 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=181s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | throw a softball. That is not what we mean in machine learning at all. In machine learning it means: if you throw a baseball, you know how to throw a baseball—as long as it's a regulation baseball made to regulation weight, you're going to be able to throw that baseball. And that's not at all what normal people think, right? So the idea is that you give me examples from a | 204 | 222 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=204s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | distribution, and I would like to find a good prediction function on these examples. I have some loss function that I believe is a reasonable thing to minimize in expected value, meaning that if some new thing comes from this distribution, I'd like to do well on it. That expectation is not known, so what you do instead is replace it with the sample average—the | 222 | 241 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=222s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | empirical risk—and you minimize this, and the only thing we're allowed to do now is minimize it using one of, like, two algorithms—there are maybe three things we can do. We minimize it that way, and then of course our generalization error is just the difference between these two things. So the question is: if we can compute this one, when is it | 241 | 263 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=241s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | a good or bad proxy for the other one? That's what we mean by generalization—it's just a badly named term, but it's a very interesting coupled problem, and it's kind of the core problem in machine learning. And the core theorem of machine learning is that the population error that we really care about is just equal to the training | 263 | 281 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=263s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | error plus the generalization error. That's the foundation of ten thousand papers—we've all done that at one point in our lives. So we can measure this one, and then we do a lot of thinking about why the other one should be small. I mean, this identity only requires the associative property—there's not much to be done here—though I think we can quibble a | 281 | 308 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=281s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | little bit about derived versus axiomatic. So what can we take away from this? We know that if you have a small training error, the risk itself is really only leaning on the generalization error. So if your training error, for example, is zero, you just hope you somehow know that the generalization error is small, and | 308 | 328 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=308s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | zero training error, we know, does not imply overfitting—even though that's another thing that does tend to get lost in the weeds. For example, a paper I really like, which won the test-of-time award at NeurIPS last year, by Bottou and Bousquet, has this argument saying that you shouldn't run SGD too long, because at some | 328 | 351 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=328s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | point all of these terms should be one over square root of n. But we know that's not really true: just because this one is one over square root of n doesn't mean that one is one over square root of n, and just because this one is zero doesn't mean that one is small. So it's a useful way of thinking about it, but sometimes we overfit to what these papers say. All | 351 | 369 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=351s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | right, so I don't want to say it's never true—it's a useful way of thinking about it. Another way this sometimes gets presented—this is how I learned it, though not everybody presents it this way—is that you decompose the error into three parts; it's the same proof, though. You compare the error of your | 369 | 388 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=369s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | estimator against the error of the best thing in the class of stuff you've been looking at; then you compare the error of the best in the class against the true prediction function, the best population risk minimizer; and then you are stuck with some bias at the end, which is just the irreducible error that you can never get away from. | 388 | 405 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=388s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | Take whichever one you like more—I don't care. I like this one just because it allows me to attack my least favorite figure, which is kind of going to be the core of the talk today. So this is my least favorite figure. It's from Hastie and Tibshirani—I like those guys—but we read way too much into this figure. Somehow the idea here is that we have to | 405 | 426 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=405s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | balance bias and variance in order to get good model complexity. Again, you can do that—you can do that—but this is by no means the only way to generalize, and we know this. I learned this because of deep learning—and to be fair, I learned it before and had forgotten it, and I'm going to talk about that in a second—but I learned it again recently because of deep | 426 | 446 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=426s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | learning: you can just make these models gigantic and they generalize. So, in a very unpopular paper that I know lots of people in the room right now don't like—sorry, maybe just one—anyway, we ran a bunch of experiments on the CIFAR-10 data set, everybody's favorite data set, where we have chickens, frogs, deer, and trucks. So this is a | 446 | 476 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=446s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | 10-class classification problem; it's relatively high dimensional—about three thousand dimensions, because of the pixels—with 50,000 data points. And what we found is that you can either keep the loss nonzero here—the loss is the log loss, which is not the classification accuracy—and work really hard to get your generalization error down, in | 476 | 496 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=476s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | this case the test error, or you could just run it to zero, essentially taking this configuration and turning off all the regularization. And while you see a drop in accuracy, you don't see a gigantic drop: the test error increases, but only by about 5%. Moreover, if you just pick a bigger model, now you're considerably better than the original AlexNet; if you pick an even | 496 | 518 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=496s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | bigger model, you keep going down—Chiyuan's here, and I'm sure he'd have tried something even larger; we just ran out of time. So somehow the regularization parameters turned out to be just knobs that you can tune—you could also tune architectures, you could do lots of things—and keep pushing this error down. And indeed we saw the same thing on Image- | 518 | 540 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=518s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | Net, where here we have chicken, frog, deer, and truck—much clearer, right? I think, at that point, anyway. So we looked at an Inception model inside Google. I really don't like this experiment; I just want to show it to give evidence that this happens on larger data sets too. The reason I don't like it is that, because of the way these things | 540 | 560 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=540s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | were trained inside Google at the time, we were only able to run about six experiments—six runs, I think. One of the more valuable things that's happened to the community since is the DAWNBench benchmark, and using DAWNBench we've been able to run hundreds of experiments, which is much better. But all we were able to do here was toggle flags inside some google- | 560 | 578 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=560s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | 3 models. In particular, what we could do in this case is turn off the L2 regularization and turn off the dropout—sorry, the data augmentation—and you could still get perfect data interpolation. And note that the top-5 accuracy here—sorry, the top-5 error—is only 19.3 percent. If you try to get to something | 578 | 600 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=578s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | with a 19.3 percent error from scratch, it's really hard. So it's significantly worse than what the full Inception model is getting, but again it's not catastrophic—this would have beaten AlexNet the first time around—so it's still good accuracy. [Q:] Didn't we always say that regularization is there to control variance? So what—? [A:] Yeah— | 600 | 636 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=600s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | so, okay, there is talk of implicit regularization. I guess what I would say is that we started to just look at more models: we stopped looking at just these individual models and started downloading more. This actually became a big trend in the research group—it's something that Becca Roelofs started, kind of pushing us in this direction, and | 636 | 655 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=636s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | folks sitting here pushed us even further: in this context, GitHub as an experimental resource—you just pull some models and see what happens. And a bunch of things: so this is the scatter plot. I don't really buy that the number of parameters in a neural network is actually meaningful, but you can see that essentially as you make the number of parameters bigger, the | 655 | 674 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=655s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | models continue to get better. The blue line is just the minimum error, and the red is a scatter plot of models at a particular model size—I'm just taking the minimum error over the reds. Yeah, I mean, I'm not saying they're all right; I'm just saying you could do a pull request—fine, I can call Fernando and get him on this one. This is even better: what you do | 674 | 741 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=674s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | here is just remove models, and now you see the trend—there's no AlexNet or VGG on here, they kind of ruin it, they're just up here somewhere—and something like the curve you really care about is this lower envelope. The same thing kind of happens on ImageNet: you just keep making them bigger. Note that we have a log axis on the x and | 741 | 760 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=741s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | linear on the y, so semi-log. This one I pulled from a paper by some Google folks, because only Google folks would think it would be fun to train something with 600 billion parameters—million, million, sorry—okay, that's reasonable: 600 billion would be stupid, but 600 million, perfectly reasonable. Anyway, that one keeps getting better as it gets bigger. Here's another one—this is from Misha's | 760 | 782 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=760s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | talk yesterday—well, I made it, but it's fine, it's the same plot. We see the double descent: these are now random feature models, where we just keep adding random features and see what happens to the test error, and even though it does go up, it kind of comes back down as you make the models bigger and bigger. [Q:] Are you just interpolating? [A:] This one is just interpolation—I'm not doing—this | 782 | 816 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=782s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | case is minimum Euclidean norm. Although—and this is why I like this plot better—what's interesting is that if you don't do minimum Euclidean norm but instead use ridge regression, so you allow yourself a tunable parameter, now that dip goes away: it just gets better. So this is not minimum-norm, this is ridge regression, and then that bump goes away. | 816 | 840 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=816s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | [Q:] You've picked the best ridge? You pick the best ridge—it's not constant? [A:] I picked the best one for each, and you're going to see why in a second. This is the best one for each number of features, and the best value goes down with every number of features. So, sorry, in this case I'm not taking the best one from here, I'm not taking the best of these two; it | 840 | 869 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=840s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | is just: if I tuned this, what's the best I can do on this one alone? Here is something I stole from Peter—this is from a NeurIPS (actually NIPS at the time) tutorial from 20 years ago, where he was doing boosting. What do you see here? Well, first of all, note the semi-log axis—semi-log x—that's pretty neat. And | 869 | 897 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=869s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | you add parameters—I mean, with boosting you increase your model size with every step—and so the models get bigger and bigger and bigger and the test error keeps going down. So that was interesting: that was small-data machine learning, and it still seems to show the same thing. This is another really interesting one, which I pulled from a CACM article by Bell and | 897 | 919 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=897s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | Koren describing how they won the Netflix Prize, and they saw exactly the same thing: they have different kinds of models, but they kept making their models larger. Again we have semi-log x on the parameters and linear in error, and you see this thing just continue to go down the bigger and bigger you make your model. So I think there is something interesting there. You're seeing | 919 | 941 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=919s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | two things. One: making the model really huge—you know, it has a ton of capacity, and maybe you're controlling it with various kinds of regularization of some form or another, but "just make it bigger and worry about that later" does seem to be a good take-home. And the other thing is that you see significant diminishing returns, right? I mean, if you | 941 | 964 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=941s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | have a log x-axis, that means eventually you have to give up—even Google eventually has to give up—so I'm not sure that this is necessarily how to get at that irreducible error we'd like to get to. No, no, it's not a typo—not a typo: a hundred thousand million. That's like Carl Sagan—it's a hundred thousand million parameters; that's a big deep net. I | 964 | 1,011 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=964s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | didn't make this plot; their axes mean different things, and you have to read the caption, which isn't here—sorry, maybe I should have edited these. Whatever those numbers mean, I don't know—they're in the caption. [Q:] What are those numbers—factors? Are you sure? [Laughter] [A:] Anyway, we could look it up—see the CACM article, it's cool, they did a lot of | 1,011 | 1,056 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1011s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | good work. Let's go back to this one. Look, here's what I would like to say: there are crazy diminishing returns, too. It does seem that making the model bigger, for a fixed holdout set that we fix in time for the history of the universe, does make that test error go down—but that's also not generalization error. If you do a holdout split—you take a training set and you take a holdout set and | 1,056 | 1,079 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1056s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | then you just fix that holdout set forever—maybe what's happening here is that these giant models have enough fluctuations in them that you could actually leak a lot from the test set and overfit to this one holdout set that you fixed forever. So this leads to a question—this leads to a question where I'll spend most of the rest of the time of the talk. | 1,079 | 1,098 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1079s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | It's perfect—I mean, I know we have plenty of time, I'm good—so yeah, that's the rest of the talk. Maybe what's happening here is that you make these models really big and that allows you to overfit on this one holdout set, and there's only one way to check, right, which is to make a new holdout set. Okay, maybe there's a better way, I don't know, but that's what we did—that's what we did: we | 1,098 | 1,119 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1098s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | made a new holdout set. Let me explain how. Here is progress on CIFAR-10 over time: if you just use raw pixels and do linear classification, you get thirty-seven percent accuracy—I just realized we've switched from error to accuracy; hopefully it will be clear from context which is which—and now we have 97.1% in 2017, and in 2019, what is it—Ludwig knows these numbers— | 1,119 | 1,142 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1119s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | ninety-nine point what? Nine point zero. So there's still time to write more ICLR papers, everybody—we've got ten more ticks. So this is our deep revolution here: the deep revolution happened in 2012, and we just keep making progress—make them bigger, give them more capacity, make these models really large. The Shake-Shake models, the wide | 1,142 | 1,166 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1142s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | ResNets—those are huge ones, right, that I try to get to fit. So: is this overfitting? Is this overfitting? Because this early part we can match with shallow methods—actually you can even get to like eighty-something, I can't remember the number, but there's some work (by Alekh Agarwal, Greg Valiant, and Le Song, I think) showing that if you just do random | 1,166 | 1,189 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1166s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | features you can get to about 85% accuracy—just dumb random features; shallow stuff can get there, it just has to be big, so a large shallow model can get to about 85. And so the question is: is all we're doing here just overfitting to the test set by graduate-student descent? So we're going to check, and we're going to check by building a new test set. Now what does that mean? It turns out that | 1,189 | 1,217 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1189s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | the CIFAR-10 creation process is super well documented—it was documented by the folks at Toronto who made it in the first place—so there are a lot of details about how they did it, and in particular where they got their images to begin with. And that comes from one of my favorite data sets ever: it's called 80 Million Tiny Images—the Tiny Images | 1,217 | 1,235 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1217s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | data set—and it was curated by Antonio Torralba, Rob Fergus, and Bill Freeman. They made beautiful scatter plots of, like, all the images on the internet, and these mosaics, just to see how everything varies—it's very cool, looking at what kinds of things were out there that you could get off the internet in 2008. The reason they were thumbnails is | 1,235 | 1,259 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1235s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | because they wanted to make something that you could store and then do all sorts of visualization and studies with. From these thumbnails, the Toronto folks subsampled 60,000 using a process that was very well detailed—with human, not expert, labelers—and they tried to get down to these ten classes. So can we get an iid resampling? Well, out of | 1,259 | 1,284 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1259s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | 80 million images they only took 60,000, so the hope is maybe we could sample some more—maybe not even that many more. How many do we need to actually get something we would believe? I think Ludwig did his error-bar calculation and said 2,000—that's what he wanted, 2,000 new ones. And so, in work that I did not want to get involved with, Ludwig, Becca, and Vaishaal labeled—I'd say mostly just | 1,284 | 1,307 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1284s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | Becca, right—for this one it was Ludwig and Becca—they labeled tens of thousands of images in the Tiny Images data set, and we got a new test set of size 2,000. And again, there are fine details about what counts as a boat, what counts as a car, and this kind of thing; we tried to match as closely as possible. So what did we see? Okay, the first thing we see is | 1,307 | 1,328 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1307s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | we take VGG-16, everyone's favorite network—all right, I guess—anyway, this is a big one, and what we saw is a huge drop in accuracy: there's an 8 percent drop in accuracy from the first test set to the second test set, which is much bigger than you would expect from a one-over-root-n bound and some reasonable capacity. That's big. We put a little confidence interval around where we | 1,328 | 1,349 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1328s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | should be; this dashed line is perfect reproducibility. You'd hope that the confidence interval would hit the dashed line—that would mean, okay, we'd say, hey, we have not been overfitting at all. Well, that's great. So it's clear what's going to happen, because 85.3 was right in the ballpark of what people had seen you could do with just shallow learning; clearly | 1,349 | 1,372 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1349s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | what's going to happen—I don't know if anybody's read this paper—but what I thought would happen, what I put my money on, was that the shallow learning stuff would just sit here and we'd just see a saturation: we've just been adapting, we'd see a saturation up here, everybody would be equal, and there'd be this big drop. | 1,372 | 1,391 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1372s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | So here are our random features: that's not what we saw at all—not what we saw at all. I lost money—well, I didn't lose money, because everybody else guessed the same and there was no house in this case. Bigger drop: a 12% drop, from eighty-five point six to seventy-three point one—a bigger drop in percent. And the Shake-Shake model only had a 4% drop— | 1,391 | 1,417 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1391s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | the one that had super-high accuracy. So we saw exactly the opposite of what our hypothesis said: our hypothesis suggested that maybe the big models were able to just fluctuate themselves into overfitting to this one particular holdout set, and if we drew a new holdout set we'd see some sign that they had adapted to it. But we saw the opposite: in fact, the ones that have the better test | 1,417 | 1,440 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1417s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | error on the original test set have better accuracy on the new test set. [Q:] Did you have the original weights for these models, or did you have to retrain them? [A:] We did do retraining in the paper, but in this case these were just the original weights—there is some summary of retraining too. But again, this is why GitHub as an experimental resource is so powerful: people post the | 1,440 | 1,464 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1440s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | weights in the repos, which saves you tons of time. And you don't even need a GPU, right—you can just download stuff that people have already done, change a parameter here, do some kind of experiment. [Q:] You've got a big problem here, right? This was clearly not an iid resampling—it's not iid resampling because— | 1,464 | 1,507 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1464s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | [A:] Ludwig and Becca are not the Toronto folks—we don't know who the Toronto folks were, probably some paid undergraduates—and these two have never even been to Toronto, right? Yes, so it's not perfectly iid. This is a great question, but we'll come back to it. [Q:] Could you check it empirically, by training, possibly? [A:] No—you can't tell the difference if | 1,507 | 1,536 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1507s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | you train a classifier. [Q:] Did you test which features—? [A:] Oh, just individual features, to see if anything shows up as significant—we did not do any of this, but let me keep going. [Q:] Well, then they're not from the same distribution, right—marginally, at least? [A:] Yeah, we would hope so—I mean, you could probably check, because the images were sub-selected to | 1,536 | 1,577 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1536s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | begin with, right—so if too many were sub-selected, then they're not going to be iid, and that's something I think we could probably check; that part didn't involve humans. [Q:] So that part did not involve humans. What he's asking is that probably the only thing that changed—again, being generous to us—the only thing that's changing here is the labeling function, | 1,577 | 1,601 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1577s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | not the humans themselves—the thing that's producing the y, not the thing that's producing the x. [Q:] You just want to know whether you could interleave the two test sets—or, sorry, whether you could interleave some of the new stuff into the training data. And actually the fact that better models have a smaller drop suggests that maybe | 1,601 | 1,664 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1601s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | this boils down to the test set. [A:] It could be—I mean—sorry, what are you suggesting? Can we pause on that? Because this is actually really important—this is really important: the fact that you've already moved on from the fact that there's no adaptive overfitting is shocking. Look, there's no adaptive overfitting. Rule one is: don't look at | 1,664 | 1,689 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1664s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | the holdout set more than once—that's rule one. We look at the holdout set fifty thousand trillion times at Google every day—fifty thousand trillion times—and yet it doesn't matter: the better you do on this holdout set, this one stupid fixed holdout set, the better you do on the new one. I think, really, before we talk about why the drop happens, just the fact that we don't see adaptive overfitting— | 1,689 | 1,709 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1689s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | it blew my mind. I did not think that would happen—I did not think that was possible. [Q:] Whether you see no adaptive overfitting at all, or just less than expected—those are two different things, and so far, from your results— [A:] Wait, what does that mean? [Q:] —I had a paper with Zico on this—there's no adaptive overfitting, and this gap is only because of the difference in the | 1,709 | 1,755 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1709s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | distribution. [A:] Yeah—the thing is, to be fair, it's going to be hard to get at—I think so, but—listen, one second—I think it's going to be hard to tell from CIFAR-10, and what's interesting is when we go to this other data set, which is more interesting anyway. I mean, maybe we can a little bit, but I think it's much more complicated. Yeah, yeah, I'm getting there— | 1,755 | 1,779 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1755s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | maybe I'll get there—we'll see how many people want to stay through lunch. He had his hand up for a second here—go ahead. [Q:] A quick proposal: could you have collected, like, 50,000 new images—a whole new data set—and then tested on all of it? [A:] I don't know if we could get that many; I don't think we'd get to 50,000—the 2,000 images, and | 1,779 | 1,810 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1779s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | much of the pool of 80 million was basically exhausted, so if you wanted another 50,000 they would actually have to be different. I think we could probably do more than 2,000, but I'm not sure what the limit is—that's actually in the paper, you can see it at the end. But also, the thing is, I only had the two of them—Becca and Ludwig—and they have limited cycles, you know. This is a problem; it's a | 1,810 | 1,839 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1810s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | problem, it's a problem—and I've lost both of them now, which is really sad, actually, really sad. Well, I know Ludwig is staying—sorry—good. Okay. Becca graduated this year. Okay, also: who cares about CIFAR-10? CIFAR-10 is this fun little thing that we all kind of like; we can train on it, | 1,839 | 1,862 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1839s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | and it's certainly interesting because it's like the first non-trivial thing we can get to, but what captured people's imagination was the ImageNet data set. Can we do that one? Now, that's harder, because it's bigger: there are more labels, there's more of everything—and even here, the 1.2 million training images were not labeled in the lab; they were labeled using Mechanical | 1,862 | 1,881 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1862s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | Turk. That's an opportunity, right: now we could try to reproduce this data set and this procedure using Mechanical Turk, which makes it much more scalable, and we can do much larger experiments. So how do we get an iid resampling of ImageNet? [Q:] But it's not as surprising as you say, given the fact that we can drive particular models to zero loss— | 1,881 | 1,907 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1881s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | something which we thought was impossible—and that's the community iterating on the test set; it's kind of the same effect. [A:] No, I don't think so—like, why don't we see a plateau then? Anyway—that's right, that's right—anyway, this is more interesting; Misha, the CIFAR-10 part is so boring— | 1,907 | 1,942 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1907s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | let me show you this one—it's much more interesting, much more interesting, I promise. If you want, we can go into some more nitty-gritty with this one; I have far more than I'm going to be able to get through, which is fine—there are no equations and no proofs in a talk like this. So this is kind of the interface that you would see if you were | 1,942 | 1,961 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1942s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | a Mechanical Turker being paid to label images on the internet, which is basically how everything is made to work now: inside all the big tech companies they hire people and have them watch horrible, disturbing content and flag it for their algorithms—great, wonderful world we live in. So here are our images, right—these are supposed to be "bow"—and | 1,961 | 1,984 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1961s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | everybody can read the top: a bow—which is actually not quite clear—is "a weapon for shooting arrows, composed of a curved piece of resilient wood with a taut cord to propel the arrow," and the task is: click on every single image that has a bow. That's what the Turker is asked to do—click on every single image that has a bow. So, for example, these three have bows and these ones don't; actually, these | 1,984 | 2,007 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=1984s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | were three that were adversarially placed in here to make sure the Turkers weren't cheating, and we flagged all of those. So, coming to reproducing ImageNet: we tried to reproduce the query that was used to collect the images, reproduce the things that were allowed—including the date | 2,007 | 2,030 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2007s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | range from Flickr—and reproduce the way these things were labeled. We also threw the old test set in here to see how those images would be labeled by these Turkers, which is much, much nicer than what we were able to do with CIFAR-10. And what's interesting here is that there's super-high variability in what everybody selects—not everybody says the same thing; obviously there's going to be | 2,030 | 2,051 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2030s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | disagreement, right—anybody who works in crowdsourcing knows this: you have people who do these tasks who work too quickly, or there's just noise. So, for example, this bow was selected a hundred percent of the time, by every Turker; this one—our heroine from Brave—was 70%. I mean, there's a bow there; it's a little | 2,051 | 2,073 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2051s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | bit hard to see, it's blown off screen, so there are lots of reasons to miss it—it's not center frame, lots of reasons to miss it. This one now becomes a metaphysical conversation about what a bow is—my good friend Konrad says yes, and he's a neuroscientist, so, you know, I don't know—everybody has an opinion about what that is. And this one's wrong, and then | 2,073 | 2,095 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2073s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | 20% of the people picked the boat—it is a bow, the bow of a boat, but it's not the bow we want, right? Okay, so I think that's our issue: this is not the bow we want. And so we had histograms for every class—for every one of the thousand classes we had histograms of the selection frequencies—and when we actually sampled our new test set we tried to match the statistics of the | 2,095 | 2,117 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2095s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | labels from the original set with the new set. Also, Ludwig, Becca, and Vaishaal looked at every single image and made sure it was correct in the new test set—which is clearly creating a distribution shift, yes; Alex Berg told me that—but anyway. Oh yeah, you could go look—that's actually the other amazing thing about all these data sets: how many people who have ever | 2,117 | 2,146 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2117s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | looked at the images in them? I mean, you don't need to—you just watch the curve go down—but, you know, it's like we've kind of decoupled ourselves from any of the domain expertise. It's actually quite entertaining. Right—okay, so, punch line: we see exactly the same thing—we see | 2,146 | 2,167 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2146s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | exactly the same thing we saw on CIFAR-10: a bigger drop—technically speaking it's more like a 10 percent drop at the top here for the best ones—but we still see a positive slope. So the models that fared better on the original set fared better on the new set, though the fit isn't as clean as it was for CIFAR-10; but we definitely see | 2,167 | 2,184 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2167s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | a positive slope in our fit, and we do still see the significant drop. [Q:] How was the new set labeled? You said you also labeled the old one. [A:] We labeled the old ImageNet just to match the selection frequencies from ours. [Q:] So okay, was there a distribution shift already—you already pointed out that Ludwig and Becca are not the same labelers—and then you | 2,184 | 2,216 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2184s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | matched their sampling frequencies. But you're also saying, in the process, that you sampled from a bigger pool? [A:] We built our new test set from a bigger pool of images, and we sampled from that pool to match the statistics of the old test set. [Q:] Oh, just the statistics? [A:] Yes. [Q:] So there are no old images used in this evaluation—no old images in here? [A:] Really, yes. And they're | 2,216 | 2,250 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2216s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | from the same neighborhood, yeah. [Q:] You could measure how well the old labels—the old labeling—do: take those images, take the predictor defined by the old labels, and look at its accuracy. [A:] On the old validation images? It's noisy—it's noisy—no, really, it's noisy; it's still noisy. Again, so the orange right here— | 2,250 | 2,289 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2250s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | [Q:] Nothing too big, I don't think—but even so: you're taking the label as it appears in the training set for every image—that label was generated using some procedure—so it's a measure of how well those labels predict the ones assigned by your process. [A:] That's not how these sets are built at all—okay, look, let me—I got that, I've got | 2,289 | 2,323 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2289s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | —okay, it's okay, I get the thrust of it, but—it's a procedure, it's a procedure. So it's also true, right, that these data sets are not made by labeling images. Again, what happens is: you query, you show people images, and then you test correctness as to whether or not they go in or out. It's a weird process. The other | 2,323 | 2,349 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2323s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | thing that's really interesting, by the way—the other thing that's really interesting, which we take for granted—is that the question you ask the Turkers is: which of these images contains at least one object of type "bow"? That is not the classification problem that everybody is now competing against each other on. The question we have now is: just label the damn image—it has one label, and that | 2,349 | 2,371 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2349s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | label somehow comes out. And we know that—like, why is this a "bow"? I mean, it contains a bow, but this is—I forget her name, what's her name, she's from Brave—okay, it's not Elsa, that's Frozen, I know that one—yeah, great. So again, if I asked you what that image is, you could describe so much: it's her in a forest with her bow on a branch. Anyway—I'm | 2,371 | 2,398 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2371s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | sorry, Merida, excuse me—anyway, right, so it's a much more complicated thing, and what we evaluate is very different. [Q:] But you didn't make the original ImageNet that way— [A:] I didn't make the original ImageNet; all we did was ask exactly the same question that the ImageNet people asked. I'm just saying it's the exact same question. [Q:] Oh my god, maybe you should redo it. | 2,398 | 2,430 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2398s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | [A:] A thousand dollars—it's really expensive, it's really expensive—so anytime you want to say you want more, right—just note that this is a very expensive project, and thank you to Microsoft for funding part of the labeling. Anyway, let me just blaze ahead, because we're running late—I have, like, two more examples. Do you want them? Everyone's game? Oh yeah, okay, | 2,430 | 2,454 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2430s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | okay. Here are two things I do think—and yeah, you could fight with me, maybe try to find more evidence: we do not see adaptive overfitting (or at least it's not obvious), but we do see significant fragility from distribution shift. And the distribution shift here is barely anything—it's just humans disagreeing with each other; I mean—sorry, Simon—just humans disagreeing with each other. It's just | 2,454 | 2,471 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2454s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | that there were Mechanical Turkers in 2011 and there are Mechanical Turkers in 2019, and those are most likely different people—I don't think people Turk for that long. [Q:] Any retraining? [A:] Oh geez, no—no retraining, nothing; we just take the weights. We don't have to retrain, because otherwise it would be really expensive—again, unless they were training with the DAWN- | 2,471 | 2,497 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2471s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | Bench configurations—some of them are really slow, you know; some of them come from Google, so they're really, really slow, and you have these things that are huge. But yeah, the trend is the trend. So the question is: can we find more evidence? I just want to present more data and then I'll stop. Go ahead. [Q:] Well, it feels like it's a small distribution | 2,497 | 2,517 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2497s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | shift that manifests as a huge error. [A:] So I guess the other way I'd say that is: small distribution shifts seem to propagate into large errors—like, imagine what happens in reality, right? I mean, this is us trying as hard as we possibly could to match the statistics—maybe we could have tried harder—but yeah, it's a small distribution shift | 2,517 | 2,537 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2517s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | producing a large error, and we should be worried about that—I'm more worried about that than about the diminishing returns reversing. [Q:] But what's also surprising is that you would imagine the fragility increases as the models get bigger—and it doesn't, right? [A:] So clearly all we have to do is go from 600 million parameters to 600 billion parameters and we'll be up on | 2,537 | 2,555 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2537s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | the line and then it'll be fine, right? I guess then we'll have, geez, self-driving—full self-driving will happen once we get up there. But, um, yeah, I don't know—I don't know how to extrapolate this any further, because even at Google they've run out of resources. [Q:] Even iid—? [A:] And actually, I think the test set—okay, the validation set— | 2,555 | 2,586 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2555s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | has essentially been folded into the training pipeline and so on; you get basically the same accuracies. Yeah. Do you want lunch, or do you want to see plots? You want to see plots—man, you want to fight with him; I have a data point in my back pocket. Right, so this is a cool one—ImageNet video—a data set curated in 2015 by the original ImageNet folks. There were, like, | 2,586 | 2,619 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2586s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | 4,000 videos, but they just rendered them down as about a million JPEGs and presented them to you as JPEGs, and each of these corresponds to some classes—it's 30 classes, a subset of ImageNet. So we know exactly where these images came from, and it was supposed to be for video, but you can just use it for detection and classification. And what we did is we | 2,619 | 2,640 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2619s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | invented a metric—a fairly reasonable metric—which is: you treat each video as a set of similar images, and then for every anchor frame you pick a k, you look in the neighborhood of k frames around it, and you see if you can find one where you get a misclassification. This just allowed us to prune through that data set pretty quickly. Remember that these are mostly 30-frames-per- | 2,640 | 2,663 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2640s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | second videos, so 10 frames is about a third of a second. Okay, so here are some cool pictures—you see these kinds of pictures on Twitter all the time—where we go from a domestic cat and within 10 frames it's called a monkey—oh yeah, I think it's because now I see a monkey. This one I don't see: it goes from bird to domestic cat—I guess that's—what is it eating? I don't know. This one goes from turtle | 2,663 | 2,692 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2663s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | to lizard; this one goes from dog to horse. What's amazing is that these images, to you, look the same—I mean, I have to tell you to squint to see why they're different—again, within a third of a second of each other. And of course Jason can't see them—that's right, Jason has a good filter that makes them all look the same: they're all cats. And we made a lot of effort to | 2,692 | 2,713 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2692s | Training on the Test Set and Other Heresies |  |
NTz4rJS9BAI | make sure, when you're going through here, that the kinds of things we were pruning—before we actually get to this next plot—were clear-cut: the ones we saw before look really similar, but these look really different, right? These are very different. We pruned those; they did not go into the data set when we were doing curation. So we did this again—I didn't do this | 2,713 | 2,730 | https://www.youtube.com/watch?v=NTz4rJS9BAI&t=2713s | Training on the Test Set and Other Heresies |  |
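The segments around t=222-328s lean on the identity "population risk = training error + generalization gap" without writing it down. Below is a minimal, self-contained sketch of that bookkeeping on synthetic data; the data generator, model choice, and sample sizes are illustrative assumptions, not anything from the talk.

```python
# Sketch of the decomposition discussed around t=222-328s:
# population risk ≈ empirical (training) risk + generalization gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n):
    # stand-in for the unknown data distribution
    X = rng.normal(size=(n, 20))
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
    return X, y

X_train, y_train = sample(500)        # the sample we actually see
X_fresh, y_fresh = sample(100_000)    # large fresh sample as a proxy for the population

model = LogisticRegression().fit(X_train, y_train)   # empirical risk minimization (with a surrogate loss)
train_err = 1 - model.score(X_train, y_train)        # empirical risk (training error)
pop_err = 1 - model.score(X_fresh, y_fresh)          # estimate of the population risk
gap = pop_err - train_err                            # "generalization error" in the talk's sense

print(f"train error {train_err:.3f}  population error {pop_err:.3f}  gap {gap:.3f}")
```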
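Around t=782-869s the speaker contrasts minimum-Euclidean-norm interpolation of random-feature models (which shows a double-descent bump) with ridge regression whose regularizer is tuned separately at each width (where the bump goes away). The toy below mirrors that comparison qualitatively; the data, the ReLU random-feature map, and the width grid are made-up assumptions, not the talk's actual experiment.

```python
# Toy comparison: min-norm interpolation vs. per-width-tuned ridge on random features.
import numpy as np

rng = np.random.default_rng(1)
d, n_train, n_test = 10, 100, 2000
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)

def random_relu_features(X, W):
    return np.maximum(X @ W, 0.0)            # fixed random first layer, ReLU nonlinearity

for width in [20, 50, 100, 200, 1000]:       # crosses the interpolation threshold near width ~ n_train
    W = rng.normal(size=(d, width)) / np.sqrt(d)
    Ftr, Fte = random_relu_features(Xtr, W), random_relu_features(Xte, W)

    # Minimum-norm least squares: the pseudo-inverse picks the smallest Euclidean-norm solution.
    theta_min = np.linalg.pinv(Ftr) @ ytr
    err_min = np.mean((Fte @ theta_min - yte) ** 2)

    # Ridge with the best lambda chosen separately for this width.
    best = np.inf
    for lam in np.logspace(-6, 2, 30):
        theta = np.linalg.solve(Ftr.T @ Ftr + lam * np.eye(width), Ftr.T @ ytr)
        best = min(best, np.mean((Fte @ theta - yte) ** 2))

    print(f"width {width:5d}  min-norm test MSE {err_min:8.3f}  tuned-ridge test MSE {best:8.3f}")
```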
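The "little confidence interval" mentioned around t=1328-1372s is just a binomial sampling-noise band: if the accuracy measured on the original holdout set were the true accuracy, how far could accuracy on a fresh 2,000-image test set plausibly deviate? A sketch of that check; the accuracy constants are illustrative stand-ins, not the paper's exact values.

```python
# Is the drop on a fresh test set explainable by sampling noise alone?
import math

acc_old = 0.936     # accuracy on the original, fixed holdout set (illustrative)
acc_new = 0.853     # accuracy measured on the freshly collected holdout set (illustrative)
n_new   = 2000      # size of the new test set, as in the talk

# 95% normal-approximation interval for a fresh iid sample of size n_new,
# assuming the true accuracy really were acc_old.
se = math.sqrt(acc_old * (1 - acc_old) / n_new)
lo, hi = acc_old - 1.96 * se, acc_old + 1.96 * se
print(f"expected range under 'no shift': [{lo:.3f}, {hi:.3f}]  observed: {acc_new:.3f}")
print("drop larger than sampling noise" if acc_new < lo else "consistent with sampling noise")
```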
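The video metric described around t=2640s ("treat each video as a set of similar images... pick a k and look in the neighborhood") can be read as: an anchor frame only counts as correct if every frame within k of it is also classified correctly. This is a simplified sketch of that reading; the data layout and function names are hypothetical, not the authors' implementation.

```python
# Simplified neighborhood-robust accuracy over video frames.
from typing import Callable, Sequence

def neighborhood_accuracy(videos: Sequence[Sequence[object]],   # videos[v][t] = frame t of video v
                          labels: Sequence[int],                # one class label per video
                          predict: Callable[[object], int],     # classifier under evaluation
                          k: int = 10) -> float:
    correct, total = 0, 0
    for frames, label in zip(videos, labels):
        preds = [predict(f) for f in frames]
        for t in range(len(frames)):
            lo, hi = max(0, t - k), min(len(frames), t + k + 1)
            total += 1
            # the anchor frame counts only if no frame within +/- k of it is misclassified
            if all(p == label for p in preds[lo:hi]):
                correct += 1
    return correct / total if total else 0.0
```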