video_id (string) | text (string) | start_second (int64) | end_second (int64) | url (string) | title (string) | thumbnail (string)
---|---|---|---|---|---|---|
a6v92P0EbJc | and over and the truth is that there might be much better architectures that we're simply not exploring right there might be much better building plans for networks that we don't know of that might perform a lot better with the same data and the same training so neural architecture search is the process of automatically searching for these better architectures of course that's a | 170 | 194 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=170s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | combinatorial problem but the idea is that you know you can actually learn to construct good architectures and by doing so you can you can sort of speed up this process that is manual otherwise and the idea behind it is there some regularity of when an architecture is good there's some like high level of pattern that you as a human maybe cannot really grasp but like a machine can | 194 | 221 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=194s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | figure out which architectures are good and which ones aren't so there have been a few inventions in this in this area but they are mostly costly that's what they say here the time and effort involved in hand designing deep neural networks is immense this has prompted development of neural architecture search techniques to automate this design however neural architecture | 221 | 246 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=221s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | search algorithms tend to be extremely slow and expensive they need to train vast numbers of candidate networks to inform the search process so what neural architecture search methods do is what they'll have is they'll have something like a controller in the controller itself of course is going to be a neural network so there'll be this thing that will be the controller and the controller will | 246 | 272 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=246s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | emit like a building plan so the controller will emit like a building plan for this network right here and then you train the entire thing once through for the entire hundred thousand steps and then you observe the final validation accuracy which might be something like eighty percent and then you know okay this is eighty percent so you feed the eighty percent into your | 272 | 296 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=272s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | controller and the controller outputs the next building plan that it thinks will score higher and then you train the entire thing again and you maybe observe a 70% accuracy you again feed that in right and the controller realizes oh I may have done something wrong let me try something else and does it again now if this looks like reinforcement learning to you that's | 296 | 319 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=296s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | because this is reinforcement learning so there really is, see here, the controller would be the agent, the percentages here, the accuracies, would be the reward and the emissions would be basically this thing here, this thing would be the actions but sometimes it's the observations and you need to score the different things okay so the problem of course with this is that the | 319 | 347 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=319s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | reinforcement learning requires a lot of data it requires a lot of steps to converge because the signal from the reward is just so weak you simply get one number for your action and you don't know what you can change to make it better you simply have to try so you need a lot of steps but this thing here is mighty slow because each single step in your reinforcement learning | 347 | 372 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=347s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | procedure involves training an entire neural network for like this many steps ok so all of this is ginormously slow and resource intensive and that of course blocks a lot of research because you know we started with the plan to automate this part right here but automating it itself is super expensive so they go for a different solution they say this could be | 372 | 400 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=372s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | remedied if we could infer a network, sorry, if we could infer a network's trained accuracy from its initial state okay it seems a bit out there but let's give them the benefit of the doubt in this work we examine how the linear maps induced by data points correlate for untrained network architectures in the NAS-Bench-201 search space and motivate how this can be used to give a measure | 400 | 431 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=400s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | of modeling flexibility which is highly indicative of a network's trained performance we incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU okay and they have the code available right here if you want to go and check that out so let's go into that the claims are pretty big | 431 | 458 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=431s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | and the reasoning behind the claims is the following observation you can already sort of see in this graphic right here we'll we'll go over what it means in one second but what they do is they take different networks in this search space and the search space in this case is given by this benchmark so this benchmark basically has a long list I think of architectures that you could | 458 | 485 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=458s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | consider actually so it's a constructive list so they don't actually give you the list but they give you like a way to construct architectures and they took those architectures and they rank them by how well they score on CIFAR-10 so there are very good architectures which are here there are good ones there are mediocre ones and then the bad ones okay and you can see that the histograms | 485 | 511 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=485s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | here of whatever they measure they look quite different so the histograms with the good ones they all have kind of spiky around zero and the histograms of the bad ones all sort of look spread out so this is the measure that they're going to propose is they have some sort of number some sort of histogram that they produce and if the histogram is very spiky and close together | 511 | 535 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=511s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | around zero then they conclude that this network is good and if the histogram is very spread out like this they conclude that the network is bad now these histograms as you might expect they are computed not from the final trained Network but they are computed from the initial Network so here they show at least you know in this case it seems to be that there is a general correlation | 535 | 564 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=535s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | between the trained accuracy and how this histogram looks and we're going to explore what they do so it's essentially it's pretty easy they compute the linear map around each data point so what is that if you imagine a neural network as a nonlinear function which I guess you should because it is so let's imagine it as like a nonlinear function from X to Y what they'll do is | 564 | 597 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=564s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | simply they'll look at a given training data point which could be here right this could be the X and this could be the Y and in fact let's look at it in loss landscape not even in Y but in L in terms of the loss because we don't need necessarily a single label this could be for unsupervised this could be for anything okay so it maps a data point to a loss now what we'll do | 597 | 625 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=597s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | is we'll simply linearize the function around that point which means we'll just freeze all the nonlinearities in place and that will give us this linear function right here okay we just observe that this linear function can exist it's the tangent to the loss landscape and it's at a particular data point right it's in data space not in weight space then we look at a different data | 625 | 649 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=625s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | point so we look at this data point right here another data point what's the linear function around this one is sort of like whoops T is like that and then around this one is like this okay so this is one function now let's look at a different function right here so L X and we'll look at this function the linear function okay so for some reason this is like this and if we | 649 | 684 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=649s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | consider two data points their linearization is very similar now imagine that these two have been produced by the same sort of neural networks it's just the architecture is a little different but they have been produced like they have the same number of parameters in the neural network which neural network would you prefer remember you can in by training the | 684 | 711 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=684s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | neural network you can actually shape this loss function you can kind of shape that around so which one would you prefer I personally would prefer the top one because the top one already tells me that hey you know I might have 10 parameters here and this already sort of looks like each of the 10 parameters is doing something so if I then go into my 10 parameters and I you know turn this | 711 | 735 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=711s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | knob right here then I might you know up this bump or down this bump or do something with it but the sort of frequencies curvature the randomness of the function the way that it fluctuates tells me that all of the different parameters must have some sort of effect right because it's of quite an expressive function whereas if I have the same number of parameters for a | 735 | 760 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=735s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | function like this this sort of tells me well maybe only one of the weights is actually doing something maybe only one of the dimensions is doing something this seems odd right that even though I've initialized it randomly a super regular function like this comes out so maybe all of these parameters down here they don't do anything or it is that | 760 | 786 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=760s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | somehow the signal doesn't get through so, they don't explicitly say it in these terms but this is how I make sense of this what they're saying is that if you look at the linearizations of the function and you look at the angle right here so the angle in this case is that and in this case is that and in this case is that so you look at the slope here and | 786 | 813 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=786s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | the slope is basically the gradient of these linearized functions and what you want to do is you want to look at the correlation between those of the different data points so here you have three angles one is very short one is very bit longer like this and or no even like this and one is even over ninety degrees like that they are not correlated at all right they're all very | 813 | 845 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=813s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | different however the angles here they're all quite the same as you can see so what they propose is the following let's send all the data points or in that case all the data points in a particular mini-batch let's send them through the function and let's calculate their linearizations so the linearization is nothing else than you send them through the network to obtain the F value for | 845 | 874 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=845s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | the x value and then you calculate the gradient with respect to the input now you have to get used to this a bit because usually we calculate the gradient with respect to the weights but now we calculate the gradient with respect to the input which if this is a linear function so if you have like f of x equals Wx, a linear function, then this gradient del f del x would | 874 | 899 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=874s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | just give you the W, which is the slope of the linear function, and the same in the neural network when you linearize it alright so we're going to obtain all these linearizations and that gives us this matrix J right here and what we can do is we can then observe the covariance matrix of J of all these linearizations the covariance matrix simply tells you how two data | 899 | 929 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=899s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | points vary with each other and in fact they don't look at the covariance matrix but they look at the correlation matrix which is simply the scaled covariance matrix so one entry in this covariance matrix so you have n data points and this gives you a matrix that's n by n and that particular entry here like the entry IJ would simply state how does the angle of data point I | 929 | 955 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=929s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | correlate with the angle of data point J okay that's the that's the covariance matrix and now the hypothesis is if all of these data points are sort of independent like in our very expressive function here then the these correlations they should not be high in fact most data points should be rather uncorrelated however in this case right here if the function is sort of kind of | 955 | 986 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=955s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | degenerate or something, not very expressive, then all of these angles, or linearizations, should be highly correlated and that's what you see in this graph right here this right here now is the histogram of the correlations between local linear maps across all pairs of items in a mini batch of CIFAR-10 training data each plot is a histogram for a | 986 | 1,016 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=986s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | single untrained NAS-Bench-201 architecture so remember the expressivity is important because we want to train that function and therefore it's important that every parameter does something and if it's degenerate we can't train it well and that's, I find, the reasoning, they sort of say this, but I might make the wrong sense out of it here but it seems to me like that's | 1,016 | 1,042 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1016s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | what's actually going on so you can see this is simply these matrix values rolled out and then plot it as a histogram so what does it mean when the histogram is like super spread out like this it means that there are a lot and I think down here our axes yes there are a lot of data points that correlate highly or anti correlate highly with each other okay which means that exactly this | 1,042 | 1,067 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1042s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | degeneracy happens either too high or too negative high correlation means that they're very much they're kind of the same thing so there is if you have as many parameters as data points that means that one parameter can potentially serve these two data points or these two that are correlated by one or negative one you don't need both parameters and therefore | 1,067 | 1,092 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1067s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | you have a lot of parameters doing nothing whereas over here with the good networks you can see that this spikes around zero meaning that the data points are not correlated or the linearizations around the data points are not correlated and therefore you can sort of shape the function around each data point however you want which we sort of know that neural networks what they do | 1,092 | 1,118 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1092s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | is they're so over expressive that they're actually able to shape the functions around the data points without necessarily looking at other data points nearby and that expressivity is what you want and that expressivity is what this in part measures okay so they have some experiments here where they validate this so for all these architectures in this benchmark | 1,118 | 1,145 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1118s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | and maybe I should show you what the benchmark looks like so the benchmark has this particular form: there's this skeleton and in this skeleton there is this block and it's always repeated and basically your task is to determine what this block should be so this block has an input node A and an output node D and two intermediate nodes and what you have | 1,145 | 1,169 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1145s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | to do is basically you have to determine these connections right here so there are six connections and for each one you have the option of putting different things there like you can see you can put a convolution you can put the identity function which is a skip connection, zeroize, I don't know, maybe that's the zero function so it basically means nothing I'm not so sure | 1,169 | 1,191 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1169s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | honestly but you could technically put a convolution here and here right, or different convolutions or things like this so there are these 15,625 possible cells okay so the NAS benchmark contains 15,625 possible architectures that you'll have to search and they take these architectures and they plot, for each architecture, the validation accuracy | 1,191 | 1,226 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1191s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | after training and the training protocol is standardized you don't have to care about that right and the score that they measure at the beginning of training and what you can see is that there is a linear relationship, sort of, from these experiments what you'll get is like this sort of feeling what they're gonna propose is that you should take that score as a measure | 1,226 | 1,253 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1226s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | and here again also, sort of, there is a clear trend as you can see right here though yeah as you can see this sort of spreads out and the rightmost one is ImageNet which is the most difficult one of course and this is CIFAR-100 which is more difficult than CIFAR-10 so we can see that this sort of relationship at the top it doesn't really hold | 1,253 | 1,286 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1253s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | anymore if the task gets difficult and this is so what I think is happening this is kind of an interjection of my own opinion what's happening here is that this score that they discover allows them pretty efficiently to see which networks are just degenerate and and cannot be trained like if you try to train them they just perform really poorly okay that it's probably a very | 1,286 | 1,312 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1286s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | good score for weeding those out and that would mean if you put kind of a barrier here somewhere right you could just discard a whole lot of this crap or even here right you could just discard a whole lot of this crap and also now here just you know all of this crap yeah whereas here as you can see some of this score sometimes it's higher than these ones even though they perform | 1,312 | 1,338 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1312s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | better and again you could probably discard a lot of the crap but it's not as distinctive for the well performing networks because these here are all not the degenerate version right they're not degenerate in the sense that they have some fundamental flaw where the function lacks expressivity from the very start so you can't train it and then probably other factors come into | 1,338 | 1,362 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1338s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | play, other factors than you can simply determine with this particular score but you know there is this relationship, you can see that and they do some ablations on this here for example whether your score is just a proxy for the number of parameters and they say no the number of parameters works way worse than this particular score which is always a cool thing | 1,362 | 1,392 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1362s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | then how important is a specific mini-batch and initialization and they say look right here we for some architectures we do different mini batch sizes and you can see each of those groups they don't vary too much in how their it influences their score this is I believe this is the same architecture so it's always an architecture that achieves in this case for example wow | 1,392 | 1,416 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1392s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | that's not a straight line, 77% or so and you can see if you go for different mini batches the score varies only minimally initialization is a bigger variance inducing thing but also here the scores don't vary too much but it is interesting that the different initializations get you to different scores because it would directly support kind of my hypothesis now what's going | 1,416 | 1,445 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1416s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | on here is that you sort of measure initial degeneracies and you can sort of make up for these initial degeneracies in the architecture sometimes with sort of a different initialization so the different initializations give you differently performing networks we already know this from things like you know a lottery ticket hypothesis and so on that the initialization can | 1,445 | 1,469 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1445s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | matter to some degree in these types of things now that being said they always train to the same it seems but their their score varies so I might be backwards correct here or not correct but in any case the initialization here matters more but also you can still see this linear relationship and this is particularly interesting this is even the case when you just input white noise so instead of | 1,469 | 1,500 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1469s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | the data you measure that score by just inputting noise that I guess has some sort of the same magnitude as the data would have but it's just noise and you can still sort of see this linear relationship which is very interesting and that I think also shows that what you find is a property of the network itself and the fact that it is initialized and | 1,500 | 1,526 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1500s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | built in such a way that it allows you to train it in a sort of benign manner it has no degeneracies okay so in the last experiment they go here and they say we evaluated the score on initialized networks in the PyTorchCV library so they go to this library that has a lot of these networks but these networks are not the same as this benchmark this benchmark is specifically | 1,526 | 1,559 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1526s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | designed to do architecture search now the networks in this library they are all designed to perform really well some are designed to be quite small some are designed to be quite fast and so on but in general their goal is to perform well and they have been sort of found by humans to perform well so now they take these networks on CIFAR-10 and they test them so as you can see | 1,559 | 1,585 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1559s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | here here is the test accuracy again and here is their score that they give it and they say rip it up put it up now I can't move this anymore hello well okay they say that this linear relationship still sort of holds it doesn't it doesn't hold super super well but you can still sort of if you squint if you squint hard you can see that it sort of goes upward though you really have to | 1,585 | 1,621 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1585s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | squint hard like what are these things right here and again what's the case is that if the score is low you will sort of be able to cut off the worst-performing ones but really at the top here it doesn't seem like there is a particular relation between these networks and this initial score which sort of strengthens my hypothesis that what this does is just | 1,621 | 1,652 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1621s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | kind of weed out the bad ones but it's pretty cool because you can weed out the bad ones without any training right it's simply forward prop backward prop there you have it so cool now here is the experiment where they really do this NAS benchmark and they compare with other methods so some of these other methods are designed to do what they call weight sharing which | 1,652 | 1,680 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1652s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | basically is a technique where you can sort of speed up the algorithm as compared to non weight sharing and the non weight sharing, that's one of these we have discussed initially, that was my initial example with the controller and so on where it takes super long so here you see the method and how long each method takes now the best ones as you can see already | 1,680 | 1,706 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1680s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | the best ones here are these methods right here, they score somewhat like a 93.9 or so on CIFAR-10 whereas these weight sharing ones they don't perform too well except this one seems to perform quite well and in their case they perform worse than that but they still perform better than a lot of the weight sharing ones so what their point is basically is | 1,706 | 1,737 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1706s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | that they get a pretty good score which is a 91.5 on CIFAR-10 which is you know at least not degenerate, it's a good accuracy, they score that with simply evaluating ten architectures right and as n goes up, as they evaluate more and more architectures, they do get better but not much so they have a discussion here I'm having trouble moving this all | 1,737 | 1,771 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1737s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | right so we'll sort of go through the discussion we report results yada yada yada, as for the setup, the non weight sharing methods are given a time budget of twelve thousand seconds, for our method and the non weight sharing methods accuracies are averaged over 500 runs, for weight sharing methods accuracies are reported over three runs with the exception of GDAS, our method | 1,771 | 1,797 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1771s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | is able to outperform all the weight sharing methods while requiring a fraction of the search time and that you can maybe see in the table, I mean this is the real deal here, they only use 1.7 seconds compared to the twelve thousand seconds of the other methods and you reach almost the same accuracy now it has to be said, two percent in this particular regime on | 1,797 | 1,819 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1797s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | CIFAR-10 is still a sizable difference and that's the same benchmark right with the same training schedule and so on so there's not too much room to tune here you simply have to find a better architecture so these things are still sizably ahead of this and it appears to me that these methods here that don't perform well, they're simply crap it seems, they're simply | 1,819 | 1,848 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1819s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | don't I don't know but they might be trying out something or you know doing something researchy or whatnot but it seems like if you're well able to weed out the bad architectures you might be getting to a score like this and then if you are actually performing a search to find the best one then you might be getting to somewhere like this and you can see this | 1,848 | 1,876 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1848s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | here throughout so in CIFAR-100 they achieve a better score than these things but a worse score than the non weight sharing method and on ImageNet the difference is even larger so again what I can see here is that theirs is a good method to maybe get you, let's say, 90% of the way you want to go and what's interesting is that here they say we | 1,876 | 1,908 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1876s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | also show the effect of sample size we show the accuracy of the networks chosen by our method for each n so that's the sample size we list the optimal accuracy for sample sizes 10 and 100 and random selection over the whole benchmark so in this case they have the optimal one which I guess they just draw 10 samples and then take the best one so they train all of them | 1,908 | 1,930 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1908s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | and then take the best one you can see that already gets you to the 93 and whereas in their case sometimes when they add more they get worse so here they get better but then they get worse again so they comment on this right here we observe that the sample size does not have a large effect on the accuracy of our method but note that as sample size increases our method suffers from a | 1,930 | 1,956 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1930s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | small amount of noise increasing the gap between our score and the optimal result and of course the key practical benefit is execution time so again they are massively faster than the other methods but to me it seems you could just think of combining these methods right you combine this with this in that what you want to do is actually actively search for the best ones but by doing so you | 1,956 | 1,986 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1956s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | could, if you could pretty quickly weed out the bad ones using this method down here, you might already have like a big speed up because again in comparison to these random ones what appears to happen is that they get good at finding, you know, your 90% architecture but then they fail to differentiate the top performers from each other where you'd really have to train the network to find | 1,986 | 2,015 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=1986s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | out, you know, which one's better so yeah here they say they visualize the trade-off between search time and accuracy for CIFAR-10 for different NAS algorithms on the NAS benchmark by removing the need for training our method is able to find accurate networks in seconds instead of hours and here you can see the accuracy and here you can see the time and all the good ones are | 2,015 | 2,041 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=2015s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | either way over here or here and theirs is almost at zero while being quite close to the accuracy of the other ones all right yeah that was this paper again I think this is pretty valuable especially if you're in a new domain where you might not know what kind of network to build you might just be able to write a little script that generates networks run it through | 2,041 | 2,072 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=2041s | Neural Architecture Search without Training (Paper Explained) | |
a6v92P0EbJc | this algorithm and at least you get an idea of which ones are certainly not worth considering and then you can simply select one of the other ones it doesn't you know often it doesn't need to be the best ones and you can then tweak it a little bit manually the ones you found may be you see some regularity and yeah that was my two cents on this paper I hope you liked it if you did | 2,072 | 2,094 | https://www.youtube.com/watch?v=a6v92P0EbJc&t=2072s | Neural Architecture Search without Training (Paper Explained) | |
-Y7PLaxXUrs | Translator: Michele Gianella Reviewer: Saeed Hosseinzadeh When I was a boy, I wanted to maximise my impact on the world, and I was smart enough to realise that I am not very smart. And that I have to build a machine that learns to become much smarter than myself, such that it can solve all the problems that I cannot solve myself, and I can retire. And my first publication on that dates back 30 years: 1987. | 0 | 42 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=0s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | My diploma thesis, where I already try to solve the grand problem of AI, not only build a machine that learns a little bit here, learns a little bit there, but also learns to improve the learning algorithm itself. And the way it learns, the way it learns, and so on recursively, without any limits except the limits of logics and physics. And, I'm still working on the same old thing, | 42 | 76 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=42s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | and I'm still pretty much saying the same thing, except that now more people are listening. Because the learning algorithms that we have developed on the way to this goal, they are now on 3.000 million smartphones. And all of you have them in your pockets. What you see here are the five most valuable companies of the Western world: Apple, Google, Facebook, Microsoft and Amazon. | 76 | 111 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=76s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | And all of them are emphasising that AI, artificial intelligence, is central to what they are doing. And all of them are using heavily the deep learning methods that my team has developed since the early nineties, in Munich and in Switzerland. Especially something which is called: "the long short-term memory". Has anybody in this room ever heard of the long short-term memory, | 111 | 144 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=111s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | or the LSTM? Hands up, anybody ever heard of that? Okay. Has anybody never heard of the LSTM? Okay. I see we have a third group in this room: [those] who didn't understand the question. (Laughter) The LSTM is a little bit like your brain: it's an artificial neural network which also has neurons, and in your brain, you've got about 100 billion neurons. And each of them is connected | 144 | 185 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=144s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | to roughly 10,000 other neurons on average, Which means that you have got a million billion connections. And each of these connections has a "strength" which says how much does this neuron over here influence that one over there at the next time step. And in the beginning, all these connections are random and the system knows nothing; but then, through a smart learning algorithm, | 185 | 213 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=185s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | it learns from lots of examples to translate the incoming data, such as video through the cameras, or audio through the microphones, or pain signals through the pain sensors. It learns to translate that into output actions, because some of these neurons are output neurons, that control speech muscles and finger muscles. And only through experience, it can learn to solve all kinds of interesting problems, | 213 | 244 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=213s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | such as driving a car or do the speech recognition on your smartphone. Because whenever you take out your smartphone, an Android phone, for example, and you speak to it, and you say: "Ok Google, show me the shortest way to Milano." Then it understands your speech. Because there is a LSTM in there which has learned to understand speech. Every ten milliseconds, 100 times a second, | 244 | 275 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=244s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | new inputs are coming from the microphone, and then are translated, after thinking, into letters which are then questioned to the search engine. And it has learned to do that by listening to lots of speech from women, from men, all kinds of people. And that's how, since 2015, Google speech recognition is now much better than it used to be. The basic LSTM cell looks like that: | 275 | 305 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=275s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | I don't have the time to explain that, but at least I can list the names of the brilliant students in my lab who made that possible. And what are the big companies doing with that? Well, speech recognition is only one example; if you are on Facebook - is anybody on Facebook? Are you sometimes clicking at the translate button? because somebody sent you something in a foreign language | 305 | 333 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=305s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | and then you can translate it. Is anybody doing that? Yeah. Whenever you do that, you are waking up, again, a long short term memory, an LSTM, which has learned to translate text in one language into translated text. And Facebook is doing that four billion times a day, so every second 50,000 sentences are being translated by an LSTM working for Facebook; and another 50,000 in the second; then another 50,000. | 333 | 368 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=333s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | And to see how much this thing is now permeating the modern world, just note that almost 30 percent of the awesome computational power for inference in all these Google data centers, all these data centers of Google, all over the world, is used for LSTM. Almost 30 percent. If you have an Amazon Echo, you can ask a question and it answers you. And the voice that you hear it's not a recording; | 368 | 400 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=368s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | it's an LSTM network which has learned from training examples to sound like a female voice. If you have an iPhone, and you're using the quick type, it's trying to predict what you want to do next given all the previous context of what you did so far. Again, that's an LSTM which has learned to do that, so it's on a billion iPhones. You are a large audience, by my standards: | 400 | 433 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=400s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | but when we started this work, decades ago, in the early '90s, only few people were interested in that, because computers were so slow and you couldn't do so much with it. And I remember I gave a talk at a conference, and there was just one single person in the audience, a young lady. I said, young lady, it's very embarrassing, but apparently today I'm going to give this talk just to you. | 433 | 462 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=433s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | And she said, "OK, but please hurry: I am the next speaker!" (Laughter) Since then, we have greatly profited from the fact that every five years computers are getting ten times cheaper, which is an old trend that has held since 1941 at least. Since this man, Konrad Zuse, built the first working program controlled computer in Berlin and he could do, roughly, one operation per second. | 462 | 497 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=462s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | One! And then ten years later, for the same price, one could do 100 operations: 30 years later, 1 million operations for the same price; and today, after 75 years, we can do a million billion times as much for the same price. And the trend is not about to stop, because the physical limits are much further out there. Rather soon, and not so many years or decades, | 497 | 528 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=497s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | we will for the first time have little computational devices that can compute as much as a human brain; and that's a trend that doesn't break. 50 years later, there will be a little computational device, for the same price, that can compute as much as all 10 billion human brains taken together. and there will not only be one, of those devices, but many many many. | 528 | 552 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=528s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | Everything is going to change. Already in 2011, computers were fast enough such that our deep learning methods for the first time could achieve a superhuman pattern-recognition result. It was the first superhuman result in the history of computer vision. And back then, computers were 20 times more expensive than today. So today, for the same price, we can do 20 times as much. | 552 | 577 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=552s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | And just five years ago, when computers were 10 times more expensive than today, we already could win, for the first time, medical imaging competitions. What you see behind me is a slice through the female breast and the tissue that you see there has all kinds of cells; and normally you need a trained doctor, a trained histologist who is able to detect the dangerous cancer cells, | 577 | 609 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=577s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | or pre-cancer cells. Now, our stupid network knows nothing about cancer, knows nothing about vision. It knows nothing in the beginning: but we can train it to imitate the human teacher, the doctor. And it became as good, or better, than the best competitors. And very soon, all of medical diagnosis is going to be superhuman. And it's going to be mandatory, because it's going to be so much better than the doctors. | 609 | 640 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=609s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | After this, all kinds of medical imaging startups were founded focusing just on this, because it's so important. We can also use LSTM to train robots. One important thing I want to say is, that we not only have systems that slavishly imitate what humans show them; no, we also have AIs that set themselves their own goals. And like little babies, invent their own experiment | 640 | 672 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=640s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | to explore the world and to figure out what you can do in the world. Without a teacher. And becoming more and more general problem solvers in the process, by learning new skills on top of old skills. And this is going to scale: we call that "Artificial Curiosity". Or a recent buzzword is "PowerPlay". Learning to become a more and more general problem solver | 672 | 698 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=672s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | by learning to invent, like a scientist, one new interesting goal after another. And it's going to scale. And I think, in not so many years from now, for the first time, we are going to have an animal-like AI - we don't have that yet. On the level of a little crow, which already can learn to use tools, for example, or a little monkey. And once we have that, it may take just a few decades | 698 | 729 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=698s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | to do the final step towards human level intelligence. Because technological evolution is about a million times faster than biological evolution, and biological evolution needed 3.5 billion years to evolve a monkey from scratch. But then, it took just a few tens of millions of years afterwards to evolve human level intelligence. We have a company which is called Nnaisense | 729 | 761 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=729s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | like birth in [French], "Naissance", but spelled in a different way, which is trying to make this a reality and build the first true general-purpose AI. At the moment, almost all research in AI is very human centric, and it's all about making human lives longer and healthier and easier and making humans more addicted to their smartphones. But in the long run, AIs are going to - especially the smart ones - | 761 | 793 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=761s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | are going to set themselves their own goals. And I have no doubt, in my mind, that they are going to become much smarter than we are. And what are they going to do? Of course they are going to realize what we have realized a long time ago; namely, that most of the resources, in the solar system or in general, are not in our little biosphere. They are out there in space. | 793 | 820 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=793s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | And so, of course, they are going to emigrate. And of course they are going to use trillions of self-replicating robot factories to expand in form of a growing AI bubble which within a few hundred thousand years is going to cover the entire galaxy by senders and receivers such that AIs can travel the way they are already traveling in my lab: by radio, from sender to receiver. | 820 | 852 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=820s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | Wireless. So what we are witnessing now is much more than just another Industrial Revolution. This is something that transcends humankind, and even life itself. The last time something so important has happened was maybe 3.5 billion years ago, when life was invented. A new type of life is going to emerge from our little planet and it's going to colonize and transform the entire universe. | 852 | 888 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=852s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
-Y7PLaxXUrs | The universe is still young: it's only 13.8 billion years old, it's going to become much older than that, many times older than that. So there's plenty of time to reach all of it, or all of the visible parts, totally within the limits of light speed and physics. A new type of life is going to make the universe intelligent. Now, of course, we are not going to remain the crown of creation, of course not. | 888 | 920 | https://www.youtube.com/watch?v=-Y7PLaxXUrs&t=888s | True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo | |
H5vpBCLo74U | hi there today we're looking at XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang and other people from Carnegie Mellon University as well as Google Brain so this is kind of the elephant in the room currently as XLNet is the first model to beat BERT which was the previous state of the art in a lot of NLP tasks to be burped at a | 0 | 27 | https://www.youtube.com/watch?v=H5vpBCLo74U&t=0s | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
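The transcript rows for the first video above (a6v92P0EbJc) walk through a training-free scoring idea: linearize the untrained network around each item of a mini-batch by taking the gradient of the output with respect to the input, then check how correlated those per-item linear maps are. Below is a minimal sketch of that computation, assuming PyTorch; the function name and the final scalar summary (negative sum of absolute off-diagonal correlations) are illustrative choices here, not taken from the paper's released code.

```python
import numpy as np
import torch


def jacobian_correlation_score(net: torch.nn.Module, x: torch.Tensor) -> float:
    """Score an untrained network from the correlations of its input Jacobians."""
    net.zero_grad()
    x = x.detach().clone().requires_grad_(True)   # gradient w.r.t. the input, not the weights
    y = net(x)                                    # forward pass on one mini-batch
    y.sum().backward()                            # one backward pass gives per-item input gradients

    jac = x.grad.reshape(x.shape[0], -1).detach().cpu().numpy()  # one flattened row per data point

    # Correlation matrix of the per-item linear maps (rows of J).
    jac = jac - jac.mean(axis=1, keepdims=True)
    jac = jac / (np.linalg.norm(jac, axis=1, keepdims=True) + 1e-8)
    corr = jac @ jac.T                            # (N, N), entries in [-1, 1]

    # The transcript's histogram is of these off-diagonal entries: spiky around
    # zero = expressive, spread toward +/-1 = degenerate. One crude scalar
    # summary (my own choice) is the negative sum of absolute correlations.
    n = corr.shape[0]
    off_diag = corr[~np.eye(n, dtype=bool)]
    return float(-np.abs(off_diag).sum())
```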
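The same transcript also describes choosing an architecture by scoring a small sample of N = 10 or N = 100 random candidates and keeping the highest-scoring one. A hypothetical usage sketch, where `sample_random_architecture` and `build_network` are placeholders for whatever search space (e.g. NAS-Bench-201) is being drawn from:

```python
def pick_architecture(sample_random_architecture, build_network, x, n_samples=10):
    """Return the candidate with the highest training-free score (placeholder helpers)."""
    best_arch, best_score = None, float("-inf")
    for _ in range(n_samples):
        arch = sample_random_architecture()            # draw one candidate spec
        net = build_network(arch)                      # randomly initialised network
        score = jacobian_correlation_score(net, x)     # no training, one forward/backward pass
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch
```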