video_id (string, 11 chars) | text (string, 361-490 chars) | start_second (int64, 0-11.3k) | end_second (int64, 18-11.3k) | url (string, 48-52 chars) | title (string, 0-100 chars) | thumbnail (string, 0-52 chars)
---|---|---|---|---|---|---|
a0f07M2uj_A | basically in this way where you had input and feedback here very simple simplistic view of neurons whereas nowadays even the company computational community views neurons in a more differentiated way where you have for example different regions here on the soma that can be separated from each other and you have inter neuron interference and so on I'm not qualified | 1,894 | 1,925 | https://www.youtube.com/watch?v=a0f07M2uj_A&t=1894s | Backpropagation and the brain | |
o3y1w6-Xhjg | okay so hi everyone so huge thanks to Alexander hyung you and Sammy for organizing this incredible workshop and excited to be here and thanks everyone for coming I know I'm between you and lunch so I'll try and keep things on time so today I'm really excited to talk a bit about transfer learning in the context of deep learning so transfer learning is this incredibly popular | 0 | 28 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=0s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | technique it's used almost everywhere that we apply deep neural networks but there's also this challenge that we really don't understand many aspects of it all that all that well and so by the end of this talk I hope that you know you have some sense of all the many different ways it comes up and also some of the interesting open questions there are in the field so a lot of this talk | 28 | 51 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=28s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | is going to be based off of a paper, Understanding Transfer Learning for Medical Imaging, that's joint work with my collaborators Chiyuan Zhang, Jon Kleinberg and Samy Bengio okay so let's dive right in what is transfer learning very basic so in the settings that we're gonna really be studying what you do is first you learn a classifier on some task let's call it task A and sometimes we call this | 51 | 77 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=51s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | pre-training then having learned task A you continue training this classifier on a new task task B and the goal is really to get good performance on task B so if the goal is to get good performance on task B you might ask why train on task A at all and the general belief in the community is that if you know task A is sort of complex and diverse and very general then by going through this | 77 | 101 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=77s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | process of training on task a hopefully you've learned useful things that you can sort of transfer over when you start retraining on on task B so this high-level framework actually has connections to a lot of interesting work that's come out from the the theoretical perspective I think Zac mentioned some in his talk earlier today and I wanted to kind of give a quick point out to shy | 101 | 126 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=101s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | and Devine who is maybe here no I don't see him oh you're here you're here okay in the front where there's been a lot of interesting work done in in this very related field of domain adaptation to try and understand this from a more formal framework and I think the times really come to revisit some of these ideas and see how we can use them to give us better insights for | 126 | 145 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=126s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | transfer learning in the deep learning context so speaking of transfer learning and deep learning what does that look like well it's very similar, sorry, very simple you sort of just replace classifier with deep network so you have this deep network it's gonna be your classifier you randomly initialize it and you train on task A and this will be known as pre-training then you take | 145 | 166 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=145s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | this network and it's sort of converged to some set of parameters on task A and then you train it again this time on task B and then voila you have your final model that you're going to deploy and it's hopefully going to do great on task B so this paradigm is pretty simple but it's been extremely successful and it's probably the computer vision community that sort of | 166 | 188 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=166s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | really showed us how successful this could be in various applications so in sort of specifically the current setting what people do is pre train a large convolutional neural network on some data set of large images and you know image net gets a special shout-out here it's extremely popular for for pre training so much so that there are sort of entire papers saying why image net is | 188 | 210 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=188s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | good for transfer learning but besides that there are a couple of other data sets so MS COCO is a very popular big computer vision benchmark for object detection that's sometimes used for pre-training and companies also tend to have their own internal data sets which they like using for pre-training so a really big one that Google likes using is JFT which has sort of three hundred million images | 210 | 231 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=210s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | so absolutely enormous what's been really interesting to see in the past couple of years though is that transfer learning has also become very popular in applications in natural language processing so in the past people were able to transfer word embeddings so you've probably seen these diagrams of taking a word in your vocabulary getting a vector representation and then these | 231 | 252 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=231s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | vector representations of all of these nice properties we could do that for a while but it's only more recently that sort of all of these a neural networks that are named after Muppets for some reason have have have been developed and that lets us transfer much more complex representations of language and that's shown to be very very successful in a lot of standard natural language tasks | 252 | 274 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=252s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | and now I have to mention the most important part of transfer learning which is github for transfer learning so been mentioned in a talk in his talk earlier that you know github can be this very useful research resource and that's definitely very true in transfer learning so in transfer learning kind of applications nobody actually bothers with the pre-training stuff instead you | 274 | 298 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=274s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | go to github and you sort of find the model you're interested in and find all of its pre-trained weights and then you just download it and then after you download it you just perform the fine-tuning for whatever task you're interested in and this is really important because what this has enabled is it's enabled people who totally aren't working on sort of core machine | 298 | 316 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=298s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | learning to apply transfer learning to all of their problems and nowhere is this more true than in medical imaging where where it's sort of the entire community has almost universally adopted transfer learning as this paradigm and what's the setup here well the setup here is that you take this sort of standard pre-trained imagenet model something large and complex like Inception v3 and then you | 316 | 342 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=316s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | and then you have sort of these pre-trained weights on imagenet that you sort of downloaded from somewhere so imagenet has you know amongst other things a whole bunch of different dog breeds and then bizarrely you sort of fine-tune this model to do all kinds of medical predictions so you fine-tune it to predict diseases on chest x-rays diseases like retinal | 342 | 361 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=342s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | diseases PET scans for early detection of Alzheimer's and sort of the most exotic application was even sort of screening human embryos for IVF treatments so people are just sort of going out there and doing this and and when you think about this this is kind of bizarre because the reason the community um you know wanted to do transfer learning is this belief that | 361 | 382 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=361s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | you sort of learned features on sort of this this source dataset and then you can kind of transfer all of this to to your target tasks but of course medical images and natural images are extremely different to each other so it's kind of interesting to understand what's what's on here one final thing is these aren't just sort of sort of turning into papers where you see accuracies but they're | 382 | 402 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=382s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | actually being deployed in clinic so this is the example of one company called IDx which literally states that it takes Inception v3 pre-trained on imagenet and is using these to diagnose retinal diseases and it's sort of out in clinic right now so do adversarial examples for the Inception model mean anything for this problem adversarial examples for medical images you mean I'm just | 402 | 427 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=402s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | asking you're saying this is being deployed and it's natural to think about adversarial examples for the Inception net not that I mean so for example you know I'm not talking about transferability I mean that's an interesting question sort of like how much can you like kind of do like these are completely different datasets so I think that's also like a very interesting | 427 | 449 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=427s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | thing to look at um but okay so while we're kind of going ahead and sort of deploying all of these there's kind of this real challenge because even in the natural image setting we actually don't understand the effects of transfer learning that well so I'm gonna review some results which have just come out in literally the last year that have really challenged the common assumptions people | 449 | 468 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=449s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | have on transfer learning so this is a picture of Ms Coco and I mentioned it earlier it's a very popular computer vision benchmark for for doing object detection so this is kind of what it looks like you're trying to learn these balance boxes and in almost all of the competition entries the standard thing to do is you pre train on imagenet and then you sort of fine-tune your image | 468 | 491 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=468s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | net model on this MS COCO task but then last year we got this paper Rethinking ImageNet Pre-training which basically showed just by kind of being maybe a little bit more careful about how you pick your learning rate you actually get exactly the same results from random initialization as you do with pre-training now part of the reason people really | 491 | 514 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=491s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | like pre-training on imagenet is again this belief that it's kind of this big diverse task and sort of if you train on there you're gonna learn lots of interesting features that you can kind of reuse in lots of places so sort of this underlying assumption is more data is is great but then there was there's another paper also just last year which looked at pre training on jft which is | 514 | 534 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=514s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | sort of this even bigger data set sort of 300 million images and what they find is that more true more pre training data is not always better which is this sort of very closely held assumption in the community so in particular like they sort of try training from random initialization versus the entire data set and in a bunch of places the performances are actually really pretty | 534 | 555 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=534s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | comparable other criterion if you fix the number of iteration or training data more source domain data is not better I suspect that that won't be true right it's like fitting more to the training distribution it's not necessarily better to the story features not necessarily better so they're they're not fixing the number of training iterations I think I think they're literally just training to | 555 | 596 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=555s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | convergence and see what it's late right yeah right like you're saying maybe you just sort of like dues or barely stopping and then you just fix it whatever the number is that you got from the first amount of examples you do the same number of updates for more data or data points you know I'm not I'm not sure it'll make a difference I see your point and I'd have to check to see | 596 | 626 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=596s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | exactly what they did but we can discuss this offline yeah but yeah so so one interesting point to kind of make here and I think this is actually connected to some of the theoretical work in this that's that sort of come up in related topics is if you do a better job of actually picking the subsets of data that you train on you actually do see significant performance gains and I | 626 | 650 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=626s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | think again this is a place where we can kind of revisit some of the theoretical ideas and see if we can do something better this process is like relatively ad-hoc and then finally one other paper that came out just earlier this year Do Better ImageNet Models Transfer Better kind of implicitly makes this very interesting observation that when you decide to do | 650 | 673 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=650s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | transfer learning you're not just taking the features but you're also committing to an architecture because you download them both together and if your task looks very different to imagenet you know this is something you should be aware of and thinking about and so do better ImageNet models transfer better well it's complicated there's this nuanced relationship that | 673 | 694 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=673s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | depends on sort of how you regularize during the training process the data set the specifics of the data set and size etc and so you sort of actually see a lot of variability based off of all of these conditions and sometimes you actually get pretty similar performance as usual optimal weight optimal where um oh here um yeah so this is optimal for transfer so so this is sort of like standard ways | 694 | 722 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=694s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | in which people can perform regularization and this is how you tend to download your pre trained models but these are actually sort of not as good for for doing transfer learning on and.and here if you kind of train in a slightly different way you actually end up with better features for transfer yeah really thorough so I highly recommend reading it so they tried they | 722 | 750 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=722s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | tried pretty much everything so they tried this sort of like fixed feature extractor setting where you sort of freeze things and then just retrain some of the top they also tried left fine-tuning setting where you sort of trained everything and they were also of course comparing to training from from scratch in these settings and so yeah the kind of exact results vary a little | 750 | 767 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=750s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | bit I think your fine-tuning maybe you have like slightly less sensitivity to some of these these settings particularly for larger target datasets but yeah they try everything so definitely worth reading so this is all in the natural image setting what about in the medical image setting actually hardly anything is explored in the medical image setting which i think is | 767 | 788 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=767s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | sort of something we should we could really try and address from sort of two angles firstly of course I think it's important to study it because we're actually deploying these in a lot of places and it's important to understand what's going on particularly is this is sort of a very counterintuitive thing to do secondly I think this medical imaging setting also captures a very interesting | 788 | 809 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=788s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | part of captures an interesting sort of regime in of performing transfer learning so here your source and your target tasks are extremely different to each other like the data is different the actual tasks you're going for is different and as we'll see later there are still some benefits that that come up so sort of understanding why that happens I think is sort of very | 809 | 829 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=809s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | interesting to to explore from a purely principled angle okay so in this talk so first we'll do a quick sort of performance evaluation of transfer just to sort of set things up and then we're going to go into a little bit more detail and try how is pre-training affecting the actual features we're learning in our model and for this I'll also touch on some work | 829 | 853 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=829s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | we've been looking at on studying representational similarity of networks using canonical correlation analysis and then finally and very interestingly and also somewhat paradoxical we'll look at some feature independent properties of transfer that we see okay so the first yeah so I'm fine um so I think the question is sort of is there a precise definition of what fine-tuning means in | 853 | 891 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=853s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | this setting is there as in like you sort of stop after some amount of time or yeah is there something very specified um the answer is like not really you're really just sort of training to convergence or pretty much you're mostly stopping once you see that like validation losses converged yeah but okay so okay so so let's let's take out let's breeze through the first part | 891 | 916 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=891s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | um so first part we're gonna just try and evaluate the actual performance gains of transfer and to do this the way people tend to evaluate transfer learning because you're sort of downloading these datasets from at least sorry these models and these weights from github is you just tend to evaluate it on on standard imagenet architectures so like some big complicated thing like | 916 | 936 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=916s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | this but as I mentioned if your task looks really different to imagenet it's kind of important to think about the fact that you're making this implicit architectural choice and so in our evaluation we also evaluated this much smaller family of architectures that we call CBRs they're really just vanilla convolutional neural networks they're called CBRs because the most sort of | 936 | 957 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=936s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | popular and successful way of having a vanilla convnet these days is to have a convolution followed by batch norm followed by a ReLU activation and these things are really tiny they're maybe sort of like one eighth to maybe one twentieth the size of your full-fledged imagenet architecture and then in terms of tasks we looked at sort of two large-scale medical imaging | 957 | 979 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=957s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | tasks one of them is diagnosing different diseases from chest x-rays and another one is diagnosing a certain kind of retinal disease diabetic retinopathy from scans of the the back of your eye so we sort of run these experiments across all these different architectures random initialization and transfer learning there was a lot of experiments but we saw some clear takeaways so | 979 | 1,004 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=979s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | firstly perhaps you know sort of falling on from some of the results we've seen in the natural image settings transfer and random initialization actually performed pretty comparably so here's a sort of complicated results table we got from our chest x-ray experiments and if we look at where transfer and random initialization perform comparably it's actually for most of the table and in | 1,004 | 1,027 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1004s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | some cases transfer even sorry random initialization even outperforms transfer secondly and and interestingly we observed that these simple vanilla kind of networks actually performed about as well as these standard imagenet architectures so we weren't really trying to optimize for performance we just wanted to try some simple things to see what they look like compared to | 1,027 | 1,049 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1027s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | imagenet-like architectures because those architectures are really pretty different to what you might want for the medical data for the simple architectures did you also pre-train them yes yes so we pre-trained them on imagenet and then fine-tuned on the medical data sets yeah and then sort of did those comparisons so the | 1,049 | 1,078 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1049s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | data set sizes so we actually varied this so like kind of variations of this are in the paper um the full data set is around 200k images I think which is kind of like reasonable size but um sort of smallish compared to image net which is like in the millions so it's sort of interesting to see that these sort of perform comparably and then finally we also saw that image net performance was not | 1,078 | 1,100 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1078s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | actually indicative of how these architectures would perform on a medical task so what do I mean so let's look at another results table so here's ResNet-50 and here are these two architectures and here's what we saw when we train them on image net these architectures actually perform horribly they're not designed for image net in a way that I can sort of explain offline | 1,100 | 1,120 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1100s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | but when we look at how they do on the medical task they're actually really within sort of ballpark performance of each other oh these architectures they're just um they're just like a family of simple vanilla convolutional networks yeah we kind of just made them up just because like we wanted something extremely simple and I can explain why sort of like you're sort of seeing this | 1,120 | 1,146 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1120s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | difference I guess offline if you want but there are kind of clear ways in which image net is not the right way to design your architecture for some of these tasks and so we were able to take advantage of that you mean the data set size so yes it's something we've varied but um full dataset sizes are maybe two hundred thousand images for both of them yeah so I mean this last point is | 1,146 | 1,173 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1146s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | interesting because other papers so this paper on Do Better ImageNet Models Transfer Better and even the papers that Ben Recht's group has been working on on distribution shift and sort of seeing how performance correlates across distribution shift do show that there's this correlation but here we don't see this okay so that was kind of a quick sort of | 1,173 | 1,193 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1173s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | overview of what the the performance evaluations look like but we like to go beyond just the performance evaluation we kind of really want to understand what is transfer learning doing to our architecture it's like what are we gaining from from applying transfer learning if anything at all and you know I mean what we saw is like at a performance level things are performing | 1,193 | 1,213 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1193s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | about the same and so a really fundamental question is well random initialization and pre train weights don't really look anything like each other so we have one thing sitting in one part of the space another set of parameters sitting in another part of the space what's happening during this fine-tuning process is it just that it doesn't really matter how you initialize | 1,213 | 1,231 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1213s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | and then after you fine-tune you just sort of change dramatically and everything you initialize with is erased so we could just kind of do whatever we liked or is something else going on and to answer that question what we really want to do is we want to look at some of the latent representations of these models and take a measurement to see how similar they are the problem with | 1,231 | 1,252 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1231s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | trying to do this kind of an analysis is that comparing representations from different neural networks is really difficult there's this alignment problem it's not like one neuron in a layer of one network is going to correspond nicely to another neuron in the layer of another network in fact there's no reason that one neuron maps to another neuron at all it could be like a group | 1,252 | 1,272 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1252s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | of neurons that are having the same function as as a single neuron and another network or one group mapping to to another group so it's pretty complicated but in another line of work we've been looking at doing exactly these kinds of comparisons using canonical correlation analysis and the basic framing is as follows we have some data set of interest and we're going to | 1,272 | 1,294 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1272s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | think of the neurons representation what it's learned as what we call an activation vector so we feed in this kind of data set of interest and this neuron is going to emit a scalar value across all of these these input points and we can literally just sort of stack all of these and that'll form a vector that we call the the activation vector of this neuron and so there's this sort of nice | 1,294 | 1,318 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1294s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | framework where we kind of think of these neurons as these activation vectors and because layers are linearly combining their neurons there are sort of these subspaces that are sort of spanned by their neurons so I'm so so this is going to be at a very high level the details of like all the the the mathematical details of CCA are in their relevant papers but so at a very high | 1,318 | 1,340 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1318s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | level what we do is we take in two sets of these neuron activity vectors and they're typically going to be layers so layer from one network a layer from another network and then CCA will find the linear combination of these neurons that maximizes correlation and by iteratively applying this process we can basically get something like a similarity score between these layers so | 1,340 | 1,362 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1340s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | the score just tells us well you know how how similar are the representations learned by these layers sort of up to sort of scaled linear transforms so that's the way in which it addresses this disalignment issue and so previously we've kind of used this to study various properties of convolutional networks lately it's been become quite popular in studying various | 1,362 | 1,383 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1362s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | different kinds of language models in NLP and more broadly this kind of entire area of studying similarity between deep representations has quite a lot of people who have been thinking about it so I'm mostly gonna be talking about using CCA but the first paper here was probably this paper called Convergent Learning by Li, Yosinski, Clune, Lipson and Hopcroft in ICLR 2016 | 1,383 | 1,409 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1383s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | where instead of dealing with the distributed problem they just tried to find nice one-to-one mappings between neurons in different networks then we kind of followed up with some of the CCA work then there was a more recent paper that's really pushing this framework of having activation vectors for neurons and sort of comparing similarities between subspaces | 1,409 | 1,430 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1409s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | and then most recently there's been this paper Similarity of Neural Network Representations Revisited by Kornblith, Norouzi, Lee and Hinton that's broadly proposing a kernel based similarity measure and one quick note about sort of all of these is that I think performing these sort of similarity comparisons is a very interesting way to try and get at what your neural networks are doing it's | 1,430 | 1,453 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1430s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | kind of useful for interpretability and it has interesting consequences for things like compression and for model ensembling and sort of all these papers including our own are interesting but I think there's a lot of scope for doing things in a more formal and a more principled way so although many of these methods are built off of sort of principled techniques a lot of the ways | 1,453 | 1,473 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1453s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | in which we apply them are indeed heuristic and we I don't think we can claim we fully understand their limitations or or where best to use them so I think there are a whole bunch of interesting questions in this space but okay so going back to transfer learning what are we going to do well we're gonna do a very simple experiment we're gonna train a bunch of networks from pre train | 1,473 | 1,492 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1473s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | weights we're gonna train a bunch of networks from random initialization then we're going to apply CCA to just see how similar they are to each other we want a baseline so CCA is gonna give us these similarity scores but we want some kind of a baseline to compare these two and so we're also going to look at the similarity scores we get when we train a population of networks from different | 1,492 | 1,512 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1492s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | random initializations and apply CCA there so here's what the results look like so along the x-axis are sort of different architectures these blue points are what you get from doing this comparison from networks trained from different random initializations and these yellow points are what you get when comparing pre-trained networks to networks trained from random | 1,512 | 1,531 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1512s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | initialization yeah so so you do see different architectures here but ah good point no we didn't do that and that would be kind of an interesting thing to study definitely so-so but yeah so we just kind of stayed within an architecture but kind of the the takeaway is that these blue points are sort of higher up than these yellow points what does that mean well it means | 1,531 | 1,559 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1531s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | that models trained from random initializations seem to be more similar to each other representationally than models trained from pre-trained weights and transfer learning so even though we're seeing the same performance there is something different happening at the representational level so yep the different circles actually correspond to different networks so we | 1,559 | 1,581 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1559s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | train multiple networks and then we just sort of performed we put on these comparisons yeah so it's it's it's averaged over the layers um multiple I mean I can tell you offline but sort of slightly different layers for different networks because their architectures are a bit different but yeah taking a few layers at different stages in the network performing this comparison and | 1,581 | 1,600 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1581s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | then averaging we have two distributions and I get a bunch of samples Ramon popular some examples of the other you know like basically just be something I'm wondering like any statistic is going to be kind of you know but I think this pertaining to a population but you know this number of like thirty four and a half verse 36 is this enough to tell me something about I think it is hard to | 1,600 | 1,642 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1600s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | compare across architectures at least the way we did this experiment just because like we looked at different layers and things and we'd have one point see see a similarity actually telling us a lot about of transferability I think well not about transferability but I think it is actually telling us something about what's similar versus what's not similar like I think that was the whole point of | 1,642 | 1,661 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1642s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | like kind of having this sort of baseline comparison I guess in blue and I think the fact that we see and nobody tried like multiple networks for that so I think the fact that we are seeing that the blue things are higher like in most cases is telling us that there is more similarity there we train on the same training data exactly yeah yeah because we're interested in lots of questions | 1,661 | 1,688 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1661s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | okay we're interested we're interested in yeah sort of seeing what this looks like once we've trained on the medical data yes oh yeah so like we're not done making the conclusion yet but sort of the first thing we saw was that like performance is similar and so like hypothesis one is like it kind of doesn't matter how you initialize and like they're actually all doing the same | 1,688 | 1,710 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1688s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | thing all the way through so on top they're clearly similar because performance is similar but like we don't know what's happening in between so so then we try this analysis and then and then it looks like different things are happening in between but there's kind of more coming yep CCA assumes that the inputs are like linear you take a linear combination I was wondering if you try | 1,710 | 1,734 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1710s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | like deep CCA where you take a nonlinear or why did you make the linearity assumption yeah I mean that's a good question you could definitely try deep CCA instead of CCA like I mean we have to have some kind of I guess place where we want to say something is like you know like where we kind of conclude that things are not that similar to each other representationally and we thought | 1,734 | 1,754 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1734s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | linear is like kind of a good proxy because you know layers kind of operate linearly and so like you know things are sorted within linear transforms of each other it seems like a reasonable kind of call to say okay that's sort of somewhat similar whereas yeah when you come to like not like I think the nonlinear comparison could also be interesting but yeah you | 1,754 | 1,770 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1754s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | need to know exactly when to call things similar and not somewhere oh man so many questions okay I'll take one more question and then maybe move on did you have a question me yep okay Jack maybe we'll chat more offline okay yep similarity r/t layers yeah so it's that's an interesting question and what I'm about to get to so so the altar is like I think for like | 1,770 | 1,795 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1770s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | some models you see that but not for not for not for others and so that is about to come up yeah so so so okay so like kind of so we saw performance is similar and then a hypothesis says maybe they're just all doing the same things that doesn't seem to quite be the case and now like kind of let's look further into that and to look further into that let's do something really simple so this are | 1,795 | 1,817 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1795s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | the actual filters from the first convolutional layer of ResNet that's kind of initialized with pre-trained weights so you take your ResNet you pre-train it on image net and this is what the filters look like and sort of you know true to kind of the community's expectations you see all of these really nice Gabor filters come up so yeah so I don't think the weights, no, CCA | 1,817 | 1,852 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1817s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | only operates on like activations like I think it's difficult to sometimes make direct comparisons between weights because your weights can look really different but I think functionally is like what's more interesting to us like I mean there's like the standard experiment where you have like a ground truth neural network and then you train | 1,852 | 1,869 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1852s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | something to mimic that but like even they're like the weights are not going to look like that's similar but like kind of out with wise is yeah what we're looking at well yeah so we're interested like whether there are kind of functional similarities in these they're like like the actual outputs and so that's what we we study by making your your similarity measure parameterize by | 1,869 | 1,889 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1869s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | the examples not by the weights it factors out all these in variances and permutations and stuff like that inside of a network and like you mentioned there are lots of papers legend sincere that find all these relation between the example mapping or the ground matrix it something yeah like I think almost all of these like even even the paper those doing kernel based | 1,889 | 1,906 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1889s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | measures yeah like kind of thinking about it in terms of examples I think is kind of really helpful for doing these sort of similarity measures but okay so so we're gonna do something even simpler let's just look at like let's just look at the filters okay so these are the filters from conv one that we've initialized from having pre-trained freshly off of imagenet look at all | 1,906 | 1,923 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1906s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | these gorgeous Gabor filters what happens when we when we train this network so we train it on this medical data and well after training it actually looks kind of similar so so maybe that just means that you know these like these kind of Gabor filters are perfect for this medical data okay now let's see what happens when we do the same thing from random initialization so this is | 1,923 | 1,944 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1923s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | what our our network looks like when we randomly initialize it and again we're gonna train it on this medical data so we do that and oh man this actually also looks somewhat similar so so so what is going on because so over here like maybe you could say oh there's interesting feature reuse happening but but you're also seeing the same stuff for random initialization okay so that's like our | 1,944 | 1,966 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1944s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | that's ResNet now we've trained a whole bunch of other architectures and some of them are much smaller so what happens here well here's one of our very small architectures it's maybe one-tenth the size of the ResNet so here it is initialized with its nice imagenet weights and what happens after training well it actually changes dramatically and then again if we look at it at | 1,966 | 1,987 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1966s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | random initialization and then look at it off your training again it changes significantly your connections a few dozen skip connections activation I mean we should interpret them by adding you know having you across layers or something like the real representation is and it depend on all the layers not so this is just yeah so so because the architectures are different we looked at | 1,987 | 2,007 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=1987s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | conv1 specifically for that because it's sort of below all of that now you are getting feedback from kind of the skip connections that you aren't getting but I think this is also true for say like something like Inception as well I'm just sort of showing ResNet here and I think what's really going on is to do with the size of these models so yeah | 2,007 | 2,027 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2007s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | so two quick points one point that's kind of interesting is that everyone loves Gabor filters but these models are not actually the smaller model which is changing a lot doesn't actually really seem to be learning at least the classical Gabor filters and so here are like places where there's a Gabor filter and it's actually erased the Gabor filter so like here's another | 2,027 | 2,043 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2027s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | place where it's kind of Gabor filter erased Gabor filter erased and then the other point relates to what was already brought up which is that like what we observe through kind of further experiments is that I think the size of these architectures is actually kind of really impacting what you see during this fine tuning process so like the kind of picture we | 2,043 | 2,063 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2043s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | have in our head is sort of maybe so random initialization and pre-trained weights take us to sort of very different parts of the space but sort of somehow when your model is kind of large and these imagenet architectures do indeed in some sense seem to be large for these medical tasks maybe you just don't sort of move as much whereas when you have sort of | 2,063 | 2,083 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2063s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | smaller models you sort of change a lot more now you know throughout this workshop we've seen lots and lots of interesting work on the neural tangent kernel and sort of thinking about these infinite-width limits in the kernel regime versus sort of the deep regime but like at least to my kind of high level understanding it's not a direct mapping to what we're | 2,083 | 2,105 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2083s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | seeing here so I think it's kind of interesting to to sort of study this further and try and understand try and make this connection because there's probably some kind of a connection and sort of understanding why this is happening would be really interesting okay so um final point is that in the paper we have a lot more kind of work on sort of broadly thinking about | 2,105 | 2,123 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2105s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | similarity and reuse there are other interesting things we see like we can store for our larger models our similarity at initialization can be pretty predictive sort of similarity post training we can also like kind of look at how much feature reused is happening and this interesting co-adaptation problem which happy to chat about more offline but you know i | 2,123 | 2,144 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2123s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | think time is running out and I want to make sure we get to lunch on time so I wanted to end with I think one of the most interesting observations we saw during this entire set of experiments so one thing we observe again and again across different architectures and across our different setups is that when you train with pre-trained weights versus | 2,144 | 2,164 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2144s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | training from random initialization there is a huge difference in convergence speed so this yellow line here is what you get when you plot sort of your training curve with pre-trained weights and this blue line is what you see with random initialization if you sort of extend these far enough out they basically converge to the same value but | 2,164 | 2,182 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2164s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | there's this sort of huge difference in how quickly they converge and you know when you first see this plot you might think oh well this means that transfer learning is doing its job like you know you've learned some useful features and you're sort of reusing them and that's why you're converging faster but we've also seen a lot of counterintuitive results like the thing | 2,182 | 2,200 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2182s | Towards Understanding Transfer Learning with Applications to Medical Imaging | |
o3y1w6-Xhjg | that larger models are sort of maybe a bit lazier and just don't move as much and it's it's still kind of not fully clear exactly how much feature use is happening in this in this process and so we tried an experiment to try and understand why we see this difference in convergence speeds and the experiment is very simple so we decided to initialize by drawing weights IID from from sort of | 2,200 | 2,224 | https://www.youtube.com/watch?v=o3y1w6-Xhjg&t=2200s | Towards Understanding Transfer Learning with Applications to Medical Imaging |
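Each row above follows the same schema (video_id, text, start_second, end_second, url, title, thumbnail). A minimal sketch of consuming that schema, assuming the split has been exported as a JSONL file named `segments.jsonl` (a hypothetical path, one JSON object per row), which stitches the per-segment text back into one transcript per video:

```python
import json
from collections import defaultdict

# Hypothetical export of the table above: one JSON object per row with the
# columns video_id, text, start_second, end_second, url, title, thumbnail.
with open("segments.jsonl", encoding="utf-8") as f:
    segments = [json.loads(line) for line in f]

# Group segments by video and order them by start time.
by_video = defaultdict(list)
for row in segments:
    by_video[row["video_id"]].append(row)

transcripts = {
    vid: " ".join(r["text"] for r in sorted(rows, key=lambda r: r["start_second"]))
    for vid, rows in by_video.items()
}

for vid, rows in by_video.items():
    print(vid, rows[0]["title"], f"{len(rows)} segments")
```

Sorting by start_second restores the original speech order, since each segment's end_second is the start_second of the next segment from the same video.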