video_id (string, 11 chars) | text (string, 361–490 chars) | start_second (int64, 0–11.3k) | end_second (int64, 18–11.3k) | url (string, 48–52 chars) | title (string, 0–100 chars) | thumbnail (string, 0–52 chars) |
---|---|---|---|---|---|---|
_p8vFSUesNs | going to use effectively tries to make our denoised image which our CNN estimates close to the image we have but at the same time we also need to make sure that the image which we reconstruct is actually close to the k-space measurements we have obtained and this leads us when we develop our CNN to have a layer which is probably not very common in other applications which we | 953 | 983 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=953s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | call a data consistency layer so this enforces our data fidelity and what this effectively does is this equation you see here is we have some part of missing k-space if we make an estimate of this we simply keep that estimate of k-space and we have some part where we have measured k-space and that measured bit of k-space we're going to average together with our estimated | 983 | 1,013 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=983s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | k-space depending on how noisy this is so if you have a completely noise free case you would assume that lambda goes to infinity and you would only keep your original measurements in k-space of course if you have some measurement noise you might actually really average those two together and so here in this particular equation s_CNN is effectively the Fourier transform of the image which | 1,013 | 1,039 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1013s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | I've reconstructed and this here is our zero-filled k-space so this is what we would normally have so having this as a layer will force you to have an image reconstruction consistent with what you have measured and it turns out that because we want to train this end to end we need to be able to do the forward and the backward passes in order to also propagate our gradients | 1,039 | 1,066 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1039s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | back now the Fourier operation is a linear operation so it actually turns out that the forward and the backward pass you can write down quite easily in closed form and really in the backward pass this is the Jacobian of the data consistency layer and if you decide your lambda is trainable because you don't know how much measurement noise you have you can also write down the | 1,066 | 1,091 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1066s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | derivative with respect to lambda so this is quite nice because I can basically propagate my gradients easily through this data consistency layer so here's what you actually end up with you have an input image a complex valued input image you have your k-space measurements and then you have a number of denoising layers which try to effectively remove aliasing from | 1,091 | 1,118 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1091s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | the image and then after these denoising layers you have your data consistency layer which then forces your reconstruction to be consistent with k-space and effectively that links to your k-space measurements and then in analogy to a sort of iterative optimization you can cascade these networks to an arbitrary depth we typically use five of these | 1,118 | 1,147 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1118s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | cascades so here's an example of what you end up with what you see on the left hand side is the image as acquired with sixfold undersampling so if it's sixfold undersampling that means the acquisition is now six times faster I only have 1/6 of the data and you can see that the image which I reconstruct here is virtually useless this here is a technique which is based on dictionary | 1,147 | 1,175 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1147s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | learning a sort of compressed sensing technique this is our CNN and this is the fully sampled image so of course I've been sort of simulating this undersampling here and what you see is that you can see virtually no difference between the fully sampled image and the reconstruction using CNNs the compressed sensing one here is also quite good but it turns out it's much | 1,175 | 1,202 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1175s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | slower and in terms of PSNR it's probably 10 percent worse than the CNN you can push this even to higher undersampling right so here's an example of elevenfold undersampling which is really quite aggressive and you can still recover the image very well so this is actually a really quite nice result now probably one of the biggest advantages is actually here so of course | 1,202 | 1,232 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1202s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | the CNN is better in terms of PSNR and that's nice but really one of the most important advantages is the speed so if you use a compressed sensing technique with dictionary learning it takes around six hours to reconstruct an entire image sequence which in some clinical scenarios is acceptable but for example if you want to use your images for | 1,232 | 1,255 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1232s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | navigation because you want to for example perform a biopsy or something else then actually this is far too slow and the CNN is of course very very much faster so this actually now and now you can really do this probably in in 100 milliseconds so this is fast enough to do it on the scanner for image guided surgery and that's a that's a very big advantage in these techniques good I | 1,255 | 1,283 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1255s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | want to now go from image reconstruction and talk a bit about two related topics image segmentation and super resolution and you'll see in a moment why I've sort of lumped them together because they share a number of problems which we have in medical imaging so okay if I adopt a sort of standard approach for image segmentation I'm not going to show you | 1,283 | 1,313 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1283s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | anything new here we for example use a sort of variant of an FCN for medical image segmentation if I pair that with a large enough data set for training so this is actually quite a nice data set which is also publicly available where a group in Oxford have annotated 5,000 subjects they have taken 5,000 subjects and annotated over 90,000 images | 1,313 | 1,342 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1313s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | in this data set you can use this for training and it actually turns out that if you use this for training to segment the heart here you probably can't see this very clearly you can actually do a very good job in all these images which are part of this UK Biobank so even in slices which are for example here at the apex of the heart where the heart ends which are | 1,342 | 1,367 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1342s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | typically very difficult because the heart is moving in and out of the plane this really works very very well you don't really have any any problems with that if you then try to compare how good does a machine do to to a human then actually the automated measurements for clinically important parameters are pretty much within the variability of what different humans do so we're not | 1,367 | 1,394 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1367s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | performing better than a human does so there is no superhuman performance but actually we're doing as well as a human does and I think that's quite natural because it seems to me quite hard to get superhuman performance from training data which actually in some sense is quite flawed so you can do this quite easily so there is no real challenge here we can just adopt what | 1,394 | 1,415 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1394s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | many of you guys have done in vision what is the challenge is that actually the imaging data has a lot of artifacts which you need to understand in order to really make best use of the data so one of the things we typically do is when we acquire the heart we acquire one slice we ask the patient to hold their breath for 10 seconds we then acquire the second | 1,415 | 1,442 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1415s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | slice do the same thing again third slice do all of this again and our slices are quite thick which means they have an anisotropic resolution so high resolution in plane but low resolution out of plane so if I take that data stack it together and show you that as a sort of reformatting you see these ugly staircases which come from the fact that probably every slice is | 1,442 | 1,469 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1442s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | around one centimeter thick and in plane I probably have around one millimeter resolution there's a second problem you have which comes from the fact that the patient has to hold their breath for every slice and they might hold their breath in a different position between slices which means that actually some of the slices are shifted right so if you basically treat this as one volume you | 1,469 | 1,497 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1469s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | end up with problems and for example if you take this volume here you can quite clearly see that in two of those slices the patient has held their breath in the wrong location so now I can do segmentation with this data set in 3D or super resolution and this is what you end up with when you do for example super resolution which is actually quite nice this super | 1,497 | 1,520 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1497s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | resolution here takes this dataset produces this and has this fantastic thing here where there's effectively a hole in the heart of course the patient doesn't have a hole in the heart because it would probably not survive with that but of course the super resolution can only do whatever the data give if you give it and similarly in the segmentation here you can also see this | 1,520 | 1,544 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1520s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | disconnected our regions so what we really would like to do is take that data same data and produce either a super resolution you see here or a segmentation like you see here and for that we have to incorporate anatomical knowledge if if you ask a clinician to look at this data they will look at the only slice by slice but in their head they will build up a 3d representation | 1,544 | 1,570 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1544s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | of what they're looking at and they are completely immune to the fact that you have motion between these different slices so one of the challenges we came across is really that these standard loss functions which we normally use for segmentation or super resolution are not really very good in those situations so we thought about what we can do differently because we | 1,570 | 1,596 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1570s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | really want to put low resolution input data into our network and end up with a high resolution segmentation or super resolution so this network not only performs semantic segmentation but also increases the resolution of the data so we decided let's try out something which looked very cool called a TL network which has been proposed in graphics | 1,596 | 1,624 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1596s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | and really has effectively two components the one component is a sort of autoencoder an encoder which forces you into a latent space with variables h and then a decoder and that network is effectively trained with segmentations so effectively with label maps and then a second branch which is the predictor network which for example takes an | 1,624 | 1,653 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1624s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | intensity image and predicts this latent representation from this intensity image and we can sort of train this network in a joint fashion now when you look at this you might say well hang on what's this useful for well for example if we train our network for doing segmentation one of the things we can do is we put our low resolution image in we obtain a | 1,653 | 1,683 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1653s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | segmentation using our segmentation Network so this is our segmentation but we want an anatomically plausible segmentation so we encode that segmentation into our latent space and for our ground truth labels we can also encode it in our latent space and we have then a sort of loss function on that latent space which forces you to be similar in in the anatomical | 1,683 | 1,709 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1683s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | representation of what you're looking for and then you can couple this with your standard cross entropy loss okay so you now have two loss functions one which is a normal cross entropy and one loss in this latent space which forces your shape to look similar to what you had seen during training and if you do that you actually get a very nice result so instead of now getting these really | 1,709 | 1,738 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1709s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | weird shapes where you get biologically implausible shapes you can constrain your data to be very close to what is the ground truth if you acquire high resolution images by work by the way these high resolution images you can only acquire if you hold your breath for 40 seconds so this requires really sort of very dedicated volunteers to be able to do that and if | 1,738 | 1,765 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1738s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | you do this with super resolution you can now do the same thing correct with super resolution the only difference here is that actually my super resolution network produces as output an intensity image and then I'm predicting from that intensity image my latent space representation and and I have here the same for the ground truth so now here I do the same thing my latent space | 1,765 | 1,790 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1765s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | representation though comes from these predictor networks which go from intensity space to latent space rather than from segmentation space to latent space and here's an example of what that does if you use a sort of low resolution image this is your standard super resolution approach out of the box this is an anatomically constrained super resolution and this is what you | 1,790 | 1,815 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1790s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | would end up with if you he look at the higher resolution ground truth and here's sort of a movie showing exactly the same thing in the dynamic image sequence so this really works very well and it's actually quite powerful I think really interesting I mentioned to you before that one of the things which we quite often face is we have trained our models for example using UK biobank | 1,815 | 1,844 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1815s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | which is great because it's so much data available you then deploy it in the clinic and it doesn't really work that well and it's mostly due to really differences in not only the hardware but also the knobs which people turn when they acquire the images so MRI is great because it's so it's effectively a programmable device but it also means you can produce very different looking | 1,844 | 1,869 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1844s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | images and I already mentioned that to you so one of the things which we have sort of played around with is can we use adversarial training to try to make sure that we learn feature representations which are invariant to the data but also which don't require us to have annotations for the test data because that's actually very expensive to do in in medical imaging so I guess I | 1,869 | 1,898 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1869s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | this problem I think I don't really need to explain to you in very much detail but you basically end up with training your machine on on the source domain trying to separate one set of labels from another set of labels so this might be scanner a where you have training data then when you go to scanner B you actually see that you have this domain shift where your distribution of | 1,898 | 1,922 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1898s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | features which the network has learned changes because the images look slightly different and what we're trying to do here is basically try to find a way in which we can use adversarial learning to effectively help us avoid that these samples get misclassified and we instead learn a classifier which is more generalizable and this has been really | 1,922 | 1,952 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1922s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | sort of it was very nice paper on this how you can do this with with neural networks which was sort of published a two year or three years ago not two years so it's an ancient paper in terms of machine learning but really works quite well where what you try to do is you try to train a classifier where you try to learn a domain classifier which can tell you whether your data comes | 1,952 | 1,979 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1952s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | from domain A or domain B and you try to minimize the accuracy of this domain classifier because if that domain classifier does a good job then obviously you haven't learned features which are domain invariant and the nice thing is for this you only need labels for whether your data comes from scanner A or scanner B I don't need annotations for scanner B so here's the | 1,979 | 2,004 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=1979s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | approach which we have used we use a neural network which has a fancy name called DeepMedic which was one of the first ones which could actually do proper 3D convolutions and now of course you can do this quite easily it sort of has two different pathways I don't really need to explain it in too much detail one high resolution one low resolution pathway it is really designed to | 2,004 | 2,029 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2004s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | spot brain tumors and has been actually quite successful in in this so here's an example of what it would actually output in 3d so you can produce your fancy 3d renderings for this so we take this network and if the if you train this network it will not be a domain invariant what we what we instead do is we add to the normal segmentation pathway we add this the discrimination | 2,029 | 2,058 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2029s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | pathway where you basically take the features add a low-level features the mid-level features and the high-level features you put them into your adversarial branch and you try to learn this or you try to prevent this from being a good domain discriminator and then by minimizing the accuracy of that domain our discriminator you tend to learn features which are more | 2,058 | 2,086 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2058s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | generalizable so effectively you have two different terms and your cost function one is sort of your normal cross entropy loss and the other one is your how well you can discriminate the domain but the important thing is the top this this loss here I can only evaluate for samples from scanner a because that's my training set whereas this domain discriminator i can actually | 2,086 | 2,114 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2086s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | evaluate for samples from scanner a and b because i only need to know whether they come from scanner a and b and that's quite easy to do and it turns out that this actually does does really quite well so here's an example what happens when you don't do this domain adaptation so here actually instead of using data from different scanner we assume that at test time we don't have | 2,114 | 2,140 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2114s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | one of the sequences available so we have to use another imaging sequence and if you don't learn the domain invariant features you see you end up with horrible results in your segmentation where you're supposed to spot a brain tumor which you can probably see here quite nicely and here when you have done the domain adaptation even if you switch one sequence to | 2,140 | 2,165 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2140s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | another you actually do quite well good so the last two minutes I just want to talk about some challenges which we have practically faced and I think many of you might face in in other scenarios is there are many good networks out there for example a unit which in medical imaging quite a lot of people use FCN they have a lot of meta parameters a lot of different architectures you can | 2,165 | 2,195 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2165s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | choose that influence the behavior and then really at least to us it quite often looks like it's very hard to predict which model will work really well for a given task so one option is to use I guess something you are all very familiar with a sort of ensemble of all of these different models and try to be as insensitive as possible and unbiased as | 2,195 | 2,223 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2195s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | possible so here's for example an example where I have a FLAIR image with a brain tumor so you see the core of the tumor in red and the edema in yellow then if I use the same network but I switch here cross-entropy for IoU as a loss function I get very different behavior and for example this intersection over union effectively forces you to make a very hard | 2,223 | 2,249 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2223s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | segmentation and that gives you overly high confidence values for this so it might not be a very good thing because these things you miss classify with very high confidence okay so the approach which were sort of used as trying to basically approximate the probability distribution which we're really interested in by this model where you have these different meta parameters | 2,249 | 2,277 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2249s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | in there and usually you just pick one meta parameter and then run with it so what we really wanted to do is sort of marginalize out over these meta parameters and try to find a more robust way of doing this and so we use different network architectures for example a DeepMedic an FCN and a U-Net approach but also we tried out different architectures with different training | 2,277 | 2,305 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2277s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | loss functions different sampling strategies there are many knobs you can turn and at the MICCAI conference which I guess is sort of the CVPR for those who work in medical imaging they run challenges and this type of approach really was quite successful in this it won the first prize out of 50 competitors and really it's quite simple | 2,305 | 2,335 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2305s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | because you didn't really need to spend huge amount of time engineering the particular approach okay so I just want to sort of summarize in computer deep learning you've seen a number of really nice papers and literature showing really great progress in this but there's also quite a lot of hype and quite a lot of discussion about whether we actually asking the right or whether | 2,335 | 2,362 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2335s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | we're posing the right problems to machine learning in medical imaging and really to make this truly intelligent we have to move beyond images so we have to even a radiologist somebody told me actually if you are if you want to study medicine and you really hate patients the one thing you should do is you should become a radiologist correct because you normally | 2,362 | 2,383 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2362s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | never have to interact with a patient but even the radiologist will look at non-imaging information and so really there is a lot of data available validation is really challenging like for the previous speaker I think really unless you work together in teams with clinicians and engineers you can't really solve these problems or you might actually end up solving the | 2,383 | 2,410 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2383s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | wrong problem and one thing which I think is exciting is something which we're trying to do in the future currently at the moment you have these three separate blocks you acquire your data you reconstruct your data and then somebody tells you what they want to measure and then you do the analysis and you pop out some | 2,410 | 2,433 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2410s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | results but really if all of this I can formulate with deep learning then one of the things I am actually very excited about is that I can do end-to-end optimization so if I know what clinical measurements I want to make I can optimize the acquisition the reconstruction and the analysis for exactly that purpose and I think that's a very powerful paradigm especially | 2,433 | 2,460 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2433s | Daniel Rueckert: "Deep learning in medical imaging" | |
_p8vFSUesNs | because these scanners are effectively programmable there are piece of programmable hardware and you can optimize what they do and of course you can couple it with what Big Data and an multimodal data so I just want to finish by acknowledging all the people who helped with a work which I've showed you here and also the people who funded this research thank you very much | 2,460 | 2,489 | https://www.youtube.com/watch?v=_p8vFSUesNs&t=2460s | Daniel Rueckert: "Deep learning in medical imaging" | |
PuStNtldiJY | [Music] reading the war between human and artificial intelligence our deep learning systems are beginning to surpass humans sorry [Music] Jürgen Schmidhuber the Swiss AI Lab IDSIA on the 9th of November 1989 I saw the Berlin Wall fall on TV if you ask me when did you ever have tears in your eyes this is the first event that comes to my mind when I was a boy I | 0 | 60 | https://www.youtube.com/watch?v=PuStNtldiJY&t=0s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | wanted to maximize my impact on the world and I was smart enough to realize that I'm not very smart and so it became clear to me that I have to build a machine and artificial intelligence that learns to become much smarter than I could ever hope to be such that it can learn to solve all the problems that I cannot solve myself such that I can retire and my first publication on that | 60 | 97 | https://www.youtube.com/watch?v=PuStNtldiJY&t=60s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | dates back 30 years today 1987 my thesis was about solving the grand problem of AI not just building something that learns a little bit here and a little bit over there but also learns to improve the learning algorithm itself and it learns the way it learns the way it learns recursively and I'm still working on the same thing and I'm still saying the same thing and the only | 97 | 126 | https://www.youtube.com/watch?v=PuStNtldiJY&t=97s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | difference is that more people are listening because on the way to that goal my team has developed learning methods which are now on 3,000 million smartphones what you see behind me are the logos are the five most valuable companies of the Western world Apple Google Microsoft Amazon Facebook and all of them claim that AI is central to what they are doing and all of them are using heavily | 126 | 169 | https://www.youtube.com/watch?v=PuStNtldiJY&t=126s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | the deep learning methods as they are called now that we have developed in our little labs in Munich and in Switzerland since the early 90s in particular something called the long short-term memory has anybody in this room ever heard of the long short-term memory the LSTM has anybody in this room never heard of the LSTM and okay I see we have a third group in this room | 169 | 212 | https://www.youtube.com/watch?v=PuStNtldiJY&t=169s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | who didn't understand the question the LSTM is an artificial neural network which has recurrent connections and it's a little bit inspired by the human brain in your brain you've got about 100 billion little processors and they are called neurons and each of them is connected to maybe 10,000 other neurons on average and some of these neurons are input neurons where video is coming in | 212 | 247 | https://www.youtube.com/watch?v=PuStNtldiJY&t=212s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | through the cameras and audio is coming in through the microphones and tactile information is going in through the pain sensors and some of the neurons are output neurons and they move the finger muscles and speech muscles and in between are these hidden neurons where thinking is taking place and they are all connected and each connection has a strength which says how much does this | 247 | 274 | https://www.youtube.com/watch?v=PuStNtldiJY&t=247s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | neuron over here influence this neuron over here at the next time step and in the beginning all these connections are random and the network knows nothing but then over time it learns to improve itself and it learns to solve all kinds of interesting problems such as driving a car just from examples from training examples and you may not know the LSTM but all of | 274 | 303 | https://www.youtube.com/watch?v=PuStNtldiJY&t=274s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | you have it in your pockets on your smartphone because whenever you take out your smartphone and you do the speech recognition and you say OK Google show me the fastest way to the station then it's recognizing your speech and what's happening is there's an LSTM in there which gets about 100 inputs per second from the microphone and they are streaming in memories of past inputs are circling | 303 | 329 | https://www.youtube.com/watch?v=PuStNtldiJY&t=303s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | around these recurrent connections and from many training examples it has learned to adjust these internal connections such that it can recognize what you're saying that's now on 2 billion Android phones it's much better than what Google had before 2015 here is the basic LSTM cell I don't have time to explain it but here are also the names of the brilliant students in my lab who | 329 | 358 | https://www.youtube.com/watch?v=PuStNtldiJY&t=329s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | made that possible how are the big companies using it well speech recognition is just one of many examples if you're on Facebook is anybody on Facebook ok are you sometimes using the translate function where you can translate text from other people yes again whenever you do that you are waking up a long short-term memory an LSTM which has learned from scratch to | 358 | 384 | https://www.youtube.com/watch?v=PuStNtldiJY&t=358s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | translate sentences into equivalent sentences in different languages and Facebook is using that a system which has LSTM at its core for about 4 billion translations per day that's about 50,000 per second and another 50,000 the next second and another 50,000 if you have an Amazon Alexa it's talking back to you it sounds like a female voice it's not a recording it's an LSTM | 384 | 416 | https://www.youtube.com/watch?v=PuStNtldiJY&t=384s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | which has learned to sound like a female voice to see how much LSTM is permeating the modern world just look at what all these Google data centers are doing now 30% 29% as of 2016 of the awesome computational power for inference in all these Google data centers was used for LSTM the big Asian companies such as Samsung are also using it and just a couple of months | 416 | 450 | https://www.youtube.com/watch?v=PuStNtldiJY&t=416s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | ago Samsung became the most profitable company in the world for the first time what can be learned from that if you want your company to be among the most profitable ones better use LSTM now we started this type of research a long time ago in the early 90s and by the way you are a large audience by my standards but back then few people were interested in artificial | 450 | 480 | https://www.youtube.com/watch?v=PuStNtldiJY&t=450s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | intelligence and I remember I gave a talk and there was just one single person in the audience a young lady I said young lady it's very embarrassing but apparently today I'm going to give this talk just to you and she said ok but please hurry I am the next speaker [Applause] since then we have greatly profited from the fact that every five years computers are getting ten times cheaper | 480 | 519 | https://www.youtube.com/watch?v=PuStNtldiJY&t=480s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | that's an old trend much older than Moore's law and goes back at least to 1941 when Konrad Zuse built the first working program controlled computer and 30 years later for the same price we could do 1 million times as many operations per second because it could do only one operation per second roughly and now it's 75 years later we can do roughly a | 519 | 546 | https://www.youtube.com/watch?v=PuStNtldiJY&t=519s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | million billion instructions per second for the same price and it's not clear that this trend is going to break soon because the physical limits are much further out there if this trend doesn't break then within the near future we are going for the first time we are going to have little computational devices that can compute as much as a human brain we don't have that yet but soon | 546 | 570 | https://www.youtube.com/watch?v=PuStNtldiJY&t=546s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | it will be possible if that trend doesn't break them it will take only 50 more years such that for the same price you can compute as much as all 10 billion brains on the planet and there will not be only one little device like that but many many many everything is going to change by 2011 computers were fast enough to allow us for the first time to have superhuman performance at | 570 | 600 | https://www.youtube.com/watch?v=PuStNtldiJY&t=570s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | least in limited domains through these deep learning networks back then that was 2011 so computers were about 20 times more expensive than today today we can do 20 times as much for the same price and that was already good enough to do superhuman traffic sign recognition which is important for self-driving cars and five years ago when computers were about ten | 600 | 626 | https://www.youtube.com/watch?v=PuStNtldiJY&t=600s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | times more expensive than today they were already fast enough to make us win these medical imaging competitions what you see behind me is a slice through the female breast tissue and our network which started as a stupid Network had no idea of anything just learned to recognize cancer by imitating a human doctor a histology and out competing all the other competitors | 626 | 656 | https://www.youtube.com/watch?v=PuStNtldiJY&t=626s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | back then soon all of healthcare soon all of medical diagnosis is going to be superhuman it is going to be so good that it's going to be mandatory at some point we can also use LSTM and things like that to control robots but we don't only have systems that slavishly imitate human teachers no we also have systems that invent their own goals we call that artificial curiosity | 656 | 684 | https://www.youtube.com/watch?v=PuStNtldiJY&t=656s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | artificial creativity systems that like little babies learn to invent their own experiments to figure out how the world functions and what you can do in it and systems that set their own goals are required to become smart because if they don't have the freedom to do that they are not going to become more and more general problem solvers solving one new self-invented problem after another on | 684 | 714 | https://www.youtube.com/watch?v=PuStNtldiJY&t=684s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | the other hand it's hard to predict what they are going to do but you can steer them in the not-so-distant future I guess we will for the first time have AI on the level of small animals we don't have that yet but it's not going to take so many years once we have that it may need it may require just a few additional decades to reach human level intelligence why because technological | 714 | 742 | https://www.youtube.com/watch?v=PuStNtldiJY&t=714s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | evolution is maybe a million times faster than biological evolution because the dead ends are weeded out much faster and it took 3.5 billion years to go from zero from nothing to a monkey but just a few tens of millions of years afterwards to go from the monkey to human level intelligence we have a company that is trying to make that a reality it's called NNAISENSE pronounced | 742 | 770 | https://www.youtube.com/watch?v=PuStNtldiJY&t=742s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | like nascence in English but spelled in a different way and this company is trying to build the first general-purpose AI that really deserves the name many people think there is this insurmountable wall between today's special purpose AIs which do for example the speech recognition etc and translation and the universal or general purpose AI or intelligence of | 770 | 808 | https://www.youtube.com/watch?v=PuStNtldiJY&t=770s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | humans but Mr. Gorbachev we are going to tear down this wall and there is no doubt in my mind that within not so many decades for the first time we are going to have superhuman decision-makers in many many domains super-smart AIs which are as I told you not just going to be slaves of humans they are going to do their own thing in many ways and they are going to realize what we have | 808 | 841 | https://www.youtube.com/watch?v=PuStNtldiJY&t=808s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | realized a long time ago which is that most resources are not in our thin film of biosphere now they are out there in space so of course they are going to expand out there in space where most of the resources are and through billions of self-replicating robot factories they are going to colonize the solar system and within a few hundred thousand years they are going to cover the entire | 841 | 869 | https://www.youtube.com/watch?v=PuStNtldiJY&t=841s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | galaxy with senders and receivers such that they can travel the way they are traveling in my lab today which is by radio from sender to receiver now nobody knows anything about the details of how all of that is going to happen but it's the only logical thing because you still need resources in terms of matter and energy so the only way is to move outwards what's happening now is much | 869 | 898 | https://www.youtube.com/watch?v=PuStNtldiJY&t=869s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
PuStNtldiJY | more than another Industrial Revolution this is something that transcends humankind and biology itself a new type of life is going to expand from this little planet in a way where humans cannot follow well that's okay we don't have to believe we are going to stay the crown of creation we don't believe we have to stay the crown of creation but you still can see beauty in being part | 898 | 933 | https://www.youtube.com/watch?v=PuStNtldiJY&t=898s | How AI Is Beginning To Surpass Humans | Jürgen Schmidhuber | |
SoODZ7tEN5Q | in 1948 Claude Bristol wrote a best-selling book the magic of believing this very popular book continues to sell in paperback form today Claude Bristol died in 1951 however we feel that in order to maintain the compelling nature of the book we would like to give you this in a form as close to the author's original style as possible just as you hear the voice of a | 0 | 31 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=0s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | writer as you read a book with the help of an actor William Caine we hope to bring you the voice of Claude Bristol and just as you can stop reading a book most conveniently at the end of a chapter we'll create natural places for you to stop time to think about what you've heard time to take it in listen to the magic of believing by Claude Bristol it could change the course of | 31 | 56 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=31s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | your life chapter 1 how to tap the power of belief is there a force a factor a power a science call it what you will are something which a few people understand and use to overcome their difficulties and achieve outstanding success I firmly believe that there is it is my purpose here to attempt to explain it so that you may use it if you desire I realize I have run across something that is | 56 | 110 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=56s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | workable but I don't consider it as anything mystical except in the sense that it is unknown to the majority of people and is little understood by the average person I'm aware that there are forces powerful forces at work in this country that would dominate us substituting a kind of regimentation for the competitive system which has made America great among nations I believe | 110 | 136 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=110s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | that we must continue to retain the wealth of spirit of our forefathers if we don't we shall find ourselves dominated in everything we do by a mighty few and will become serfs in fact if not in name I hope this work will help develop individual thinking and doing some may call me a crackpot or a screwball I'm well aware of that let me say that I am past the half-century mark | 136 | 168 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=136s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | and have had many years of hard practical business experience as well as a goodly number of years as a newspaperman I started as a police reporter police reporters are trained to get the facts and take nothing for granted apparently I was born with a huge bump of curiosity I've always had an insatiable yearning to seek explanations and answers this yearning has taken me | 168 | 197 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=168s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | to many strange places brought to light many peculiar cases and has caused me to read every book I could get my hands on dealing with religions cults and both physical and mental sciences I have read literally thousands of books on modern psychology metaphysics ancient magic voodooism yogism theosophy Christian Science Unity Truth New Thought and many others dealing with what I call mind | 197 | 225 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=197s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | stuff many of these books were nonsensical others strange and many very profound gradually I discovered that there is a golden thread that runs through all the teachings and makes them work for those who sincerely accept and apply them that thread can be named in a single word belief it is the same element or factor belief which causes people to be cured through mental | 225 | 255 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=225s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | healing enables others to climb the ladder of success and gets phenomenal results for all who accept it why belief is a miracle worker is something that cannot be satisfactorily explained but have no doubt about it there's genuine magic in believing the magic of believing became a phrase around which my thoughts steadily revolved I've tried to put down these thoughts as simply and | 255 | 282 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=255s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | as clearly as I could so that everyone can understand my hope is that anyone who listens will be helped in reaching their goal in life I would like to start by relating a few experiences of my own life with the hope that by hearing them you will gain a better understanding of the entire science early in 1918 I landed in France as a casual soldier unattached to a regular company as a | 282 | 313 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=282s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | result it was several weeks before my service record necessary for my pay caught up with me during that time I was without money to buy gum candy cigarettes and the like every time I saw a man light a cigarette or chew stick a gum the thought came to me that I was without money to spend on myself certainly I was eating in the army clothed me and provided me with a | 313 | 336 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=313s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | place on the ground to sleep but I grew bitter because I had no spending money and no way of getting any one night on route to the forward area on a crowded troop train sleep was out of the question I made up my mind then that when I returned a civilian life I would have a lot of money the whole pattern of my life was altered at that moment I didn't realize it then that at that moment I | 336 | 365 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=336s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | was laying the groundwork for a new direction in my life groundwork that would unleash forces that would bring accomplishment as a matter of fact the idea that I could with my thinking and believing develop a fortune never entered my mind money is not the only desire you may have it doesn't matter to what end the science is used it will be effective in achieving the object of your desires and | 365 | 393 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=365s | "The Magic of Believing" By Claude Bristol | |
SoODZ7tEN5Q | in this connection let me tell another experience some years ago I decided on a trip to the Orient and sailed on a ship called the Empress of Japan something was working for me on that trip I had no claim to anything but ordinary service however I sat at the executive officers table and was frequently his personal guest in his quarters as well as on inspection trips through the ship well | 393 | 421 | https://www.youtube.com/watch?v=SoODZ7tEN5Q&t=393s | "The Magic of Believing" By Claude Bristol |
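The Rueckert transcript above describes a k-space data consistency layer for CNN-based MRI reconstruction: where k-space was measured, the CNN's Fourier estimate is averaged with the measurement (weighted by a noise parameter lambda, with lambda going to infinity meaning "keep the measurement exactly"); where it was not measured, the CNN estimate is kept. A minimal NumPy sketch of that idea, assuming a 2D single-coil setup and hypothetical names (`data_consistency`, `lam`), might look like this:

```python
import numpy as np

def data_consistency(x_cnn, k0, mask, lam=None):
    """Enforce consistency with measured k-space, as described in the talk.

    x_cnn : complex image estimated by the denoising CNN
    k0    : zero-filled measured k-space (non-acquired entries are 0)
    mask  : boolean array, True where k-space was actually measured
    lam   : noise weighting; None stands for the noise-free case (lambda -> infinity)
    """
    s_cnn = np.fft.fft2(x_cnn)                 # Fourier transform of the CNN estimate
    if lam is None:                            # noise-free: keep measured samples exactly
        s_out = np.where(mask, k0, s_cnn)
    else:                                      # noisy: average estimate and measurement
        s_out = np.where(mask, (s_cnn + lam * k0) / (1.0 + lam), s_cnn)
    return np.fft.ifft2(s_out)                 # back to image space

# toy usage with a hypothetical 4x-undersampled 128x128 complex image
rng = np.random.default_rng(0)
x_true = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
mask = rng.random((128, 128)) < 0.25
k0 = np.where(mask, np.fft.fft2(x_true), 0)
x_dc = data_consistency(0.9 * x_true, k0, mask, lam=10.0)
```

Because the Fourier transform is linear, this operation has a simple closed-form Jacobian, which is why (as the transcript notes) gradients can be propagated through it when the cascaded network is trained end to end.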