video_id (string) | text (string) | start_second (int64) | end_second (int64) | url (string) | title (string) | thumbnail (string) |
---|---|---|---|---|---|---|
pfFyZY1RPZU | engineering separate pieces of code in each of these individual application silos, which we have been doing for decades now. Just a few more fun examples: it turns out you can plug other sensors into the brain and the brain kind of figures out how to deal with it. Shown on the upper left is seeing with your tongue, and this is actually undergoing FDA trials now to help | 668 | 690 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=668s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | blind people see a system called brain port so the way it works is you strap a camera as your forehead takes a low-resolution grayscale image of what's in front of you run a wire to a rectangular array of electrodes that you place on top of the tongue so the each pixel maps to a point on your tongue and maybe a high voltage is a bright pixel and a low voltage is a dark pixel and | 690 | 712 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=690s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | even as adults, you and I today would be able to learn to see with our tongues in, like, ten to twenty minutes. Human echolocation: you snap your fingers or click your tongue, and there are actually schools today training blind children to learn to interpret the pattern of sounds bouncing off the environment as human sonar. A haptic | 712 | 738 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=712s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | belt is a ring of buzzers around your waist, programmed so that the one facing north buzzes, and you just magically know where north is, similar to how birds sense direction. You can plug a third eye into a frog and the frog learns how to deal with it. It doesn't work in every single instance, there are cases where this doesn't work, but I think to a | 738 | 758 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=738s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | surprisingly large extent it's almost as if you can plug in, you know, not quite any sensor, but a large range of sensors onto almost any part of the brain, and the brain kind of figures out how to deal with it. So wouldn't it be cool if you could get a learning algorithm to do the same? So let's take a break: I think you now know enough to look at questions one | 758 | 784 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=758s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | through three in the handout do you guys want to take a few minutes so just write down do write down what you think is the right answer and when you've done so you know discuss what you wrote down with your neighbors and and see if you agree or disagree for question one I had d4 question two I had auditory cortex learns to see and the question three I don't know different people have | 784 | 809 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=784s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | different ideas I guess I tend to use the wording that much of human intelligence can be explained by a single learning algorithm but there are lots of other Worthing's that lots of other ways of distracting it alright so given this you know what are the implications for machine learning right so here if we think that without visual system computes an incredibly | 809 | 835 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=809s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | complicated function of the input right it looks all those numbers and those pixel values and tells you that that's the motorcycle exhaust pipe and so two approaches that we could try to build such a system as you could try to directly implement this complicated function which is what I think of as a hand engineering approach or maybe you can try to learn this function instead | 835 | 858 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=835s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | right and in kind of a side comment maybe only for the aficionados the machine learning is that if you look at a train learning algorithm you know a learning algorithm after has trained with all the parameter values there's a very complex thing but the learning algorithm itself is relatively simple most learning algorithms can be described in like half a page of | 858 | 877 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=858s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | pseudocode so the complexity of the things we're training usually comes from the complexity of the data rather than the complexity of the algorithm and then that's a good thing because we know how to get complex data you just year which is an images or around us but coming up with complex algorithms is hot right so here's a here's a problem that I guess I post a | 877 | 898 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=877s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | few years ago which is you know can we learn a better feature representation for vision or audio or what have you so concretely can you come up with an algorithm they just examine examine it's a bunch of images like these and automatically comes up with a better way to represent images than the raw pixels and if you can do that maybe you can apply the same algorithm to audio and | 898 | 922 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=898s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | have the same algorithm trained along a bunch of audio clips and have it find a better way to represent audio than the raw data okay so let's let's write down the mathematical formalism of this problem right which is given a 14 by 14 image X image patch X one way to represent the image patch is with a list of 196 row numbers corresponding to the pixel intensity values the probably one | 922 | 947 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=922s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | opposes can we come up with a better feature vector to represent those pixels okay and if you can do so then this is what you can do here's a problem called a self-taught learning I guess which is well so in in traditional machine learning right if you want to learn to distinguish your motorcycles from non motorcycles you have a training set with some and this is a pain because there's | 947 | 976 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=947s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | a lot of work to come up with a lot of pictures and motorcycles it was like tens of thousands of them so in the unsupervised feature learning on the self or learning problem what you do is instead we're going to give you a large source of unlabeled images then give you an infinite source of unlabeled images because of the web where we all have an effectively infinite source of images | 976 | 1,000 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=976s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | and the task is can all those random images up there somehow can pictures of trees and sunsets and horses and so on can that help you to do a better job figuring out that this picture down here is a mobile cycle okay and so one way to do that is that we have an algorithm that can look at these on label images and learn a much better representation of images than | 1,000 | 1,030 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1000s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | just the raw pixels and if that superior representation allows us to then look at a small label training set and this my superior representation allows us to use the small label training set to do a much better job figuring out what this tested images okay so I guess in machine learning there are sort of three standard three and a few common formalisms right there's the supervised | 1,030 | 1,055 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1030s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | learning setting, which is the oldest, most standard one that most of you know best. So let's say the goal is to distinguish between cars and motorcycles. In the standard, decades-old supervised learning setting, you need to collect a large training set of labeled cars and labeled motorcycles. About 10 or 15 years ago, researchers like Andrew | 1,055 | 1,079 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1055s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | McCallum and Tom Mitchell, and maybe even others before them, started to talk about semi-supervised learning, the idea of using unlabeled data, and that was exciting. But in semi-supervised learning as it is typically conceived, the ability to use unlabeled data is great, but the unlabeled data is still all images of cars and motorcycles, and it | 1,079 | 1,103 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1079s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | turns out that this sort of semi-supervised learning model is not widely used, because rarely do you have a dataset where all the images are either cars or motorcycles and nothing else, and the only thing that's missing is the label. So this is kind of useful but isn't widely used, whereas in what I call self-taught learning | 1,103 | 1,127 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1103s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | the goal is to take, you know, totally random images that may be cars, may be motorcycles, may be totally other random things, and somehow use these to learn to distinguish cars and motorcycles. One way I like to think about it is that, you know, the first time that a child sees a new object, or someone invents a new vehicle, right, the first | 1,127 | 1,151 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1127s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | time that you and I saw a Segway we learn to recognize the Segway very quickly just from seeing it once and I think the reason that we learn to recognize a Segway very quickly is because you're in my visual system prior to that had had several decades of experience looking at random natural images just seeing the world and was by looking at these random unlabeled images | 1,151 | 1,175 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1151s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | that allow us that allowed you in my visual system to learn enough about the structure of the world to come up with better features if you will so that the first time you saw a Segway you very quickly learn to recognize what a Segway is right so just to make sure you've got this concept could you please a look at question four and just do that map this to a new example and so someone called | 1,175 | 1,198 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1175s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | the answer what first first part a PCOD second part and third part all right also all right that was easy cool so how do you actually do this in order to come up with an algorithm to learn features let's turn one last time to biological motivation turns out that when your brain gets an image the first thing it does is look for edges in the image right so first stage of visual | 1,198 | 1,224 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1198s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | processing on the brain is called visual cortical area view what I think y'all might have mentioned is yesterday and the first thing it does is look for edges or lines I'm going to use the term lines and edges interchangeably so in your brain right now there's probably a neuron that is looking for a 45 degree line 45-degree edge like this shown on the left with the dark region next to a | 1,224 | 1,243 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1224s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | bright region and there's probably a different neuron in your brain right now there's looking for a vertical line like this one right here okay so um how can we get our software to maybe mimic the brain and also find edges like this what we don't want to do is code this up by hand because you know what I don't want to do is tell us the neuroscientist and then work really hard to right hand | 1,243 | 1,268 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1243s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | engineer software to replicate it. I think what's much more interesting is if we can have an algorithm learn these things by itself, and there is such an algorithm, a very old result, like a sixteen-year-old result now, due to Olshausen and Field, called sparse coding. You talked about this a bit yesterday, did you? Right, cool, so I'll go through this very quickly. Even though sparse coding | 1,268 | 1,290 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1268s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | was originally conceived as a theoretical neuroscience model, so you know Bruno Olshausen will tell you, right, he never envisioned that this would be used as a machine learning algorithm; this was a theoretical neuroscience result used to try to explain computations in the brain or something like that. And this is how the algorithm works: it's an | 1,290 | 1,312 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1290s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | unsupervised learning algorithm, so the way it works is you feed it a set of m images x1, x2, up to xm, where each input example is, let's say, an n by n matrix, concretely a 14 by 14 image patch. What sparse coding does is it learns a dictionary of basis functions phi_1, phi_2, up to phi_k such that each of your training images x can be written as a linear combination of them, subject to the constraint that the a_j's are mostly zero, | 1,312 | 1,341 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1312s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | sparse. The way this is implemented is with an L1 constraint, where we minimize the sum of absolute values of the coefficients a_j, the sparsity penalty term. And if you do this then, by the way, I think this is the only equation I have for this first hour, so I hope you enjoyed it. Same thing in pictures: if you train sparse coding on natural | 1,341 | 1,376 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1341s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | images, every single time you run it, it will learn a set of basis functions that look a lot like the edge detectors that we believe visual cortical area V1 is looking for. And then given a test example, a test image x, what it will do is select, let's say, three out of my 64 basis functions, and it will take that test example and explain | 1,376 | 1,401 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1376s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | it or decompose it into a linear combination of in this case just three out of 64 of my basis functions okay so speaking loosely this algorithm has to quote invented edge detection right the algorithm is free to choose absolutely any basis functions at once but it shows but you know if you run it every time it chooses to learn basis functions that look like these edges and | 1,401 | 1,425 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1401s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | what this decomposition says is that this image X is 0.8 times H number 36 plus 0.3 times H number 42 plus 0.5 times H number 63 okay so if you will this is saying this is now decompose the image in terms of what edges appear in this image and this gives a high-level more succinct more compact representation of the image and also probably a more useful one right because | 1,425 | 1,457 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1425s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | it's more useful to know where the edges are in the image that you know where the pixels are moreover this gives us a alternative way to represent the image instead of representing the image patch using a list of 196 pixel values we can instead using this vector of numbers a 1 through a 64 these are the coefficients multiplying into the basis functions just a few more examples so the method | 1,457 | 1,484 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1457s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | in essence, then, is to represent an image in terms of the edges that appear in it. And it turns out that neuroscientists have done quantitative comparisons between sparse coding and visual cortical area V1 and found that it is by no means a perfect explanation of V1, but it matches surprisingly well, not on all | 1,484 | 1,513 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1484s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | but on many dimensions. So that's vision; how about other input modalities? This is a slide I got from Evan Smith, from his PhD thesis work with Michael Lewicki. What Evan did was he applied sparse coding to audio data, and what I've shown here is 20 basis functions learned by sparse coding when trained on natural sounds. So this is a 5 by 4 grid of little audio | 1,513 | 1,543 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1513s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | clips, I guess audio basis functions, so these are 20 basis functions learned by sparse coding. What he did was he then went to the cat auditory system, since the biologists in Boston had been using electrode recordings to figure out what early auditory processing in a cat does, and for each of these 20 things learned by his algorithm he found the closest match in the | 1,543 | 1,569 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1543s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | biological data and the closest matches are shown over the in red ok so the same algorithm that only one hand gives a you know he's an explanation for early visual processing and on the other hand use may be a you know by no means perfect but but the reason why X which is some explanation for review of it auditory processing as well and it turns out you can do a similar study on them | 1,569 | 1,596 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1569s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | early somatosensory processing as well this is work done by Andrew Sacco Stanford where he collected touch data how do you call it touch data right so the way that done Andrew Andrew Sachs did it was um you know so we hold things of our hands all the time when I'm holding this thing but how do you how do you actually collect data for how I'm holding it so the way | 1,596 | 1,616 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1596s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | Andrew Sachs did it was um he took a cloth and he took an object and he sprayed talcum powder all over the object and then when you take a glove and you hope this object and then you let go the pattern of talcum powder you know on your glove tells you where you came into contact with the object and moreover the density of talcum powder actually corresponds a little bit to the | 1,616 | 1,641 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1616s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | to the pressure and and he not sure why he did this but um he didn't actually found so so what type of objects do people hold or I won't we don't know so we're collecting data you want to be representative of what animals do so fortunately it turns out that there were two biologists that has spent about a year of their lives sitting on some Island watching monkeys and carefully | 1,641 | 1,665 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1641s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | documenting every single way that monkeys pick up different things so thank God I'm up computer science is right and so Andrew Sachs you know took that distribution of data and he wearing his glove picked up objects using the same distribution of drawers as was documented in these monkeys on an island oh and that was his data um I think that story was pretty fun but totally | 1,665 | 1,692 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1665s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | unnecessary oh but but Carlyle sorry showing a zero so training training on data like this that turns out that you learn basic functions using sparse coding there are I should say by no means a perfect match to one to what is known to what is believed to happen in somatosensory cortex but this may be a surprisingly good match dimensions right so that's fast coding | 1,692 | 1,723 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1692s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | um and let me could you take could you do a five and six on the handle so it was the answer four five and six wait what's wait once what is must actually where was it four six again okay come on all right be right all right cool let success on the same for every image but is the coefficients they want to vacate the look features for the specific image okay um all right great | 1,723 | 1,763 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1723s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | so that's sparse coding, and it turns out that there are different ways to implement sparse coding; what I just talked about was maybe the original way, Olshausen and Field in 1996, and there are different ways now. I think Yann talked about encoder-decoder architectures; I'll talk a bit more about that later today, but I think this | 1,763 | 1,788 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1763s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | intuition of learning sparse features has been kind of key for is one of the ideas I guess that allows us to learn very useful features even from unlabeled data come back to this later as well there are there other ways to do it if any of you are familiar with ICA actually how many of you have heard of the ICA independent components out oh cool all of you awesome so it turns out | 1,788 | 1,813 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1788s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | that there's a deep mathematical relationship between ICA and sparse coding; it turns out the two algorithms are doing something very similar. For me personally, these days I tend to use the ICA version of sparse coding rather than the version I just talked about, but later today I'll also talk about sparse autoencoders and different ways of learning sparse | 1,813 | 1,835 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1813s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | features; we'll get to that later. But so what we just described is, you know, one layer of one of these sparse feature learning algorithms, maybe sparse coding, maybe a sparse autoencoder, maybe a sparse DBN or sparse RBM, and it turns out what you can do, really building on Geoff Hinton's work, is recursively apply this procedure, where instead of just | 1,835 | 1,859 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1835s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | going from pixels to edges, you can recursively apply this procedure, and just as you can group together pixels to form edges, you can group together edges to form combinations of edges, and group together combinations of edges to form higher level features. So let me show an example; this is an example run by Honglak Lee, who is now a Michigan professor, | 1,859 | 1,887 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1859s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | but what Honglak did was he trained one layer of a sparse DBN, and the first layer, roughly speaking, groups together pixels to form edges, and another level up it learns to combine edges to form models of object parts. This, I should say, was an example trained just on pictures of faces, so the entire dataset was pictures of faces, and then recursively you apply | 1,887 | 1,915 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1887s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | this the next level up and doing some more complete models of faces so let me make sure that this visualization makes sense right um when I have this little square here shown here what this little red tag what does little square means is that I have learned a neuron in the first level that is looking for a vertical edge like that one okay going one level up and and and I've shown all | 1,915 | 1,941 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1915s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | these rectangles the same size but higher up features are actually looking at bigger regions of the image okay it's just a resize all of the same but one level up this is actually looking at the bigger vision of the image but one level up you know this rectangle here means that at that next level one of the neurons has learned to detect eyes that look like that great and then the | 1,941 | 1,963 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1941s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | highest level you know if you look at the upper leftmost square say with that visualization is showing that there's a neuron that has learned to detect faces that you look local bit like that person okay if you train the same algorithm on different object classes you end up with different decompositions of different option classes into different object parts then more complete models of | 1,963 | 1,988 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1963s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | objects if you train the algorithm on a mix of four different classes of objects so there's an algorithm trained on a data set that includes cars faces bikes and airplanes then you know you end up with at the mid level you get features that are shared among the different object classes where I don't know maybe what I guess are new cars and motorbikes both have real tire like shapes or your | 1,988 | 2,014 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=1988s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | features that kind of shared between multiple object parts and then the highest level you get object specific features okay yeah is there any sort of variance in see yes there is there's a point so because of the nature of the visualization I showed them as though images but yeah it it there's some amounts of Indians that's hard to visualize it oh I remember I have a | 2,014 | 2,046 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2014s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | better example later today of a where where we more carefully document the invariant zones have a better example later okay so was this good for when you when when you hear your research isn't deep learning like me talk you you see people like yawn in me and Jeff intern is hello stories IQs but you know so you can learn features so what is it good for well it turns out the Hollywood to | 2,046 | 2,070 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2046s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | benchmark could stand the benchmark in computer vision where the task is to watch a short video clip and decide whether you know any of a small number of activities took place in this video you know whether two people kids to do hugs almost driving so as eating's or was running the visor activities like the theater computer vision has tried out many different combinations that | 2,070 | 2,089 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2070s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | features last year probably soon as standard found that by learning rather than hand engineering the features he was able to significantly outperform the previous AVR all right how about audio it turns out you can apply similar ideas to audio so this is a spectrogram which is a different representation for audio you can take slices of spectrograms and apply sparse coding to that it turns out | 2,089 | 2,115 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2089s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | if you do this then on this is a dictionary of basis functions learning for speech I guess I'm not an excellent speech but in impervia probably a slightly optimistic as a reading of these you know the basis functions learn by sparse coding correspond roughly to phonemes there's a slightly optimistic interpretation I should say and so but if under this slightly optimistic interpretation when | 2,115 | 2,143 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2115s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | say informally that sparse coding has learned to decompose speech data, very loosely, into the phonemes that appear in the speech. Moreover, you can recursively apply this idea, just as we saw earlier, to build higher and higher level features. And I guess a few years ago Honglak did so against the TIMIT benchmark, which is a dataset that many speech | 2,143 | 2,167 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2143s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | researchers work on this is one of those datasets where you know if you do point 1 percent better you write a paper um and a few years ago Holland was able to you know make what correspondent so I think we worked out something like like two thirds of a decade work for progress or something on this data set just by learning in this chart is outdated I made this child I think back when | 2,167 | 2,195 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2167s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | pauillac was publishing this paper since publishing this paper geoff hinton and others have surpassed this also using deep learning techniques right um and then I was referring as was preparing this talk I best prefer practice I ask my students to help me put together a chart of the social results where you know we or others or whatever holders through the odd benchmark result using | 2,195 | 2,219 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2195s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | deep learning and there were surprisingly many of them from us Stanford from other groups on I say yeah I worked on machine learning for a long time I've never in my life seen anyone technology not go over benchmarks like this quickly this is the whole view the deep learning is like knocking over benchmark like nobody's business um there's actually a lot more | 2,219 | 2,244 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2219s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | than fits on one slide I think if I put all the ones I'm aware of it'd be about three slides like this what's left to be done right so and I know that some of you are you know here because uh you want to learn how to apply these things and I know some of you are here because you might be even interested in doing research yourselves and writing research papers yourselves in deep learning and | 2,244 | 2,266 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2244s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | future learning so I want to share of you I'll do this later to talk more about the state later as well I'll share of you what I think of as a as a good way as one as one of many promising directions in which to you know take research for for deep learning I think that's a scaling up so and happy how do we build effective deep learning algorithms right how do you get these | 2,266 | 2,290 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2266s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | animals work well well in fact how do you build effective machine learning algorithms you know so let's not back in history right about 20 years ago oh there were these debates about you know where these different supervised learning algorithms so no feature learning yourself or learning pursue these supervised learning algorithms and they used to be all these debates about | 2,290 | 2,308 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2290s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | you know, is this your algorithm better, is my algorithm better. So Michele Banko and Eric Brill did one of the studies that most influenced my thinking, where they took maybe four of the state-of-the-art learning algorithms of the day, I guess back in 2001 SVMs were not yet popular so they didn't actually study SVMs, but they took a natural language processing task on which they | 2,308 | 2,334 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2308s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | had a effectively unlimited source of label data and they trained for learning algorithms and plotted on the x-axis is a training set size parts on the y-axis is the performance is the accuracy all the algorithms do about the same is the amount of data you have and even a quote superior algorithm often lose to a quote inferior algorithm if only you can give the inferior | 2,334 | 2,364 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2334s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | algorithm or data to train on yeah so I think this results like these that has led to this Maxim in machine learning that you know says that often is not who is the best album that witnesses who has the most data and then I definitely see this over and over and slip the value of you if you look at think about the most commercially successful websites you know the ones making large amounts of | 2,364 | 2,384 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2364s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | money in that you use per every day many of those algorithms are incredibly simple is like logistic regression but the secret is that those albums will fit far more data than anyone else has so how about so this is supervised learning um how about unsupervised learning so Adam coats as you who who helped prepare this handout a few years ago actually and a half ago did this interesting | 2,384 | 2,417 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2384s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | study where he took all of the unsupervised feature learning algorithms of the day, the ones that, you know, guys like us debate about, is my algorithm better or is yours better, and he took a bunch of these algorithms, ran all of them, and varied the model size. For unsupervised feature learning all of us have a large amount of data, right, if you're | 2,417 | 2,441 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2417s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | learning from unlabeled images, from natural images, you have an effectively infinite amount of data, and so the parameter to vary is not the amount of data but the size of the model, how many features you learn. In the earlier example we had 64 coefficients a_1 through a_64 for sparse coding, but let's set that bigger, let's learn a thousand | 2,441 | 2,467 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2441s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | features instead, or 10,000, whatever, let's learn a much larger number of features. And what Adam found was that the algorithm does matter, maybe it matters more than with supervised learning because these algorithms are less mature, but there was a clear result where the bigger the model, the better it does. And in fact, one interesting historical aside: | 2,467 | 2,492 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2467s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | on CIFAR we actually went back and historically traced, you know, we like to publish papers saying my algorithm is better than yours, and we went back and traced the sequence of papers where person A published a result on CIFAR, person B published a paper saying oh, I did better, then person C published a paper saying even better, and then someone | 2,492 | 2,513 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2492s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | says oh I do even better a new idea invert we traced though a couple sequences of look of that of supposedly benchmarks of advances in benchmark that was supposedly do 200 miles better than yours as I do better and we believe that a lot of those results of the supposed progress was actually because the models got bigger right it's not that my album is actually better it just I don't know | 2,513 | 2,538 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2513s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | most law had more time to work and so I trained mine better and so I'm going to write a paper saying my albums better in the stuff I've done the one most reliable way to get better results has been to train a bigger model if I change the algorithm sometimes it makes it better sometimes not but in fact I you know look at the literature I feel like a lot of work by a lot of | 2,538 | 2,560 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2538s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | different research groups has been in some ways on on trying to get these models to just train bigger right so in this world of a supervised learning where all of us have an infinite amount of data you know I feel like we're not limited by what data we have were much more limited by our ability to process the infinite amount of data that all of us have alright so | 2,560 | 2,587 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2560s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | you know many attempts to come more efficient algorithms parallelization I say John's done very cool work on the FPGA and a second plantation oh I think I brought I'm going to take credit for bringing GPUs to the deep learning world and and so on as well work like this um and in fact looking at this chart my personal interpretation which others will disagree with is that those results | 2,587 | 2,611 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2587s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | were achieved to a very large part because of scalability issues right but this is my personal interpretation which others may disagree with could you through the questions 7 & 8 on the handout so question 7 whether you have which which ones did you check off 2 & 3 cool I'll take your word for it and for question 8 oh and I just actually a question I checked off | 2,611 | 2,643 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2611s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | everything except DNA computing them it off my answer anyway yeah I thought of floating quantum in there too but I think someone actually is working on quantum computing yes yeah all right cool so um let's see you know what there's something else I could talk about I think I'll do that towards the end um so you know just to wrap up this piece I think talk about the high level | 2,643 | 2,671 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2643s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | vision of less learning rather than manually designing our features but again kind of for me you know this isn't just about machine learning anymore this is I feel like um can we really learn something about a I especially perceptual ai ai ai and human intelligence is very broad I think you know we're we're starting to get a handle maybe the perceptual part of AI | 2,671 | 2,690 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2671s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | which is maybe like you know 40 to 60 percent of many animal brains right so this big part of the brain and so what I'd like to do is as I state you know thank you for your attention and for your patience or need to do these things I hope that was somewhat fun what I like to do is let's break and later on in the in the next couple sessions where you know die slightly deeper into technical | 2,690 | 2,714 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2690s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
pfFyZY1RPZU | details go for the basics talk about neural networks and builders the algorithms also to point out that you know after you later today or whatever if you want to go through a deeper highly technical tutorial with exercises in everything there's one up there this is the URL is also given at the bottom left of the handout but you can you can check that out later after | 2,714 | 2,735 | https://www.youtube.com/watch?v=pfFyZY1RPZU&t=2714s | Andrew Ng: "Deep Learning, Self-Taught Learning and Unsupervised Feature Learning" | |
XBJ2f68LuO4 | hi everyone, I hope everyone here is OK. So I'm pleased to be here with you today to share with you some of my experience on Kaggle over several years. The title is "Gold is easy", so it's quite different from other talks which were more technical about how to win competitions; here I will keep it rather simple and I will talk about what I was doing before and what I started doing to | 0 | 41 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=0s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | improve myself. So I will start by giving some general advice that everyone should follow, and I will talk about some technical things about how to improve, or to check for improvements, in your models, and then I will talk quickly about some case studies, and you will see that getting a gold medal is, well, most of the time not that difficult | 41 | 74 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=41s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | so starting with the advice. First advice: positive mind; if others can do it, you can also do it. That is how I started: when I started Kaggle I said, if those people can do it, I can do it; I competed a lot and I managed to become a grandmaster. My second advice: understand the problem, try to find new ideas, and never start fine-tuning your model during the first weeks while you are still doing | 74 | 109 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=74s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | some feature engineering steps and you are still looking for good architectures; you will just waste most of your time on hyperparameter tuning. Spend this time trying different approaches and different architectures to come up with different models for your final ensemble. Third advice: don't use kernels when you start a competition, or you will mostly end up using a slight variation of that | 109 | 134 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=109s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | kernel. So don't look at kernels when you start; try doing the data analysis by yourself, try to come up with your own models and your own feature engineering, and you will end up with models that are different from those that you will find on Kaggle. Later on you can borrow ideas from Kaggle kernels to help your model improve even more. If you have the chance to work in teams, | 134 | 162 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=134s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | don't share anything except some important insights so when I work in team with my teammates we don't share the architectures of our models we don't share the features that we create we only share important insights that we find by doing the analysis of the data if something doesn't work don't stick to it you will mainly waste your time find a different approach try different | 162 | 195 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=162s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | modeling approaches to your problem and you will end up with different solutions that may help you. Another piece of advice is that you should always keep it simple. We saw previously in different competitions some awesome solutions that use hundreds of models with stacking, and as beginners, or even at some advanced levels, you may say that I will never be able to do something like | 195 | 225 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=195s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | this; stacking is a very advanced topic. So try to make simple models; simple models will also work and help you to get a gold medal. Don't underestimate the power of neural networks, even on tabular data; on tabular data people mostly try to use only gradient boosting decision trees because they are so powerful, but neural networks can be as good as gradient boosting decision trees if you are able | 225 | 252 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=225s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | to come up with good architectures, and it will also help your final ensemble if you are working with two types of models, gradient boosting decision trees and neural networks. Another point is to always search for ideas; that's how Kaggle is: try to work on different fields, images, NLP, time series, tabular data, classification, regression; try to work on as many different topics as possible. | 252 | 285 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=252s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | Ideas that you may use in images, for example, can be used on tabular data if you know how to model your problem in order to use such ideas. And the last point is thinking outside of the box: don't try to do what other people do, because you will end up with the same solution and you will end up with the same ideas; try to do things that people don't think | 285 | 316 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=285s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | about, try new thoughts, new ideas; most of the time these will be crazy ideas that may not work, but you have to try them, and if one of those ideas works it will get you the gold medal. So, checking for improvement: I will start by talking about feature importance, how to assess the importance of features. One thing that I have noticed among Kagglers is that they are always trying | 316 | 360 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=316s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | to eliminate features, looking for the low-ranked features and trying to eliminate them in order to improve the performance of the model and to make it faster to train. I never do that when I start a competition, let's say I never do that during the first modeling approach, but later on you have to do it. So why don't I eliminate low-ranking features when I start? Because | 360 | 387 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=360s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | you may have correlated features that explain the low ranking of some features; when you do feature engineering, the new features that you end up coming up with may have some good interactions with the low-ranking features and you will see an increase in their importance. Also, the ranking of the features depends on the complexity of the model: some low- | 387 | 413 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=387s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | ranking features may become really important if you increase the complexity, or if you decrease the complexity the importance of some highly important features may decrease. So here is an example using the Titanic dataset: on the left you can see the original raw features, and on the right I just added a new feature that I called "noise", drawn from a normal distribution, | 413 | 443 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=413s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | and as you can see I did a grid search trying to find the best parameters for the two models, and you can see here that the noisy feature is ranked third, which doesn't make sense, since this is just random noise. So if you start looking at the low-ranked features you will try to eliminate some of those features and will completely forget about the noisy feature. This is something I | 443 | 473 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=443s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | have noticed when I was working on my first competition: I made hundreds of features and I tried to use the noise feature trick, which is to just make a noisy feature and look at the ranking, and if anything is ranked below the noisy feature it just means that it is noise and you can eliminate it. But it happens that in practice this doesn't work, and my noisy feature, | 473 | 502 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=473s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | even though I had hundreds of features, was always ranking in the top five features. So don't look at the low-ranking features, but start looking at those that are ranking very, very high: your model is probably overfitting on some noisy features, and you should look at eliminating those features first, because the low-ranked features may become more important, since your model will try to use them instead | 502 | 532 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=502s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
XBJ2f68LuO4 | of using the noisy features. So the main purpose is to detect the overfitting features; you have two choices: you remove them if that helps your cross-validation, and if it hurts your cross-validation then just try to apply some transformation on those features to get better generalization. So the second part of checking for improvements is about feature engineering, and this is | 532 | 568 | https://www.youtube.com/watch?v=XBJ2f68LuO4&t=532s | Gold is easy: Kaggle tips and tricks | by Chahhou Mohamed | Kaggle Days Dubai | Kaggle | |
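The sparse coding step described in the Andrew Ng transcript above (a learned dictionary of basis functions with an L1 sparsity penalty on the coefficients) is the classic Olshausen–Field objective. Below is a reconstruction of that equation using the talk's own notation of m training patches x^(i), k basis functions phi_j, and coefficients a_j^(i); the symbol lambda for the sparsity weight is an assumed name, not one stated in the transcript.

```latex
% Sparse coding: reconstruct each patch as a sparse linear combination of
% learned basis functions; the L1 term pushes most coefficients a_j toward zero.
\min_{\{\phi_j\},\,\{a^{(i)}\}} \;
  \sum_{i=1}^{m} \Big\| x^{(i)} - \sum_{j=1}^{k} a_j^{(i)} \phi_j \Big\|_2^2
  \;+\; \lambda \sum_{i=1}^{m} \sum_{j=1}^{k} \big| a_j^{(i)} \big|
```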
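A minimal sketch of the self-taught learning recipe from the same talk, assuming synthetic stand-in data and scikit-learn's MiniBatchDictionaryLearning as the sparse coder (the transcript does not show the speaker's actual implementation): learn a dictionary from plentiful unlabeled 14x14 patches, then re-encode a small labeled set with the sparse coefficients a_1..a_64 and train an ordinary classifier on those features.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for real data: many unlabeled 14x14 patches (196 pixels) and a small labeled set.
X_unlabeled = rng.standard_normal((10_000, 196))
X_labeled = rng.standard_normal((200, 196))
y_labeled = rng.integers(0, 2, size=200)  # e.g. motorcycle vs. not

# Step 1: unsupervised feature learning -- 64 basis functions with an L1-style sparse encoding.
dico = MiniBatchDictionaryLearning(
    n_components=64, alpha=1.0, batch_size=256,
    transform_algorithm="lasso_lars", transform_alpha=1.0, random_state=0,
).fit(X_unlabeled)

# Step 2: represent each labeled patch by its sparse coefficients instead of raw pixels ...
A_labeled = dico.transform(X_labeled)

# ... and train the supervised model on the small labeled set in the learned representation.
clf = LogisticRegression(max_iter=1000).fit(A_labeled, y_labeled)
print("train accuracy:", clf.score(A_labeled, y_labeled))
```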
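The model-size experiment attributed to Adam Coates in the transcript (hold the algorithm fixed, grow only the number of learned features, measure downstream accuracy) can be sketched as a simple loop. The data here is a random stand-in, so the printed accuracies are meaningless; only the shape of the experiment is illustrated, and the use of scikit-learn is an assumption rather than the original study's setup.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((2_000, 196))   # stand-in for image patches
y = rng.integers(0, 2, size=2_000)      # stand-in labels

# Same algorithm throughout; only the model size (number of learned features) changes.
for n_features in (64, 256, 512):
    dico = MiniBatchDictionaryLearning(n_components=n_features, alpha=1.0,
                                       batch_size=256, random_state=0).fit(X)
    codes = dico.transform(X)
    acc = cross_val_score(LogisticRegression(max_iter=1000), codes, y, cv=3).mean()
    print(f"{n_features:4d} learned features -> CV accuracy {acc:.3f}")
```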
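The "noise feature" check from the Kaggle transcript, as a rough sketch: append a pure-noise column, fit a model, and inspect which real features rank above the noise. The column names and data below are hypothetical stand-ins for the Titanic example, and scikit-learn's gradient boosting stands in for whatever models the speaker actually grid-searched.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 891  # Titanic-sized toy example
df = pd.DataFrame({
    "pclass": rng.integers(1, 4, n),
    "age": rng.normal(30, 12, n),
    "fare": rng.exponential(30, n),
    "sibsp": rng.integers(0, 4, n),
})
y = rng.integers(0, 2, n)             # stand-in target
df["noise"] = rng.normal(size=n)      # the deliberately useless feature

model = GradientBoostingClassifier(random_state=0).fit(df, y)
ranking = pd.Series(model.feature_importances_, index=df.columns).sort_values(ascending=False)
print(ranking)
# If "noise" ranks near the top, the model is likely overfitting; per the talk, inspect the
# features ranked *above* the noise before discarding anything ranked below it.
```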