Lecture 7 Self-Supervised Learning -- UC Berkeley Spring 2020 - CS294-158 Deep Unsupervised Learning
Video: https://www.youtube.com/watch?v=dMUes74-nYY (video_id: dMUes74-nYY)

Transcript excerpt. Each segment below is prefixed with its [start–end] time in seconds; a timestamped link has the form https://www.youtube.com/watch?v=dMUes74-nYY&t=<start>s.

[2082–2332s] ...supervised, but the other two versions are completely self-supervised. The way the reconstruction loss works is that you have a masked region and you have the ground truth for it, so you can apply the reconstruction error only on the pixels that have been masked: you take your decoded image, apply the inverse of the mask to pull out all the other pixels, and mask those pixels out of the reconstruction. There are multiple losses you can use for the reconstruction objective. One is the plain mean squared error you saw on the previous slide about the denoising autoencoder. One problem that is common with mean squared error, which was also mentioned in the GAN lecture, is that the reconstructions often tend to be blurry. Because you don't want blurry reconstructions, you actually want sharp predictions of all the missing pixels, you can think of using a GAN loss: you have a discriminator, and that discriminator behaves like a learned loss function. You can use the discriminator objective and the reconstruction objective together; back in those days, training with just a discriminator and an adversarial loss wasn't particularly easy, so the authors ended up using a combination of the regular reconstruction objective and the adversarial discriminator objective. This is the architecture they adopted: you have your original 128 by 128 image, you use strided 4x4 convolutions to downsample the spatial resolution while increasing the channel dimension, you get a flat hidden vector of 4,000 dimensions, and then you upsample using transposed convolutions back to the original image resolution. The reconstruction error can be an L1 or L2 objective; I think the authors tried both and found L1 to work slightly better as far as reconstruction goes. They also have the discriminator, which takes in real data as well as your predicted missing patches and classifies whether the image is real or fake. You can see that the L2 loss produces a blurry pixel interpolation, basically scraping together all the neighborhood pixels. The way it works is that you have the missing square, you fill up its borders based on the pixels immediately available to the left, top, right, and bottom respectively, and once you have filled those, you fill in the interior of the square based on what you already filled in along the edges. That looks like a reasonable completion, but it's very blurry. If you look at the adversarial loss, it introduces artifacts that are completely new: as long as the discriminator thinks the patch looks like a real object it passes, but it may not be coherent with the actual background image. You've seen this problem in pix2pix, where it's very important for a conditional GAN to be given the context in addition to what you're trying to translate, and that's clearly what's going on here. The joint loss uses both the reconstruction objective and the adversarial objective, and it produces something sharper than just the L2 loss.

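To make the joint objective concrete, here is a minimal sketch (not from the lecture) of how the masked reconstruction term and the adversarial term might be combined. The generator `G`, the discriminator `D`, and the small adversarial weight are assumptions in the spirit of the paper; `D` is assumed to output sigmoid probabilities.

```python
import torch
import torch.nn.functional as F

def context_encoder_loss(G, D, images, mask, adv_weight=0.001):
    # images: (B, 3, H, W) real images; mask: (B, 1, H, W), 1 where pixels
    # are dropped. G encodes the masked image and decodes a completion.
    masked_input = images * (1 - mask)        # hide the region to inpaint
    pred = G(masked_input)                    # decoded completion
    # Reconstruction error only on the masked pixels (L2 shown here;
    # the authors found L1 to work slightly better).
    rec_loss = F.mse_loss(pred * mask, images * mask)
    # Adversarial term: the completion should fool the discriminator.
    d_out = D(pred)
    adv_loss = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    return rec_loss + adv_weight * adv_loss
```
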
[2332–2461s] So here are some results. Say you finish this pretraining process; now you take the encoder out and you want to use it for a bunch of downstream tasks: classification, detection, or semantic segmentation. Classification and detection are done on a Pascal dataset, which is much smaller than ImageNet, so you can think of the advantage of pretraining as: if you don't have many labels, you really need some kind of features to start with to be able to perform the task. Semantic segmentation is also on Pascal VOC, but a different version of the dataset (2012), and it uses another architecture, a fully convolutional net (FCN), with the pretrained part of the context encoder as the backbone. The results here are reasonably good. If you just use ImageNet features, that is, you pretrain a classifier on ImageNet and fine-tune it on Pascal, you get about 78.2% for classification, 56.8% for detection, and 48% for segmentation. The context encoder fine-tuned for Pascal classification is not that good; it gets only 56.5%, which is way lower than supervised pretraining, but it's reasonable in the sense that it performs on par with, and actually quite a bit better than, an autoencoder: an autoencoder gets 53.8% on Pascal classification and around 42% on detection. So the context encoder gains two to three percentage points over a regular autoencoder and the other self-supervision methods available at the time, which made this a reasonably interesting result back then. Next we look at the principle of predicting one view from another, where you do some source separation and try to predict the separated parts from each other.

[2461–2513s] This is a slide from Richard Zhang, who is also the first author of this line of work. We already saw what a denoising autoencoder is: it takes raw data, corrupts it, and tries to reconstruct the original data. Now imagine you can separate the raw data into two different views. The easiest way to understand this is that an image can be separated into a color image and a grayscale image, and you can try to predict one view from the other. Predicting the grayscale from the color doesn't actually need any deep neural net: all you need to do is average the channels and quantize, and you get something reasonably grayscale; in fact, that's exactly how the conversion is done, as a weighted average of your RGB pixels.

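As a quick aside, that conversion really is a one-liner; for example, with the common BT.601 luma weights:

```python
import numpy as np

def rgb_to_gray(img):
    # img: (H, W, 3) float array in [0, 1]; returns (H, W) grayscale.
    # Fixed weighted average of the R, G, B channels (ITU-R BT.601).
    return img @ np.array([0.299, 0.587, 0.114])
```
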
[2513–2733s] But the other direction, predicting the color from the grayscale, means that you have to add new information to what's already there, because you don't have any information about the color. To do that, you have to understand that if there's a tree, the leaves are green and the bark is brown; you have to identify some of the objects and learn features about edges and so on. That's the goal of this line of work, and it's best illustrated visually here. RGB is just one parameterization of an image; there are various color-space parameterizations, and instead of the RGB color space you can use the Lab color space, where the L channel behaves like a grayscale image and the ab channels behave like the color image. You can take the L channel, encode it, and predict the ab channels, and that is exactly the task considered in the learning-to-colorize work. You can see that the light yellow pixels identify the eyes and the body of the fish, but you can also see that because the background blends in with the color of the fish's body, the model is not able to separate them, so it colors the background uniformly. Still, it colors the coral reef around it, and the anemone comes out green, so you can see it has understood some high-level aspects of the image by doing this task. To visualize how the full image looks, you just concatenate the two channels, the input L and the predicted ab, and look at the result; that is the ocean scene. Here the authors first tried the naive idea: take the raw ground truth and apply a mean squared error between the prediction and the ground truth. That produces a very degenerate colorization of the bird, while the ground truth is much more colorful and diverse. What the authors realized is that instead of treating the prediction as a mean-squared-error regression task, you can treat it as a classification task where you quantize the pixels: you quantize the ab channel information into bins, into a bunch of categories, and now, instead of predicting a single value for the ab channels and regressing to the ground truth, you output a distribution over the possible quantized values and use a softmax cross-entropy loss instead of the mean squared error loss. In general this works out really well in deep learning; you also saw how well it worked in PixelRNN, where all the pixels were quantized to discrete categories, a cross-entropy loss was used instead of a Gaussian mean squared error, and that produced sharper images. So here is how it goes: you take an image, take the ab channel target, quantize it, and use the cross-entropy loss to predict the actual ab channel information. That is basically how the colorization work is done.

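A minimal sketch of colorization-as-classification follows; `quantize_ab` is a hypothetical helper that maps each pixel's (a, b) value to one of the discrete color bins (the paper quantizes the in-gamut ab space into 313 bins).

```python
import torch.nn.functional as F

def colorization_loss(model, L, ab, quantize_ab):
    # L: (B, 1, H, W) lightness input; ab: (B, 2, H, W) ground-truth color.
    logits = model(L)            # (B, num_bins, H, W) scores per color bin
    targets = quantize_ab(ab)    # (B, H, W) integer bin indices
    # Softmax cross-entropy over color bins rather than L2 regression,
    # which would average multi-modal colors into desaturated gray.
    return F.cross_entropy(logits, targets)
```
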
[2733–2843s] Another version of this, also by Richard Zhang, is the split-brain autoencoder, a kind of split-view autoencoder: you separate the channels in your source, and you have encoders that each try to predict the other channel from the current channel, so x2-hat is the prediction of the second view from the first, and x1-hat is the prediction of the first view from the second. You then concatenate the two predicted channels and get your input back again, and you want this version to match your original. In some sense this ensures a backward consistency: it's not just about predicting the color from the grayscale; the predicted ab channels and the predicted L channel, put together, should also make sense and look like the actual image. This makes even more sense when the views are of other kinds, like a depth image and a color image. Here is one way to implement it: you separate the input, you have two different encoders, each predicts the other view's missing channels, you concatenate, and then you put a loss on the predicted image. This is how it would work for color and depth information.

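One way the split-brain setup could be sketched, with hypothetical sub-networks and plain regression losses for brevity (the paper actually frames each prediction as classification, like the colorization work above):

```python
import torch.nn.functional as F

def split_brain_loss(net_L_to_ab, net_ab_to_L, x_L, x_ab):
    # x_L: (B, 1, H, W) grayscale view; x_ab: (B, 2, H, W) color view.
    ab_hat = net_L_to_ab(x_L)    # predict color from grayscale
    L_hat = net_ab_to_L(x_ab)    # predict grayscale from color
    # Concatenating (L_hat, ab_hat) should reproduce the full input image,
    # so each half is penalized for its missing-channel prediction.
    return F.mse_loss(ab_hat, x_ab) + F.mse_loss(L_hat, x_L)
```
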
[2843–2879s] These are all interesting ideas, and we're not really going to look into the metrics these methods achieve, because more than the metrics, these papers are famous for the input-output formulation itself, which we looked at in pix2pix, and for the fact that even colorization can work so well, which is very appealing. In terms of the metrics and the numbers, we'll look at them much later, when we get to contrastive learning. Next is the second family of methods we wanted to see, which introduces visual common-sense tasks.

[2879–3046s] Here we're going to look at relative patch prediction, an idea put forth by Carl Doersch, Abhinav Gupta, and Alexei Efros. In some sense this was one of the first papers to do self-supervised learning on images at a larger scale, and it's considered one of the foundational papers for a lot of the ideas that followed. So what is the task the authors considered? Given two patches, identify their relative position: if you have a center patch and a patch immediately to its right, and you give these two patches to a neural network, the network should say that this patch is to the right of the reference patch. This is best understood from the figure: you take an image, you take an approximately three-by-three grid of non-overlapping patches, and given the blue patch, you're trying to predict that the yellow patch is in the top-right corner. You can number the surrounding patches from one to eight, so you have eight categories for a classification task given the reference center patch. This way you can take many different regions of the same image, or lots of different images: you just take an approximately 3x3 grid of non-overlapping patches, select two of them, give them to your neural network, and create these labels for free from your data. So it's a version of supervised learning where you create the task yourself, a jigsaw-like task; not exactly a jigsaw, but you can think of it as learning spatial associations. The network has to understand that if you give it the ear of a cat and the eyes of the cat, the ear is likely lying on top, to the right or to the left, and it also has to understand what left and right even mean here. That means it's learning low-level features as well as high-level associations, in a way that could be useful for downstream tasks. That's pretty much it: you share the CNN encoders between the two patches, use the mean-pooled representation at the end, and train a classifier on top. You can create a lot of data for this training task, because you can sample lots of different patch grids from lots of images, and then see how good the features are.

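A sketch of the two-patch architecture, assuming a generic CNN `backbone` shared between the patches and an illustrative feature size; labels 0-7 index the eight neighbor positions and the whole thing trains with cross-entropy.

```python
import torch
import torch.nn as nn

class RelativePatchNet(nn.Module):
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone                      # shared CNN encoder
        self.classifier = nn.Linear(2 * feat_dim, 8)  # 8 relative positions

    def forward(self, center_patch, neighbor_patch):
        f1 = self.backbone(center_patch)              # (B, feat_dim)
        f2 = self.backbone(neighbor_patch)
        # Concatenate both patch features and classify the neighbor's
        # position relative to the center patch.
        return self.classifier(torch.cat([f1, f2], dim=1))
```
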
[3046–3156s] There are a couple of details in getting this right that were extremely crucial and have been adopted in almost every follow-up paper, namely making sure that you jitter both spatially and color-wise. First, make sure the two patches don't overlap, or else it's far too easy to tell which patch is to the right just by looking at the boundary pixels. That's considered obvious now, but it was highly non-trivial at the time. Second, you jitter the patches to prevent the network from exploiting chromatic aberration. By that I mean: you sample a particular random crop, divide it into a three-by-three grid, and then within each cell of the grid you take another small random crop and drop some color channels; you do this spatial and color jittering at every single patch so that chromatic aberration can't give the answer away and the neural network can't cheat. Both of these details were non-obvious at the time, and they were very crucial in all the follow-up work.

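A sketch of patch sampling with both tricks: a gap between grid cells so patches never touch, plus per-patch spatial jitter (the color-channel dropping is omitted). The sizes are roughly in the spirit of the paper, 96-pixel patches, a gap of about half a patch, up to 7 pixels of jitter, but should be treated as illustrative.

```python
import random

def sample_patch_pair(img, patch=96, gap=48, jitter=7):
    # img: (H, W, C) array, assumed large enough for a 3x3 grid of cells.
    cell = patch + gap
    y0 = random.randint(jitter, img.shape[0] - 3 * cell - jitter)
    x0 = random.randint(jitter, img.shape[1] - 3 * cell - jitter)

    def crop(row, col):
        # Jitter each patch inside its grid cell so boundary continuity
        # and chromatic aberration cannot reveal the relative position.
        dy = random.randint(-jitter, jitter)
        dx = random.randint(-jitter, jitter)
        y, x = y0 + row * cell + dy, x0 + col * cell + dx
        return img[y:y + patch, x:x + patch]

    neighbors = [(r, c) for r in range(3) for c in range(3) if (r, c) != (1, 1)]
    label = random.randrange(8)             # which of the 8 positions
    r, c = neighbors[label]
    return crop(1, 1), crop(r, c), label    # center, neighbor, class label
```
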
[3156–3270s] Another version, very similar to relative position prediction, actually goes all the way. Relative position prediction already looks like a jigsaw task, like the jigsaw puzzles children solve, so why not do exactly that? That's what this paper does: you similarly take a three-by-three grid of patches from a random crop of your image, shuffle them, and then try to predict the correct order of the shuffling. Solving this needs the same kind of spatial reasoning and association, and that's really what works well here: if the neural network can solve this task, it understands what goes to the right and to the left, so it's learning general visual reasoning. How is it implemented? A 3x3 jigsaw puzzle has 9 factorial possible permutations, so instead of asking the neural network to output the exact order, you have it output an index: you store the candidate orderings in a table of permutations, indexed by a simple categorical label, and have the network predict that category. That way you don't need an RNN decoder or anything like that; it just looks like a normal classification task, where every patch is passed through the same shared CNN encoder, the resulting representations are concatenated in some form, and you predict the output category.

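A sketch of that setup: a fixed permutation table built ahead of time (the paper selects a maximally-spaced subset of the 9! orderings; a random subset is used here for brevity), a shared per-tile encoder, and a classifier over permutation indices.

```python
import itertools
import random
import torch
import torch.nn as nn

# Fixed subset of the 9! tile orderings, chosen once and reused as the
# class vocabulary for the whole training run.
PERMUTATIONS = random.sample(list(itertools.permutations(range(9))), 1000)

class JigsawNet(nn.Module):
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone                        # shared per tile
        self.classifier = nn.Linear(9 * feat_dim, len(PERMUTATIONS))

    def forward(self, tiles):
        # tiles: (B, 9, C, h, w), already shuffled by one of PERMUTATIONS;
        # the target is the index of the permutation that was applied.
        feats = [self.backbone(tiles[:, i]) for i in range(9)]
        return self.classifier(torch.cat(feats, dim=1))
```
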
[3270–3440s] A final version of this idea of creating puzzle tasks from raw data is the very simplest one: rotation prediction. This is really so simple that it's amazing it works at all, but there is a concrete argument for why it works. The idea: you take an image and rotate it by a random angle, in this case a multiple of 90 degrees, pass it through a convolutional neural net, and the convnet is asked to predict the angle you rotated the original image by. That's really it. In this case, for the first image the convnet has to predict 90 degrees, for the second 270, for the third 180, and for the fourth it has to predict that there is no rotation, which is 0. So why does it learn anything nice, and why does it have to learn anything at all? If you look at the 180-degree rotation, the only reason you're able to say it's 180 degrees is that there are human faces and you know they're inverted. So to say it's 180 degrees, you have to have identified that there are human faces in the image. Similarly, look at the first image: there's a bird and there's a tree, and you know that if the image were tilted by 90 degrees, the bark of the tree would be horizontal rather than the bird standing in its usual vertical position. So this task is basically trying to identify how the photographer who captured the image was positioned. That's an inductive bias which is physically and geometrically grounded: camera image formation is fundamental to how we all record images, and most images on ImageNet were captured with the object fairly centered, so there's a lot of information about where and at what pose the camera was placed to capture the object. Because you rotated the image, you're in some sense doing inverse graphics on the camera parameters; the only parameter you care about here is the rotation angle, but since it's physically grounded, the network is going to learn something useful. Here is how they implemented it: take an image, rotate it by the various possible angles, construct those rotated versions, pass them through the same convolutional neural network, and have it predict the rotation angle. You do this for all the images in your dataset, and you learn really good features.

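The whole pretext task fits in a few lines; only the classifier `model` (a convnet with a 4-way output head) is assumed.

```python
import torch
import torch.nn.functional as F

def rotation_batch(images):
    # images: (B, C, H, W). Produce all four rotations of every image and
    # label each copy with its rotation index (0, 1, 2, 3 = 0/90/180/270).
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4, device=images.device).repeat_interleave(len(images))
    return rotated, labels

def rotation_loss(model, images):
    x, y = rotation_batch(images)
    return F.cross_entropy(model(x), y)
```
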
[3440–3527s] One interesting point: you might think that the more angles you add to the dataset, the better the features you learn, but that's not particularly true. The authors found that if you use just the four rotation angles 0, 90, 180, and 270, put a linear classifier on top, and evaluate on CIFAR (a small dataset), you get about 89% top-1 accuracy. If you add multiples of 45 degrees, one level more fine-grained, performance drops by approximately 0.5%, and if you use only two angles, which is less fine-grained, performance drops even more, by about two percent. Between using only the vertical pair (90 and 270) and only the horizontal pair (0 and 180) there's roughly another two percent difference. So it's important to use both horizontal and vertical orientations, it's more important to include the vertical ones, and it's not so important to use many angles; maybe the network predicting eight rotations just wasn't trained as well, but predicting four rotations is sufficient, and you don't have to be so fine-grained.

[3527–3771s] Here are their results. At the time, AlexNet was the common backbone used in self-supervision studies, so they trained an AlexNet on all these different tasks; rotation prediction (RotNet) was their paper, and the baselines come from the existing self-supervision papers, which we also covered. If you take conv layers 4 and 5, the fourth and fifth convolutional layers of AlexNet, and put a linear classifier on top, the results from purely supervised features are the topmost row: 59.7% top-1, which is pretty close to what AlexNet itself gets in top-1 accuracy. Just using random features gives you about 27% and 12% respectively. The context paper, Doersch et al., the relative position prediction work we saw, gets 45.6%, which is way better than random but not as good as supervised ImageNet features. The colorization work, Richard Zhang's work that we saw, is about five percent lower than relative position prediction, so you can clearly see that puzzle-like tasks do better than colorization. The jigsaw puzzle task is on par with relative position prediction, around 45%, and BiGAN, a paper we already covered in the GAN lecture, is not as good as the puzzle tasks but is on par with colorization. Finally, the RotNet paper was a substantial improvement over the state of the art at the time: the best self-supervision method had about 45%, and RotNet improved that to about 50%, which is clearly good. Even at the earlier conv layers it shows the same trend: for Doersch et al. the numbers are really low, around 30%, while RotNet is clearly better at around 43%, well above the other methods too. There are also more detailed results across the various convolutional layers, and the RotNet numbers are significantly better than the other self-supervision techniques of the time. It also transfers well to Pascal: on Pascal classification, detection, and segmentation it ended up being the state-of-the-art self-supervision method, significantly better than context prediction. But still, the gap between RotNet and ImageNet labels is really large: on detection the gap is pretty small, 54.4 versus 56.8, but on classification there was a significant gap of about seven percent, and on segmentation a significant gap of about nine points of mIoU (mean intersection over union). So while this was a pretty promising technique, it still wasn't there yet. That's it for the puzzle-based tasks; next we get into context-based prediction techniques.

[3771–3938s] Predicting from neighboring context was the final line of work we wanted to cover. Going back to the earlier slide: you're basically interested in tasking the neural network with predicting missing parts from given parts; given the neighbors, you try to predict the surrounding context. One idea explored way back in 2013 was word2vec, and we'll cover it first because it's very foundational. This is a figure from the CS224n class at Stanford, where the goal is to learn good word embeddings. Word embeddings are fundamental: you have a lot of words in a vocabulary, and you'd like to represent them as vectors such that similar words have similar vectors, either pointing in similar directions or lying close together in the high-dimensional space. You could use a one-hot encoding, but that's hardly informative of any similarity across words. So say you have a bunch of sentences, and you create a count matrix of which words occur together and how many times: a co-occurrence matrix, which is very popular in NLP. It counts how many times each word co-occurs with every other word, and it's usually used to construct similarity matrices. Here's how the count matrix looks: "I" and "like" occur together because there are two sentences containing both, while "I" and "deep" don't occur together. Once this matrix is constructed, you can apply a singular value decomposition to it. This is really how recommender systems have been built: you have a history of which users bought which items, you construct a user-item matrix, you do an SVD on it, you get a user embedding and an item embedding, you cluster similar users and similar items, and you use that to build the recommender. Similarly, you can build a term co-occurrence matrix here in NLP, apply an SVD, and get word embeddings; that's precisely what's being done here, and you get these U and V vectors.

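A tiny count-based embedding sketch in NumPy, using a toy co-occurrence matrix in the spirit of the slide's three-word example:

```python
import numpy as np

def svd_word_embeddings(cooc, dim=2):
    # Factor the word-word co-occurrence matrix and keep the top singular
    # directions; each row of the result is one word's embedding.
    U, S, _ = np.linalg.svd(cooc, full_matrices=False)
    return U[:, :dim] * S[:dim]

# Toy counts for a 3-word vocabulary, e.g. ["I", "like", "deep"]:
cooc = np.array([[0., 2., 0.],
                 [2., 0., 1.],
                 [0., 1., 0.]])
emb = svd_word_embeddings(cooc)   # (3, 2): one embedding per word
```
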
[3938–4048s] So what's the problem with the SVD approach? First, sparsity: there may be closely related words that never co-occur, simply because you never happened to have that sentence, so the matrix you construct is very likely to be sparse. Second, computational cost: SVD is cubic in cost to compute, so it's not going to be easy to scale. There's also the problem of infrequent words: when certain words appear rarely, their embeddings will not be very accurate. And you can get noise from very frequent words; articles like "a" or "the" are present everywhere, so you have to use heuristics like inverse document frequency to make sure they don't corrupt your embeddings. All of this sounds like a very hacky, hand-engineered pipeline: it's not particularly efficient, and it's very hard to scale to larger vocabularies or larger datasets. Then comes the idea of n-gram language models. If you have a bunch of tokens in a sentence, the unigram model says the probability of the sentence is the product of the probabilities of the individual tokens in it. A bigram model instead takes the previous word into account and says the probability of a word is conditioned on its previous word, and suddenly you can start counting pairwise occurrences of words instead of just frequencies of single words; an n-gram model generalizes this.

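Concretely, a maximum-likelihood bigram estimate is just a ratio of counts, P(w | w_prev) = count(w_prev, w) / count(w_prev):

```python
from collections import Counter

def bigram_prob(tokens, w_prev, w):
    # tokens: the corpus as a flat list of words; assumes w_prev occurs.
    pair_counts = Counter(zip(tokens, tokens[1:]))
    prev_counts = Counter(tokens[:-1])
    return pair_counts[(w_prev, w)] / prev_counts[w_prev]
```
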
[4048–4277s] So let's get to the word2vec idea, which generalizes all of this; it was first proposed by Mikolov et al. back in 2013. What's the idea in word2vec? You take a bunch of surrounding words and try to predict the center word: you have a sentence, you pick a particular word and treat it as the center word, you treat all the surrounding words as context, embed each of them, and try to identify the center word from the surrounding words. That is the CBOW (continuous bag-of-words) model. The skip-gram model is exactly the mirror image: it takes the center word and tries to predict all the surrounding words. Concretely, you embed the one-hot encodings of the words using a word-embedding matrix of size vocabulary by embedding dimension, so embedding a one-hot word is just looking up the corresponding row. The CBOW model averages the embeddings of the surrounding words and tries to identify the embedding of the missing word, while the skip-gram model takes the center-word embedding and does an individual softmax over each of the surrounding words. Let's look at the math for the CBOW model. You're trying to maximize the log probability of the center word given the neighboring words, log p(w_c | w_{c-n}, ..., w_{c+n}). The way it's constructed, you average the embeddings of the neighboring words (it's a very simple model), so you end up with a single vector, and you model the center word with a non-parametric softmax over all possible words. That way you just optimize dot products: the dot product between the averaged neighboring-word embedding and the center word's embedding, relative to the dot products with all the other words; that's what the loss amounts to. The parameters of the loss function end up being your word-embedding matrices, and all you need to do is take lots of chunks of text, pick a center word and its neighboring words, embed and average the neighbors, and maximize the dot product with the actual center word relative to all the other words in the vocabulary. If you do this over vast chunks of text and optimize for a while, you end up with relatively good word embeddings. That is the idea behind word2vec's CBOW model.

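A minimal CBOW sketch with the full non-parametric softmax, i.e. before any of the efficiency tricks discussed next; all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBOW(nn.Module):
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.emb_in = nn.Embedding(vocab_size, dim)   # context-word table
        self.emb_out = nn.Embedding(vocab_size, dim)  # center-word table

    def forward(self, context_ids):
        # context_ids: (B, window) ids of the surrounding words.
        ctx = self.emb_in(context_ids).mean(dim=1)    # average the context
        # Dot product against every vocabulary word: (B, vocab) logits,
        # i.e. the non-parametric softmax over possible center words.
        return ctx @ self.emb_out.weight.T

def cbow_loss(model, context_ids, center_ids):
    return F.cross_entropy(model(context_ids), center_ids)
```
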
[4277–4389s] The skip-gram model is the mirror image, where you try to predict each surrounding word from the center word. The key assumption it makes is that, given the center word, the surrounding words are all independent of each other; that is, the probability of a surrounding word given the center word is independent of the other surrounding words. That's similar to the naive Bayes assumption (given the class, the term frequencies are independent of each other); it's the same kind of assumption, made to simplify the computation. So you again have a non-parametric softmax, now over the word embeddings of the surrounding words, and you perform a similar optimization. One main issue with the non-parametric softmax is that you have to normalize over all the word embeddings in your vocabulary, which can be very computationally expensive, especially back then, when GPUs were not the go-to mechanism for deep neural nets; the main aim of word2vec was to be software where you feed in a chunk of text and it runs on a lightweight CPU and spits out word embeddings. So the authors used clever techniques like negative sampling, where you do not normalize over all the words of the vocabulary in the denominator: the partition function no longer has to cover the entire vocabulary, and as long as you pick good negative samples, you can learn embeddings very efficiently. We won't go into the details here, but you can refer to the paper for how hierarchical softmax and negative sampling were used to make word2vec really efficient.

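A sketch of the skip-gram negative-sampling objective. Negatives are drawn uniformly here for simplicity; word2vec actually samples them from a smoothed unigram distribution.

```python
import torch
import torch.nn.functional as F

def sgns_loss(emb_in, emb_out, center_ids, context_ids, num_neg=5):
    # emb_in / emb_out: nn.Embedding tables; ids: (B,) LongTensors.
    v = emb_in(center_ids)                        # (B, d) center vectors
    u_pos = emb_out(context_ids)                  # (B, d) true contexts
    neg_ids = torch.randint(0, emb_out.num_embeddings,
                            (center_ids.size(0), num_neg),
                            device=center_ids.device)
    u_neg = emb_out(neg_ids)                      # (B, k, d) random words
    # Pull the true (center, context) dot product up and push the dot
    # products with random words down; no full-vocabulary normalization.
    pos = F.logsigmoid((v * u_pos).sum(-1))
    neg = F.logsigmoid(-(u_neg @ v.unsqueeze(-1)).squeeze(-1)).sum(-1)
    return -(pos + neg).mean()
```
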
[4389–4505s] In terms of results, the learned word embeddings turned out really well. For instance, if you look at the embeddings of different countries and their capitals, the difference vectors are all nearly parallel: the vector from China to Beijing and the vector from Russia to Moscow are almost parallel. That means the relationships captured between pairs of words translate easily to other pairs; you basically translate the country by the same translation vector, so the space is geometrically very consistent. You've also seen earlier how DCGAN was able to do vector arithmetic on celebrity faces; that was quite inspired by word2vec. Because the space is translation-consistent, you can take the vector for Portugal and the vector for Spain, subtract them, and the difference vector will be similar to the difference between the vectors of their capitals, Lisbon and Madrid.

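The analogy arithmetic itself is just vector addition followed by a cosine nearest-neighbor lookup; `emb` and `vocab` are assumed to be a trained embedding matrix and its word list.

```python
import numpy as np

def nearest_word(emb, vocab, query, exclude=()):
    # emb: (V, d) embedding matrix; query: (d,) vector.
    sims = emb @ query / (np.linalg.norm(emb, axis=1)
                          * np.linalg.norm(query) + 1e-9)
    for i in np.argsort(-sims):          # best cosine similarity first
        if vocab[i] not in exclude:
            return vocab[i]

# e.g. Madrid - Spain + Portugal should land near Lisbon:
# nearest_word(emb, vocab,
#              emb[idx["Madrid"]] - emb[idx["Spain"]] + emb[idx["Portugal"]],
#              exclude={"Madrid", "Spain", "Portugal"})
```
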
[4505–4768s] Here are various clustered word embeddings for different categories of words. The authors looked at how the embeddings cluster: various newspapers cluster together, as do the NHL teams and the NBA teams. If you take the nearest neighbor of Detroit you get the Detroit Pistons, and for Oakland you get the Golden State Warriors; the nearest neighbor of Steve Ballmer is basically Microsoft, and for Larry Page it's Google. For airlines you see that Spain and Spanair are closest to each other, and the same for Greece and Aegean Airlines. So you can see that, because the embeddings have looked at which terms occur next to each other, they've understood relationships between companies and their CEOs, or between airlines and the countries they operate in, and so forth, which is really interesting. (I'm just going to switch to PowerPoint.) Okay, so the next thing is how the closest entities relate to different short phrases under these embeddings. Here is one for various explorers, like Vasco da Gama, where you can see a relationship to the phrase "Italian explorer", and for chess there's a relationship between Garry Kasparov and "chess grandmaster". So even for these short phrases, the closest entities retrieved are exactly the relevant ones. You can also ask what the closest entity is if you add two different embeddings. For instance, if you add the embedding for Vietnam and the embedding for "capital", the nearest-neighbor embedding you end up with is that of Hanoi, which is exactly right. Similarly, if you add the embeddings for "German" and "airlines" and search for the word with the nearest embedding, you get "airline Lufthansa", which is really cool. Russian plus river gets you the Volga River, you get various French actresses for the corresponding phrase sums, and you also get the right currency for "Czech". So basically it's understanding relationships at the phrase level, not just the word level, including relationships between multiple phrases. Another interesting example compares the nearest tokens under the different phrase-embedding models. The closest tokens from the skip-gram phrase model are way better than those of the other models (the baselines are earlier word-embedding models, including a noise-contrastive model trained on words). For the topmost row, Redmond, the most relevant neighbors come from the skip-phrase model: Redmond Wash., Redmond Washington, Microsoft. For graffiti you get spray paint, graffiti taggers, and so on, whereas for the other models the nearest neighbors of graffiti are things like "anaesthetics", "monkeys", and "Jews", which don't really make sense at all. So the skip-gram phrase model is clearly capturing the actual semantics.

[4768–4900s] Next we look at the paper called Representation Learning with Contrastive Predictive Coding. In some sense, the best way to understand CPC is to ask: if someone were to do word2vec on all modalities, not just text, how would they go about it? Remember that word2vec is a very interesting model, but it's also very primitive: in the CBOW model you're just averaging the embeddings of the neighboring words and then trying to predict the center word, and averaging only makes sense if you really care about the simplest possible linear model. In principle you can aggregate context using neural networks; we have really powerful architectures for context aggregation, like convnets, transformers, or LSTMs. So why not put it all together: use the contrastive loss that word2vec uses, predicting neighbors with a non-parametric softmax, but replace the embeddings of individual words and surrounding words with powerful, expressive neural networks. That forms the basis of Contrastive Predictive Coding, by Aaron van den Oord and colleagues. Here's the idea. Say you have a raw audio signal, and you're trying to predict the future audio from the past, or rather to model the relationship between the future audio and the past audio. Call the past the context c, and call the future x. Instead of predicting the actual audio, like a WaveNet would, you want to operate purely in latent space: you encode the context with an encoder, and you also encode a chunk of future audio, which you can call the target, with the same encoder. The goal is then to maximize the mutual information between the context and the target; don't worry for now about exactly what mutual information is or why it comes up...

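The excerpt cuts off here, but the contrastive objective CPC arrives at (InfoNCE, in the paper's terminology) is essentially word2vec's non-parametric softmax, with the other elements of the batch acting as negatives; a minimal sketch with a learned bilinear score:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(context, targets, W):
    # context: (B, d_c) summaries of the past; targets: (B, d_z) encodings
    # of the future chunks; W: (d_c, d_z) learned bilinear matrix.
    logits = (context @ W) @ targets.T                 # (B, B) pair scores
    # Each context's true future sits on the diagonal; every other target
    # in the batch serves as a negative in the softmax.
    labels = torch.arange(len(context), device=logits.device)
    return F.cross_entropy(logits, labels)
```
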