video_id (string, 11 chars) | text (string, 361-490 chars) | start_second (int64, 0-11.3k) | end_second (int64, 18-11.3k) | url (string, 48-52 chars) | title (string, 0-100 chars) | thumbnail (string, 0-52 chars) |
---|---|---|---|---|---|---|
hv3UO3G0Ofo | modified by the queries so the query can still pay attention the difference is the keys depend on the input while the positional encoding does not depend on the input so the queries can decide i want to gather information from this and this and this type of information so that would be the key or it can decide i would like very much to look at pixels that are | 1,368 | 1,396 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1368s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | somehow on the bottom right of the pixel that i am now that would be the um positional encodings and that's that's the mistake i made when i said it's equivalent to a convolution it is not because the query can still it's still modulated by that query vector um of how to aggregate information otherwise you would have this to be a standalone multiplied by the input right here but it sort of | 1,396 | 1,424 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1396s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | pays off to think of it like what you do in the convolution so in the convolution you learn how to aggregate information basically based on on position um relative position to the position that you want to output and here you do a similar thing you learn static position embeddings that you then can attend to with your queries all right so these are the position | 1,424 | 1,450 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1424s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | embeddings and they make use of those position embeddings in fact they attend them to the following in this work we enable the output to retrieve relative positions beside the content based on query key affinities formally so the problem up here is that okay you have these position embeddings um and here are the outputs but if you do this in multiple layers right if you do let's let's go | 1,450 | 1,482 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1450s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | with 1d sequences if you do this in multiple layers and here you annotate the position let's just go one two three four um and okay this layer can make use of that right we gather stuff from here but then when this layer when this layer gathers information from here the where the information comes from in the layer below is some is how somehow getting lost right so it cannot kind of pull | 1,482 | 1,515 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1482s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | through this information to here or at least it's very complicated this model extends this positional embeddings in order to pull through that information so as you can see there are two new things right here the biggest important new thing is that right here we don't so here is how we aggregate information okay and here is the information that we aggregate over | 1,515 | 1,545 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1515s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | now you can see previously this was just this value vector and now it is extended to the position to positional embeddings learned positional embeddings okay so the this with this you're able to route the positional embeddings to the output and also here you can see the attention gets fairly complex so you have query key attention which is classic attention the queries can attend to positional | 1,545 | 1,578 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1545s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | codings but also the keys can attend to positional encodings so not only can uh not only can the the node on top say i would like to attend to position three um position three can also say well together with me uh positions two and four are are fairly important i guess that's what that's what that is maybe i'm mistaken here but you can see right here there is an interaction | 1,578 | 1,611 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1578s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | between the keys and the positional encoding right here now these position encodings they are different for the queries keys and values but um ultimately we don't it doesn't make too much of a difference so here is a contrast between what a traditional attention layer would do and what they would do so a traditional attention layer gets the input x and transforms it by means of these linear | 1,611 | 1,644 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1611s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | transformations right here into the queries these are the queries let's call them q into the keys and into the values okay then it does a matrix multiplication with the keys and the queries and puts that through a softmax so this here is going to be our attention matrix this is the attention matrix and the attention matrix is multiplied here by the values and that determines our | 1,644 | 1,678 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1644s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | output okay again the attention matrix defines how we aggregate information and the values is what information do we aggregate you know for the output in contrast when we introduce these positional encodings you can see right here again we have query key and value now it gets a little bit more more more complex right here namely we do this query key multiplication right here but | 1,678 | 1,714 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1678s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | we also multiply the query by these uh positional embeddings for q we also multiply the keys by the positional embeddings for k and all of this together so this is a big plus right here all of this together is routed through the softmax okay and now the diagram is a little bit complicated uh now you can see the softmax aggregates information from here and from this learn position | 1,714 | 1,746 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1714s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | embeddings i would rather have they would just use it like they did in the formula uh do v plus r and say that's going to be the information that we are aggregating and the soft max here the output of the softmax is going to be how we aggregate information this is the attention all right i hope that's sort of clear you introduce these positional embeddings for queries keys and values | 1,746 | 1,778 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1746s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | and that allows the model to have a sense of where the information is coming from basically what positions which if you drop the convolutions so the convolution had this intrinsically because in your convolutional kernel right uh can i i'm i'm dumb if in your convolutional kernel the number right here if there was a seven right here that meant that wherever you are | 1,778 | 1,807 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1778s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | whatever is on the bottom right is seven important okay so that's that was the the convolution had this intrinsically here if you just do attention we as humans we see it in a in this kind of grid form but the machine doesn't the machine simply sees a set of pixels it simply sees you can this is to the attention mechanism this is exactly the same as a long list of pixels or a | 1,807 | 1,837 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1807s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | discontinued set it doesn't matter to the machine so it's like the problems a feed forward network has so we need to annotate it we have to give it positional information and learned positional information seems to work very well right here though you could think of static positional information okay this is the first thing the positional embeddings um that now help the attention mechanism | 1,837 | 1,865 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1837s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | see where the information is coming from that's really important in pictures uh so we add that the second thing they do is this so-called axial attention now axial attention is a sort of a let's say a trick in order to reduce the [Music] load on a the load on an attention mechanism so what does it mean we've already we've already seen in sequences right if i have a sequence | 1,865 | 1,898 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1865s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | a sequence layer that's going to be n squared connections between the two now there are various ways to restrict that so instead of having all of these connections let's say from onenote we've already seen wait if we just restrict it to let's say only this thing right here only this stuff that can be that is lower right that is lower in complexity and this in this case it would be just a | 1,898 | 1,924 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1898s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | neighborhood so that's what we've done that's this this m thing right here however we can also do it in different ways since this is a set anyway we can simply say uh maybe we should just always skip one we could like do attention like this and that would be just fine too right that would also leave away some of the information but you gain in computational efficiency there are | 1,924 | 1,952 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1924s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | various trade-offs now in a picture you have the same options right so you can do the neighborhood thing as we did or you can say where should the green pixel pay attention to axial attention says the green pixel should pay attention to only the row where it is in okay that's it should ignore the rest of the input it should only pay attention to that row where it is in and then in | 1,952 | 1,983 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1952s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | the next layer we'll flip it then the green pixel the same green pixel will pay attention to only the column it is in okay so that's that's called axial attention but don't think like don't don't there is nothing special about this being an axis or whatnot you could also define and it would not be called axial attention but you could define it it makes the same sense to say well that green pixel | 1,983 | 2,015 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=1983s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | just depends on this diagonal right here just in the in this layer it just does this diagonal and then in the next layer it does like the anti-diagonal um you can say uh i just choose five random pixels in this layer and five random pixels in the next layer and that would work as well we've already seen this in this paper called big bird right the big big big bird but big | 2,015 | 2,044 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2015s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | bird so big bird explicitly used random connections in the attention mechanism and their argument was well if we use different random connections in each layer then information can travel pretty fast through the network so what's the problem with these neighborhoods right here what's the problem with neighborhood attention like this the problem is that you break the long range | 2,044 | 2,074 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2044s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | dependencies so let's see what happens if information needs to go from this pixel or to this pixel or this node to this node but if information needs to travel from this node to this node in a classic attention mechanism everything's connected to everything so that node in the next layer can simply aggregate information from here well that's not possible if you do this | 2,074 | 2,099 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2074s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | kind of neighborhood attention as we've done here if i do neighborhood attention then at most right because the neighborhood is three long at most this node right here can aggregate information from this node and then again it's three long in the next step so now this node can aggregate information from this node okay because the in the neighborhood is three long | 2,099 | 2,123 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2099s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | and you can only attend to within your neighborhood this means that if i want to send information to something that's really far away i need to um i need to go many many layers right i need to go layer layer layer layer and this has been well known this has already been a like a problem this has already been a property of convolutional neural networks so convolutions | 2,123 | 2,152 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2123s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | specifically traded off the fully connectedness of fully connected layers to local connections convolutions but that means that you have to go very deep in order to make long range connections you can't just make them in one step the same problem right here now this paper big bird argued that if you have random connections instead of neighborhood connections | 2,152 | 2,176 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2152s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | just the property of random graphs mean that um you you are pretty fast in sending information around so because in a random graph of size n you on average all two nodes are connected by path lengths of log n this is much faster because in this neighborhood thing two nodes are connected in a path length of order of n right you can you can pretty easily see that if i make | 2,176 | 2,208 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2176s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | the sequence longer i need that many more steps in order to send it around in fact it's like something like n divided by m this neighborhood size in a random graph it's log n and in this axial attention that's why i introduced it it's two okay every uh every two nodes are connected by two steps if if node if this node right here needs to send information to this node right here | 2,208 | 2,241 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2208s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | in a classic attention mechanism you could do so in one step because every pixel attends to every other pixel however right now we have to um we have to see so this node attends in this layer sorry i have to think so how do we send information between the two we select this node right here in the first layer this node pays attention to this row okay which includes the red | 2,241 | 2,270 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2241s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | dot so the red dot can send information to the x in this layer in the next layer we select this node right here which is our target node where the information should go to it pays attention to all of this column which includes that x that before right this this x right here where we send information to so it takes two layers two steps to send information from any node to any | 2,270 | 2,301 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2270s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | other node that's pretty good so this um axial attention if you stack them on top of each other you sacrifice a little bit of uh of being able to send information from anywhere to anywhere for the pleasure of not having this quadratic attention anymore as you can see your attention mechanism is now as long or as big as your column or is wide or your row is high again this | 2,301 | 2,333 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2301s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | isn't this isn't specific to rows or columns you could do this as i said with these kind of uh diagonals you could do it with any other sort of sub pattern where you can sort of guarantee that the overlap between the layers is enough so you can send information around pretty efficiently and they use this right here so this axial attention you can see the formula is exactly the | 2,333 | 2,363 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2333s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | same the only change from before is this part right here you can see that the neighborhood that they aggregate over is no longer m by m it is now 1 by m so we've seen them going from if this is the the full input image and you wanna you wanna see where to attend what this paper does is it says a classic sorry a convolutional neural network would be attending to some sub part | 2,363 | 2,399 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2363s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | right this is convolution an attention mechanism pure attention would attend to everything right this is attention then what we are doing sorry that was a mistake what other people were doing were reverting back this attention um to a subpart this kind of neighborhood attention okay but that was still you know you still have m squared you still have o of m squared | 2,399 | 2,432 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2399s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | because of the attention mechanism now what we are doing is we are going even lower we're actually going one by m okay this this is with with axial attention so in general it's one by m and then in the next layer we can go one by m in this direction and have that property um and because it's so cheap now right because it's now o of m to compute this we might as well | 2,432 | 2,464 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2432s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | make m as long as the row itself okay so their last step is going to be to say okay we have one by m right here and that's going to be the row itself now you can see right here that they say axial attention reduces the complexity to hwm this enables global receptive field which is achieved by setting the span m directly to the whole input features optionally one could also | 2,464 | 2,496 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2464s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | use a fixed m value in order to reduce memory footprint on huge feature maps which is something that they're going to do later on imagenet i believe so when they have big inputs or big outputs they actually do use a smaller m what you can see right here is that i wasn't really that wasn't really correct of me to say that it's now o of m because you you still have the entire query space so you | 2,496 | 2,523 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2496s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | multiply query by by keys now even if you make the keys to be 1 by m yes you reduce definitely you reduce this from height times width to times height times width to this but then you can see on this thing right here if you take it and let's say we have this kind of row pattern and we replace m by the width then we have width squared so again the square appears however | 2,523 | 2,560 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2523s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | it's smaller than the original attention the original attention was h squared w squared right because hw is the image and you need that squared in order to do the attention mechanism now we've basically reduced one of the factors it is still an attention mechanism so there's still attention going but we've basically transformed the the image we've reduced it to one column | 2,560 | 2,587 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2560s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | now the one column is still attention so this is still attention like here so this now reduces to the attention that you see in a in a single sequence okay if you see the image as a long stretch of pixels what this does is basically it's up it simply subdivides that into neighborhoods so we're back to neighborhoods basically um but we shift the neighborhoods | 2,587 | 2,619 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2587s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | from layer to layer so in the next layer the neighborhoods are going to be just alternating right the neighborhoods is going to be this is one neighborhood connected to this neighborhood connected to this neighborhood i hope this makes sense so it's going to be it's basically a mix between if you if you were to do this in convolution you could do one layer where it's neighborhood | 2,619 | 2,646 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2619s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | convolution and then one layer where it's like convolution with holes in it i think they're called atrous convolutions or something like this with like giant holes in it that are exact is exactly the anti-pattern of the neighborhood convolution from before that's what this is so you see their axial attention block right here their axial attention block replaces the resnet block so if you know | 2,646 | 2,674 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2646s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | resnet i've done a paper on resnet resnet basically takes the input pipes it through straight and adds to it whatever comes out of this operation okay that's a residual block now usually this thing here would be convolutions and convolutions and they are now replaced by these multi-head axial attention you can see there is a multi-head attention in the height | 2,674 | 2,703 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2674s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | and there is a multi-head attention in the width and that gives us the property that every node can send around information to every other node in two steps i don't like the fact that there is only two because um well this i guess this gives a significant bias to one or the other direction depending on the order that you do them in if if i had done this i maybe would have | 2,703 | 2,729 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2703s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | used three of them because it depends on how you want to aggregate information right like here you train the network specifically to aggregate information first in this direction and then in this direction which might work and it will give you that sending around information anywhere so maybe they've actually tried and it it just performed the same so i i just might | 2,729 | 2,751 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2729s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | have a dumb suggestion right here in any case they simply replace in we've come a long way right we've gone to like neighborhoods and blah blah blah ultimately take a res net replace the convolutions with the height axis attention and the width axis attention and we're good and then we come to results so that's it you have these positional embeddings you have the axial attention | 2,751 | 2,778 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2751s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | and it turns out that on imagenet they perform fairly fairly well so you can see that models like a resnet 50 model will get a 76.9 on imagenet which is not state of the art but it's also not it's not bad right the resnet 50 is pretty good model um you can see the full axial attention right here uh achieves a 78.1 also not state-of-the-art but still pretty good and as they say it's the | 2,778 | 2,812 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2778s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | best fully attentional model on imagenet as our standalone attention model on imagenet so where this model really shines is where you really have to make long-range connections between pixels and that's these kind of segmentation tasks and i want to skip the tables right here yeah their best and everything and go to the appendix where they have some examples of this so here you can | 2,812 | 2,842 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2812s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | see specifically uh this is the original image you have a ground truth and you have the differences between their model this axial deep lab and the panoptic deep lab um that is a baseline for them and you can see that the the failure cases here are are pretty you know show how show how the axial deep lab is better i don't know if they are cherry picked or not but | 2,842 | 2,874 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2842s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | at least you can see that at some point so it handles occlusions better it handles instances better so here you see that the ground truth separates the person from the tie and the axial attention is able to do this but the the baseline is not able to do this correctly because it labels part of that white shirt also as and you can see why there's kind of a delimiter line here | 2,874 | 2,903 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2874s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | here here here but if you have long range dependencies right if you have long range dependencies in the model the model will recognize wait wait that's that must be the same thing as this thing here and this thing here and this thing here so that must be the same object um it's simply that the shirt was occluded by the tie and goes beneath it and now appears | 2,903 | 2,927 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2903s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | again it's not a different it's not part of the tie and it's not part of the um of a different object it's actually part of the shirt so the long range attention you can see at these examples sometimes here okay this might not be an instance of super duper long range dependencies this is simply where the model performs better so you can see here the ground truth has that surfboard | 2,927 | 2,955 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2927s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | segmented and the baseline does not um that this can also just be you know there are a lot of tricks to make this work of course and you throw a lot of compute at it and sometimes you just get better numbers or part of the better numbers because of the additional compute right here what do we have so you can see occlusions it appears to handle occlusions uh in a better way | 2,955 | 2,984 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2955s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | and this might be due to this axial attention it might be due to the positional embeddings but you can see that the ground truth here has the laptop between the person's hands segmented the baseline cannot do that but the axial attention does do that and i don't know what this is honestly this is um uh you can you can see though the axial attention also misses the fact that it | 2,984 | 3,010 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=2984s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | should segment this in the background and if this occlusion handling you can see best in this example where the person in the back reappears on both sides of that person so you can see that the axial attention manages to segment that where that is just a mutant person right here though the ground truth is equally shaky i think there is might be some ambiguity of how you can | 3,010 | 3,039 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3010s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | segment these images obviously but you can see the fact that there are long range dependencies probably helped with this saying that wait in this image there's this white stuff right here and there's this white stuff right here and um connecting these two regions with attention probably helped in segmenting uh these to be the same object even though you can see | 3,039 | 3,065 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3039s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | there is a break in the object so there is a break no at no point is the object on the left uh touching or the segment on the left touching the segment on the right and still the model manages to put those into the same label category there is the last um last thing where they they want to research what their heads learn and usually you can do this right you can kind of | 3,065 | 3,097 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3065s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | visualize what the attention has learned so in this case right here in the column heads the way you have to read this is that this particular head right here um aggregates information from its column so everywhere where it lights up it there's a lot of information being routed you can see specifically in this here uh the heads of the people or the heads of the | 3,097 | 3,121 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3097s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | persons in the picture light up fairly well so for example this head right here is probably aggregating information a lot from this position right here and this head here is aggregating information from this position so you can deduce that that particular attention head probably deals with people's faces uh whereas that particular attention head probably deals you can see the | 3,121 | 3,149 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3121s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | attention is mostly on the grass right here and you can see the same with the for the row heads now their description here is that we notice that column head one corresponds to human heads while column head four correlates with the field only which you know you can interpret it as this this seemed pretty clear but then they say something like row head six focuses on relatively large | 3,149 | 3,177 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3149s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | relatively local regions where column head five pools all over the image so row head six which is this thing right here you can see that okay it maybe focuses on small regions though you can see okay what like here you can get it that's a person but in other places um i don't know where column head five pools over the whole image and this i don't know maybe they just | 3,177 | 3,208 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3177s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | needed something more to say because they put these pictures here they were like oh okay the the column heads are really nice because we couldn't like these this one's really nice because it you know just pays attention to the people and this one looks really nice because it pays attention to the field but we can't really put the column head attention without putting the row head | 3,208 | 3,228 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3208s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | attention but then none of the row heads really are like super distinctive on the particular thing in the image so we need to come up with something that we can say and then he's like ah this one this is there's not a lot of attention so we need to contrast this with something then you would think that they contrast it with another row head but then there's no row head that does | 3,228 | 3,252 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3228s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | this whole image so there's like ah column head five yeah i'm i'm not sure if there's there's a bit of there's a bit of uh tactical writing going on here i suspect i mean it's still you know it's doing something uh cool but yeah there's there's a definitely an element of sales in when you do when you write research papers and just um not to this data but just props to the lines in front of the | 3,252 | 3,282 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3252s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | histograms makes it so much easier to read how big the stupid bars are why does everyone put the lines behind the histogram i probably do that myself and now i'm just i'm realizing how much easier that is all right there is a big big big experimental section right here and there's a big appendix where you can read up all of the different numbers comparisons ablations what not um ultimately i just | 3,282 | 3,310 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3282s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
hv3UO3G0Ofo | wanted to go over the method basically putting this into context with other things like putting this into context with stuff like big bird axial attention other positional encodings uh how it co how it relates to convolutions how it relates to feed forward networks and what convolutions did to feed forward networks and so on i hope you at least a little bit gain an understanding of | 3,310 | 3,336 | https://www.youtube.com/watch?v=hv3UO3G0Ofo&t=3310s | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained) | |
tOLhT3LNjho | [Music] Tocantins the way human whip [Music] [Music] with the powers code we are able to create technology that solves some of the most fundamental issues with humans in the first place we can cure diseases we can solve world problems we can do so much that we never put down without the power of technology feels like home in terms of conferencing I like the format a lot it's very the | 0 | 74 | https://www.youtube.com/watch?v=tOLhT3LNjho&t=0s | WeAreDevelopers Congress Vienna 2019 Aftermovie | |
tOLhT3LNjho | audience is more technical than usual so I definitely appreciate this conference as a sort of contrast to what usually takes place in many venues [Music] I'm a programmer myself you know so basically talking to other developers is the next experience just to see you know how other people are dealing with the same maybe constraints that you are dealing to [Music] | 74 | 122 | https://www.youtube.com/watch?v=tOLhT3LNjho&t=74s | WeAreDevelopers Congress Vienna 2019 Aftermovie | |
tOLhT3LNjho | but it was very amazing I really like the location so it's really really cool to be at Hope work [Music] [Applause] code changes the way people think and people find in code impacts the way that different social challenges can be solved [Music] [Applause] and the code changes the way we consume information technology can help us to create fake news but they can also help | 122 | 174 | https://www.youtube.com/watch?v=tOLhT3LNjho&t=122s | WeAreDevelopers Congress Vienna 2019 Aftermovie | |
b-yhKUINb7o | [Music] in this video we'll be discussing the concept of semi-supervised learning semi-supervised learning kind of takes a middle ground between supervised learning and unsupervised learning as a quick refresher recall from previous videos that supervised learning is the learning that occurs during training of an artificial neural network when the data in our training set is labeled | 0 | 30 | https://www.youtube.com/watch?v=b-yhKUINb7o&t=0s | Semi-supervised Learning explained | |
b-yhKUINb7o | unsupervised learning on the other hand is the learning that occurs when the data in our training set is not labeled so now onto semi-supervised learning semi-supervised learning uses a combination of supervised and unsupervised learning techniques and that's because in a scenario where we'd make use of semi-supervised learning we'd have a combination of both labeled | 30 | 51 | https://www.youtube.com/watch?v=b-yhKUINb7o&t=30s | Semi-supervised Learning explained | |
b-yhKUINb7o | and unlabeled data let's expand on this idea with an example say we have access to a large unlabeled data set that we'd like to train a model on and that manually labeling all of this data ourselves is just not practical well we could go through and manually label some portion of this large data set ourselves and use that portion to train our model and this is fine in fact this is how a | 51 | 73 | https://www.youtube.com/watch?v=b-yhKUINb7o&t=51s | Semi-supervised Learning explained | |
b-yhKUINb7o | lot of data used for neural networks becomes labeled but you know if we have access to large amounts of data and we've only labeled some small portion of the data then what a waste it would be to just leave all the other unlabeled data on the table I mean after all we know the more data we have to train a model on the better and more robust our model will be so what can we do to make | 73 | 93 | https://www.youtube.com/watch?v=b-yhKUINb7o&t=73s | Semi-supervised Learning explained | |
b-yhKUINb7o | use of the remaining unlabeled data in our data set well one thing we can do is implement a technique that falls under the category of semi-supervised learning called pseudo labeling this is how pseudo labeling works so as just mentioned we've already labeled some portion of our data set now we're going to use this label data as the training set for our model we're then going to | 93 | 113 | https://www.youtube.com/watch?v=b-yhKUINb7o&t=93s | Semi-supervised Learning explained | |
b-yhKUINb7o | train our model just as we would with any other labelled data set okay and then just through the regular training process we get our model performing pretty well so everything we've done up to this point has been just regular old supervised learning in practice now here's where the unsupervised learning piece comes into play after we've trained our model on the labeled portion | 113 | 132 | https://www.youtube.com/watch?v=b-yhKUINb7o&t=113s | Semi-supervised Learning explained | |
b-yhKUINb7o | of the data set we then use our model to predict on the remaining unlabeled portion of data we then take these predictions and label each piece of unlabeled data with the individual outputs that were predicted for them this process of labeling the unlabeled data with the output that was predicted by our neural network is the very essence of pseudo labeling now | 132 | 153 | https://www.youtube.com/watch?v=b-yhKUINb7o&t=132s | Semi-supervised Learning explained | |
b-yhKUINb7o | after labeling the unlabeled data through this pseudo labeling process we then train our model on the full data set which is now comprised of both the data that was actually truly labeled along with the data that was pseudo labeled through the use of pseudo labeling were able to train on a vastly larger data set we're also able to train on data that otherwise may have | 153 | 173 | https://www.youtube.com/watch?v=b-yhKUINb7o&t=153s | Semi-supervised Learning explained | |
b-yhKUINb7o | potentially taken many tedious hours of human labor to manually label the data as you can imagine sometimes the cost of acquiring or generating a fully label data set is just too high or the pure act of generating all the labels itself is just not feasible so through this process we can see how this approach makes use of both supervised learning with the labeled data and unsupervised | 173 | 195 | https://www.youtube.com/watch?v=b-yhKUINb7o&t=173s | Semi-supervised Learning explained | |
b-yhKUINb7o | learning with the unlabeled data which together give us the practice of semi-supervised learning so hopefully now you have an understanding of what semi-supervised learning is and how you may apply it and practice through the use of pseudo labeling and I hope you found this video helpful if you did please like the video subscribe suggest and comment and thanks for watching | 195 | 216 | https://www.youtube.com/watch?v=b-yhKUINb7o&t=195s | Semi-supervised Learning explained | |
O1b0cbgpRBw | hi there if you play chess you'll probably recognize the following moves as illegal in the top row pawns move two squares at a time while they are not on their home row in the bottom row you'll see a pawn moving backwards and another one moving sidewards even so in classical chess these moves are illegal but there are variants of chess where these moves aren't illegal where | 0 | 24 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=0s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | they are actually explicitly part of the rules these are alternate chess rules and this paper is about exploring those rules what happens if you implement those rules how does the game play change and what can we learn for general games so the paper here is called assessing game balance with alpha zero exploring alternative rule sets in chess by nenad tomasev ulrich paquet | 24 | 56 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=24s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | demis hassabis and vladimir kramnik uh the former three of deepmind and the latter is was the world chess champion for these eight years depicted so the paper tries to bring together two different worlds first it is the chess world so a lot of this paper is explicitly about the game of chess if you don't play chess or if you occasionally play chess like myself | 56 | 84 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=56s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | this might not be the most interesting paper though it contains some really interesting kind of bits the other world is the reinforcement learning world which you'll see in the alpha zero name right here so the reasoning behind this is the following chess is a really really old game and rules have evolved over time and have sort of consolidated on the rules we have today | 84 | 112 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=84s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | but also strategy has evolved over time and lots and lots of thinking and theory has gone into the strategy of chess and to change the rules around um you can change the rules of chess however you can't really assess how the game would be played by humans uh if the rules were changed because you don't have a thousand years of the entire humanity studying these | 112 | 140 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=112s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | new rule sets and therefore you're kind of stuck with assessing the games from the perspective of someone who has learned the old rules but reinforcement learning to the rescue so consider the following rule changes no castling this is a really simple rule change no castling castling is disallowed throughout the game if you don't know what castling is castling is like a special move | 140 | 168 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=140s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | where there is this rook and the king is right here i don't know how to the king and if there's nothing in between they can sort of swap positions it's called castling uh it's a special move that you can do and it allows you to bring the king to the outside where the king is safe and to bring the rook to the inside where it can potentially cause a lot of damage | 168 | 193 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=168s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | so it's a very very favored move by a lot of players and no castling the rule change probably alters the game a lot because if you think of the chess board kings start about here they can only move one square at a time so to get them to safety will require like four or five um steps for them while you have to move everything else out of the way including the rook that stands here so | 193 | 222 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=193s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | players might elect to just leave their kings where they are but then they can't really open up in the middle as much because that would leave their kings exposed so it is fair to assume that just introducing this one rule might change the games around quite a bit how the game is played but as we said we don't know this is from someone who has learned classic chess and all | 222 | 247 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=222s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | the grandmasters that we have have played and learned classic chess so how do we assess this this paper says that alpha zero can be used to assess these new rules so alpha zero is a reinforcement learning algorithm that can learn these board games very very quickly in within one day or so and it can learn them so well it can beat humans at the game easily in fact modern | 247 | 279 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=247s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | modern grand masters and so on use these algorithms in order to learn and to better their play in order to expand their theory their knowledge of the game to play better against other humans so alpha zero imagine alpha 0 can solve a game to perfection what we could do is we could simply give this rule to alpha 0 together with the all the other chess rules and then let alpha 0 solve the game give | 279 | 308 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=279s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | it a day and 50 billion gpus solve the game to perfection and then look at what alpha zero came up with kind of look at the games how they turn out and um whether or not they are more interesting less interesting longer shorter and so on so that's that's what this paper does so there's the implicit assumption which you need to believe in order to believe anything in this paper | 308 | 336 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=308s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | is that alpha zero actually has this ability there is pretty good evidence that it does because alpha zero can solve classical chess and go and shogi and a bunch of other board games um all with the same hyper parameters it can solve them such that it is easily at superhuman power so but you need to recognize that this is an assumption so what is alpha zero if you don't know what alpha zero is | 336 | 366 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=336s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | alpha zero is a reinforcement learning algorithm but not in the kind of base reinforcement learning sense it is a reinforcement algorithm that has a planner included what do i mean by this so if you are in a let's consider the game tic-tac-toe so alpha zero for tic-tac-toe in tic-tac-toe you have this board and you have a situation where let's say you play your opponent plays this and | 366 | 395 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=366s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | now you're tasked of playing something you wonder should i play maybe here or here or here where should i play so what you can do is you can train a reinforcement learning algorithm you can do q learning what not okay that will maybe work what's better to do is you can plan so in planning what you want to do is you want to build a tree of possibilities so we're going to consider all your | 395 | 422 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=395s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | possibilities and in this case you have eight possibilities so we want to consider all the eight possibilities and i'm going to draw just some of them so up here you're going to consider the possibility that you place here and here you're gonna consider the possibility that you place in a different spot right here okay and you can see how this goes so if you want to plan | 422 | 449 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=422s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | and here you have your opponent has seven possibilities and here your opponent also has seven possibilities and so on so you get this entire tree of play but if you could do that and if you could do that to the end then you could easily simply choose the path here where you win okay where um no matter what your opponent does you win you can find such a path if it is | 449 | 476 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=449s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | possible at all to win which is not in tic-tac-toe right if everyone plays optimally it results in a draw but let's say you could win you could choose the path that gives you the best result and that's it there's no learning involved okay so alpha zero works with a planner and planners usually construct a tree so in an abstract way you are in a situation and you consider | 476 | 502 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=476s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | all your options and with all your options you consider again all your options and so on and you do a tree search now this tree in tic-tac-toe it's already huge as you can see um in something like chess it is way way huger okay and therefore it's not possible to actually search the entire tree because you need to consider every single possible future situation from the board position where | 502 | 530 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=502s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | |
O1b0cbgpRBw | you're in right this here is the board position where you're in and this is the future the entire future of the game so every single possibility so alpha zero uses this thing called a monte carlo tree search it has several components so it's first component and they right here they have a description and it's very short alpha zero this is alpha zero this is what it does it's | 530 | 562 | https://www.youtube.com/watch?v=O1b0cbgpRBw&t=530s | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) |
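
The Axial-DeepLab transcript segments above describe restricting self-attention to a single row or column (a 1 x m span) and alternating the two axes from layer to layer. Below is a minimal, single-head sketch of that factorization; the module name, the PyTorch framing, and the omission of the learned positional terms discussed in the video are simplifications for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn


class AxialAttention1D(nn.Module):
    """Self-attention applied independently along one spatial axis of a (B, C, H, W) tensor."""

    def __init__(self, channels, axis):
        super().__init__()
        self.axis = axis  # "width": attend within each row; "height": attend within each column
        self.to_qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=1)
        if self.axis == "width":   # each pixel attends over its own row (span 1 x W)
            q, k, v = (t.permute(0, 2, 3, 1).reshape(b * h, w, c) for t in (q, k, v))
        else:                      # each pixel attends over its own column (span H x 1)
            q, k, v = (t.permute(0, 3, 2, 1).reshape(b * w, h, c) for t in (q, k, v))
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
        out = attn @ v
        if self.axis == "width":
            return out.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return out.reshape(b, w, h, c).permute(0, 3, 2, 1)


x = torch.randn(2, 16, 8, 8)
# row pass then column pass: any pixel can reach any other pixel in two steps
y = AxialAttention1D(16, "height")(AxialAttention1D(16, "width")(x))
print(y.shape)  # torch.Size([2, 16, 8, 8]); per-axis cost is linear in the span, not quadratic in H*W
```

Stacking the row pass and the column pass is what gives the two-step global receptive field the transcript points out, while keeping each attention matrix only as large as one row or one column.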
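The semi-supervised learning segments describe pseudo-labeling: train on the labeled portion, predict labels for the unlabeled portion, then retrain on the combined set. A small sketch of that loop, assuming a scikit-learn-style classifier and synthetic placeholder data; the confidence filter is a common optional refinement, not something the transcript specifies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(100, 5))            # small manually labeled portion
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(1000, 5))         # large unlabeled remainder

# 1) supervised step: train on the labeled portion only
clf = LogisticRegression().fit(X_labeled, y_labeled)

# 2) pseudo-label the unlabeled portion with the model's own predictions,
#    optionally keeping only confident ones
probs = clf.predict_proba(X_unlabeled)
confident = probs.max(axis=1) > 0.9
pseudo_labels = probs.argmax(axis=1)

# 3) retrain on true labels plus pseudo-labels
X_full = np.concatenate([X_labeled, X_unlabeled[confident]])
y_full = np.concatenate([y_labeled, pseudo_labels[confident]])
clf_final = LogisticRegression().fit(X_full, y_full)
print(f"{confident.sum()} pseudo-labeled examples added to the training set")
```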
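The AlphaZero segments use tic-tac-toe to explain planning by expanding the tree of all possible continuations and picking the best path. AlphaZero itself combines Monte Carlo tree search with a learned policy/value network; the brute-force version of the idea from the transcript can be sketched as plain minimax over the full tic-tac-toe tree.

```python
# Exhaustive game-tree search for tic-tac-toe: expand every possibility,
# score terminal positions, and back the values up the tree.
def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None


def minimax(board, player):
    """Return (score from x's point of view, best move) assuming perfect play."""
    w = winner(board)
    if w:
        return (1 if w == "x" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # draw
    scores = []
    for m in moves:
        board[m] = player
        s, _ = minimax(board, "o" if player == "x" else "x")
        board[m] = None
        scores.append((s, m))
    return max(scores) if player == "x" else min(scores)


score, move = minimax([None] * 9, "x")
print(score)  # 0: with perfect play from the empty board the game is a draw, as the transcript notes
```

Searching the whole tree is feasible here only because tic-tac-toe is tiny; for chess the tree is far too large, which is why AlphaZero replaces exhaustive expansion with a guided Monte Carlo tree search.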