Habits of Efficient Developers
https://www.youtube.com/watch?v=9-cyC6O81Bk
...the name of the country, and we finish the for-loop. Instead of "IL" goes the country that we want to know about, and because this is bash I need to quote, quote, quote, unquote... and it got something wrong. Did I miss anything? Hmm, it's a new error. It doesn't matter, so I'm going to move on to the next thing, which is: whenever you are writing a program, you should always put a time limit on the amount of time you expect to spend on it. This is a good example, right? Because if I keep going, I will spend the next 25 minutes just trying to get this to work. So whenever you are trying to write a program or automate a task, the first thing that you should do is time-limit it, and if, after passing that time limit, you are not able to finish it, just move on and do the thing manually. Now, even if you think that's a waste of time ("I tried for five minutes to get this thing working and didn't manage it"), the truth is that you have learned a little bit. That's not wasted time; that's time invested in you learning and getting better. There is a nice table from xkcd that tells you how much time you can spend to automate a task, so have a look at that.
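As a rough sketch of the arithmetic behind that table (the chart meant here is presumably xkcd's "Is It Worth the Time?"; all the numbers below are invented for illustration):

```python
# Automating pays off when (time saved per run) x (runs over the horizon)
# exceeds the time spent automating. Numbers are invented for illustration.
def worth_automating(minutes_saved_per_run: float,
                     runs_per_week: float,
                     automation_hours: float,
                     horizon_weeks: float = 52 * 5) -> bool:
    saved_hours = minutes_saved_per_run / 60 * runs_per_week * horizon_weeks
    return saved_hours > automation_hours

# Shaving 1 minute off a task done 5 times a week, over five years (~21h saved):
print(worth_automating(1, 5, automation_hours=8))    # True
print(worth_automating(1, 5, automation_hours=40))   # False
```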
And if we are talking about writing programs, there is something you should always avoid: graphical user interfaces. Why? Because you cannot put a GUI inside a for-loop. GUIs don't compose; they just live in their own little world. Now, I'm not saying that you should never use them, because they are extremely useful when you are getting started, when you are learning something new; but once you are past that beginner phase, you will actually want to do more complex stuff, and GUIs just constrain what you can do. And if we are talking about avoiding GUIs, the first UI that you want to avoid is your own application's. There is nothing less efficient than starting your application, clicking around, and filling in forms to check whether the new feature is working or whether you broke anything. Apart from making this more efficient, automated tests also give us the confidence to refactor and change code, because they are going to catch bugs, and bugs are the worst time waste of all. First you need to write the bug, then somebody has to review the bug, then you need to put the bug into production, and then, by the time some user notices the bug, you have gone through a massive context switch, because you probably wrote the bug several weeks ago; so even though you wrote the code that has the bug, the code is already alien to you, and you have to dig into it. And then you need to fix it, you need to get the fix reviewed,
you need to explain it to your boss, you need to file some JIRA issues, and then you need to go again through the whole release process. So bugs are just a big waste of time. But worse than a bug is having the same bug twice, right? So whenever you go and fix a bug, the first thing that you should do is write a test that proves that you are able to reproduce the bug: you see it fail, and then you fix it.
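A minimal sketch of that habit in pytest; the module myshop.pricing, the function parse_price, and the bug itself are all hypothetical:

```python
# Reproduce the bug with a test *first*, watch it fail, then fix the code.
from myshop.pricing import parse_price  # hypothetical module under test

def test_parse_price_handles_thousands_separator():
    # Reported bug: "1,250.00" was parsed as 1.0. This test fails until
    # parse_price is fixed, and afterwards the bug can never silently
    # come back.
    assert parse_price("1,250.00") == 1250.0
```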
And the last thing that you want to avoid doing manually is setting up the development environment. This is going to make not just you more efficient but the whole team more efficient. This is how the setup instructions for any project that I join look from my point of view, and the only thing clear about them is that they are not going to work. Maybe they are missing some step, or they are not precise enough, or maybe I will make some silly mistake when I try to follow them, and the result is always the same: two, three, four days of wasted time. What you want to achieve is instructions as close as possible to this: just one command, and that one command should bring in all the tools and configure them to be able to build, run,
and test your application. If you need a database, it should install the database, configure it, and seed it with some data. If you need any build tool, Maven, npm, whatever, it will download the correct version of Maven and install it, and configure any SDK that you need. My tool of choice right now to do this is Docker Compose, which is part of the Docker suite. If you are not familiar with it, this is an example: here we are saying that our development environment is three containers, a Postgres database, a Redis database, and our own application.
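A minimal sketch of what that "one command" could look like, here as a Python wrapper around the real docker compose CLI; the service name "app" and the seed script are hypothetical:

```python
#!/usr/bin/env python3
"""One-command dev-environment bootstrap: a sketch.

Assumes a docker-compose.yml in the repo root declaring the three
services from the talk's example (Postgres, Redis, the app itself).
"""
import subprocess
import sys

def main() -> int:
    # `docker compose up` builds/pulls whatever is missing and starts
    # every declared service, so a new teammate needs nothing installed
    # beyond Docker itself.
    up = subprocess.run(["docker", "compose", "up", "--build", "-d"])
    if up.returncode != 0:
        print("environment failed to start; try `docker compose down -v` and re-run")
        return up.returncode
    # Seed the database by running a (hypothetical) script inside the
    # app container.
    return subprocess.run(
        ["docker", "compose", "exec", "app", "./seed-db.sh"]
    ).returncode

if __name__ == "__main__":
    sys.exit(main())
```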
This has multiple benefits. First, it takes just minutes for somebody new to get started. But also, if something stops working in your development environment, you can just easily wipe the whole thing and start again. If there is any change to the development environment, it is shared immediately with the whole team, and these instructions never get out of date. Also, because Docker runs things in isolated environments, if two projects that you are working on use completely different versions of a database or a JDK, they are going to be completely isolated, so one doesn't bother the other. And because it's so easy to make changes, it encourages you to experiment: if you want to try a new JDK or SDK or a new version of the database, just make the change and start it, and if you don't like it, you just completely wipe the whole environment.

The last section that we are going to talk about is feedback. No matter what you are working on, you should always try to find the shortest and tightest feedback loop possible. Feedback is what tells us whether we are going in the right direction; feedback makes us at the same time more efficient and more effective. You want feedback often and early, to make sure that you don't wander down the wrong path for too long, with the consequent waste of time and energy.
We already talked about the benefits of automated tests: they save us time, they catch bugs, they allow us to refactor. When is the best moment to write tests? Well, in my opinion: before you start doing any coding. If you are not familiar with the TDD workflow, it's basically this (I'm going to go really fast through it): you first write one test, and only one test; you run it; you see it fail, you see it red; then you write just enough code to make that test pass; and then you refactor, you clean up your code, running the tests just to make sure that you didn't break anything.
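A minimal sketch of one such red-green-refactor round in pytest, with the fizzbuzz kata standing in for a real feature:

```python
# Step 1: write one test and watch it fail (red).
def test_fizz():
    assert fizzbuzz(3) == "Fizz"

# Step 2: write just enough code to make that test pass (green).
def fizzbuzz(n: int) -> str:
    return "Fizz" if n % 3 == 0 else str(n)

# Step 3: refactor with the test as a safety net, then repeat with the
# next test (fizzbuzz(5) == "Buzz", and so on).
```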
There are at least four reasons why you want to use this workflow. The first one is the fast feedback it gives you, as you are building the new feature, to know that your code is doing what you expected it to do. The second reason is that if you truly believe that automated tests save you time, you want that benefit as soon as possible, as you are developing the new feature. The third reason is organizational. I have heard too many times the phrase "I don't have time to write tests" or "I'm not given the time to write tests", and what that usually actually means is: well, I always write my code first, I finish my feature, and once I finish my feature, then I write my tests; and if there is any time pressure, well, you know, I don't get time to write those tests. And because you don't write tests, you don't refactor your code, because to refactor code you need a very good automated test suite. And because you don't refactor your code, your code starts to accumulate garbage; and because your code starts to accumulate garbage, it takes you a little more time to actually build new features; and because it takes you more time to build features, you get more time pressure; and with more time pressure you have less time to write tests, closing a vicious cycle that always ends the same way: with us developers crying for a rewrite.
And the fourth reason why you want to write your tests first is more of a mechanical one: seeing a test fail is the test that tests that the test tests what it is supposed to test. Or, in simpler words: how do you know that your test doesn't have any bugs? If you write a test and you see it red, that is a strong indication that some piece of production code, some logic, is not there yet. If you write the test and you never see it red, you don't know whether that is because you already implemented the feature, or because you forgot an assert in your test, or the setup code is not correct. Now, when you present this idea to a lot of people, they always come up with this phrase: "I can't write a test first because I don't know what I'm going to build."
And this can mean different things. It can mean that you don't understand what the business is asking you to do, and in this case it's true, you cannot write any test, but you cannot write any production code either; what you have to do is go back to the business and ask for clarification: what do you want me to do? The other case is that you actually understand the business and you truly understand the logic that you need to build, but you don't know whether you are going to write one class or ten classes, or whether you are going to put in an if statement or a switch or a FactoryFactoryFactory. You don't know what you are going to do, but you do understand the logic, and you do understand the mechanics of the side effects: you know which database you are going to use, you have used it ten thousand times already, you know the tables, you know everything. In all these cases you can actually write a test first. But it's true that sometimes we don't know how to do the side effects that we are asked for. For example, maybe the logic for your new application functionality needs to call some RESTful endpoint to get some foreign exchange rates,
and you have never used it: you don't know the endpoint, you don't know what you need to give to it, and you don't know what it's going to give you back. Or maybe you need to consume some messages from a message queue and you have never done that, so you don't know which libraries to use or how they work. In all those cases you don't really know what you are going to do. There is always this phase of exploration in our job, which we use to fill those gaps, to convert unknown side effects into known side effects, and that's something that TDD doesn't help you with. What you want to do is, first, read the documentation to see if you are able to fill those gaps, and second, write a lot of little programs to play around with that technology. For this, the best tool that I know is a REPL. REPL stands for read-eval-print loop, and it's basically a fancy way of saying that you have something like a command-line interface inside your application. Instead of trying to explain it, let's see it in action, if it works this time.
I have already started an application with a REPL inside, and what I'm going to do from my IDE is connect to that REPL. So let's say that you didn't know how the plus function works. I write a piece of code in my IDE, and I'm now using a shortcut to send that piece of code to the application, and the application tells me that 2 plus 3 is 5. It writes the code on the top of the screen and I get the result on the bottom of the screen. So, as I was saying, this allows you to experiment with the library: what happens if I pass three parameters? Seems to work. What happens if I pass a very big number? I get an exception. What happens if I just pass one parameter? It works. No parameters? It works. This is just understanding how the library works, and it could be an HTTP library, a messaging library, some concurrency library: you are just writing little programs and executing them to see the result.
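The demo in the talk is a Clojure REPL; the same kind of exploration in a Python REPL might look like this, poking at a stdlib function with edge cases (the function choice is just an illustration):

```python
# An interactive python3 session: fire tiny programs at an unfamiliar
# function and read what comes back, exactly like the (+ 2 3) demo.
>>> from datetime import date
>>> date.fromisoformat("2020-01-31")      # the happy path
datetime.date(2020, 1, 31)
>>> date.fromisoformat("2020-02-30")      # an impossible date?
Traceback (most recent call last):
  ...
ValueError: day is out of range for month
>>> date.fromisoformat("31-01-2020")      # the wrong format?
Traceback (most recent call last):
  ...
ValueError: Invalid isoformat string: '31-01-2020'
```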
Let's do something slightly more fancy. Let's say that your business manager tells you that you have to build a new feature, and you need some foreign exchange rates for that feature, and one of your mates told you that there is a RESTful endpoint to do that and gives you the URL. So we build an HTTP request, we make the request, and we see that we are getting some exception. Let's try to format that exception: try/catch, okay, try it again. So it tells us it's a 400, which means it's our fault, and we see there is a body. So what we are going to do is get the body. Okay, that seems to be some piece of JSON, so let's parse the JSON. There it is: it seems that we are missing some date query parameters. Query parameters... yeah, we get the exchange rates. What happens if I pass an older date? What happens if I pass something in the future? It still returns data; that's something to be worried about. What happens if I pass a string? I get that error. So what we are doing is probing how the real world works.
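A sketch of the same exploration in Python; the endpoint URL and its date parameter are made up, but the probe-and-read-the-error-body loop is the point:

```python
import urllib.error
import urllib.request

BASE = "https://example.com/api/rates"  # hypothetical endpoint

def probe(url: str) -> None:
    """Fire one request and print whatever comes back, error or not."""
    try:
        with urllib.request.urlopen(url) as resp:
            print(resp.status, resp.read().decode()[:200])
    except urllib.error.HTTPError as e:
        # On a 4xx/5xx the response body often explains what the server
        # expected; read it instead of just giving up.
        print(e.code, e.read().decode()[:200])

probe(BASE)                          # 400? read the body to see why
probe(BASE + "?date=2020-01-31")     # add the parameter it asked for
probe(BASE + "?date=2999-01-01")     # a date in the future?
probe(BASE + "?date=not-a-date")     # garbage input?
```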
And how are we doing it? We write a little program, we run it, we see the result: a very, very fast feedback cycle. Now you may be wondering: why don't you use something like Postman to do this? It's just HTTP, a RESTful endpoint, surely it must be Postman. Well, there are some benefits to doing it this way. The first is that you have a full language, a production language, the one that you use in production, which means that I can do for-loops and if statements; and if I want to mix this data with something from the database, well, I know how to make database calls from the JVM. Also, if I now go and wrap this exploratory code inside a function, this that you see here is production code; as you see it there, it is going to go to production. I'm making the changes in the project; this is not a separate tool where I then need to take what I got from the tool and translate it to Java or .NET or whatever you are using. This is production code; it's ready to go. Also, because the REPL is running inside your running application, you can actually go and poke at the state: you can look at the state of your running application. And notice what we are doing here: we are modifying our running application, and we are doing all of this without having to compile or restart anything. That's a very, very quick feedback loop. And I don't know if I mentioned it, but we are connecting to this REPL through a socket, and because we are connecting through a socket, we don't really need to be running this process on my local box: it could be running in test
or production. So you could be inspecting, modifying, or adding log statements to production code without stopping the application. This is extremely powerful, and, you know, with great power comes great responsibility, so use it with care.

The last thing that we are going to talk about is code reviews. Code reviews tell us whether the design of the code that we are writing fits the application; they allow one of your teammates to tell you if you have any bugs; and we can also use them to share knowledge. So efficient developers want their code to be code-reviewed. Now, there is an uncomfortable truth about code reviews: when we are presented with huge, massive changes, I don't know what your reaction is, but my reaction is something like "oh my god". When we get small changes, though, we are able to give useful feedback to the author of the change, because we are able to understand the change. Also, even if you are a very disciplined developer and you go through that really painful review process, in my experience here is what happens.
You go and tell the author: well, you know, I think this would improve your design, or we could use a different library that will save us some time or some resources, or whatever. What usually happens is that the author says: yeah, I think you are right, but you know, I have already spent several days or weeks working on this, and the end of the sprint is tomorrow, so even if I think you're right, I don't think I'm going to have time to do the change you're suggesting, because it's going to take me several more days; also, you know, it's already working. So let's do something different: let's just commit the change as it is, and we'll ask the product owner to create a refactoring story. I'm sure he will be delighted to put it on top of the priority queue... and we all know those stories never happen. So you end up, again, with worse code, which leads to slower feature delivery, and so on and so on. So efficient developers don't want just code reviews; they want small and early code reviews. What they actually want is continuous code review. This practice consists of
getting one of your teammates to sit right beside you, and as you are implementing the feature, this developer sitting beside you is going to suggest improvements to your code and is going to be catching bugs as you write them. For the reviewer, the changes are really, really small: as you type them, he sees those changes. And for you as the author, you can get feedback even before you start writing any code. Additionally, if for whatever reason you are not able to finish the feature, this other developer is able to pick up that feature without any effort, because he has been behind each of your decisions, so you avoid those knowledge silos within the team. Also, this other developer can work as your personal Stack Overflow, because maybe he has already run into a similar issue and already knows how to fix it; and sometimes you don't even need to ask the question, because he sees what you are doing. Some people also call this pair programming. So that's all that I have. Very briefly: focus, master your IDE and your tools, avoid manual work, and find yourself the fastest feedback loop possible. And as last words: you should always find
time to stop and reflect on how you are working, and never, ever stop learning. Thank you very much.

"Thank you so much for your tips; I will definitely start with the notifications part tomorrow. We got a lot of questions during your talk, so let's start with the first one: how do we balance avoiding interruptions with working in small, dynamic teams that need rapid feedback loops and frequent communication?" Okay: if your team is really small, and if you are doing pair programming, your team becomes tiny; and because the team becomes tiny, that need for communication, the number of nodes and edges in the communication graph, just shrinks. So try pair programming. "Thanks. Where can we find your slides?" Everybody wants to carry my laptop... if somebody wants to grab my laptop, can somebody come and pick it up? I will publish them on my personal blog, which probably none of you read. I will tweet them now and put them somewhere you can find them. "That would be great. Additionally, there are recordings of all sessions, so you can watch them later or
send the links to your colleagues. Another question: how can we efficiently automate ourselves out of a job?" It depends. If you work for yourself, this is the best thing that you can do: it's free money. Otherwise, I think it depends on your ethics, right? If you think that you can actually automate your work, why not? When you finish, the people that are using that tool, or your business, will be very thankful, and you know, we have plenty of jobs around the world, so don't worry about your job; there is another job out there waiting for you. "Okay, I think it's time for our last question: what's the worst distraction for a developer? Is it Slack?" I don't think it is Slack, to be honest. I think the worst distraction is
having a two-year-old knocking on your door while you work from home; I think that's worse. I'm a remote worker, but most of the advice that I gave you is from when I was not a remote worker, and we used to use Slack a lot, and I just muted everybody; but if they really need to reach me, they know how to get hold of me, so nobody should feel offended.

Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)
https://www.youtube.com/watch?v=hv3UO3G0Ofo
Transformers are quickly coming for your favorite models. Yesterday they replaced LSTMs in NLP: "they used to be good at NLP, but no, we now have transformers, think again." Today we're going to see that maybe, in the near future, transformers will replace convolutions in image processing; this paper is a step in that direction. You just wonder what it is going to be tomorrow: maybe linear regression is going to be replaced just by giant transformers trained on 5,000 TPUs. Who knows, we'll see. In any case, we're looking at "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation" by Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen of Johns Hopkins University and Google Research. This paper combines a bunch of techniques that have been introduced recently to deal with attention in problems where you would traditionally use a convolution. In this particular case they deal with the problem of panoptic segmentation, where basically you get an image with a bunch of stuff in it, like a cat here and a house right here, and you're supposed to color the pixels of the same object the same.
So all these pixels here are "house", and all these pixels right here are "cat", and so on; and then there's also the background, so all these pixels right here are background. For this problem it's kind of important, first of all, that you are very precise, so you can look at pixels or clusters of pixels, and also that you take long-range dependencies into account: if you, for example, recognize that this is a house, and you recognize that here's a wall right here, you might be able to much better classify what is wall over here and what isn't. So long-range dependencies play a role in these problems, across images. Usually attention mechanisms are pretty good for these long-range dependencies, but they're also expensive, and that's what this paper deals with. They use this axial attention, which has been introduced exactly for resolving this problem in types of data like images or higher-order tensors, and they combine it together with
learned positional encodings, which we've also seen time and time again throughout the transformer and attention literature. The combination of axial attention and these learned positional embeddings allows them to replace the ResNet backbone that is usually found in panoptic segmentation models with standalone attention. So they build models that either partially replace the convolutions with attention modules or replace them entirely, so the entire model is just an attention model, no more convolutions in it. And they perform pretty well on classic tasks: they test on ImageNet classification and perform pretty well, and they achieve state of the art on some of these segmentation tasks. So we'll go through the model right here. This is a very, very extensive paper in terms of experimental evaluation; what I want to get into is mainly how the method works, and to show you what their model looks like. So we'll go through it, and as always, let me know what you think in the comments and tell me if you liked it or not; share it out if you did. All right, so they go over a very long list of prior work, which is, you know,
pretty cool. And here they state their contributions, which are fourfold. First, "the proposed method is the first attempt to build standalone attention models with a large or global receptive field", and we'll see what that means. "We propose a position-sensitive attention layer that makes better use of positional information without adding much computational cost. We show that axial attention works well, not only as a standalone model on image classification, but also as a backbone on panoptic segmentation, instance segmentation and semantic segmentation." Maybe what I described before was instance or semantic segmentation and not panoptic segmentation; excuse me if that's the case. As you can see, it can be used for various image tasks. "Lastly, our Axial-DeepLab improves significantly over bottom-up state of the art on COCO, achieving comparable performance to two-stage methods. We also surpass the previous state-of-the-art methods on Mapillary Vistas and Cityscapes." So these are various tasks, as I said. And what they don't mention here is that they perform fairly well on ImageNet; in fact, in the abstract they formulate this as "in particular, our model outperforms all existing stand-alone self-attention models on ImageNet". That's, you know, a way to phrase it: you just exclude all of the other models until you're the best. "Outperforms all existing stand-alone self-attention models on ImageNet." Yeah.
I mean, that's good. There's something to be said for comparing apples to apples, but you can also go overboard if you want to make your work look as good as possible; of course, everyone does that, and there's no particular shame in it. Okay, so we're going to build up our model right here, and the basic element of this model is going to be the self-attention mechanism. Now, quickly, because I know you all know what it is, but very quickly: you want to perform this operation right here over a region right here. There is always a query, and the subscripts here are going to be important in this paper: the query is at a given position, position o, and you can see that's the o right here; I'm going to call it the output, I guess that's what they said as well. So for the output position, you want to go over all of the input positions and aggregate data from all of the input positions. And how do you aggregate data? With this softmax operator right here. You can see the key also has a p right here, and the softmax is over the axis of p.
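In symbols, that is y_o = sum over p of softmax_p(q_o . k_p) * v_p. A minimal NumPy sketch for a single output position o (shapes invented for illustration):

```python
import numpy as np

def attend(q_o, K, V):
    """y_o = sum_p softmax_p(q_o . k_p) * v_p."""
    logits = K @ q_o                      # one score per input position p
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()              # softmax over the p axis
    return weights @ V                    # weighted sum of the values

rng = np.random.default_rng(0)
d, n = 8, 16                              # channel dim, number of positions
q_o = rng.normal(size=d)                  # query at output position o
K = rng.normal(size=(n, d))               # keys, one per input position
V = rng.normal(size=(n, d))               # values, one per input position
print(attend(q_o, K, V).shape)            # (8,): same dims as one input
```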
In the particular case of images, what does that mean? You have an image, and it's made into pixels. Now, with a transformer, or generally with these attention models, what you can imagine is that they always transform a data point into a data point of the same dimensions. This doesn't have to be the case, actually, and I think one of the developments that is going to come in the coming years or months or weeks (maybe someone's already doing it) is in fact to play more with this arbitrary constraint that we're imposing on ourselves, because it's not really clear that this is the best thing. But for now, an attention layer always transforms a data point, here a four-by-four image, into a data point of the same size, also a four-by-four image. This is, as I said, quite simplified, but it is true in NLP, where we always transform our, whatever, 512-token sequence into a 512-token sequence, and it is true here. Now the output is going to be here on the right, and the question always is: I'll go over these pixels right here, and for every pixel, let's say for this pixel, I'm going to ask what data goes there, what's the output of the layer at that particular pixel? And the output of the layer is going to be somehow dependent on the input. Now, in classic convolutional models, the output is going to be dependent on this region right here, if it's like a
three-by-three filter. So you have this convolutional filter, and that means the blue dot on the right is going to pay attention to its own location in the input plus everything around it, and then every single data point here is going to do that. So, for example, this green data point is going to pay attention to this region right here. Now there's a border, so there's maybe some padding, but the question is always: where does the information come from, and how is it aggregated? In a convolution layer you simply have your filter, and the filter has numbers in it, like three and five and eight and so on. What you're going to do is take this region right here, this blue region of the lower layer, which is also filled with numbers, like seven, and (what's a good number?) zero, zero is a nice number, and you're going to multiply those, and then you're going to sum them up, and then you're going to put that where the blue dot is.
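A sketch of that fixed aggregation at one output pixel: elementwise-multiply a learned 3x3 kernel with the 3x3 input patch around that location, then sum (the numbers are arbitrary, echoing the ones in the talk):

```python
import numpy as np

kernel = np.array([[3., 5., 8.],
                   [1., 0., 2.],
                   [4., 7., 6.]])         # statically learned weights
patch = np.array([[7., 0., 0.],
                  [2., 1., 3.],
                  [0., 5., 4.]])          # the blue 3x3 region of the input
output_pixel = (kernel * patch).sum()     # multiply, then sum
print(output_pixel)                       # the value at the blue dot
```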
So where does the information come from in the convolution? From around the output location, but in the input: you go to the input at the same location as where you want the output to be, you take the neighborhood, and there is a fixed scheme of aggregating the neighborhood; you multiply and you sum across it. In contrast to this, in a fully attentional model, where does the information come from? Let's again look at the blue dot and consider it fully attentional. Where does the information come from? Everywhere, anywhere at all. Now, how do I know how to aggregate the information? It's no longer a neighborhood, so how do I know how to aggregate it? That's also different. So two things are different now: in a convolution I would have another four-by-four grid here that's pre-specified, but in the attention model this is basically all filled with question marks: question mark, question mark, what number goes here? In the end I also multiply and sum up and put it right here, but how do these numbers come to be? Well, these numbers are dynamically computed, also from the input. It's a bit special, but this is how attention works: every pixel gets to decide where information comes from and how it is aggregated. It basically comes from anywhere, and how it is aggregated is dynamic, depending on the pixel.
If you still don't understand it, it may pay off to watch a video on attention itself; I happen to have made one, but you can watch any one. Once you understand that, you will understand that the extension here to the image is the exact same thing as with the sequence, except the pixels are basically one long sequence in the image. So this would be a fully attentional model down here. Now, what's the problem? The problem is that pictures are pretty large: even something like MNIST, which is 28 by 28, is 784 pixels, and our big transformers now, so BERT, a very famous transformer, takes inputs that are like 512 in length, and you already need pretty decent hardware to run this. The requirements on memory and compute scale quadratically with the input length, so already with MNIST you're in pretty shady territory, and if you go up to something like ImageNet, which is 224 by 224, that's bad, that's not good. So you have to come up with something else.
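A quick back-of-the-envelope for that quadratic blow-up:

```python
# Attention weights scale with the square of the sequence length
# (here, the number of pixels).
for name, side in [("MNIST", 28), ("ImageNet", 224)]:
    n = side * side                      # pixels, flattened to a sequence
    print(f"{name}: {n} tokens -> {n**2:,} attention weights per head")
# MNIST: 784 tokens -> 614,656 attention weights per head
# ImageNet: 50176 tokens -> 2,517,630,976 attention weights per head
```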
So people have been playing around a bit with coming up with an intermediate, a compromise between the two; the reason why I introduced it this way is that the compromise this paper focuses on is the following. Remember when I asked where the information for a given pixel comes from, and we said it can come from anywhere in the attention framework? That's good, because it allows us to make super-long-range connections: any pixel can aggregate information from any other pixel, and not even in a fixed way but in a dynamic way, so depending on the pixel value itself and the other values, it can decide how it wants to aggregate information. But that turns out to be expensive: every pixel together with every pixel, well, that's quadratic. So what do we do? We make a third method that's going to be a compromise, and the compromise is the following: we still do the dynamic aggregation, which means we still do the attention thing; however, we're going to restrict it back to the neighborhood region of the convolution.
So in this model, where does the information for the blue dot come from? It again comes from this neighborhood right here, and the size here is going to be called m, so it still comes from that m-by-m neighborhood: a pixel can only aggregate information from its neighbors. But contrary to a convolution, how it aggregates the information, what in a convolution would be the kernel, is made dynamically by the attention module, on a case-by-case basis. So we restrict it to a neighborhood, multiply, sum it up, and then put it into the output, and we do that for every pixel. Now it resembles much more a convolution, simply a convolution with this dynamic matrix right here, and that's the starting point for this paper.
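A minimal 1-D sketch of that compromise, windowed attention with a dynamic, data-dependent kernel (the 2-D m-by-m case is analogous; shapes are invented):

```python
import numpy as np

def local_attention(Q, K, V, m=3):
    n, d = Q.shape
    out = np.zeros_like(V)
    r = m // 2
    for o in range(n):                    # each output position o ...
        lo, hi = max(0, o - r), min(n, o + r + 1)
        logits = K[lo:hi] @ Q[o]          # ... scores only its neighbors
        w = np.exp(logits - logits.max())
        w /= w.sum()                      # softmax over the neighborhood
        out[o] = w @ V[lo:hi]             # dynamic, data-dependent kernel
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))              # toy sequence: 10 positions
print(local_attention(X, X, X).shape)     # (10, 4)
```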
So this paper does two things to this. It says, okay, we can augment this with so-called positional embeddings. A positional embedding you might know from the sequence transformers: if I have a sequence, "my cat is tall" (I don't even know what that means for a cat, but okay), then, in a positional encoding... so if you use a transformer and you transform this, as we said, into a sequence of equal length, and a transformer is basically information routing, the transformer simply sees the lower-layer sequence as a set, not as a sequence. It has no notion of what's neighboring what, of what comes from where. So it pays to tell the transformer: by the way, this is word one, this is word two, this is word three, this is word four. There are various ways to do it; transformers usually have fairly complicated, sine-wave-based positional encodings that bring many advantages with them. In this case they say, well, it might pay off to learn where these things actually are in the neighborhood, so they experiment with relative positional encodings, which means they annotate the neighborhood with something like this:
look, here in the middle it's (0, 0), here it's (0, 1), here it's (0, -1), (-1, 0), and so on. So they annotate it with these positional encodings. Now, this would be the easy way; what they actually do is simply give the model a matrix like this, and they learn that matrix by heart, let's say. So the positional encodings are relative positional encodings, and they are learned. You can do that, you can learn positional encodings: if you don't want to do the one-two-three-four right here, you simply say, well, here is a vector, here is a vector, here is a vector, and here is also a vector. Now, model, you're already learning all the weights to make this thing here happen, and you're already learning your output weights up here using backpropagation; why don't you learn yourself what you would like for position one, what kind of information you would like to have there, using backpropagation? So you always provide the model with the same vector: this is the same vector for position one, and you have a different vector for position two, and you have a different vector for position three. Across all of the data points these vectors are going to be the same: vector one is always going to be that same vector for all of the data points. So the model must somehow learn, independently of the data point, what it means to be in position one; the model must learn how it wants to fill that vector. That's called a learned
positional embedding. We've seen this in many models so far; it usually works pretty well, and I guess here it works especially well if you have these relative positional encodings. So this thing here is not going to be an actual matrix filled with these numbers; it's going to be a learned matrix, a trainable matrix that the network is allowed to fill with numbers, like three, five, eight. And you might notice that we've seen this before: ultimately, the information in this blue thing right here is going to depend on this dynamically created aggregation of information over the neighborhood and on this statically learned aggregation of information over the neighborhood, which is sort of a convolution, because in the convolution, as you've already seen here, there is a statically learned map of how to aggregate information from the neighborhood of a pixel. So I think, even though there are slight differences (they say, for example, that these are the same across attention heads, and so on), I suspect that you can think of these learned positional embeddings
as being kind of like what you learn in a convolution. Not exactly, though; no, I think I made a mistake, and we'll see it in the formula. Okay, so here they introduce these positional embeddings. You see that previously we had the softmax, previously we had this and this. So this is the lower layer, this is the information that comes into the layer, and it's transformed into values by a linear matrix; but essentially this is the lower layer, and for each of the output locations you want to know: how should I aggregate information from that lower layer? And you do this with this thing here, this dynamically constructed attention matrix, using also the softmax.
So, how should you aggregate information? This comes from the query at the output position and the keys at the input positions, and now you add to that this thing right here, which is again an inner product, between the query and the positional encodings. So the positional encodings are going to be learned rather than hard-coded, but they still are the same across all data points.
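A minimal sketch of that position-sensitive score for one output position in a 1-D neighborhood, logit(o, p) = q_o . k_p + q_o . r[p - o], where r is a small trainable table of relative encodings (all shapes invented for illustration; the paper's full layer adds further positional terms):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 3                               # channels, neighborhood size
r = rng.normal(size=(m, d))               # learned relative encodings, one
                                          # vector per offset in [-1, 0, 1]
q_o = rng.normal(size=d)                  # query at output position o
K_nb = rng.normal(size=(m, d))            # keys of the m neighbors

logits = K_nb @ q_o + r @ q_o             # content term + positional term
weights = np.exp(logits - logits.max())
weights /= weights.sum()                  # softmax over the neighborhood
print(weights)                            # how this pixel aggregates
```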