Columns: video_id (string, 11 chars), text (string, 361–490 chars), start_second (int64, 0–11.3k), end_second (int64, 18–11.3k), url (string, 48–52 chars), title (string, 0–100 chars), thumbnail (string, 0–52 chars)
n1SXlK5rhR8
Jeff Dean says: this is a clear example, here is an illustration, that seemingly minor choices in learning algorithms or loss can have significant effects, so bias in ML systems is about much more than just avoiding data bias, ML researchers and practitioners must pay attention to these issues, and I think they are, and LeCun doesn't say anything against that, he says, as I point out in my comment to
673
695
https://www.youtube.com/watch?v=n1SXlK5rhR8&t=673s
[Drama] Yann LeCun against Twitter on Dataset Bias
https://i.ytimg.com/vi/n…axresdefault.jpg
n1SXlK5rhR8
this tweet, it is much more efficient to correct this kind of bias, and note that Yann LeCun actually differentiates between the different kinds of biases here, by equalizing the frequencies of categories of samples during training than by hacking the loss function. Correct, because if you hack the loss function you're trying to counter one kind of bias by another kind of bias
695
718
https://www.youtube.com/watch?v=n1SXlK5rhR8&t=695s
[Drama] Yann LeCun against Twitter on Dataset Bias
https://i.ytimg.com/vi/n…axresdefault.jpg
n1SXlK5rhR8
Meredith Whittaker says: this is very racist, and even if it recognized non-white people it would be very racist, this is cop tech, it's designed to allow those with power to surveil and control those with less power, diverse training sets aren't going to fix it, so she's advocating that we should never build these systems, and that's a discussion to be had, but let me break this to you, this isn't
718
741
https://www.youtube.com/watch?v=n1SXlK5rhR8&t=718s
[Drama] Yann LeCun against Twitter on Dataset Bias
https://i.ytimg.com/vi/n…axresdefault.jpg
n1SXlK5rhR8
going to help the cops, this isn't actually giving you the face of the person that was pixelated, this is simply going to give you the most likely face associated with that pixelated picture given the dataset the algorithm was trained on. I don't get this, whenever any machine learning algorithm does anything with faces at all, people jump up going like, this is cop
741
763
https://www.youtube.com/watch?v=n1SXlK5rhR8&t=741s
[Drama] Yann LeCun against Twitter on Dataset Bias
https://i.ytimg.com/vi/n…axresdefault.jpg
n1SXlK5rhR8
technology. Well, in line with all the broader impact statement advice, can't it also be used to find lost children from very, very bad security camera footage? And as I already mentioned, this doesn't actually give you back the person in the downsampled image, it will give you back the most likely person given the dataset. So with that I want to conclude this section, please
763
787
https://www.youtube.com/watch?v=n1SXlK5rhR8&t=763s
[Drama] Yann LeCun against Twitter on Dataset Bias
https://i.ytimg.com/vi/n…axresdefault.jpg
n1SXlK5rhR8
stop the witch hunting. Yann LeCun made a completely fine tweet here and there's no reason why people should pile on him this hard, he doesn't dismiss any of the other problems just because he doesn't mention them, and while we all enjoy a good discussion where people genuinely disagree, it is not helpful to accuse him of things he never said or meant. I mean, where does this all
787
808
https://www.youtube.com/watch?v=n1SXlK5rhR8&t=787s
[Drama] Yann LeCun against Twitter on Dataset Bias
https://i.ytimg.com/vi/n…axresdefault.jpg
n1SXlK5rhR8
lead? The result of this is going to be that small labs that don't have the resources to collect their own datasets or check for all the possible biases in their models, that are reliant on the datasets that we do have even if they are biased and flawed, will just be disincentivized from publishing their code or actually doing research at all. So this, as every other additional
808
830
https://www.youtube.com/watch?v=n1SXlK5rhR8&t=808s
[Drama] Yann LeCun against Twitter on Dataset Bias
https://i.ytimg.com/vi/n…axresdefault.jpg
n1SXlK5rhR8
constraint on research, is going to help the large corporations with lots of money, and maybe that's just my opinion, but we should be able to just talk about a problem and the solution to it without always having to make sure that we rattle off all the different things that are and might be wrong according to the canon. And big props to Yann LeCun here for holding his own, 90% of
830
854
https://www.youtube.com/watch?v=n1SXlK5rhR8&t=830s
[Drama] Yann LeCun against Twitter on Dataset Bias
https://i.ytimg.com/vi/n…axresdefault.jpg
yexR53My2O4
hi there, today we'll look at Distributed Representations of Words and Phrases and their Compositionality by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean. This is another historical paper, it's one of three papers, it's the middle one, that introduces the original word2vec algorithm, and as you might know word2vec was extremely influential
0
25
https://www.youtube.com/watch?v=yexR53My2O4&t=0s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
in NLP since this paper, basically until recently, where it's sort of gone out of fashion a bit in research with the rise of things like ELMo and BERT, but it's still very, very relevant. So we'll look at this historical paper today with kind of the hindsight of being a couple years into the future, in fact, as you see right here, this was released in 2013, so it's
25
51
https://www.youtube.com/watch?v=yexR53My2O4&t=25s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
seven years later now, and we'll look back and we'll see what they said back then about the system. This is not going to be like a very, you know, well enhanced PowerPoint presentation of how word2vec works, we're going to look at the paper and read it together. If you like content like this, if you like historical paper readings, let me know in the comments,
51
76
https://www.youtube.com/watch?v=yexR53My2O4&t=51s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
share it out if you do like it and of course subscribe, because these kinds of historical papers, I enjoy them, but you know, many people might already know what these things are. So yeah, okay, let's go through the paper and pick up their ideas and kind of put them in context. They say the recently introduced continuous skip-gram model is an efficient method for learning
76
103
https://www.youtube.com/watch?v=yexR53My2O4&t=76s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. So the skip-gram model was already introduced by Mikolov in an earlier paper that came out, I believe, not like one or two months prior to this one. As I said, word2vec is a series of papers, I don't think there is a paper called word2vec, rather they
103
127
https://www.youtube.com/watch?v=yexR53My2O4&t=103s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
released the code along with the paper, and the code was called word2vec. So the skip-gram model was introduced previously, but it is replicated right here. So in the skip-gram model, what you're trying to do is you're trying to get a distributed word representation. So what does that mean? That means that for each word in your language, let's take these words
127
152
https://www.youtube.com/watch?v=yexR53My2O4&t=127s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
right here, for each word in the language you want to come up with a vector that somehow describes that word in a continuous fashion, so that the word to might be mapped to, I don't know, 0.1, 0.9 and 0.3, learn might be mapped to negative 0.5, and so on. So each word gets assigned a vector in the same dimensional space, and what the previous paper kind of discovered is that if you
152
182
https://www.youtube.com/watch?v=yexR53My2O4&t=152s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
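As a tiny illustration of what such a distributed representation is (the words and numbers here are made up for the example, not taken from the paper), in Python:

import numpy as np

# hypothetical 3-dimensional vectors for a toy vocabulary
embeddings = {
    "to":     np.array([0.1, 0.9, 0.3]),
    "learn":  np.array([-0.5, 0.2, 0.7]),
    "vector": np.array([0.4, -0.1, 0.6]),
}

# all words live in the same space, so any two can be compared directly
similarity = np.dot(embeddings["to"], embeddings["learn"])
print(similarity)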
yexR53My2O4
do this correctly, then these vectors have some kind of properties, and we can already kind of jump ahead, because this was already a bit researched in the last paper, the semantics of these vectors will be something like this. So here they have a two-dimensional PCA, so these are the first two dimensions of the one-thousand-dimensional skip-gram vectors, so the
182
208
https://www.youtube.com/watch?v=yexR53My2O4&t=182s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
vectors they obtain, they can do things like this, where they can show that in these spaces, for example, there appears to be a vector direction that characterizes the capital of a country. So if you take a few countries and their capitals and you average that vector, you get kind of a direction for capital-ness of a city given a country, you can see that there is a pretty clear
208
236
https://www.youtube.com/watch?v=yexR53My2O4&t=208s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
relation here. Now some of these things have later been revised, such that they ultimately ended up being not that impressive, for example, there was always this kind of math with vectors, and I believe this might not be in this but in the last paper, where they discovered that if you take the vector for king and you subtract the vector for man
236
264
https://www.youtube.com/watch?v=yexR53My2O4&t=236s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
and you add the vector for woman, then that would result in the vector for queen. So the way they did it was basically they did this calculation right here, and then from the point they ended up at, they searched for the nearest neighbor in their vocabulary, and that turned out to be queen, but in order to make it queen, actually, you have to exclude the original word king, and
264
294
https://www.youtube.com/watch?v=yexR53My2O4&t=264s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
people quickly discovered that if you don't exclude the original word, the result of this kind of arithmetic will almost always lead back to the original word, and then a lot of these analogy tasks are simply the result of you then discarding that word during the nearest neighbor search, and then queen just happens to be one of the closest words, and it's sort of much
294
320
https://www.youtube.com/watch?v=yexR53My2O4&t=294s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
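A rough numpy sketch of that analogy arithmetic and why the exclusion matters, assuming a matrix E of word vectors and a matching vocab list (both hypothetical names, not the paper's code):

import numpy as np

def analogy(E, vocab, a, b, c, exclude_inputs=True):
    # compute v(b) - v(a) + v(c), e.g. king - man + woman
    idx = {w: i for i, w in enumerate(vocab)}
    query = E[idx[b]] - E[idx[a]] + E[idx[c]]
    # cosine similarity of the query against every word vector
    sims = E @ query / (np.linalg.norm(E, axis=1) * np.linalg.norm(query) + 1e-9)
    if exclude_inputs:
        # without this step the nearest neighbor is usually one of the input words
        for w in (a, b, c):
            sims[idx[w]] = -np.inf
    return vocab[int(np.argmax(sims))]

# analogy(E, vocab, "man", "king", "woman") would hopefully return "queen"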
yexR53My2O4
less dependent on which exact calculation you do here. So there's been a lot of follow-up work kind of analyzing and criticizing these vector maths, but definitely we know that these word vectors turned out to be extremely helpful and syntactically and semantically relevant in downstream tasks, because they have performed very, very well. So how does the skip-gram
320
344
https://www.youtube.com/watch?v=yexR53My2O4&t=320s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
model work, how does it assign vectors to each word? So first of all it has a dictionary, so there is an input word, and for each word you have a big dictionary, and the dictionary basically says that, you know, the word to is going to be mapped to this vector, 0.1, da da da, and so on, the word learn is going to be mapped to that vector, and then you also
344
379
https://www.youtube.com/watch?v=yexR53My2O4&t=344s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
have these output vectors right here, and what you're trying to do is you're trying to take a phrase from the dataset, like this one right here, and you take out one word, like this word, vector, right here, and you're trying to frame this as a prediction task, so you're trying to frame this as, in this case, four different prediction tasks. So you're telling your machine, I give you
379
412
https://www.youtube.com/watch?v=yexR53My2O4&t=379s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
the word vector, and which other words are around the word vector? You just tell it that, you don't tell it anything else, you just say which other words are around the word vector, and the correct answers in this case would be to, learn, word and representations. So from these you construct four different training examples where you have an x and a y, so the x is always vector and the y is to,
412
444
https://www.youtube.com/watch?v=yexR53My2O4&t=412s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
and then in the next training sample the x is vector and the y is learn, and so on. Okay, so each training sample here is a classification task, right, and the classification task is, as you can see, no, you can't see right here, but the classification task is: you have the input word and you classify it into one of many, many classes, namely there are as many classes as you
444
479
https://www.youtube.com/watch?v=yexR53My2O4&t=444s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
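A minimal sketch of how those (input word, context word) training examples could be generated, assuming an already tokenized sentence and a window of two words on each side (illustrative, not the released code):

def skipgram_pairs(tokens, window=2):
    # pair every center word with each word inside its context window
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

# skipgram_pairs("to learn word vector representations".split()) yields, among others,
# ("vector", "learn"), ("vector", "word") and ("vector", "representations")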
yexR53My2O4
have words in the dictionary, so each word in the dictionary will have a class associated with it. Right, so in ImageNet you have like a thousand classes, and that's already a lot, but in these tasks you're gonna have a hundred thousand classes, because there are a hundred thousand words in the English language that you want to treat, and there are many more, but in
479
504
https://www.youtube.com/watch?v=yexR53My2O4&t=479s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
this case they leave out all the words that appear less than five times in their corpus. That's still a lot of words, so it's like a super duper large classification task, but ultimately, if you do something like this, then the representation that you end up with is going to be very, very good at doing these kinds of downstream tasks, and
504
527
https://www.youtube.com/watch?v=yexR53My2O4&t=504s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
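A small sketch of that vocabulary filtering, assuming a flat list of tokens and the minimum count of five mentioned above:

from collections import Counter

def build_vocab(tokens, min_count=5):
    # keep only words that appear at least min_count times in the corpus
    counts = Counter(tokens)
    return {word: count for word, count in counts.items() if count >= min_count}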
yexR53My2O4
that's what they discovered. So their skip-gram model is nothing else than taking a word and predicting the surrounding words from that word, and this is what it means, this is the formal statement of the skip-gram objective. What you want to do is, the objective of the skip-gram model is to maximize the average log probability, this one, so for the word we're considering, the word t, we
527
558
https://www.youtube.com/watch?v=yexR53My2O4&t=527s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
want to maximize the log probability of each word w that is around the word t, in a context window of size c. That's exactly what we did before, we take a word, like this model right here, and from it we predict all of the words around it in a given window, right, that's all, that's the entire objective, and that will give you very good representations, and this is
558
593
https://www.youtube.com/watch?v=yexR53My2O4&t=558s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
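For reference, the skip-gram objective being read here, written out: for a training corpus of T words and a context window of size c,

\[
\frac{1}{T}\sum_{t=1}^{T}\;\sum_{-c \le j \le c,\; j \neq 0} \log p\!\left(w_{t+j} \mid w_t\right)
\]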
yexR53My2O4
how you would implement that. So what you'll have is these vector representations v that come from your original dictionary, those are the things you learn, and then, because you have like a 30,000-way classifier, you know that a classification layer is nothing else than a linear layer followed by a softmax operation, and that linear layer also has parameters, these are the v-primes,
593
619
https://www.youtube.com/watch?v=yexR53My2O4&t=593s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
okay. So first you have the lookup in the dictionary for the word vector right here, and this is the vector of the classification layer. Now there are modifications where you can use like the same vectors and so on, or you can also make use of these vectors, but ultimately you care about these vectors right here, and the vectors here are simply the classification layer's weights, so here
619
646
https://www.youtube.com/watch?v=yexR53My2O4&t=619s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
you can see that what you're trying to maximize is the inner product between the word that you're considering and the words around that word, and you're trying to do a classification task, so you need to normalize. Now this is the normalization constant, and it goes over all of your vocabulary, so that's what they tackle here. They say W is the number of words
646
681
https://www.youtube.com/watch?v=yexR53My2O4&t=646s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
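The full softmax that this normalization refers to, with input vector v and output vector v' for each word and vocabulary size W:

\[
p(w_O \mid w_I) = \frac{\exp\!\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{w=1}^{W} \exp\!\left({v'_{w}}^{\top} v_{w_I}\right)}
\]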
yexR53My2O4
in the vocabulary. This formulation is impractical because the cost of computing the gradient is proportional to W, which is often large, and that's 10 to the 5 to 10 to the 7 terms, so up to tens of millions of terms in your vocabulary, that's just not feasible, right. So people have been, you know, sort of trying different ways to get around a very, very large number of classes, and
681
708
https://www.youtube.com/watch?v=yexR53My2O4&t=681s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
here it seems that that is really our bottleneck. In the previous paper they've already shown that this objective can give you very good word representations, but now we need to get around the fact that we have such large vocabularies. So the first idea here is hierarchical softmax, and this is kind of a tangent, I find this paper, by the way, sort of hard to read, because it's like a half
708
731
https://www.youtube.com/watch?v=yexR53My2O4&t=708s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
engineering paper. But yeah, so first they introduce this hierarchical softmax, which is kind of a distraction, it's kind of a here-is-what-we-considered-first-but-then-didn't-end-up-using, really, they do compare with it, but the flow of the text is sort of that you expect this to be part of the final model, which it isn't. So in the hierarchical softmax, what you do instead
731
757
https://www.youtube.com/watch?v=yexR53My2O4&t=731s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
of having this giant multi-class classification task right here, you take all of these classes right here and you put them in a sort of a tree, okay, so you take this and you put them into a tree. So instead of classifying, you know, let's say we have a thousand classes, instead of classifying a thousand ways, we first classify in two ways, and then we classify in two ways again, from each one,
757
786
https://www.youtube.com/watch?v=yexR53My2O4&t=757s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
and then we classify in two ways again. As you know, a thousand is like two to the ten, so we need approximately ten layers of this before we are actually arriving at a thousand classes, but it also means that we only have two-way classifications each time. So in the hierarchical softmax we build trees like this, and then, so we have a word, we look up its vector, and then we
786
815
https://www.youtube.com/watch?v=yexR53My2O4&t=786s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
classify it for each of these nodes. So your output isn't going to be a thousand log probabilities, your output is going to be a binary log probability for each of the nodes right here. So you want to know, okay, here, is it in the upper half or the lower half of my classes? Okay, cool, it's in the upper half. Okay, here, is it in the upper half or the lower half? And so on,
815
843
https://www.youtube.com/watch?v=yexR53My2O4&t=815s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
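A toy sketch of that idea: the log probability of a word is the sum of binary decisions along its path from the root to the word's leaf. The path representation below (a list of inner-node vectors plus left/right signs) is a simplification, not the paper's exact notation:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hierarchical_log_prob(input_vec, path):
    # path: list of (inner_node_vector, direction) pairs from the root to the word's leaf,
    # where direction is +1 for one child and -1 for the other
    logp = 0.0
    for node_vec, direction in path:
        # one cheap binary decision per inner node instead of one huge softmax
        logp += np.log(sigmoid(direction * np.dot(node_vec, input_vec)))
    return logp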
yexR53My2O4
and you learn to predict all of these junctions right here, and that's going to end up with you having to predict less. Now of course you are constrained, you impose a very big prior on the class distribution, classes aren't independent anymore, namely, if two classes here are in the same subtree, that means that their predictions are going to
843
868
https://www.youtube.com/watch?v=yexR53My2O4&t=843s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
be correlated, because the path to them is partially the same. So how you arrange the classes here is very important, and there has been a lot of work on this, but as I said, this is rather a distraction right here. Hierarchical softmax is a way to solve this, however they went with a different way right here, they went with this approach called negative sampling. Negative sampling has been
868
900
https://www.youtube.com/watch?v=yexR53My2O4&t=868s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
very influential, not only in word2vec, negative sampling is one of the cornerstones of the current trend in self-supervised learning, in contrastive estimation and so on. So all of this, you know, it pops up in unlikely ways in other fields, and I'm not gonna say it originated here, but definitely it was introduced into the popular deep learning world
900
931
https://www.youtube.com/watch?v=yexR53My2O4&t=900s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
right here. So they say an alternative to hierarchical softmax is noise contrastive estimation, okay, so noise contrastive estimation posits that a good model should be able to differentiate data from noise by means of logistic regression. You know, that seems very reasonable, this is similar to the hinge loss and so on, yada yada. While NCE can be shown to approximately
931
958
https://www.youtube.com/watch?v=yexR53My2O4&t=931s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
maximize the log probability of the softmax, the skip-gram model is only concerned with learning high-quality vector representations, so we are free to simplify noise contrastive estimation as long as the vector representations retain their quality. We define negative sampling by the following objective. So this is very interesting, they say, okay, noise contrastive estimation, you know, it
958
981
https://www.youtube.com/watch?v=yexR53My2O4&t=958s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
approximately maximizes the log probability, so noise contrastive estimation would actually be the correct way to approximate their problem, however they say, well, as long as something reasonable comes out, we're free to change that up a bit. So they go with this negative sampling approach right here, and you can see that this is almost the same, so it's
981
1,009
https://www.youtube.com/watch?v=yexR53My2O4&t=981s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
written a bit differently from the original softmax thing, because the original softmax thing was written as a fraction and here it's a sum, but what you're trying to do in the negative sampling framework is, you're trying to maximize the following: you're trying to maximize the inner product of the word you're considering and the words around it. Okay, so
1,009
1,034
https://www.youtube.com/watch?v=yexR53My2O4&t=1009s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
you're still trying to predict the words around you, but now, instead of having this softmax over all of the classes, you only have the softmax over a subset of classes. So what you'll do is you sample words from your vocabulary at random, and you sample k of them, and you're simply trying to now minimize the inner product between those words and your input word,
1,034
1,065
https://www.youtube.com/watch?v=yexR53My2O4&t=1034s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
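The negative sampling objective from the paper for one (input word, context word) pair, with k noise words drawn from the noise distribution P_n(w):

\[
\log \sigma\!\left({v'_{w_O}}^{\top} v_{w_I}\right) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\!\left[\log \sigma\!\left(-{v'_{w_i}}^{\top} v_{w_I}\right)\right]
\]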
yexR53My2O4
okay. So what does that ultimately lead to? It ultimately leads to the following: you have a word, like this word here, negative, and what you're trying to do is, you're not trying that much to predict the word sampling, what you're trying to do is you're trying to say that in my space right here, I simply want sampling to be closer than any other word that's not in the context window. Okay, so here is
1,065
1,097
https://www.youtube.com/watch?v=yexR53My2O4&t=1065s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
my word negative and here is my word sampling, and I want these two to be close, and if I sample another word, like here, this is the word cake, if I sample that, I simply want that to be farther away than the word sampling. Okay, so this is now comparative, it's not: I classify sampling as the highest class, it's simply: I want to classify the word sampling against
1,097
1,127
https://www.youtube.com/watch?v=yexR53My2O4&t=1097s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
the other classes higher. All right, and this is now much, much easier, so instead of a thousand- or ten-thousand- or a million-way classification, I now maybe have a k-plus-one-way classification, right, pretty easy, right? I simply sample k other words, and it is assumed that, because I have so many words, the chance that I actually sample one that's in my context window is very
1,127
1,157
https://www.youtube.com/watch?v=yexR53My2O4&t=1127s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
small, right. So I simply sample other words and I say, well, these other words are random, they have nothing to do with the current phrase that I'm looking at, so they should be, you know, they can be whatever they want, but at least they should be farther away than the words that are actually in my context, and that is negative sampling, the process of sampling negatives, this
1,157
1,184
https://www.youtube.com/watch?v=yexR53My2O4&t=1157s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
right here, and then making sure that the positives, which are these here, in this case the words in the context, are classified with a higher probability than the negatives, for a given input, right, this here is the input word. That's it, that's negative sampling, and of course, yeah, as I said, you recognize this from current things like self-supervised learning, where you wanna have
1,184
1,216
https://www.youtube.com/watch?v=yexR53My2O4&t=1184s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
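A compact numpy sketch of that per-pair loss, assuming input vectors V, output vectors V_out and already-sampled negative indices (all hypothetical names, not the released C implementation):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(V, V_out, center, context, negatives):
    # pull the true context word towards the center word ...
    positive_term = np.log(sigmoid(V_out[context] @ V[center]))
    # ... and push the k randomly sampled noise words away from it
    negative_term = np.sum(np.log(sigmoid(-V_out[negatives] @ V[center])))
    return -(positive_term + negative_term)  # minimize the negated objective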
yexR53My2O4
the same image augmented twice go through the pipeline, you know, you augment, you put a little bit of different noise, and then you have a different image, and at the end you say these two should be close together while this other one should be far apart. It's the exact same thing here, except that you have a different way of obtaining the positive and the negative samples. In
1,216
1,241
https://www.youtube.com/watch?v=yexR53My2O4&t=1216s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
this case, positive samples are everything that's in the context, negative samples are just randomly sampled from the dataset, and that, you know, of course works much, much faster, and you can see that this turns out to give you vectors that are pretty good, and you can train with higher-dimensional vectors, you can train with
1,241
1,268
https://www.youtube.com/watch?v=yexR53My2O4&t=1241s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
bigger vocabularies with this. This has turned out to be very, very influential, as I said, now with the rise of BERT and so on word2vec is kind of getting forgotten, but this was a revolution in distributed vectors. It kind of was a thing before that, but it wasn't really a thing that people used, what people would still do is they would do n-gram models before
1,268
1,296
https://www.youtube.com/watch?v=yexR53My2O4&t=1268s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
that, so they would sort of chunk up their sentences into n-grams, into overlapping n-grams, and then have a big giant table where they index their n-grams, so, I don't know, the word hello is ID 1, the bigram hello there is ID 2, and so on. So you have a big table for all the n-grams, and then what you would try to do is this
1,296
1,326
https://www.youtube.com/watch?v=yexR53My2O4&t=1296s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
kind of bag-of-words estimation, where you would take, you know, whatever n-grams appeared in the sentence, and you would have this big, you know, classification where you'd associate the n-grams with each other and so on. So distributed word representations were kind of a revolution at that point, especially distributed representations that actually outperformed these old n-gram methods. So
1,326
1,353
https://www.youtube.com/watch?v=yexR53My2O4&t=1326s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
there are a number of tricks right here that are, I think, not understood to this day. For example, the question is, how do you sample these negative samples? Right here, this basically says, get k words from your vocabulary at random according to this distribution right here. Now how are you going to do that? Basically you have a spectrum of options: the one side of the spectrum is going to
1,353
1,382
https://www.youtube.com/watch?v=yexR53My2O4&t=1353s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
be completely uniform, okay, we sample each word with the same probability, and the other side of the spectrum is something like, sample according to their unigram frequency. These are two different things, they're opposites in this fashion, so here you say, hey, some words appear way, way more often than other words, shouldn't we prefer them when we sample, right, shouldn't we, if we
1,382
1,412
https://www.youtube.com/watch?v=yexR53My2O4&t=1382s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
have a corpus, shouldn't we sample from the corpus, and if in the corpus one word appears 50 times more than the other word, then shouldn't we sample that 50 times more as a negative, because it's, you know, so abundant and it should get a higher classification accuracy. Whereas on the other hand you could say, no, no, we should simply sample every word in our dictionary uniformly. They came up
1,412
1,437
https://www.youtube.com/watch?v=yexR53My2O4&t=1412s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
with something in between, where they say: both NCE and negative sampling have the noise distribution as a free parameter, we investigated a number of choices and found that the unigram distribution raised to the 3/4 power, i.e. U(w)^(3/4)/Z, outperformed significantly the unigram and uniform distributions for both NCE and NEG on every task, including language modeling. This, I think,
1,437
1,469
https://www.youtube.com/watch?v=yexR53My2O4&t=1437s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
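A small sketch of that smoothed noise distribution, assuming a dict of raw word counts; the 3/4 exponent is the one quoted above:

import numpy as np

def noise_distribution(counts, power=0.75):
    # unigram counts raised to the 3/4 power, then renormalized to sum to one
    words = list(counts)
    probs = np.array([counts[w] for w in words], dtype=float) ** power
    return words, probs / probs.sum()

def sample_negatives(words, probs, k=5):
    # draw k negative samples according to the smoothed unigram distribution
    return list(np.random.choice(words, size=k, p=probs))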
yexR53My2O4
is a mystery until today, and it actually turned out that this exponent right here is magically much better than, like, the exponent of one or even the exponent of one half. Like, you might reasonably assume that the square root, you know, might be something, but the 3/4, I think, turned out to be very good and very mystical. So what does it mean? It means that you have kind of a
1,469
1,495
https://www.youtube.com/watch?v=yexR53My2O4&t=1469s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
balance between words that appear often and words that don't appear often. Usually in these kinds of things you have a power law, where you have very few words that appear very often, and then you have, okay, the tail shouldn't go up, but you have a very long tail of words, right, and what you want to do is, in this case, you want to sample these words here more, but
1,495
1,517
https://www.youtube.com/watch?v=yexR53My2O4&t=1495s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
they appear so much more often that, if you simply sample according to their unigram distribution, you basically disregard these words right here, you'll forget about them, and your performance will suffer, because they do appear every now and then. So what you want to do is you want to push those down a little bit, and the optimal amount for the little bit turns out to be to raise it
1,517
1,541
https://www.youtube.com/watch?v=yexR53My2O4&t=1517s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
to the 3/4. Strange, but, you know, it turned out to work well. The other thing they do is a subsampling of frequent words, so again this is a way to kind of push down the often-appearing words, where they say: the most frequent words can easily occur hundreds of millions of times, like in, the and a, such words usually provide less information value than the rare
1,541
1,573
https://www.youtube.com/watch?v=yexR53My2O4&t=1541s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
words. For example, while the skip-gram model benefits from observing the co-occurrences of France and Paris, it benefits much less from observing the frequent co-occurrences of France and the, as nearly every word co-occurs frequently within a sentence with the. So they do another trick here, to counter this imbalance between rare and frequent words they use a simple subsampling approach:
1,573
1,597
https://www.youtube.com/watch?v=yexR53My2O4&t=1573s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
each word in the training set is discarded with probability P(w_i) computed by that formula, so that formula right here, and you might be asking, again, why this formula. So this is the discard probability of a word, and it goes with one minus the square root of t over f, where t is a threshold parameter and f is the frequency with which the word appears in the corpus, so as you can see, as the word
1,597
1,626
https://www.youtube.com/watch?v=yexR53My2O4&t=1597s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
appears more in the corpus, so this is the frequency, as the word appears more, this term goes down and this probability goes up, so it's discarded with a higher probability if it appears more often, where f is the frequency of the word and t is a chosen threshold. We chose this subsampling formula because
1,626
1,655
https://www.youtube.com/watch?v=yexR53My2O4&t=1626s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
it aggressively subsamples words whose frequency is greater than t while preserving the ranking of the frequencies. Although this subsampling formula was chosen heuristically, we found it to work well in practice, it accelerates learning and even significantly improves the accuracy of the learned vectors of the rare words, as will be shown in the following sections.
1,655
1,676
https://www.youtube.com/watch?v=yexR53My2O4&t=1655s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
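The subsampling formula being read here, where f(w_i) is the frequency of word w_i in the corpus and t the chosen threshold (the paper uses values around 10^-5):

\[
P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}}
\]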
yexR53My2O4
So again, something sort of arbitrary, it's even more understandable than the 3/4, but still it's sort of arbitrary, they experimented around, they found this works well, and then everybody ended up, you know, using that, so that's how this kind of stuff happens. Okay, so now we get into the empirical results, and the empirical results in this case were already sort of given in the previous
1,676
1,702
https://www.youtube.com/watch?v=yexR53My2O4&t=1676s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
paper, but here they have the analogical reasoning task, where you can see that negative sampling did outperform the others by quite a bit right here. So the negative sampling approaches outperformed the hierarchical softmax and the noise contrastive estimation, and in the previous paper they also compared with other baselines and saw that it also outperforms those, while being
1,702
1,734
https://www.youtube.com/watch?v=yexR53My2O4&t=1702s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
quite time-efficient. So you can see that, especially with these subsampling approaches, the time here, there's 36 minutes, and again, I think they have like a huge corpus that they train on. This word2vec code turned out to be really, really efficient code, and that's why it got so popular as well. They did the same thing for phrases right here, so for phrases like New York Times
1,734
1,767
https://www.youtube.com/watch?v=yexR53My2O4&t=1734s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
and so on, but this was kind of more of a side thing, the phrase vectors turned out to be, you know, rather a side thing from the actual code right here. So yeah, as I said, this paper is very different from other research papers, in that it's sort of half an engineering paper, and all of these papers are kinda hard to read, because they just kind of state
1,767
1,799
https://www.youtube.com/watch?v=yexR53My2O4&t=1767s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
some things, the order is kind of weird sometimes, why they do things is kind of weird sometimes, but you can't deny that it had quite the effect on the community, and it is a very cool paper, a very cool series of papers, and it's very cool that they actually released the code, and they made the code such that it is super duper efficient, even like on a single
1,799
1,830
https://www.youtube.com/watch?v=yexR53My2O4&t=1799s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
yexR53My2O4
machine, and that was very cool, because, you know, being Google, they could have just released code that is very efficient on a distributed data center, and they didn't do that, so this is sort of not really like today anymore, where when they release code it's always, you need like 50 cloud TPUs to do it, and it's still cool that they released
1,830
1,858
https://www.youtube.com/watch?v=yexR53My2O4&t=1830s
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
https://i.ytimg.com/vi/y…axresdefault.jpg
T35ba_VXkMY
hi there, today we're going to look at End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa and others at Facebook AI Research. So on a high level, this paper does object detection in images using first a CNN and then a transformer to detect objects, and it does so via a bipartite matching training objective, and this leaves you basically with an
0
28
https://www.youtube.com/watch?v=T35ba_VXkMY&t=0s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
architecture that is super, super simple compared to the previous architectures that had all kinds of engineering hurdles and thresholds and hyperparameters. So really excited for this, as always, if you like content like this, consider leaving a like, a comment, or subscribing. Let's get into it. So let's say you have a picture like this here, and you're supposed to detect all the
28
54
https://www.youtube.com/watch?v=T35ba_VXkMY&t=28s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
objects in it, and also where they are and what they are, this task is called object detection. So a good classifier here would say, there's a bird right here, so this is a bird, and then this here is also a bird, right, they can be overlapping, these bounding boxes, so this is, you see, the first problem, that bird, why is that green, never mind, okay, and those are the only two objects. So
54
85
https://www.youtube.com/watch?v=T35ba_VXkMY&t=54s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
there's a number of very difficult things here. First of all, you need to sort of detect the objects, you need to know how many there are, it's not always the same in each image, there can be multiple objects of the same class, there can be multiple objects of different classes, they can be anywhere, of any size, they can be overlapping, in the background, small, or
85
106
https://www.youtube.com/watch?v=T35ba_VXkMY&t=85s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
across the entire image, they can partially occlude each other, so the problem is a very, very difficult problem, and previous work has done a lot of engineering on this, like building detectors, where you kind of want to classify every single pixel here, and then you get like two detections right here that are very close for the same class, so maybe they must be the same instance,
106
131
https://www.youtube.com/watch?v=T35ba_VXkMY&t=106s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
right, so there's only one thing here and not two things, and so on. So there used to be very complicated architectures that solve these problems, and this paper here comes up with a super simple architecture, and we'll kind of go from the high level down to the implementation of each of the parts. So what does this paper propose, how do we solve a task like this? First of all, we
131
155
https://www.youtube.com/watch?v=T35ba_VXkMY&t=131s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
take the image, the image here without the labels, of course, and we put it through a convolutional neural network encoder. Since this is an image task, it's, you know, kind of understandable that we do this, mostly because CNNs just work so well for images. So this gives us this set of image features, and I think this vector here is not really representative of what's happening, so
155
182
https://www.youtube.com/watch?v=T35ba_VXkMY&t=155s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
let's actually take this picture right here and throw it in kind of an angled way and what what we'll do with the CNN is we'll simply sort of scale it down but have it multiple so here it's three channels right it's red green and blue like this three channels but we'll scale it down but we make it more channels so yeah so more channels okay but it's still sort of an image right
182
216
https://www.youtube.com/watch?v=T35ba_VXkMY&t=182s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
here it still has the image form. Okay, so the CNN basically gives us this thing, which is sort of a higher-level representation of the image with many more feature channels, but still kind of the information of where in the image those features are, and this is going to be important in a second, because now this thing, which is this set of image features, goes into a transformer encoder
216
242
https://www.youtube.com/watch?v=T35ba_VXkMY&t=216s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
decoder, and this is sort of the magic thing here as a component, we'll look into that in a second, but what we'll take out right here is this set of box predictions. So out comes, each of these boxes here is going to consist of a tuple, and the tuple is going to be the class and the bounding box, okay, so an example for this could be: bird, bird at x equals two, y equals five, okay,
242
276
https://www.youtube.com/watch?v=T35ba_VXkMY&t=242s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
that's an example. Another example of this could also be: there is nothing at x equals seven, y equals nine, okay, so nothing, the nothing class is a valid class right here, and that's also important, but safe to say there is this set of box predictions, and that is basically your output, right, these things are your output, if you have those things you can draw these bounding boxes, you
276
307
https://www.youtube.com/watch?v=T35ba_VXkMY&t=276s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
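A very rough PyTorch-style sketch of that pipeline: a CNN backbone, a transformer, and then a class head and a box head for every prediction slot. The toy backbone, the module sizes and the omission of positional encodings are simplifications, this is not the actual DETR code:

import torch
import torch.nn as nn

class TinyDETR(nn.Module):
    def __init__(self, num_classes, num_queries=100, d=256):
        super().__init__()
        self.backbone = nn.Conv2d(3, d, kernel_size=16, stride=16)  # stand-in for a real CNN
        self.transformer = nn.Transformer(d_model=d, batch_first=True)
        self.queries = nn.Embedding(num_queries, d)       # learned object query slots
        self.class_head = nn.Linear(d, num_classes + 1)   # +1 for the "nothing" class
        self.box_head = nn.Linear(d, 4)                   # box as (cx, cy, w, h)

    def forward(self, images):
        feats = self.backbone(images)                     # B x d x H x W image features
        tokens = feats.flatten(2).transpose(1, 2)         # B x (H*W) x d sequence
        q = self.queries.weight.unsqueeze(0).expand(images.size(0), -1, -1)
        hs = self.transformer(tokens, q)                  # B x num_queries x d
        return self.class_head(hs), self.box_head(hs).sigmoid()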
T35ba_VXkMY
can assign the labels. The question is, how do you train it? Now what you're given is a database of images, and these images, as you see here on the right, these images already have these bounding boxes drawn in by human annotators, and also labels, so this here would be annotated with bird and this here would be annotated with bird, but it doesn't have any of these, like it doesn't
307
333
https://www.youtube.com/watch?v=T35ba_VXkMY&t=307s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
annotate the nothing classes and so on. So the question is, how do you compare the two? Can you simply say, okay, if the first one here is the bird and the second one is this bird, then it's good? But then, you know, the ordering shouldn't matter, you simply care whether you have the correct bounding boxes, you don't care whether you have put them in the correct order, and also,
333
360
https://www.youtube.com/watch?v=T35ba_VXkMY&t=333s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
what if your classifier does something like this: it outputs those two boxes we see here, but it also outputs this here and says bird, or like one that is slightly off and says bird, and so on. So how do you deal with all of these cases? The way that this paper deals with all of these cases is with their bipartite matching loss, this thing right here. So how does it work? Let's say your
360
389
https://www.youtube.com/watch?v=T35ba_VXkMY&t=360s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
classifier, where can we go, so here is an image, I'll have to wait for this to catch up, here is an image, and we put it through this entire pipeline and we get a set of predictions, right, and they're going to be class, bounding box, class, bounding box, class, bounding box. Now the first thing you need to know is that there are always the same amount of
389
419
https://www.youtube.com/watch?v=T35ba_VXkMY&t=389s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
predictions, right, this size here is always fixed, that's large N, okay, that is sort of a maximum of predictions, since you can always predict either a class or the nothing class, in this case you could predict anywhere from zero to five objects in the scene, right. Okay, and then the second thing is, from your database you get out an image with its bounding
419
446
https://www.youtube.com/watch?v=T35ba_VXkMY&t=419s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
box annotations, right, that are made by human labelers, let's say these two, and you also do class, bounding box, class, bounding box, but now you see we only have two instances, so here we just pad with the nothing class. So, I don't know what the bounding box should be for the nothing class, it doesn't really matter: nothing, no bounding box, nothing, no bounding box,
446
475
https://www.youtube.com/watch?v=T35ba_VXkMY&t=446s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
no bounding box. So your ground truth labels, if you will, are also of size N, so you always compare N things here on the left, that your classifier output, with N things on the right. Now, as we already said, the question is how you deal with this, you can't simply compare one by one, because the ordering should not be important, but also you don't want to encourage your classifier to always kind
475
509
https://www.youtube.com/watch?v=T35ba_VXkMY&t=475s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
of, if the one bird is very prominent, right, you don't want to encourage your classifier to say, here's a bird, here's a bird, there's a bird right here, hey, there's a bird, there's a bird, there's a bird, basically just because the signal for that bird is stronger, and basically ignore the other bird. What you want to do is you want to encourage, some
509
529
https://www.youtube.com/watch?v=T35ba_VXkMY&t=509s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
sort of, your classifier such that if it has already detected an object, it shouldn't detect it again in a slightly different place. So the way you do this is with this bipartite matching loss. So at the time when you compute the loss, you go here and you compute what's called a maximum matching. Now what you have to provide is a loss function, so there's a loss
529
557
https://www.youtube.com/watch?v=T35ba_VXkMY&t=529s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
function L, and L will take two of these things, L will take the predicted thing of your model and L will take one of the true underlying things, and L will compute a number and will say how well do these two agree. So you can say, for example, if either of them is the nothing class, then I have no loss, like I don't care about them, that gives you no loss, but if the
557
590
https://www.youtube.com/watch?v=T35ba_VXkMY&t=557s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
two classes agree and the two bounding boxes agree, then it's very good, right, and we maybe even give like some negative loss, or give loss zero, but if the bounding boxes agree but the classes don't agree, then you say that's bad, or the other way around, if the classes agree but the bounding boxes don't, or even if everything disagrees, it's the worst. What you're basically saying is, if
590
618
https://www.youtube.com/watch?v=T35ba_VXkMY&t=590s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
these two would correspond to each other, right, if the thing on the left were the prediction for the thing on the right, which we don't know, right, it could be that the thing on the right refers to the bird on the right and the thing on the left refers to the bird on the left, so it would be natural that the bounding boxes aren't the same, but you say, if these were corresponding to each other,
618
643
https://www.youtube.com/watch?v=T35ba_VXkMY&t=618s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
what would the loss be, how well would they do? And now if you compute this bipartite matching, what you want, I guess it's a minimum matching in this case, what you want is to find an assignment of things on the left to things on the right, a one-to-one assignment, this is an example of a one-to-one assignment, everything on the left is assigned exactly one thing on the
643
668
https://www.youtube.com/watch?v=T35ba_VXkMY&t=643s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
T35ba_VXkMY
right, such that the total loss is minimized, right. So you're going to say, I'm going to align the things on the left with the things on the right such that it's maximally favorable, right, I give you the maximum benefit of the doubt by aligning these things, and, so in the best possible case, what's the loss? Okay, I hope this is somehow clear. So you're trying to find the
668
698
https://www.youtube.com/watch?v=T35ba_VXkMY&t=668s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg
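A minimal sketch of that matching step using the Hungarian algorithm from scipy; the pair_cost function here is a placeholder standing in for the class-and-box loss L described above:

import numpy as np
from scipy.optimize import linear_sum_assignment

def best_assignment(predictions, targets, pair_cost):
    # cost[i][j] says how badly prediction i matches ground-truth slot j
    cost = np.array([[pair_cost(p, t) for t in targets] for p in predictions])
    # Hungarian algorithm: the one-to-one assignment with the minimal total cost
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols].sum()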
T35ba_VXkMY
assignment from the left to the right that basically is the best case for this output right here, where you really say, oh okay, here you output a bird very close to the bird here in the ground truth label, that's this here, so I'm going to connect these two, because that sort of gives the model the most benefit of the doubt, and
698
725
https://www.youtube.com/watch?v=T35ba_VXkMY&t=698s
DETR: End-to-End Object Detection with Transformers (Paper Explained)
https://i.ytimg.com/vi/T…axresdefault.jpg