Columns:
video_id: string (length 11)
text: string (length 361–490)
start_second: int64 (0–11.3k)
end_second: int64 (18–11.3k)
url: string (length 48–52)
title: string (length 0–100)
thumbnail: string (length 0–52)
F5aaXrIMWyU
animations, so you can see how one of these games turns out. This is a free-market game, and you can see the agents moving around, collecting things and building houses. You might notice that one of the agents, namely agent one, is building all of the houses and generally being kind of a jerk, getting in everyone's face and building things everywhere,
652
676
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=652s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
while the other ones build none, or only very few, like the light blue one on the bottom left. On the right you can see how the distribution of wealth is structured, and you see that agent one ends up with most of the wealth. The size of the circle, I think, is the total productivity, so you can see this grows over time, mainly because agent one becomes so
676
705
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=676s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
rich. If you analyze what's happening here, you'll see that agent one... they have a graph up here, and it is very interesting what happens. This is essentially the same game: agent one here is this orange dot, and agents two, three and four are these dots here. This graph here is coin from trading, i.e. how much money they
705
742
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=705s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
win or lose from trading. The green bars are trading wood and the brown bars are trading stone. So you see that agent four, which is the lowest-skilled one (the skill is just determined at the beginning of the episode), makes basically all of its coins by selling wood, agent three makes all of its coins by selling stone, and agent two collects both and sells
742
772
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=742s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
both, while agent one just spends money in trading. So you have a specialization here: agent one, which is the highest-skilled one right here, buys resources in order to build more houses, because it clearly profits from building lots and lots of houses. It uses that money to buy more resources rather than go out collecting them, while all the other ones basically
772
801
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=772s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
forgo building houses: they just collect the resources and trade them away to agent one, because that's more profitable for them than building houses themselves. So you see this kind of specialization emerging in these games, which I find pretty cool, a really stark division of labor emerging just from
801
826
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=801s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
this very small set of rules. You can analyze the game in different ways; they have a few more plots where it becomes quite apparent that these agents specialize. You see here resources collected for the lowest-skill and the highest-skill laborers: the lowest-skill ones mainly
826
860
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=826s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
collect resources, while the highest-skill laborer mainly builds things; it doesn't collect resources, but its net income from building is really high, while everyone else hardly builds at all. All right, so we have a division of labor emerging. Now, that was the free market; let's compare the different algorithms. If you look at social welfare, which is this thing
860
893
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=860s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
here, equality times productivity, you can see that the AI Economist, over the course of training, outperforms all of the other systems: the free market, the US federal tax system and the Saez formula, if trained for long enough. Which is to be expected, right? If you put RL onto a cost function, it will optimize that cost function (a small sketch of this objective follows this segment), but it's
893
921
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=893s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
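To make the objective mentioned above concrete, here is a rough sketch (my own illustration, not code from the paper) of a social-welfare score computed as productivity times equality, where equality is assumed here to be one minus the Gini coefficient of the agents' coin endowments; the paper may normalize this differently.

```python
import numpy as np

def gini(coins: np.ndarray) -> float:
    """Gini coefficient of a non-negative wealth vector (0 = perfect equality)."""
    coins = np.sort(coins.astype(float))
    n = coins.size
    total = coins.sum()
    # Standard formula on sorted values: G = 2*sum_i(i*x_i) / (n*sum(x)) - (n+1)/n
    return (2.0 * np.sum(np.arange(1, n + 1) * coins)) / (n * total) - (n + 1) / n

def social_welfare(coins: np.ndarray) -> float:
    """Equality-times-productivity objective, as described in the video."""
    productivity = coins.sum()        # total coins produced by all agents
    equality = 1.0 - gini(coins)      # assumed equality measure (1 - Gini)
    return equality * productivity

print(social_welfare(np.array([10.0, 10.0, 10.0, 10.0])))  # equal society: 40.0
print(social_welfare(np.array([37.0, 1.0, 1.0, 1.0])))     # unequal society: lower score
```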
F5aaXrIMWyU
pretty cool to see that there's a lot of headroom here over what we currently have. Now let's look at some of the strategies it comes up with. What do these games look like when the AI has imposed different tax strategies? This is with the Saez strategy; you see that here, again, this inequality emerges, with the yellow player here
921
950
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=921s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
building most of the houses. With the AI Economist there is again inequality, but you can see in the distribution that agent one only ends up with about half of the wealth, whereas if you compare this to the free market here, agent one ends up with something like two-thirds of the wealth (this is the game we saw before). So there is not qualitatively that much of a difference
950
978
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=950s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
in behavior, but there is in the end result. All right, let's look at what these policies actually come up with. What is the tax policy that the AI comes up with? This tax policy outperforms on the social welfare metric, and it is very interesting. First of all, you see that it zigzags: down, up, down, up, which is already weird. The first very weird thing
978
1,012
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=978s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
is the spike at the very bottom. What's that thing there? Those are the poorest people in your society, and you're taxing them the highest. Just imagine this: you're down here, downtrodden by life, abandoned by society, you have no money, no house, nothing, and you're just trying to get a job; you're just getting a little bit of money, and
1,012
1,043
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1012s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
you can buy a cheeseburger, and then the government comes: give us that money. So basically these are the poor, and in this system the message to the poor is just "screw you." Now, the reason why this happens is pretty clear: you want to encourage people to move over here, to earn more money. It's not like the government makes any money from
1,043
1,080
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1043s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
the poor people, no matter how highly it taxes them, but it is basically an incentive structure to make them move over to the somewhat more productive population, because it's kind of assumed that even the lowest-skilled ones can move over a bit if you just tax them enough in the low brackets. This is what I find... you just have to realize that it is
1,080
1,109
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1080s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
so hard, I believe almost impossible, to encapsulate what we really want from a system into a formula, into a cost function to be optimized. It is so incredibly hard, and you see that here: of course it results in a better measured social outcome, but it just doesn't feel right to tax the poor at, what, 60 percent? Okay, so: screw the poor. Then you get to this level right here, and
1,109
1,140
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1109s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
interestingly, if you earn even more, you'll be taxed high again. We're kind of used to that: you earn little, you pay little; you earn more, you pay more. But then comes this entire valley here. What's up with that? This is, of course, the same reasoning as you have with the Saez formula, where for the
1,140
1,172
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1140s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
rich people, you want to tax them less so that they are more productive, so that they generate more coins, and even though you tax them less percentage-wise, they end up paying more money in absolute terms, because you basically encourage them to produce more. That is, I guess, the reasoning behind this. But what you have to
1,172
1,203
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1172s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
recognize is what's happening here. What are we optimizing? We're optimizing productivity times equality, and what do we get? You get two big basins of attraction, one here and one here, and that means this algorithm favors a two-class society. I believe this is partially a limitation of the simulation: the fact that there are only four agents, the
1,203
1,233
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1203s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
fact that you can only do two things, either collect or build, encourages a two-class society, this specialization that you saw. So you'd say these here are the money makers and these here are the collectors, and it is very hard to move from one group to the other, because if you earn more coins as a collector, you're here, and you're really discouraged here:
1,233
1,259
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1233s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
if you move there, you'd want to move all the way over here. Now, the people that are already over here, if they earn an extra coin, that doesn't bother them too much, so they're very much encouraged to earn more money; but the poorer people on this side are basically discouraged from earning more money, because the system needs them to stay at that collector level.
1,259
1,284
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1259s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
So the system encourages the two-class society because we have not built social mobility into the equation; we have not built a measure of social mobility into the cost function, and therefore the AI doesn't care that the poor people stay poor and the rich people stay rich. It just knows that this is the best outcome for society overall,
1,284
1,313
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1284s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
given the cost function that we had. Again, this just doesn't seem fair to us; what we want is for someone to be able to make it over here, even if they start out at the bottom, and we'd have to build that in. So we have a system that screws the poor and has no social mobility. And then look at what's happening at the end,
1,313
1,345
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1313s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
at the very end: this is beautiful. The very rich people, these are the money makers, this is the Monopoly guy, the top-hat, monocle-wearing Scrooge McDuck bathing in coins; this is where the government makes its money. And the discrepancy is really stunning, because you could also argue: hey, why don't we apply the same reasoning as we applied here and here?
1,345
1,376
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1345s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
Is it not also the case that if you tax the rich people lower, they'll pay more money, and so on? I believe, again, this might just be a result of how the simulation is set up, so we'll move on quickly and come back to this. Here is what I find particularly interesting about this paper, and what just confuses the heck out of me: it is a doubly periodic game, an
1,376
1,404
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1376s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
inner/outer-loop game. What do I mean by that? They have these episodes; here is the start and here is the end, and they subdivide this, as we said, into 1,000 steps. So an agent is here, it takes step after step, performing these actions; there are 1,000 steps, and the agent just tries to collect as much coin as possible. This is your classic RL
1,404
1,433
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1404s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
problem. But they also divide this into 10 of what they call periods (I'm just going to draw maybe four of them). This thing here they call one period, while the whole thing is an episode. The purpose of the period is that at the beginning of each period the government can impose a new tax schedule (a small sketch of this loop follows this segment), so the government doesn't only fix the taxes once but
1,433
1,466
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1433s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
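The inner/outer-loop structure described above could be sketched roughly as follows; the environment, agent and planner interfaces are hypothetical placeholders, and the 1,000-step / 10-period split is the one quoted in the video.

```python
# Hypothetical sketch of the episode/period structure described above (not the paper's code).
EPISODE_STEPS = 1000
NUM_PERIODS = 10
STEPS_PER_PERIOD = EPISODE_STEPS // NUM_PERIODS  # 100 agent steps per tax period

def run_episode(env, agents, planner):
    obs = env.reset()
    info = {}
    for period in range(NUM_PERIODS):
        # Outer loop: the planner (government) issues a new tax schedule
        # at the start of every period.
        tax_schedule = planner.act(obs)
        env.set_taxes(tax_schedule)
        for _ in range(STEPS_PER_PERIOD):
            # Inner loop: agents move, collect, trade, and build.
            actions = {name: agent.act(obs[name]) for name, agent in agents.items()}
            obs, rewards, done, info = env.step(actions)
            if done:
                return info
    return info
```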
F5aaXrIMWyU
can change the taxes over the course of the episode. Now, this is the part where I just don't see why: you're formulating the tax-setting objective as sequential decision making. It's like the government saying, well, today we have high taxes, but tomorrow we have low taxes, and the day after that we have high taxes again, and it just doesn't make sense
1,466
1,495
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1466s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
for any government to do this. What you should do is set taxes once at the beginning of the episode, see how that turns out, and then try to optimize your tax schedule, because we're only ever looking at how the taxes are at the end; the things we've examined are just the last taxes that the AI has issued.
1,495
1,520
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1495s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
We don't know the dynamics of what happens in between; what the AI does in between might actually be super wild. I just don't see the point of framing this as a sequential decision problem, and I believe it is just an over-engineered thing, because someone wanted a reason (and here is the architecture) to put an LSTM in there. Someone is thinking,
1,520
1,548
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1520s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
well, RL means sequential decisions and so on, whereas RL in this outer loop, the way I would propose it, would just be a one-step-per-episode decision, which is a bandit problem. And as we all know, bandits are boring, so they didn't want this to be a bandit problem, they wanted it to be a sequential problem, and that's why they made this period thing, which I find dumb.
1,548
1,574
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1548s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
Another factor here, and I'm going to tell you how this relates to the weird fact that the rich are taxed so high: look at this, it's a CNN, an MLP, an LSTM and another MLP, and the same for the agent. I can tell you right now, the CNN has two layers, two, and the LSTM has something like 128 units in its hidden state, so these are tiny, tiny models (a toy-sized sketch follows this segment). And it is not model-based RL, it's
1,574
1,607
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1574s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
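A toy-sized PyTorch sketch in the spirit of the architecture described above (two-layer CNN, an MLP for flat inputs such as the current tax rates, a 128-unit LSTM, and policy/value heads). The channel counts, observation shape and action-space size are my guesses for illustration, not values from the paper.

```python
import torch
import torch.nn as nn

class TinyAgentNet(nn.Module):
    """Toy-sized network in the spirit of the architecture described above.
    All sizes here are assumptions; only the 2-layer CNN and the 128-unit
    LSTM come from the video."""

    def __init__(self, in_channels=7, flat_dim=32, num_actions=50):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.mlp = nn.Sequential(nn.Linear(flat_dim, 64), nn.ReLU())
        # Assuming a 25x25 spatial observation, the CNN output is 32*5*5 = 800.
        self.lstm = nn.LSTM(input_size=32 * 5 * 5 + 64, hidden_size=128, batch_first=True)
        self.policy_head = nn.Linear(128, num_actions)
        self.value_head = nn.Linear(128, 1)

    def forward(self, spatial, flat, hidden=None):
        # spatial: (batch, in_channels, 25, 25); flat: (batch, flat_dim)
        x = torch.cat([self.cnn(spatial), self.mlp(flat)], dim=-1).unsqueeze(1)
        out, hidden = self.lstm(x, hidden)
        out = out.squeeze(1)
        return self.policy_head(out), self.value_head(out), hidden
```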
F5aaXrIMWyU
model-free RL, namely proximal policy optimization. The ability of these agents, or the planner, to learn anything substantial here is, I believe, just not that great. I believe these are rather dumb agents, and you can see that the tax rates given by the planner are fed into the agent model, but I don't think that the agent, given such a small
1,607
1,642
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1607s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
model, can actually adjust to these inputs, because you have to do some pretty involved reasoning to go from these tax brackets to how you should act. What I think is happening is that the agent is just kind of aware of its skill level and, through its rewards, is trying to maximize its future rewards, and when the government changes the tax rate,
1,642
1,667
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1642s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
it will not, I am almost positive, directly change its response to that; it will kind of observe that something is happening in the world and then maybe adjust its overall strategy a little bit, but not in that particular instance. The response will be delayed, or it will just be an overall strategy, and this might be one of the reasons why the tax brackets
1,667
1,695
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1667s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
here might be screwed up. Because, who says... if I were this AI, what I could do is: in periods one through nine I make the taxes really low for the rich people, so I encourage everyone to make more money, come on, become more productive, and I get the benefits of that; and then in the last period I just jack up that final tax bracket:
1,695
1,726
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1695s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
you have lots of money, give it to me. Then I just redistribute what I collected to the poor people in that very last period, and thereby I achieve the goal of this social welfare function. Of course this is not sustainable, because all the rich people would just be screwed by that and move down again, but it's the end of the
1,726
1,748
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1726s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
episode, so what are they going to do? I think the way this is framed, the fact that there are just two different ways to get coins, and the periodic nature of the outer loop all might lead to something that becomes slowly more and more uninterpretable. Still cool, though. All right, the final thing: they do this with humans. Yes, real humans.
1,748
1,781
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1748s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
They let humans try it, and they have this interface here, and the humans behave quite differently from the AI. There are a few differences in how the humans act, but look at that: here, "AI Economist" refers to what the agents do under that tax strategy. So they just take the learned tax strategies and let the humans be the agents, so that
1,781
1,810
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1781s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
you just observe how the agents act, and whether or not the tax strategies also work when it's real humans acting in this environment rather than RL agents. Compare this to how the humans act: the humans just build their houses in neat little clusters or straight lines or things like that, which I find very funny. Now, there are some things
1,810
1,838
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1810s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
lacking in the human environment which I find really important. First of all, there is no cost for moving, which I guess is minor, but second of all, there is no trade, and I think that just kills the whole experiment, because now, of course, wealth is just going to be proportional to how many coins you get per house, which is
1,838
1,861
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1838s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
different for each agent. To me that makes it a pointless experiment if you can't trade, because the outcome is just predictable, and I don't think that the human behavior changes in response to the different tax brackets. I think they'll just make money however they can: they'll build more houses until it becomes unprofitable, and
1,861
1,885
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1861s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
that's it. So I don't see the value of these experiments, even though they show that, again, the AI Economist outperforms the other tax strategies on this equality-times-productivity metric and also on another metric that they measure. The second problem I have is that for the human experiments they take this distribution here and say, well, this is one of the
1,885
1,913
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1885s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
tax schedules that the AI came up with. But you notice the lack of the "screw the poor" spike and the lack of the big spike for the rich people, which I find are the two defining features of the other schedule. So I think there's quite a bit of variance in what this AI comes up with, or maybe it's just because it is periodic, but it is really confusing, because they show and
1,913
1,938
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1913s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
discuss that other schedule, and now all of a sudden they say, well, we use this one, which was also created by our AI, and it seems to be qualitatively quite different. In any case, let's look at how the humans behave under the different strategies. Under the Saez formula you'll see that the light blue person here is spreading out a bit,
1,938
1,965
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1938s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
probably playing correctly, while everyone else is just neatly building their houses. Look at that, humans are so territorial: most of them stay in their little corner and go, this is my corridor, I'm going to build my houses here in a nice line. And under the AI Economist, again, you don't really see different behavior just because the taxes are different;
1,965
1,989
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1965s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
the qualitative behavior is quite the same, it's just building straight lines. I think the difference here is more between the humans (I don't think it's always the same humans), and you can kind of see that the humans clearly haven't trained on or discovered the optimal strategy; they're just doing
1,989
2,010
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=1989s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
something, and what you're seeing is just a result of the taxation, not different behavior. And this here is the best part: watch the human on the bottom right. First they do something, and then they just wall off the other players. This is the best: I'm going to build a big, beautiful wall, and I'm going to
2,010
2,037
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=2010s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
have the orange guy pay for it. It's Donald Trump in the game, amazing. And look, at the end they actually manage to lock in the other players so they can't move anymore; Donald Trump wins. Amazing. Though actually the yellow player appears to win economy-wise, but what good is lots of money if you can't move? So again, I find these human experiments rather pointless here, because you
2,037
2,072
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=2037s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
F5aaXrIMWyU
disable trade and you don't train the humans to find a good strategy. All right, but overall I find the entire paper to be pretty cool. Code is going to be released, they promise, and they have checked that they have no ethical problems, of course. I invite you to check out the paper, and if you like content like this, please subscribe, share and leave a comment about what you think.
2,072
2,099
https://www.youtube.com/watch?v=F5aaXrIMWyU&t=2072s
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
https://i.ytimg.com/vi/F…yU/hqdefault.jpg
SsnWM1xWDu4
So yeah, it's a pleasure to be here, and today we'll be cooking pseudo-labels. A couple of words about me: I'm a Kaggle competitions grandmaster located in Minsk, Belarus, my Kaggle nickname is b.e.s., and currently I work as a data scientist at H2O.ai. Today we'll be talking about some distinctions between labeled and unlabeled data, then we will talk about
0
31
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=0s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
what pseudo-labeling actually is, some use cases and some recipes for how to cook the pseudo-labels, and some examples from real Kaggle competitions where pseudo-labels were applied and achieved good results. So here is the general supervised learning problem: train and test data, where for the train data we are given labels, so we have some labeled data, and our goal
31
59
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=31s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
is to build a model to predict the test labels, so the test set is essentially unlabeled data. Generally this is the usual Kaggle competition scheme, where we are given some labeled data and need to build a model to predict on the unlabeled data. The problem we'll be talking about today is the situation where there is a lot of unlabeled data and the labeled data is
59
85
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=59s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
very small. This can happen both in your usual machine learning projects and in Kaggle competitions. You could probably create some tricky models on this small labeled data and apply various techniques, but probably the better approach is to somehow use this huge amount of unlabeled data. The reasons why we end up in such
85
116
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=85s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
situations where the labeled data is very small: the first one is that labeling is expensive. To label your data you need to hire special people, or domain experts, or use special software in order to obtain labeled data. Consequently it is also time consuming; you may need, for example, one month to label one more portion of your data,
116
142
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=116s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
and of course your management will not be satisfied with an approach that needs too much time when the model has to ship right away. There are other reasons as well: for example, the data may come from sophisticated experiments, where you need to set up some very hard-to-establish experimental procedure with lots of steps, so it is
142
163
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=142s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
hard to repeat frequently, and that is why it can also be expensive and time consuming. These are the basic reasons why we end up in such situations with small labeled data. Here is a quote by Andrew Ng: "It's not who has the best algorithm that wins, it's who has the most data," and it is probably even more relevant nowadays, when there are lots of
163
187
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=163s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
machine learning models that work out of the box, so you can solve almost any problem type just using some predefined models. The problem is that you can't apply these models when you don't have data, or when the labeled data is too small. This can also be extended to labeled data specifically: if you have small labeled data, it is also
187
211
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=187s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
hard to train a big supervised model. In the case where you're unable to get more labeled data, or it is impossible or hard to acquire, the method called semi-supervised learning is introduced. In a simple example here, on the left you have a classic supervised classification problem where we have, for example, two classes, triangles and squares, and our
211
241
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=211s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
goal is to build a classifier, a decision boundary that distinguishes these two classes. Here, on image (b), we have a sample decision boundary between these two classes. What semi-supervised learning allows you to do is utilize also the unlabeled data; these are the red dots, where we observe the data but we don't have the real
241
265
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=241s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
labels. From these red dots we can get the structure of the data, and that sometimes allows us to change the decision boundary itself using this knowledge, and we see that the boundary is now more reliable and probably more generalizable. So what actually is pseudo-labeling? Pseudo-labeling is kind of the simplest form of semi-supervised learning;
265
293
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=265s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
semi-supervised learning has lots of different approaches, but pseudo-labeling is the most common and the easiest to use. The idea is pretty straightforward: we have the labeled data, we train a model, some supervised model, on this labeled data, and afterwards we just make predictions on the unlabeled data, and these predictions are already
293
316
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=293s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
the pseudo-labels: we treat all the test observations that we have predicted with our model as pseudo-labels, and then we can concatenate these two datasets, the initial data and our predictions made by our model, treat all of this data as an extended version of our labeled data, and use the pseudo-labels in our subsequent training (a small sketch of this step follows this segment).
316
344
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=316s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
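A minimal sketch of the step just described, using scikit-learn and placeholder data (the arrays and the logistic-regression model are my assumptions, just so the sketch runs end to end): fit a model on the labeled set, predict the unlabeled set, and keep those predictions as pseudo-labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X_labeled, y_labeled: the small labeled set; X_unlabeled: the large unlabeled set.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(100, 10))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(1000, 10))

model = LogisticRegression().fit(X_labeled, y_labeled)

# The model's predictions on the unlabeled data become the pseudo-labels.
pseudo_labels = model.predict(X_unlabeled)
pseudo_probs = model.predict_proba(X_unlabeled)  # kept for confidence filtering later
```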
SsnWM1xWDu4
Before going into how we can utilize pseudo-labels further, I'll talk about a couple of ingredients. The first one is confidence: instead of taking all the predictions on the whole test set, we're interested only in the confident predictions. The reason for that is that if we add to the pool of pseudo-labels some observations that are hard to predict, some special cases, some
344
369
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=344s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
corner cases, then it would contaminate our subsequent training, because it introduces noise and bias into our model. We want to use only the confident predictions, to select only the observations that our model is confident in. There are different definitions of what a confident
369
395
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=369s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
prediction is for different types of problems. For a classification problem, the easiest way is to take the predicted probabilities for each class, and if, for example, at least one class has a probability over 0.9, then it is a reliable observation and we can add it to the confident pseudo-labels (a small sketch of this filter follows this segment).
395
418
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=395s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
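The classification confidence filter just described (keep an example only if some class probability clears a threshold; 0.9 is the value mentioned in the talk) might look like this; the variable names continue from the previous sketch.

```python
import numpy as np

def select_confident(pseudo_probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Return indices of unlabeled examples whose highest predicted class
    probability reaches the threshold; only these become pseudo-labels."""
    return np.where(pseudo_probs.max(axis=1) >= threshold)[0]

# Example usage, continuing from the previous sketch:
# confident_idx = select_confident(pseudo_probs, threshold=0.9)
# X_pseudo, y_pseudo = X_unlabeled[confident_idx], pseudo_labels[confident_idx]
```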
SsnWM1xWDu4
For image segmentation problems, we could use the percentage of confident pixels: we count the confident pixels on the image, and if that percentage is, for example, over eighty percent, then the observation is confident. For regression-type problems it's a little bit tricky, because it's hard to tell what a confident prediction is for a
418
442
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=418s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
regression problem. But one approach you can use across different problems is to look at your predictions from one epoch to another during the training of the neural network; if you see huge jumps from one epoch to another during training (a small sketch of this check follows this segment), it means that this observation is probably unreliable and it is not a good idea to include it in the pool of
442
468
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=442s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
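A possible sketch of the epoch-stability check described above for regression; tracking per-epoch predictions and the standard-deviation threshold are my assumptions, not details from the talk.

```python
import numpy as np

def stable_prediction_mask(preds_per_epoch: np.ndarray, max_std: float = 0.1) -> np.ndarray:
    """preds_per_epoch: shape (num_epochs, num_unlabeled) holding the model's
    predictions on the unlabeled data after each training epoch.
    An example is kept as a confident pseudo-label only if its prediction does
    not jump around between epochs (low standard deviation); max_std is an
    arbitrary placeholder threshold."""
    return preds_per_epoch.std(axis=0) <= max_std

# e.g. keep = stable_prediction_mask(np.stack(epoch_predictions), max_std=0.1)
```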
SsnWM1xWDu4
pseudo-labels. Overall, pseudo-labeling is a method that is widely used in the deep learning context, because neural networks let you continue training and add new data to the training at any point. It can for sure also be used for classic machine learning problems, but it is not
468
497
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=468s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
as popular there, so my talk will be more about neural networks and the deep learning context. The second ingredient is ensembles: instead of training one model and predicting one set of pseudo-labels, we can train multiple models, ensemble them in some way, and obtain a new set of pseudo-labels. There are two reasons for using ensembles:
497
523
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=497s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
the first reason is that it is simply better to use an ensemble of models instead of one model if we are talking about quality, so a single model will almost always lose to an ensemble; and the second reason is that an ensemble adds diversity to the pseudo-labels (a small sketch follows this segment).
523
548
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=523s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
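A small sketch of the ensembling ingredient, under the same sklearn-style assumptions as before: average the predicted probabilities of several (ideally diverse) models and derive the pseudo-labels from the averaged probabilities.

```python
import numpy as np

def ensemble_pseudo_probs(models, X_unlabeled) -> np.ndarray:
    """Average the predicted class probabilities of several trained models;
    the averaged probabilities are then used both for the pseudo-labels and
    for confidence filtering."""
    probs = [m.predict_proba(X_unlabeled) for m in models]
    return np.mean(probs, axis=0)

# pseudo_probs = ensemble_pseudo_probs([model_a, model_b, model_c], X_unlabeled)
# pseudo_labels = pseudo_probs.argmax(axis=1)
```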
SsnWM1xWDu4
If we use only a single model from one stage to another and keep training with it, we can propagate the errors of that model; if we use an ensemble of diverse models, that effect is reduced and we obtain more generalizable pseudo-labels. All right, so the first recipe for how we can utilize pseudo-labels is to train on the combined data; it consists of two steps:
548
572
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=548s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
the first step is that we just union the two datasets, the labeled data and the pseudo-labels we obtained, and treat them as new labeled data. Of course, from the pseudo-labels we can select only the confident predictions, or only some subset, and we treat the pseudo-labels as if they were real labels. Afterwards this new labeled dataset can be used to train a new model (a small sketch of this recipe follows this segment).
572
599
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=572s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
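A minimal sketch of recipe one under the assumptions used so far: keep only confident pseudo-labels, append them to the labeled data, and train a fresh model on the extended set. The model class and threshold are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_with_pseudo_labels(X_labeled, y_labeled, X_unlabeled, pseudo_probs, threshold=0.9):
    """Recipe 1: keep only confident pseudo-labels, append them to the labeled
    set, and train a new model on the extended dataset."""
    confident = pseudo_probs.max(axis=1) >= threshold
    X_extended = np.concatenate([X_labeled, X_unlabeled[confident]])
    y_extended = np.concatenate([y_labeled, pseudo_probs[confident].argmax(axis=1)])
    return LogisticRegression().fit(X_extended, y_extended)
```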
SsnWM1xWDu4
So now, instead of using only the original data, we use both the pseudo-labeled and the original data to train a model, and it turns out that such an approach gives better results than a single model trained only on the initial labeled data. So the first recipe is simply to concatenate the train data and the pseudo-labels. All right, another approach is called pretraining; it's a little
599
630
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=599s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
different. The previous approach worked on the data level, where we concatenated our data into a single labeled set; now we are working on the model level. We take the pseudo-labels, train a model only on the pseudo-labels, and obtain a kind of weight initialization: we save the weights that we
630
654
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=630s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
have obtained, and these weights can be used as a starting point for subsequent training on the labeled data, the initial labeled data. The reason why it works is that after you train your model on the pseudo-labels, the weights now carry information about your dataset, about the domain you're working with, and this initialization works better than, for
654
677
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=654s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
example, an ImageNet initialization. So the pipeline works in the following way: we have our pretrained model, i.e. the model trained on the pseudo-labels, we initialize the weights with it, which allows us to train faster and obtain better results, and then we fine-tune this model on the initial labeled data (a small sketch of this pipeline follows this segment).
677
698
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=677s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
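A rough PyTorch sketch of recipe two as described: train on the pseudo-labeled data first, keep those weights as the initialization, then fine-tune on the original labeled data. The loaders, loss function, epoch counts and checkpoint path are placeholders.

```python
import torch

def pretrain_then_finetune(model, pseudo_loader, labeled_loader, loss_fn,
                           epochs=(5, 10), lr=1e-3):
    """Recipe 2: first train on the pseudo-labeled data only, keep the resulting
    weights as the initialization, then fine-tune on the original labeled data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    def run(loader, num_epochs):
        model.train()
        for _ in range(num_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()

    run(pseudo_loader, epochs[0])                             # stage 1: pseudo-labels only
    torch.save(model.state_dict(), "pseudo_pretrained.pt")    # the domain-specific init
    run(labeled_loader, epochs[1])                            # stage 2: fine-tune on real labels
    return model
```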
SsnWM1xWDu4
After that, with this new model, we make predictions on the test data, and again this approach gives better results compared to starting from scratch and just initializing the weights with ImageNet. Okay, so each recipe has some herbs and spices, and here we'll talk about validation. When we are talking about pseudo-labels, pseudo-labeling is a great way to
698
731
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=698s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
overfit your model, and you can accidentally overfit to the leaderboard or to the validation data, so you need to establish a proper way to compare the models before and after the pseudo-labels are applied. For example, say we have four folds and we are using basic k-fold cross-validation. The first approach
731
754
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=731s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
could be: we just train four different models, one for each of the folds, ensemble them and obtain pseudo-labels, and then these pseudo-labels are used in the first or second recipe in order to continue the training. However, this approach is leaky, because now the pseudo-label dataset contains information about the target labels of the
754
781
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=754s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
train data, and if, for example, after we add the pseudo-labels we want to measure the quality on, say, the first fold, it may turn out that our quality is too optimistic, because the pseudo-labels correlate with the true labels of that first fold. So the better approach is to use out-of-fold pseudo-labels: for each
781
807
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=781s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
fold we train a separate model, predict, and create a separate pseudo-label set, trained independently. Afterwards this provides a reliable validation scheme (a small sketch of this scheme follows this segment), so we can compare the models before and after the pseudo-labels have been applied. The only drawback here is that we train a different model for each fold and obtain only one set of pseudo-labels per fold,
807
835
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=807s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
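A sketch of the out-of-fold scheme described above: the pseudo-labels paired with a given validation fold come from a model that never saw that fold's labels. The model class and fold count are placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

def out_of_fold_pseudo_labels(X, y, X_unlabeled, n_splits=4):
    """Leak-free scheme: for every fold k, the pseudo-labels used while
    validating on fold k come from a model trained without fold k."""
    pseudo_sets = []
    for train_idx, valid_idx in KFold(n_splits=n_splits, shuffle=True,
                                      random_state=0).split(X):
        model = LogisticRegression().fit(X[train_idx], y[train_idx])
        # This pseudo-label set is paired with `valid_idx` for validation:
        # it carries no information about the labels held out in that fold.
        pseudo_sets.append(model.predict_proba(X_unlabeled))
    return pseudo_sets
```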
SsnWM1xWDu4
and they're not as reliable, because we lose the ensembling ingredient where we build multiple diverse models. Actually, in practice everyone just uses the first scheme, even though training the models out of fold is, as I said, the cleaner way. The reason for using the first scheme is that we already obtain an ensemble of models, so we don't have
835
867
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=835s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
just four models: if, for example, for each fold we develop three different neural network architectures, we end up with eight to twelve models, and that is a really great way to obtain pseudo-labels with diversity. So yes, pay attention to this validation scheme. Now I will talk about a couple of examples where pseudo-labels showed pretty good results. One of the competitions is the Camera
867
898
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=867s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
Model Identification challenge; it was hosted on Kaggle last year, and the problem was to classify photos by the camera they were taken with. It was a multi-class classification problem with ten classes, and the classes are devices from Apple, Samsung and some other manufacturers; for example, this particular image was taken by an HTC One, and yeah, it's basically impossible to
898
926
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=898s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
tell just by looking at it with your eyes, so neural network approaches were applied in this competition. Actually, in this competition the train and test data were about the same size, and we remember that pseudo-labels show great value when you have a small labeled dataset and a large unlabeled one. The reason why
926
951
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=926s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
pseudo-labels still work here on this problem is that the train images were all taken by a single physical device per class, for example this green phone, while the test images may be taken by the same model, for example an HTC or an iPhone, but by different physical devices, this orange one. So the goal here was to use
951
976
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=951s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
the test-set predictions as pseudo-labels, which could allow the model to find particular features, particular artifacts, that are specific to the test devices and that it simply cannot learn from the green train device. Okay, here is the recipe that was applied and what
976
1,002
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=976s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
worked for this particular competition. Firstly, we just take the classic approach: we train multiple models and ensemble them, trying, for example, different architectures, different training procedures and so on. Without pseudo-labels this approach got us to 66th place on the private leaderboard.
1,002
1,025
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1002s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
The next step is to take these pseudo-labels, pretrain on them and then fine-tune on the train data, and that gives a huge boost: we reach a top-20 position. However, as we've discussed, there is a distribution shift between train and test data, so it is probably a better idea to train on the pure pseudo-labels. The third step
1,025
1,050
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1025s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
essentially eliminates the second one: instead of fine-tuning on the train data, we train our model on the pure pseudo-labels, so at this step we don't use the initial train data at all, and in this case we get even better results and an even higher place on the private leaderboard. All right, the next example is the TGS Salt Identification Challenge. The
1,050
1,078
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1050s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
problem was semantic image segmentation: we were given images of the subsurface of the earth, and each pixel of the image had to be classified into two classes, whether it is salt or not, and the goal was to build a model that predicts the salt deposits as masks. In this competition the
1,078
1,108
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1078s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
train data contains only four thousand images while the test data contains eighteen thousand images, so it's really a perfect candidate for pseudo-labeling, where we have this large difference between the labeled and the unlabeled data. Again we start with the simple approach: we train multiple models, ensemble them and obtain pseudo-labels, and in this case that gives us
1,108
1,129
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1108s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
forty-sixth position, which is pretty good, I guess there were around three thousand participants in this competition. In the second stage we apply our second recipe: we pretrain the model on the pseudo-labels, then fine-tune on the train data, and obtain a place in the top ten. In order to achieve better results, we just repeat these two steps multiple times. What
1,129
1,155
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1129s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
does that mean? After the second step, once we have obtained the new model, we again train multiple models, obtain a new set of pseudo-labels, and again pretrain and fine-tune on the train data. Repeating this loop multiple times gives better and better results and got us to the top-one position in this competition. Actually, the whole scheme
1,155
1,178
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1155s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
looks like this: initially we have only the labeled data, we train a model in the first round, then predict pseudo-labels on the unlabeled data, select the confident ones, and retrain the model; we repeat these steps, say, K times, and at each iteration we see an improvement on the leaderboard. Yes, the improvements degrade, they get smaller and smaller, but each
1,178
1,199
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1178s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
iteration gives more information, improves the quality of the pseudo-labels, and achieves better results on the leaderboard. Actually, one more ingredient that we use here is that we train the model from scratch at each stage. What does that mean? If we kept fine-tuning the model from the
1,199
1,219
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1199s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
first stage to the second and so on, we could end up in a situation where our errors propagate through each iteration: if the model has made an error in the first round and the pseudo-label is inaccurate, then this error will propagate through every round. So in each round we just start from ImageNet weights and train the model from scratch (a small sketch of this loop follows this segment). Okay, so yeah,
1,219
1,245
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1219s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
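A sketch of the iterative loop described above, with a fresh model every round so that pseudo-label errors from earlier rounds are not carried forward; `make_fresh_model` is a placeholder factory (e.g. a network re-initialized from ImageNet weights, wrapped in an sklearn-style classifier interface), and integer class labels are assumed.

```python
import numpy as np

def iterative_pseudo_labeling(make_fresh_model, X_labeled, y_labeled, X_unlabeled,
                              rounds=3, threshold=0.9):
    """Each round trains a FRESH model, predicts the unlabeled data, keeps only
    confident predictions as pseudo-labels, and feeds them into the next round
    together with the original labeled data."""
    X_extra = np.empty((0,) + X_labeled.shape[1:])
    y_extra = np.empty((0,), dtype=y_labeled.dtype)
    model = None
    for _ in range(rounds):
        model = make_fresh_model()  # fresh weights every round (e.g. ImageNet init for a CNN)
        model.fit(np.concatenate([X_labeled, X_extra]),
                  np.concatenate([y_labeled, y_extra]))
        probs = model.predict_proba(X_unlabeled)
        confident = probs.max(axis=1) >= threshold
        X_extra = X_unlabeled[confident]
        y_extra = probs[confident].argmax(axis=1)
    return model
```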
SsnWM1xWDu4
basically there is no universal recipe for how to cook pseudo-labels and how they should be applied, because it is really specific to the data you're using and the problem type you're addressing. But you have a set of building blocks that can be combined, together with your own spices, some ideas that can improve the
1,245
1,272
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1245s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg
SsnWM1xWDu4
performance of your models. This approach can be applied both in competitions and in real machine learning projects, and it really performs very well when you have a very small labeled dataset but a lot of unlabeled data is available, and when
1,272
1,300
https://www.youtube.com/watch?v=SsnWM1xWDu4&t=1272s
How to cook pseudo-labels | by Yauhen Babakhin | Kaggle Days Dubai | Kaggle
https://i.ytimg.com/vi/S…axresdefault.jpg