video_id (string) | text (string) | start_second (int64) | end_second (int64) | url (string) | title (string) | thumbnail (string) |
---|---|---|---|---|---|---|
1sJuWg5dULg | unicorns speaking their own language, and it also talks about a scientist who is able to observe all these phenomena, and this shows that language modeling at the level of a paragraph or even multiple paragraphs is possible just by training large models which use autoregressive structures. This slide shows the evolution of language models over time, where at first you see Shannon's | 339 | 372 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=339s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | trigram models, which are reasonably good but not super coherent across a full sentence, and then Ilya Sutskever's model using an RNN is able to produce a couple of sentences but not completely making sense, and then over time, using bigger LSTMs and bigger transformers, you end up with the quality that is on par with expert writing right now. So all these huge advances have been possible due to | 372 | 399 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=372s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | multiple reasons, and let's go through them quickly. The first thing is just being able to train with larger batch sizes because of more compute availability, and training with larger batch sizes stabilizes the training of these models and optimizes these losses much better; making the models wider, making the models deeper, figuring out clever ways to condition your | 399 | 422 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=399s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | network: if you're building a class-conditional or audio-conditioned or text-conditioned model, figuring out ways to feed in the conditioning information cleverly is very useful. Pre-processing also matters: in WaveNet we use mu-law pre-processing to quantize continuous audio into discrete entities, or for example for pixels you're actually using categorical distributions for modeling | 422 | 450 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=422s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | rather than using Gaussians. And in language you're using byte pair encoding, which is pre-trained on a huge corpus, and therefore you're modeling neither at the character level nor at the word level but at the sub-word level, and that's much more useful for generalization and also for building more efficient models. | 450 | 475 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=450s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | Compute power: as we progressed over the last two or three years we just have access to a lot more compute, like TPUs or big GPU rigs which have lots of GPUs connected with a really fast interconnect, and therefore being able to train data-parallel models is much better, and models trained for several weeks usually produce much | 475 | 504 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=475s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | better results. And also making fewer assumptions about the whole problem: before trying the idea of predicting categorical distributions for every pixel, why would you want to assume that pixels are definitely going to be modeled with Gaussians instead of categorical distributions? It really doesn't make much sense a priori, but practically it's better for a neural | 504 | 532 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=504s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | network to work with cross-entropy losses. There have also been architectural advances that made all of this much better: masked convolutions were applied in the original PixelCNN, but as transformers and dilated convolutions came into existence, the samples just got much better, with more coherent structure across long-range dependencies. And making the whole modeling problem look more like | 532 | 557 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=532s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | supervised learning helps a lot, and therefore relying heavily on the cross-entropy loss, and on optimizers that have been much better tuned for this loss, ensures that generative modeling can also benefit from all these engineering advancements. So now, what's the future for autoregressive models? We're only scratching the surface of what's | 557 | 583 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=557s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | possible, and once we have model-parallel training we'll be able to realize a lot more: for instance, being able to train trillion-parameter models on all of the internet's text, and that way we could compress all the internet's text into a giant neural network that can be like a know-it-all language model. And secondly, we can figure out ways to train one single model for multiple modalities, | 583 | 612 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=583s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | an even bigger generative model that could work at the video level on YouTube, the image level on Instagram, the text level on Wikipedia, so that it's able to correlate information across multiple modalities. All of this kind of modeling requires hardware and software advances for model-parallel training. It's also possible to make autoregressive | 612 | 640 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=612s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | models more useful by figuring out faster ways to sample, with better low-level primitives at the CUDA level, for instance fast kernels. For example, WaveRNN uses all these mechanisms for production deployment and doesn't need to be distilled into something like a Parallel WaveNet; it works as a standalone autoregressive model and | 640 | 666 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=640s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | can still be deployed on an Android phone. Hybrid models with much weaker autoregressive structure, but that can be trained at a larger scale, could be revisited, and of course all the architectural innovations that help with long-range dependencies will always help: as you keep moving to bigger images or video or something like that, these kinds of ideas | 666 | 691 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=666s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | should help a lot. So a summary of autoregressive models could be that this is an active topic with a lot of cutting-edge research, and there's a lot of scope for new engineering and creative architecture design; larger models and datasets are clearly needed to realize the full potential of this class of models. Standalone, they are very successful across all | 691 | 720 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=691s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | modalities without any conditioning information like class labels, so that's a very appealing property of these models; they're universal in that sense. And also they can work without much engineering at sampling time, so that makes them really attractive, but nevertheless, for production you should really cut down on the sampling time to be useful, and so | 720 | 744 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=720s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | innovating on the low-level primitives is very important. That said, there are a lot of negatives for autoregressive modeling: one is that you don't extract any representation, there is no bottleneck structure, and sampling time is not good for deployment, so it's not particularly usable for downstream tasks. For instance, with a language model you need to sample multiple times to see coherent samples, | 744 | 772 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=744s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | so you can't just roll out a language model as a software product, and there are no interpolations that you can use to visualize what the model is actually learning, and every time you sample it's going to take a long time to produce a diverse set of samples. So that's it about autoregressive models; now let's look at flow models. For flow models it all started with the NICE | 772 | 797 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=772s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | architecture by Laurent Dinh, where the model was already producing very good digits on the MNIST dataset, and on the Toronto Face Dataset it was producing reasonable faces, but it really was bad on CIFAR and SVHN; the samples were very blurry. But it all improved with the RealNVP architecture, which introduced other kinds of coupling flows and more flexibility to make | 797 | 818 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=797s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | the models better, and then the Glow model from Kingma was published, where the RealNVP model was taken to another level by making it produce much larger images, and work done in our lab called Flow++ advanced the likelihood scores for flow-based models to scores competitive with those of autoregressive models for the first time, and this was done through architecture | 818 | 850 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=818s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | engineering and scale. So this shows the power of flow models and the potential they have in terms of closing the gap in density estimation with autoregressive models without having the powerful autoregressive structure, while at the same time being really fast at sampling and also potentially useful for inference. So given all these properties, there's a lot of future work left in | 850 | 874 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=850s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | terms of how to learn the masks, how to actually completely close the gap with autoregressive models, whether you want to use a few very expressive flows or whether you want to use shallow flows which are not particularly expressive but keep stacking them so that you get a very expressive composed model, how to use multi-scale losses for training, and how | 874 | 901 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=874s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | to trade off between your density estimates and your sample quality, and how to use the representations you derive at various levels of the flow model for downstream tasks. All of these are fundamental advances to think about for flow models, and also how to carefully initialize so that flow models can train very fast. So in terms of core achievements that you can aim | 901 | 926 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=901s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | for, you can aim for producing Glow-level samples with models that have far fewer parameters; Glow uses half a billion parameters for the celebrity faces, and that's unlikely to scale, so how do you make it work for potentially even larger images, how do you do dimensionality reduction with flows, and you can think about other flow models like conditional flow models, and | 926 | 955 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=926s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | how do you actually close the gap in sample quality with GANs and also close the likelihood-score gap with autoregressive models; flow models could provide the pathway to do both, and it's interesting to think about how to do all these things together. So the negatives of flow models: you are expected to have the same dimension at every layer, every stack of | 955 | 979 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=955s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | the flow, and so it's unlikely to scale if your data is getting bigger and higher dimensional, and unless you innovate on how to do dimensionality reduction it's unlikely to be useful. And you really need to carefully initialize and use things like ActNorm to get good numbers, so that's another negative, because it may not be directly usable for another modality or | 979 | 1,001 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=979s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | another dataset or another kind of architecture. So let's look at latent variable models; we'll see their various strengths and weaknesses and what some visible successes of VAEs have been. It all started with the original MNIST modeling by Durk Kingma, where you could see various types of digits and strokes, and the slopes of the strokes and shades varying across multiple digits, and | 1,001 | 1,034 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1001s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | then it got extended to much more powerful datasets, like LSUN bedrooms by PixelVAE and also ImageNet 64 by 64, creating globally more coherent samples than PixelCNN because of modeling latent structure. And then there's the latent variable model innovation of using hierarchical models, multiple stochastic layers with hierarchical latent inference, and | 1,034 | 1,066 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1034s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | producing really high-quality faces on par with flow models. There are well-known applications of VAEs like SketchRNN and world models, and beta-VAE is used for modeling visual concepts, and there are applications like DeepMind's Generative Query Network, which does view synthesis of a separate view by taking in two provided views, embedding them into a latent variable, and interpolating the | 1,066 | 1,096 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1066s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | latent space for a query view across multiple possibilities, and therefore you can just collect data in a completely new environment from first-person vision; you can keep track of all the poses when you're recording things, and then in principle you could figure out how a particular scene looks from any other viewpoint and therefore reconstruct the | 1,096 | 1,120 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1096s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | entire room or entire environment completely through this kind of view-synthesis model that uses variational inference. So VAEs have been practically used in these kinds of architectures, and there are lots of advantages of VAEs: you get a compressed bottleneck representation, you can get approximate density estimates, you can interpolate and visualize what the model learns, you can potentially get | 1,120 | 1,145 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1120s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | disentangled representations where different latents correspond to different aspects of the data, and it is a model that allows you to do all these things together at once: you can sample, so you have a generative model; you have a density estimate, so you can use it for out-of-distribution detection as a density model; you have latent variables, so you can do representation learning; and you | 1,145 | 1,167 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1145s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | also have a bottleneck representation, so you are able to reduce the dimensionality of your original dataset. So a VAE is the only model that lets you do all these four things together, and that makes it very appealing. That said, there are disadvantages: you often end up with blurry samples, and the assumption of a factorized Gaussian for the posterior or for the decoder may be very | 1,167 | 1,192 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1167s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | limiting, so you need more powerful decoders or more powerful posteriors, and large-scale successes are still yet to be shown. And even though people have tried to get more interpretable, more disentangled latent variables by prioritizing the KL term over the reconstruction term in the loss, it still only works on toy problems, and there may actually be better ways to do | 1,192 | 1,215 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1192s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | representation learning or generation or interpolation or hierarchical latents individually, so expecting one model to do all of them well may be truly hard. And so a VAE may not be the state-of-the-art model on anything, but it may be a model that lets you do all of these things reasonably well within a single modeling framework. So that's | 1,215 | 1,248 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1215s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | what you lose when you want everything within one model, so those are the disadvantages. But there's obviously scope for future work: you can use bigger decoders and more powerful posteriors, you can think about how to do hierarchical latents to learn coarse and fine-grained features, and discrete latents like VQ-VAE, and | 1,248 | 1,273 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1248s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | also large-scale training like what flow models have done, like Glow or Flow++. So next let's cover implicit models, where we look at generative adversarial networks and basically what's happening with GANs, though we also covered moment matching and energy-based models in class. The quality of GAN samples has dramatically advanced from the primitive samples that you saw | 1,273 | 1,305 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1273s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | in the original GAN, where you saw reasonably good-looking faces, but for the CIFAR samples it's not particularly convincing in terms of what object or class of CIFAR has been captured, though it certainly looked different from blurry VAE samples at the time. Next you saw DCGAN, which clearly advanced the sample quality of GANs to a state where GANs started | 1,305 | 1,336 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1305s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | looking much more exciting than any other model, because the samples were much sharper and all these bedrooms were very high dimensional. And then recently GANs have been taken over by BigGAN- and StyleGAN-class models, where clearly careful attention to detail in terms of architecture design, and also really large-scale training, like large batch sizes and a lot of | 1,336 | 1,363 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1336s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | stabilization tricks, can produce these amazing photorealistic samples that you've already seen plenty of times in the class, so I'm not going to go over them. In terms of future work for GANs, I think it's really hard to bet against GANs, to say, hey, this is where GANs are weak; it's most likely that if you put sufficient effort into engineering you can get a GAN to function well on | 1,363 | 1,389 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1363s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | those things as well. But nevertheless there's still more progress to be made on unconditional GANs, mode collapse, and also more complex scenes, and video generation would be cool: for instance it would be nice to get a model that works on real driving data where a lot of pedestrians are walking and you want to be able to simulate the future; you have to keep track of multiple | 1,389 | 1,412 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1389s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | people, multiple objects, multiple cars, road signs, and so forth, so it's a very complicated generative modeling problem, and it'll be interesting to see whether GANs, which are known to latch onto only a few cues in your dataset, would still work in such complex settings where you need to keep track of multiple things at once. So for future work in terms of modeling, you can think of better | 1,412 | 1,441 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1412s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | losses, better conditioning tricks like how to feed in noise at various levels (for instance StyleGAN basically innovated on batch or instance normalization), how to design better architectures, which upsampling and downsampling ops to use, how to do upsampling and downsampling without introducing a lot of parameters, what the right objective | 1,441 | 1,464 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1441s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | function is for your discriminator, and how to scale and train GANs in a stable manner for larger problems, and how to add noise at various different levels, like instance noise or feature noise, so that it stabilizes the training of the discriminator much better. So all those things are very interesting to think about. In terms of negatives of GANs, one could say | 1,464 | 1,491 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1464s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | there are plenty of engineering details and it's hard to clearly identify which is the most important core component that helps you reproduce these high-quality images, and it's also very time-consuming to ablate all these details. And it's very clear we need to improve on sample diversity, but we also don't have very good metrics for evaluation, so we | 1,491 | 1,516 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1491s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | need to work with what we have, and even though it may seem like we're improving a lot on the current metrics we use for GAN evaluation, objectively the sample diversity is not as good as that of likelihood-based models, so how do we actually come up with better evaluation measures? Also, one thing to think about with all these aspects, like good evaluations and good metrics, is that | 1,516 | 1,540 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1516s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | these are not particularly specific to GANs; the same can be said for any kind of model. So if you were to make a choice between a GAN or a density model, one would imagine you need a lot of engineering details for GANs, but that's not particularly true: even for density models the architecture engineering has required a comparable level of detail and | 1,540 | 1,569 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1540s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | trickery comparable to what you need for GANs. And secondly, there have been a lot of attempts at theoretically understanding GANs: the trade-off between having blurry samples versus being okay with mode collapse is basically the same trade-off you make when you care more about compression at the cost of sample quality, versus wanting to have really good samples at the cost of | 1,569 | 1,595 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1569s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | missing some modes, so it's basically which direction of the KL divergence you care about: you care about the reverse direction more if you don't want any spurious samples, and you care about the forward direction more if you really want to make sure your modeling covers everything and you're not going to miss out on anything, even though | 1,595 | 1,619 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1595s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | you may make some mistakes at some of the points. So mostly, apart from the fact that they can produce amazing samples, GANs are popular because they can work with much less compute: for instance, in order to generate a 1-megapixel image with an autoregressive model, or even a latent-space autoregressive model, you need to use at least 512 TPU cores to do | 1,619 | 1,644 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1619s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | that, because you need such large batch sizes, whereas for GANs you can make it work with a single V100 GPU. So that's one reason why GANs are clearly preferred over density models, because of the amount of time it takes to train and sample, and you can also see better interpolations and better conditional generation with GANs. So this leads to adoption by people who | 1,644 | 1,670 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1644s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | are more interested in art and in fine-tuning on interesting artistic datasets that are not particularly machine-learning relevant, and that's one of the other reasons GANs are so popular. On the bright side, we can think about how many technological advances have been possible without the correct science, and GANs can be considered in that way as well, and this is a slide | 1,670 | 1,697 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1670s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | from Yann LeCun on the epistemology of deep learning, where he explains that several technologies in the past have preceded the science that explains them, for example the steam engine came before thermodynamics, so doing better theory for GANs is something that could still be innovated on in the future. So here is a taxonomy of generative models from Ian Goodfellow's NeurIPS | 1,697 | 1,724 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1697s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | tutorial. Apart from Markov chain Boltzmann machines and Markov chain generative stochastic networks, we have pretty much covered everything else: we've covered NADE, MADE, PixelRNN, and the change-of-variables models, that is, the flow models like RealNVP; all of these are explicit density models. And then we also covered approximate density models, variational autoencoders | 1,724 | 1,749 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1724s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | with the variational lower bound, and then we covered implicit density models like the GAN. The other models that are not being covered are not particularly popular or widely used, so that's the reason we focused on the more popular ones. And if you are training density models and you're figuring out which density model you should be using, here are some pointers: if you only care | 1,749 | 1,773 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1749s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | about the density estimates, just go for autoregressive models; you don't worry about sampling time here. If you care a lot about sampling time, an autoregressive model may still be fine if your sequences are not that long or if you use lightweight models, but if you really cannot afford to wait for the sampling time and you really want fast samples but you still want to do density | 1,773 | 1,793 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1773s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | modeling, you could think about using weaker autoregressive models like the parallel multiscale PixelCNN, and you could also think about doing latent space modeling, like a latent-space autoregressive model or a VQ-VAE; you may probably not even need the quantization bottleneck, it could still work with continuous values. And flow models are also pretty appealing for modeling continuous-valued data, giving density estimates for | 1,793 | 1,823 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1793s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | continuous-valued data, especially when the data is genuinely continuous and it's hard to figure out how to even quantize it; so that's another interesting aspect of flow models. And if you also want to have representations and sampling, but you want the simplest possible model, VAEs with factorized decoders may be the | 1,823 | 1,851 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1823s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | natural choice. So given these appealing properties of density models, when would you use GANs? You would use GANs when you really care about having good samples, you have really large, high-quality images, you want something photorealistic, you have a lot of conditioning information like pose or the class or edge maps and you just | 1,851 | 1,874 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1851s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | want to add texture to them; GANs are really good at these image-to-image translation problems, or video-to-video translation. And if all you care about is perceptual quality and controllable generation and you don't have a lot of compute, which is often the case for any kind of startup, a GAN is the best choice to go for. So that's it for generative models; next let's look at self-supervised | 1,874 | 1,899 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1874s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | representation learning, which is our final topic. Self-supervised image classification has seen rapid advances in the last one and a half years: at the end of 2018 the top-1 accuracy on the ImageNet linear classification benchmark was 48 percent and now it's 76.5 percent. This rapid advance has been made in multiple labs because of this mode of learning | 1,899 | 1,927 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1899s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | called contrastive learning, and the contrastive learning task can be simply summarized as a dictionary lookup task. There are two ways to set up this pretext contrastive learning: you either build it as a predictive coding task or you build it as an instance discrimination task. And in predictive coding you have multiple mechanisms to do that: you either use an end-to-end | 1,927 | 1,951 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1927s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | mechanism or you use a momentum encoder, using the momentum encoder for the keys. The predictive coding success story has been achieved in Contrastive Predictive Coding or CPC, particularly CPC version 2, and the instance discrimination success has been achieved in MoCo and SimCLR; MoCo means momentum contrast and SimCLR is end-to-end instance contrast, | 1,951 | 1,981 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1951s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | and they use the corresponding mechanisms of contrastive learning. So let's look at CPC version 2, MoCo, and SimCLR in terms of their positives and negatives. With CPC version 2 we're doing spatial contrastive prediction, so the principle is very generic and can apply to any modality or domain; you don't need to know the underlying data augmentation invariances in this approach, and it can be | 1,981 | 2,006 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=1981s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | considered as a latent-space generative model, and also it's much easier to adapt to audio, video, and text and to perform multimodal training. Disadvantages: it splits your input into a lot of patches or frames or even audio chunks, and therefore your inputs are now basically split into a lot of different parts that you have to carefully delineate, and you also need | 2,006 | 2,033 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2006s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | to carefully pick what part you are predicting from what, so that involves a lot of design choices and hyperparameters that you can only settle by trial and error, and that makes it really hard to use on a domain or task that you don't understand well. And then you require multiple forward passes for these smaller versions of the inputs, and so that | 2,033 | 2,055 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2033s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | means that you may be pre-training on something much smaller but potentially fine-tuning on much larger versions of the sequences or images, so this may not be an optimal thing to do. When you're doing local spatial predictions, batch norm is hard to use, so applying batch norm is hard, but you really want to use batch norm for a downstream task, and that makes CPC | 2,055 | 2,079 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2055s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | version 2 a little sore in the sense that it's not particularly suitable for downstream tasks if you really care about state-of-the-art performance. And finally, the patch-splitting mechanism is very slow on matrix-multiplication-specialized hardware like GPUs, because you do a lot of reshapes and transposes, so it's never an optimal thing to do. So here's the summary of | 2,079 | 2,107 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2079s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | MoCo: one of the main advantages of MoCo is that it is very minimal, so it's very easy to use and replicate, and there is no architectural change, so it can be easily applied to downstream tasks; there is no notion of a patch, and it's distilling invariances for images using data augmentations, so the pre-training procedure looks very much like supervised learning, and therefore it can | 2,107 | 2,131 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2107s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | get comparable or even better results. And the momentum encoder memory bank mechanism adds a lot of stability to the training and decouples batch size from the number of negatives, and therefore this lets you train with far fewer GPUs than what's needed for CPC-like methods. The disadvantage with MoCo is that because you introduce momentum updates you need to figure out the | 2,131 | 2,155 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2131s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | right decay rate for them, and that's an extra hyperparameter; and another disadvantage is that the image augmentation invariances may not be applicable to other modalities, so this may be a method that works only for visual image recognition. And finally let's look at SimCLR, which can be considered as an end-to-end version of MoCo where you're using all the | 2,155 | 2,182 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2155s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | negatives from your batch and there is no momentum encoder. The advantages of SimCLR are the same as those of MoCo, with the additional advantage that you don't have a momentum encoder, so it's about as minimal as supervised learning, but the disadvantage is that you now need really large batch sizes because you need a lot of negatives; because MoCo decouples the negatives | 2,182 | 2,203 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2182s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | from the batch size, it doesn't need as much compute as SimCLR does. And similar to MoCo, the data augmentation invariances may be very specific to image recognition. So in terms of future work left for self-supervision, the gap between self-supervised learning and supervised learning is still not closed if you consider the same amount of compute, training time, and the same kind of | 2,203 | 2,228 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2203s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | augmentations used. And also, for fine-tuning on downstream tasks the gains are not significantly high enough that a paradigm shift has been made in vision, so maybe new objectives are also needed. And finally, all these self-supervised successes have relied on using ImageNet, and it's not clear if self-supervised learning would just work from images in the wild or from the internet, | 2,228 | 2,256 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2228s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | which is really the dream and which is really why people want to do this. So that's it for self-supervised learning in terms of utility for downstream tasks; let's look at unsupervised learning in the context of intelligence, that is, being able to act in an environment. So here is a video of the Quake 3 game, where you can see some characters and then you | 2,256 | 2,284 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2256s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | can see some bullets that are going to be fired, and you see all these different walls and fires and other characters, and when you're looking at all this you're already able to accurately parse the scene, make sense of what's going on, and you're also able to clearly separate out the objects from what's not objects. Our models need to be able to do that as well; we shouldn't | 2,284 | 2,316 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2284s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | be working at the level of pixels; we should be able to predict the future in a much more semantic space. Modeling the pixel space for these high-dimensional videos is really hard, and in order to build really intelligent agents that can plan faster than real time, we should be able to do it in a latent space that's more abstract. So how do we do that, and what is the right kind of | 2,316 | 2,340 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2316s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | abstraction to build, and how do we learn world models in that latent space that can ignore noise and work in a much more semantic space? It's really the hardest question to think about, and this has also been summarized multiple times by Yann LeCun: if you have a very good internal world model, you'll be able to plan with it and avoid a lot of the mistakes that a reinforcement learning agent usually makes, and | 2,340 | 2,365 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2340s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | how to do that is one of the most important questions. So for the overall view of self-supervised learning across these different problems: for image recognition we saw successes like CPC v2, SimCLR, and MoCo v2; for transfer learning it works really well in language, though the exact details will be covered in a future lecture; and transfer learning in | 2,365 | 2,392 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2365s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | vision also works reasonably well now, as has been shown with CPC and MoCo, but there's close to nothing in terms of how to use self-supervised learning for RL, so that's a very ripe area for the future. And then, as far as using self-supervision in the context of general intelligence is concerned, it's potentially going to be extremely useful in the context of transfer | 2,392 | 2,419 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2392s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | learning and of learning useful abstractions for planning or imagination, so there's just a lot of work to be done there. So that's it for the summary of the class; it pretty much ends with our original motivation, which is how we build this intelligence cake, and a lot of it is going to be done through unsupervised learning. So in terms of future | 2,419 | 2,450 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2419s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | lectures, we're going to look at more applied topics which don't fall into the main lecture stream: we'll be looking at semi-supervised learning, we'll also be looking at the whole area of unsupervised learning for language, which is language models and BERT, and then finally we'll look at how representation learning or unsupervised learning has been applied in the context | 2,450 | 2,475 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2450s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
1sJuWg5dULg | of reinforcement learning. And we will also cover things like how to do unsupervised distribution alignment, that is, given two completely different datasets with a lot of common information, how do we align the two manifolds together without any paired data, and we'll see how generative models and unsupervised learning can be used in the context of building compression | 2,475 | 2,498 | https://www.youtube.com/watch?v=1sJuWg5dULg&t=2475s | L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20 | |
gSMI5wZHe9w | [Music] Thank you. All right, my paper was FixMatch, which is a cool recent method for doing semi-supervised learning. So, an overview of the paper: it came out just last month from Google Research, and the headline result here is that they were able to get 78 percent accuracy on CIFAR-10 using one labeled training example per class, which was | 0 | 49 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=0s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | not selected arbitrarily just to make them look good; there are a couple of caveats about that, but I assure you the results are extremely impressive. As I said, this is semi-supervised learning, and we'll talk a little bit about that, and the way to achieve this is a quite natural combination of two previously known methods, which we will describe. So what is | 49 | 76 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=49s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | semi-supervised learning? The motivation for semi-supervised learning is that there are situations where labeling is very expensive but raw data is very cheap, for example if you're driving around with a video camera. And the thing to understand about semi-supervised learning is that it is distinct from few-shot learning in the sense that you don't have few | 76 | 99 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=76s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | examples of the thing, you have few labeled examples; there still needs to be a whole bunch of unlabeled examples of the thing that you're looking for. So for example, in this diagram, if the only examples of class white and class black that I have to learn from are these two, then the classifier boundary that I learn is just this vertical line, which is as good as any classifier boundary that | 99 | 121 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=99s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | I might come up with. But if I have a whole bunch of unlabeled data available to me, I can use that to inform my learning of the classifier boundary, because the distribution of the unlabeled data suggests that there are actually some clusters inside this dataset, and I can use that to come up with a classifier boundary that's better than the one I would have come up with | 121 | 148 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=121s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | if I only had those labeled examples. So that's the distinction between few-shot and semi-supervised. All right, so the first method for semi-supervised learning that went into this paper is so-called pseudo-labeling. The point of pseudo-labeling is that we've got a few labeled examples, we train some sort of weak model using the few labeled examples that we've got, | 148 | 178 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=148s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | and then we start to use the weak model to make predictions on unlabeled data, which we then treat as if they were ground truth whenever the model is confident beyond a certain point. So as an illustration: I start off with my labeled examples, one black, one white, and I've got all this unlabeled data. Now the hope is that if the model is trained using just these two data points, it will be confident | 178 | 207 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=178s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | enough about the data points that are in the vicinity of the data points that I've got labeled to label those correctly, and then those form part of our training set. So in effect we can jump from here to here, and from here to here, and then we can keep going and expand that, and hopefully we end up jumping | 207 | 232 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=207s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | correctly throughout those unlabeled clusters and essentially labeling everything inside them. Now the risk here is confirmation bias, because in real-world problems the clusters that define the classes are not going to be this neatly separated, right? We're going to have extremely high-dimensional problems; things that are in different classes are | 232 | 255 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=232s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | sometimes going to be closer to the points that we have labeled than the things that are actually in the corresponding classes, and so on. So this method runs into the problem that it does actually label things incorrectly and then keeps on learning from the incorrect labels that it has generated. So what we're hoping to do is to add | 255 | 277 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=255s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | another element, another method of learning from the unlabeled data, so that we can improve our confidence without having to jump to data points that would not be correctly labeled by pseudo-labeling. So the second method is this idea of consistency regularization. This says that if you have an unlabeled data point and the model is | 277 | 307 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=277s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | confident about what that data point should be above a certain level, we can apply an augmentation to that data point, so we can twist it around a little bit, flip it, change the colours, invert it, just make it look different, and the prediction that the model gives for those two versions of the data point ought to be the same. So for example, if I have here a picture of a | 307 | 333 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=307s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | horse, I can apply one random data augmentation to the picture and I can apply a second random data augmentation, and then what I'm going to do is enforce that the model makes the same decision about this data point for both of those augmentations. The effect of this is that the model can start to pick up things about the image that it might not otherwise have paid attention to. So for | 333 | 364 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=333s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | instance, if my augmentation number two crops out just the lower right part of this horse, then the model might be forced to pay attention to other regions of the image, like the hind legs and the tail, in order to decide that this is in fact a horse, whereas if I had not done that other data augmentation it might have only focused on the head and always made | 364 | 388 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=364s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | the decision on that basis. So I'm applying two random augmentations in order to induce the model to look at different parts of the image, to learn what features are relevant for identifying horses. All right, so FixMatch is going to combine these two ideas of pseudo-labeling and consistency regularization. We start off with a small labeled | 388 | 414 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=388s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | dataset, only a few data points; we're going to do as much learning on that dataset as possible, and then we're going to go and pick some samples out of the unlabeled dataset, and then we're going to follow, first of all, the consistency regularization process. So we pick an image which is unlabeled and we apply two augmentations to | 414 | 441 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=414s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | it: one weak augmentation that preserves the general sense of what the image is, and then a strong augmentation. Then we ask the following question: on the weak augmentation, was the model very confident about what that image was, meaning it achieved, let's say, greater than eighty percent confidence that the image was a horse? If so, | 441 | 466 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=441s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
gSMI5wZHe9w | we're going to include that into our pseudo-labeled set, and then we're going to do the consistency regularization process whereby we enforce that the model's predictions on the strongly augmented version of the image become close to what we now treat as the true label for that image, which means that we're allowing | 466 | 497 | https://www.youtube.com/watch?v=gSMI5wZHe9w&t=466s | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence-Covered by Adel Foda | |
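The transcript segments above walk through several techniques without showing any code. The short sketches below illustrate a few of them; they are editorial additions, not material from the lectures, and every function, parameter value, and helper name in them is an assumption made for illustration.

The WaveNet pre-processing mentioned in the segment starting at t=422s uses mu-law companding to turn continuous audio into a small number of discrete classes. A minimal sketch in Python/NumPy, assuming the common 256-level setup (the function names and level count are not from the lecture):

```python
import numpy as np

def mu_law_encode(audio, quantization_channels=256):
    """Map audio in [-1, 1] to integer bins via mu-law companding, WaveNet-style."""
    mu = quantization_channels - 1
    # Compress the dynamic range, then quantize to discrete categories.
    compressed = np.sign(audio) * np.log1p(mu * np.abs(audio)) / np.log1p(mu)
    return ((compressed + 1) / 2 * mu + 0.5).astype(np.int64)

def mu_law_decode(bins, quantization_channels=256):
    """Invert the companding to recover an approximate continuous waveform."""
    mu = quantization_channels - 1
    compressed = 2 * (bins.astype(np.float64) / mu) - 1
    return np.sign(compressed) * np.expm1(np.abs(compressed) * np.log1p(mu)) / mu
```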
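The segments around t=504s–557s argue for modeling pixels with categorical distributions and a cross-entropy loss rather than Gaussians. A minimal sketch of that loss for 8-bit pixels, assuming a standard PixelCNN-style 256-way softmax per pixel (shapes are illustrative, not lecture code):

```python
import torch
import torch.nn.functional as F

def categorical_pixel_loss(logits, images):
    """Cross-entropy over 256 intensity classes per pixel, instead of a Gaussian likelihood.

    logits: (B, 256, C, H, W) per-pixel class scores from an autoregressive model.
    images: (B, C, H, W) uint8 images with values in [0, 255].
    """
    targets = images.long()
    return F.cross_entropy(logits, targets)  # averaged over all pixels and channels
```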
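The flow-model segments (t=797s–850s) mention NICE, RealNVP, and Flow++, which are built from invertible coupling layers. A minimal sketch of one affine coupling step, assuming a generic `scale_translate_net` whose output splits into log-scale and translation; this is illustrative, not the exact architecture from the lecture:

```python
import torch

def affine_coupling_forward(x, scale_translate_net):
    """One RealNVP-style coupling step: transform half of x conditioned on the other half."""
    x1, x2 = x.chunk(2, dim=1)                 # split channels into two halves
    log_s, t = scale_translate_net(x1).chunk(2, dim=1)
    log_s = torch.tanh(log_s)                  # keep scales bounded for stability (an assumption)
    y2 = x2 * torch.exp(log_s) + t             # affine transform of the second half
    y = torch.cat([x1, y2], dim=1)
    log_det = log_s.flatten(1).sum(dim=1)      # exact log-determinant of the Jacobian
    return y, log_det

def affine_coupling_inverse(y, scale_translate_net):
    """Invert the coupling step exactly, which is what makes sampling fast."""
    y1, y2 = y.chunk(2, dim=1)
    log_s, t = scale_translate_net(y1).chunk(2, dim=1)
    log_s = torch.tanh(log_s)
    x2 = (y2 - t) * torch.exp(-log_s)
    return torch.cat([y1, x2], dim=1)
```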
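The VAE discussion (t=1001s onward) lists sampling, density estimates, representation learning, and a bottleneck as the four things a VAE gives at once, all trained with the variational lower bound. A minimal negative-ELBO computation with a factorized Gaussian posterior and decoder, assuming `encoder` and `decoder` networks exist with the shapes implied below:

```python
import torch
import torch.nn.functional as F

def vae_loss(x, encoder, decoder):
    """Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I))."""
    mu, log_var = encoder(x)                     # factorized Gaussian posterior parameters
    std = torch.exp(0.5 * log_var)
    z = mu + std * torch.randn_like(std)         # reparameterization trick
    x_recon = decoder(z)
    # Gaussian decoder with fixed variance reduces to MSE up to a constant.
    recon = F.mse_loss(x_recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp()) / x.size(0)
    return recon + kl
```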
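The trade-off described around t=1569s–1644s (blurry samples versus mode collapse) is the choice between the two directions of the KL divergence; written out explicitly:

```latex
% Forward KL (mode-covering): heavily penalizes assigning low probability to real data,
% so no mode is dropped, at the cost of spreading mass and blurrier samples.
D_{\mathrm{KL}}\!\left(p_{\mathrm{data}} \,\|\, p_\theta\right)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log \frac{p_{\mathrm{data}}(x)}{p_\theta(x)}\right]

% Reverse KL (mode-seeking): heavily penalizes samples the data distribution finds implausible,
% so samples stay sharp, at the cost of possibly dropping whole modes (mode collapse).
D_{\mathrm{KL}}\!\left(p_\theta \,\|\, p_{\mathrm{data}}\right)
  = \mathbb{E}_{x \sim p_\theta}\!\left[\log \frac{p_\theta(x)}{p_{\mathrm{data}}(x)}\right]
```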
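The contrastive "dictionary lookup" framing used for CPC, MoCo, and SimCLR (t=1927s onward) boils down to an InfoNCE loss over a query, one positive key, and many negative keys. A minimal in-batch-negatives sketch in the SimCLR style (the temperature value is an assumption, and the full SimCLR objective additionally symmetrizes over both views):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(queries, keys, temperature=0.1):
    """Contrastive dictionary lookup: row i of `keys` is the positive for row i of `queries`.

    queries, keys: (N, D) embeddings of two augmented views of the same N examples.
    All other rows in the batch act as negatives.
    """
    q = F.normalize(queries, dim=1)
    k = F.normalize(keys, dim=1)
    logits = q @ k.t() / temperature                        # (N, N) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)       # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```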
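The FixMatch procedure described in the last rows (a weak augmentation produces a confident pseudo-label, and the strongly augmented view is pushed toward it) can be sketched as the following unlabeled-data loss term. The 0.8 threshold mirrors the talk's "greater than eighty percent" example rather than the paper's exact setting, and `weak_aug`/`strong_aug` are assumed callables:

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_unlabeled, weak_aug, strong_aug, threshold=0.8):
    """Pseudo-labeling + consistency: confident predictions on the weak view
    supervise the model's predictions on the strong view."""
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x_unlabeled)), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        mask = (confidence >= threshold).float()   # keep only confident pseudo-labels
    logits_strong = model(strong_aug(x_unlabeled))
    per_example = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (per_example * mask).mean()
```

In training this term is added to the ordinary supervised cross-entropy on the small labeled set, which is the combination of the two previously known methods that the talk describes.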