Each row has the following fields:
- source: sequence of strings (sentences drawn from the paper)
- source_labels: sequence of 0/1 flags, one per source sentence
- rouge_scores: sequence of floats, one per source sentence
- paper_id: string (9-11 characters)
- ic: type listed as unknown (boolean in the rows shown)
- target: sequence of strings (the reference summary sentence)
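The listing below is a minimal sketch of how rows with this schema could be consumed. It assumes the rows are stored one JSON object per line with exactly the field names above; the file name rows.jsonl and the helper names are hypothetical, and the reading of the 0/1 label as marking the source sentence most aligned with the target is an assumption based on the example rows shown below.

```python
import json

# Minimal sketch (not part of the dump): iterate over rows shaped like the
# ones below. The JSON-Lines file "rows.jsonl" is a hypothetical stand-in for
# however the rows are actually stored.
def load_rows(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def oracle_sentence(row):
    # source_labels appears to flag the source sentence most aligned with the
    # target summary (an assumption based on the example rows); fall back to
    # the highest ROUGE score when no sentence is flagged.
    labels = row["source_labels"]
    if 1 in labels:
        return row["source"][labels.index(1)]
    scores = row["rouge_scores"]
    return row["source"][scores.index(max(scores))]

if __name__ == "__main__":
    for row in load_rows("rows.jsonl"):
        # The three per-sentence fields are aligned, one entry per source sentence.
        assert len(row["source"]) == len(row["source_labels"]) == len(row["rouge_scores"])
        print(row["paper_id"], "->", oracle_sentence(row))
```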
[ "Emoji suggestion systems based on typed text have been proposed to encourage emoji usage and enrich text messaging; however, such systems’ actual effects on the chat experience remain unknown.", "We built an Android keyboard with both lexical (word-based) and semantic (meaning-based) emoji suggestion capabilities and compared these in two different studies.", "To investigate the effect of emoji suggestion in online conversations, we conducted a laboratory text-messaging study with 24 participants, and also a 15-day longitudinal field deployment with 18 participants.", "We found that lexical emoji suggestions increased emoji usage by 31.5% over a keyboard without suggestions, while semantic suggestions increased emoji usage by 125.1%.", "However, suggestion mechanisms did not affect the chatting experience significantly.", "From these studies, we formulate a set of design guidelines for future emoji suggestion systems that better support users’ needs.", "Based on the analysis of emoji counts in the study, we found that although different suggestion levels resulted in similar amounts of inputted emoji, participants tended to pick more from semantic suggestions than from lexical suggestions.", "One surprising finding was that although the usage of emojis indeed affected the senders' chat experience, the suggestion type did not affect the chat experience significantly.", "One explanation is that different suggestion mechanisms only affect how the user inputs emojis, rather than what they input.", "As long as they can input the expected emojis, the chat experience is not affected.", "Looking at participants' interview answers, we found that participants did notice the difference between the suggestion mechanisms, and provided more positive feedback on semantic suggestions than the other conditions.", "Five participants mentioned that semantic suggestions were convenient and timesaving.", "The convenience might come from the relevance of the semantic suggestions.", "P13 pointed out, \"The first one [semantic] is better than the second one [lexical] , showing more emotion-related emojis. 
The second one is related to the word itself and it makes no sense to use the emoji in the conversation.\"", "Although P19 did not use many emojis during the study, she stated that \"their [emojis'] appearance in suggestion bars makes me feel good.\"", "This feedback supports our finding that people chose more emojis from semantic suggestion than lexical suggestion.", "The quantitative analysis results are similar to the in-lab study: the total emoji inputs were similar between different suggestion levels in period 2, and users chose more semantic suggestions than lexical suggestions.", "Again, based on the survey results, suggestion mechanisms did not influence the online conversation experience significantly.", "[2, 9] , and also provides supporting evidence of why people picked more semantic emojis in our online study.", "Our goal was to examine the impact of emoji suggestion on online conversations.", "In particular, we sought to answer two primary questions: (1) How do emoji suggestion systems affect the chat experience?", "(2) Do lexical and semantic suggestion systems affect daily emoji usage differently?", "We first conducted an online study to evaluate the performance of the two systems, finding that semantic emoji suggestions were perceived as more relevant than lexical emoji suggestions.", "We then conducted two experiments, finding that emoji usage had a stronger effect on senders than on receivers, but the suggestion system in use did not affect the overall chat experience.", "A possible explanation is that the suggestion levels only affect the ease of inputting an emoji.", "Although participants picked more from the semantic suggestions, they could still manually pick their desired emojis if those emojis were not suggested, leading to similar numbers of total emojis inputted with the different suggestion systems.", "However, both our in-lab study and our field deployment revealed that the suggestion systems influenced how users reflected on their own experiences.", "Participants were clearly most excited about semantic suggestions.", "Even without knowing the details of the different suggestion systems, the participants were pleasantly surprised that the predicted emojis were related to the sentiment of their messages.", "During the field deployment, participants used more emojis in their daily conversations from semantic suggestions than from lexical suggestions.", "This finding shows that the semantic suggestions provided more relevant emojis than did the lexical suggestions.", "In this work, we compared two emoji suggestion systems: lexical and semantic.", "Specifically, we explored whether the suggestion type affected the online chat experience and how people perceive the two suggestion types.", "Our online crowdsourced study revealed that people perceived semantic suggestions as most relevant.", "Our laboratory study showed that semantic emoji suggestions were used about 1.5 times more than lexical emoji suggestions.", "Our longitudinal field deployment showed that semantic suggestions led to an increase in emoji usage and were preferred because of their relevance to emotions.", "As other research in this area has found [6, 11, 13] , we can conclude that emojis themselves, rather than the type of suggestion system, affects the chat experience most profoundly.", "Based on our study results, we offered design guidelines for emoji suggestion systems.", "We believe that by incorporating semantic information in emoji suggestion, researchers can provide better experiences in 
text-based computer-mediated communications." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19607841968536377, 0.8888888955116272, 0.19607841968536377, 0.22727271914482117, 0.05882352590560913, 0.09090908616781235, 0.2181818187236786, 0.08695651590824127, 0.09302324801683426, 0.052631575614213943, 0.11764705181121826, 0.11764705181121826, 0.05882352590560913, 0.10526315122842789, 0.0833333283662796, 0.1538461446762085, 0.2641509473323822, 0.05128204822540283, 0.1395348757505417, 0.10810810327529907, 0.1860465109348297, 0.277777761220932, 0.2448979616165161, 0.22641508281230927, 0.1538461446762085, 0.178571417927742, 0.17777776718139648, 0.0624999962747097, 0.13333332538604736, 0.19512194395065308, 0.10526315122842789, 0.3888888955116272, 0.19512194395065308, 0.05405404791235924, 0.1463414579629898, 0.25531914830207825, 0.1111111044883728, 0.10810810327529907, 0.1904761791229248 ]
b0CbCf171
true
[ "We built an Android keyboard with both lexical (word-based) and semantic (meaning-based) emoji suggestion capabilities and compared their effects in two different chat studies. " ]
[ "Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples.", "To address the issues, we propose a novel self-adversarial learning (SAL) paradigm for improving GANs' performance in text generation.", "In contrast to standard GANs that use a binary classifier as its discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator which is a pairwise classifier for comparing the text quality between a pair of samples.", "During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples.", "This self-improvement reward mechanism allows the model to receive credits more easily and avoid collapsing towards the limited number of real samples, which not only helps alleviate the reward sparsity issue but also reduces the risk of mode collapse.", "Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity, and yields more stable performance compared to the previous GANs for text generation.", "Generative Adversarial Networks ) (GANs) have achieved tremendous success for image generation and received much attention in computer vision.", "For text generation, however, the performance of GANs is severely limited due to reward sparsity and mode collapse: reward sparsity refers to the difficulty for the generator to receive reward signals when its generated samples can hardly fool the discriminator that is much easier to train; while mode collapse refers to the phenomenon that the generator only learns limited patterns from the real data.", "As a result, both the quality and diversity of generated text samples are limited.", "To address the above issues, we propose a novel self-adversarial learning (SAL) paradigm for improving adversarial text generation.", "In contrast to standard GANs (Figure 1(a", ") ) that use a binary classifier as its discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator which is a pairwise classifier assessing whether the currently generated sample is better than its previously generated one, as shown in Figure 1 (", "b).", "During training, SAL rewards the generator when its currently generated samples are found to be better than its previously generated samples.", "In the earlier training stage when the quality of generated samples is far below the real data, this self-improvement reward mechanism makes it easier for the generator to receive non-sparse rewards with informative learning signals, effectively alleviating the reward sparsity issue; while in the later training stage, SAL can prevent a sample from keeping receiving high reward as the self-improvement for a popular mode will become more and more difficult, and therefore help the generator avoid collapsing toward the limited patterns of real data.", "We comprehensively evaluate the proposed self-adversarial learning paradigm in both synthetic data and real data on the text generation benchmark platform (Zhu et al., 2018) .", "Compared to the previous approaches for adversarial text generation (Yu et al., 2017; Che et al., 2017; Lin et al., 2017) , our approach shows a substantial improvement in terms of both the quality and the diversity of generated samples as well as better performance stability in adversarial learning.", "Figure 1:", "(a) 
Conventional adversarial learning that uses a binary real/fake classifier as its discriminator;", "(b): Self-adversarial learning that employs a comparative discriminator to compare the currently generated sample to its previously generated samples for obtaining rewards through self-improvement.", "To better understand SAL, we perform multiple ablation tests in both the synthetic and the real data.", "We employ NLL oracle + NLL gen score with sequence length 20 as the evaluation metric for the synthetic data, denoted as NLL.", "For the real data, we use the perplexity of generated samples trained with COCO dataset as the evaluation metric.", "We compare SAL with the following reduced models:", "• CAL: Replacing the comparison between the generated samples (i.e., self-play) to the comparison between the real and generated samples.", "• w/o comparative: Using the binary discrimination scores of other generated samples as baseline for the policy gradient algorithm, which can be considered as a combination of the self-critic training (Rennie et al., 2017) with RL-based text GANs.", "• w/o \"≈\": Replace the three-class comparative discriminator with a binary comparative discriminator by removing the \"≈\" class.", "• w/o scheduled rewarding and w/o memory replay", "The results of the ablation tests are shown in Table 6 .", "By observing the improvement by SAL over CAL, we confirm the importance of the self-play paradigm in SAL.", "It is notable that the proposed comparative discriminator alone (i.e., CAL) can yield good performance, demonstrating the effectiveness of learning by comparison.", "When replacing the comparative discriminator with the naive combination of self-critic baseline with text GANs, the performance largely decreases because the reward sparsity issue will be intensified when subtracting two already sparse rewards, this motivates the proposed pairwise comparative discriminator which makes self-comparison possible.", "In addition, we find that the \"≈\" option plays a critical role in improving the result, without which the performance degrades significantly because it makes the task less trivial and provides a baseline for the policy gradient algorithm.", "Moreover, the training techniques (i.e., scheduled rewarding and memory replay) borrowed from deep reinforcement learning are also shown useful in improving the results but not so important as the core components (e.g., self-play and the comparative discriminator).", "(Chen et al., 2018) , LeakGAN (Guo et al., 2018) , and RelGAN (Nie et al., 2018)) have been proposed for text generation as adversarial training has received increasing attention in recent years.", "Typically, they address the non-differentiable issue by making continuous approximation or reinforcement learning.", "These approaches introduce several different architectures and optimization objectives of both the generator and the discriminator for adversarial text generation.", "Among the previous studies for adversarial text generation, the most related work to ours is RankGAN (Lin et al., 2017) which proposes a ranker to replace the conventional binary classifier as its discriminator for allowing the discrimination process to involve richer information.", "Another work whose idea is similar to ours is the relativistic discriminator (Jolicoeur-Martineau, 2018) (RGAN).", "It compares binary scores assigned to generated samples and real samples by subtraction as the learning signal to implicitly represent the inductive bias that half of the samples received by 
the discriminator is fake.", "In contrast, our comparative discriminator directly encodes this inductive bias and assesses generated sentences by comparison with a pairwise classifier, which provides more informative learning signals than subtraction in RGAN (Jolicoeur-Martineau, 2018) and normalized feature similarity in RankGAN (Lin et al., 2017) .", "We present a self-adversarial learning (SAL) paradigm for adversarial text generation.", "SAL rewards the generator when its comparative discriminator finds the generator becomes better than before.", "Through the self-improvement reward mechanism, the problem of reward sparsity and mode collapse can be alleviated and training of text GANs are more stable, which results in a better performance in the text generation benchmarks in terms of both quality, diversity, and lower variance.", "In the future, we plan to generalize our approach to other domains and modals to explore the potential of SAL for adversarial learning.", "Generated samples are presented in the Appendix together with other details, including human evaluation details and qualitative analysis of the proposed SAL.", "A GENERATED SAMPLES We present sentences generated by our proposed model and compared models to provide qualitative evaluation of different adversarial text generation models.", "From the presented generated samples, we can observe that samples generated by MLE training are less realistic compared with other samples.", "SeqGAN yield slightly better sample quality but the loss of diversity is observable even within randomly sampled 15 sentences.", "Adversarial training with proposed comparator, when trained by comparing with real samples, yield better quality but still lack of diversity.", "Finally, with the proposed self-adversarial learning paradigm, both quality and diversity of generated samples are improved.", "A.1", "GENERATED SAMPLES IN IMAGE COCO DATASET Table 7 : Samples generated by SAL in Image COCO dataset a picture of a person 's umbrella in a cell phone .", "a man stands in a green field .", "a young boy riding a truck .", "a man on a motorcycle is flying on a grassy field .", "a girl on a motorcycle parked on a city street .", "a motorcycle parked in a city street .", "a group of bikers riding bikes on a city street .", "a kitchen with a cat on the hood and a street .", "a bathroom containing a toilet and a sink .", "a young woman in a kitchen with a smiley face .", "a jet plane on the side of a street .", "a dish is sitting on a sidewalk next to a baby giraffe .", "a dog on a large green bike parked outside of the motor bike .", "a person on a kawasaki bike on a race track .", "a commercial aircraft is parked in front of a kitchen .", "Table 8 : Samples generated by CAL in Image COCO dataset a man is on a towel on a table outside of a real kitchen .", "a group of lambs at a tall building .", "a young boy riding a truck .", "a man on a motorcycle is flying on a grassy field .", "a man with a computer desk next to a white car .", "a cat is on the walls of a cat .", "a plane on a runway with a plane .", "an elegant , dilapidated plane are standing in front of a parking bag .", "the woman is riding a bike on their way .", "a man wearing an old bathroom with a banana .", "a plane is taking off from the ground .", "a man holding a man in front of herself .", "a woman is walking across the road .", "a kitchen with an island in green tiles .", "a clean kitchen with two small appliances .", "Table 9 : Samples generated by SeqGAN in Image COCO dataset a large 
image of a herd of racing train .", "man and woman on horse .", "a plane on a runway with a plane .", "a man preparing a table with wood lid .", "a view , tiled floors and a man prepares food .", "a man wearing an old bathroom with a banana .", "a man is is with a camera .", "two people are parked on a street .", "a white and white black kitten eating on a table .", "a toilet is lit on the walls .", "a kitchen is taking off from a window .", "a man is wearing glasses wearing scarf .", "a kitchen with graffiti hanging off from an open plain .", "two women playing with the orange .", "a kitchen with an island in a clear glass .", "Table 10 : Samples generated by MLE in Image COCO dataset a jet airplane flies flying through front from an airplane .", "a furry tub and overhead pot .", "a man in a kitchen filled with dark lights green side , ..", "a cross baby field dressed making cardboard a bathroom with a small tub and oven . a man above a bathroom with an oven room . a jet airliner flying through the sky . a kitchen with a dishwasher , and plenty of pots , pans . a person holding onto two red era arena sits on the street . a bathroom with a toilet and a bath tub . a cat perched on the phone and a baseball cap . the view of the street filled with really parked at the gates on the road . a large hairy dog on a high bike with a cake . a man is riding a white back bench . a narrow bed and white spotted dark tiled walls .", "A.2", "GENERATED SAMPLES IN EMNLP2017 WMT DATASET Table 11 : Samples generated by SAL in EMNLP2017 WMT dataset (1) it ' s likely to be egyptian and many of the canadian refugees , but for a decade .", "(2) the ministry spokesperson also said it now significant connected to the mountain.", "(3) it is the time they can more competitive , where we have another $ 99 .", "100 per cent , and completely on the alternative , and that ' s being affected . (4) we expect $ 200 and 0 . 3 percent for all you form other , and , which then well , it ' s done .", "(5) so we wouldn ' t feel very large in the game , but you fail to fund , and and the paper that ' s like its start .", "(6) other countries made a playoff cut with pages by mrs .", "trump ' s eighth consecutive season as a president . Table 12 : Samples generated by CAL in EMNLP2017 WMT dataset (1) i didn ' t put relatively quiet , we have , ' his work right in the particular heat rate , take steps traditionally clean .", "(2) why the u .", "s .", "then the table is our cabinet to do getting an vital company for the correct review .", "(3) those had trained for that , but no thin percentage of the nhs about being warned about the palestinian election before obama is not connected in israel .", "(4) in course , voters -obama said : \" torture is the outcome , the most powerful tradepopularity is happening in it as a success . (5) \" in 2012 , it is nice to remain -no trump actor established this night -scoring three films .", "(6) we kind of not listen to knowing my most one , only , for a really good vote , and where things fun , you know .", "Table 13 : Samples generated by SeqGAN in EMNLP2017 WMT dataset (1) his missed 4 , 000 the first 95 really 69 -year -olds .", "(2) but just things , you want to thank it as my playing side has begun meeting with \" and \" the score had to train up , so he was tied for 11 years .", "(3) and when he got back doing fresh ties with his election , he will now step in january , back.", "(4) when you ' t know if i saw her task to find himself more responsibility ago . 
(5) his hold over -up to a nine hike in 2015 , 13 percent of recently under suspects dead day , 24 , and to the city . (6) \" i look up on by the city ' s vehicle on the day in a meeting in november . Table 14 : Samples generated by MLE in EMNLP2017 WMT dataset (1) you know that that is great for our ability to make thinking about how you know and you ? (2) when it ' s a real thing possible , is if you the first time in a time here and get . (3) u . s , now government spending at the second half of four years , a country where the law will join the region to leave japan in germany . (4) deputy president , the issue of government and geneva probe threats and not -backed trump , but well -changing violence for their islamic state militants were innocent people . (5) he suggested in a presidential primary source and comment on its size following protests conducted by 18 , some in 2012 will be looked at tech energy hub . (6) \" it ' s growing heavy hard , \" mr .", "romney said , he says matters that can ' t again become the asian player ." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17777776718139648, 0.7179487347602844, 0.18518517911434174, 0.10256409645080566, 0.07407406717538834, 0.25531914830207825, 0.1538461446762085, 0.1538461446762085, 0.1764705777168274, 0.5789473652839661, 0, 0.145454540848732, 0.10526315122842789, 0.14117646217346191, 0.3636363446712494, 0.27586206793785095, 0.12121211737394333, 0.1904761791229248, 0.1111111044883728, 0.1538461446762085, 0.05405404791235924, 0.1428571343421936, 0.05714285373687744, 0.1818181723356247, 0.11428570747375488, 0, 0.12903225421905518, 0.22857142984867096, 0.09302324801683426, 0.14035087823867798, 0.2641509473323822, 0.178571417927742, 0.1702127605676651, 0.12121211737394333, 0.2631579041481018, 0.17543859779834747, 0.05882352590560913, 0.08510638028383255, 0.13114753365516663, 0.5806451439857483, 0.12121211737394333, 0.25925925374031067, 0.14999999105930328, 0.09756097197532654, 0.1395348757505417, 0.051282044500112534, 0.051282044500112534, 0, 0.1666666567325592, 0.09090908616781235, 0.1538461446762085, 0.07999999821186066, 0.0714285671710968, 0.07407406717538834, 0.1538461446762085, 0.06896550953388214, 0.13793103396892548, 0.07692307233810425, 0.1428571343421936, 0.1428571343421936, 0.06666666269302368, 0.12903225421905518, 0.07407406717538834, 0.13793103396892548, 0.09756097197532654, 0.07407406717538834, 0.07999999821186066, 0.0714285671710968, 0.06896550953388214, 0.14814814925193787, 0.07999999821186066, 0.12121211737394333, 0.13793103396892548, 0.0714285671710968, 0.1428571343421936, 0.14814814925193787, 0.14814814925193787, 0.1428571343421936, 0.07407406717538834, 0.10526315122842789, 0, 0.07999999821186066, 0.07407406717538834, 0.06896550953388214, 0.0714285671710968, 0.07999999821186066, 0.07407406717538834, 0.0714285671710968, 0.14814814925193787, 0.07407406717538834, 0.07692307233810425, 0.06666666269302368, 0.07692307233810425, 0.1428571343421936, 0.09999999403953552, 0.07692307233810425, 0.12903225421905518, 0.04395604133605957, 0.14814814925193787, 0.0624999962747097, 0.0555555522441864, 0.11538460850715637, 0.08888888359069824, 0.06451612710952759, 0.09836065024137497, 0.0833333283662796, 0.11428570747375488, 0.1304347813129425, 0.1090909019112587, 0.09090908616781235, 0.09090908616781235, 0.07692307233810425, 0.052631575614213943, 0.049382712692022324, 0.05714285373687744 ]
B1l8L6EtDS
true
[ "We propose a self-adversarial learning (SAL) paradigm which improves the generator in a self-play fashion for improving GANs' performance in text generation." ]
[ "Determining the number of latent dimensions is a ubiquitous problem in machine\n", "learning.", "In this study, we introduce a novel method that relies on SVD to discover\n", "the number of latent dimensions.", "The general principle behind the method is to\n", "compare the curve of singular values of the SVD decomposition of a data set with\n", "the randomized data set curve.", "The inferred number of latent dimensions corresponds\n", "to the crossing point of the two curves.", "To evaluate our methodology, we\n", "compare it with competing methods such as Kaisers eigenvalue-greater-than-one\n", "rule (K1), Parallel Analysis (PA), Velicers MAP test (Minimum Average Partial).\n", "We also compare our method with the Silhouette Width (SW) technique which is\n", "used in different clustering methods to determine the optimal number of clusters.\n", "The result on synthetic data shows that the Parallel Analysis and our method have\n", "similar results and more accurate than the other methods, and that our methods is\n", "slightly better result than the Parallel Analysis method for the sparse data sets.", "The problem of determining the number of latent dimensions, or latent factors, is ubiquitous in a number of non supervised learning approaches.", "Matrix factorization techniques are good examples where we need to determine the number of latent dimensions prior to the learning phase.", "Non linear models such as LDA BID1 and neural networks also face the issue of stating the number of topics and nodes to include in the model before running an analysis over a data set, a problem that is akin to finding the number of latent factors.We propose a new method to estimate the number of latent dimensions that relies on the Singular Value Decomposition (SVD) and on a process of comparison of the singular values from the original matrix data with those from from bootstraped samples of the this matrix, whilst the name given to this method, Bootstrap SVD (BSVD).", "We compare the method to mainstream latent dimensions estimate techniques and over a space of dense vs. 
sparse matrices for Normal and non-Normal distributions.This paper is organized as follow.", "First, we outline some of best known methods and the related works in the next section.", "Then we explain our algorithm BSVD in section 3.", "The experiments are presented in section 4 and the results and discussion are reported in section 5.", "And finally conclusion of the study is given in section 6.", "According to the results of provided experiments in the tables 1 and 2, we could show that our method has a better performance than those mentioned especially in the sparse data sets.", "Our empirical experiments demonstrate that on the dense data sets; the accuracy of BSVD and PA is equal and better than the other approaches.", "But when we apply a different percentage of sparseness to our data sets, our method is more precise.In the figures 3 and 4, we display the behavior of each method in the dense and sparse data sets.", "Figure 3 depicts the average accuracy of all methods in the dense data sets with normal and nonnormal distribution.", "It shows that MAP method in the dense data set with normal or non-normal distribution has the same accuracy.", "Additionally, SW technique performs better result with the face of the dense data set with non-normal distribution, while K1 has an extreme behavior in the nonnormal data set.", "Moreover, BSVD, PA and K1 are more precise in the dense data set with normal distribution.", "Figure 4 shows the sparse data sets with normal and non-normal distribution.", "It demonstrates that BSVD, PA, and K1 have better accuracy in the sparse data set with normal distribution but MAP and SW are on the contrary.", "Figure 5 shows the average accuracy of all the methods in in different level of sparsity over the non normal sparse data set with latent dimensions (j) equal to 2.", "The error bars shows the variance of the observations after repeating the algorithm 25 times.", "Based on the results of these experiments we can conclude that our approach (BSVD) is better than the presented methods especially in the sparse data sets.", "To show if the outcome is statistically significant and is not by chance, we apply t-test between our method and PA.", "We considered the p values less than or equal to 0.05 as a significant result.", "To do so, we consider a sample of latent dimensions (j = {2, 3, 5, 8, 15}) and we repeat twenty-five times the mentioned experiments on the sparse data sets with normal and non-normal distribution, and record the result.", "Then we apply t-test between BSVD and PA.", "In this evaluation the null hypothesis (H0) state that µ SV D = µ P A and if the H0 is rejected, we could conclude that the obtained results are not by chance and our method is better than PA.", "TAB1 contain p values of the sparse and dense data sets with normal and non-normal distribution respectively.", "The first row of each table with 0% of sparsity indicate to the dense data sets.", "TAB1 shows more constant behavior, and implies that by increasing sparsity in the sparse data set with normal distribution, BSVD yeilds a significantly better result.", "But table 4 that shows the result of non-normal sparse data set is hard to interpret.", "Because the green cells are not affected by increasing sparsity.", "We can sum up with that the result seems to be significant with increasing the sparsity.", "In general, according to the tables 3 and 4, the difference between our method and PA seems to be statistically significant by increasing the percentage of sparsity.", "The objective 
of our study was to introduce a new method to find the number of latent dimensions using SVD which we inspired from PA.", "We employ our method on simulated data sets with normal and non-normal distribution whereas are dense or sparse and compared with the present methods such as PA, MAP, K1, and SW.", "According to the mentioned experiments and the reported results in the table 1, BSVD and PA have the same accuracy and better than the other presented methods in the dense data sets.", "But our method has a better result in the sparse data sets which is shown in the table", "2. We applied t-test on the sample of latent dimensions (j) between BSVD and PA to demonstrate if the result is statistically significant or not.", "The results in the tables (3 and 4) demonstrate that in the sparse data sets with increasing the sparsity, our method seems to be significantly better than the other methods.", "Our method performance is limited to the presented experiments and data sets.", "If we want to generalize the method, We need to see the behavior of the algorithm when we have a more complex data set.step a.", "Generating the matrices x and y with the sizes of 6 × 3 and 5 × 3." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3870967626571655, 0.8484848141670227, 0.4166666567325592, 0.2222222238779068, 0.25806450843811035, 0.0833333283662796, 0.307692289352417, 0.23076923191547394, 0.0833333283662796, 0, 0, 0.1249999925494194, 0.25, 0.24242423474788666, 0.1249999925494194, 0.12903225421905518, 0.2702702581882477, 0.3684210479259491, 0.2921348214149475, 0.2916666567325592, 0.1764705777168274, 0.0714285671710968, 0.0624999962747097, 0.13333332538604736, 0.2916666567325592, 0.19999998807907104, 0.2857142686843872, 0.10810810327529907, 0.1621621549129486, 0.0952380895614624, 0.05714285373687744, 0.06451612710952759, 0.1395348757505417, 0.2222222238779068, 0.1249999925494194, 0.23255813121795654, 0.15789473056793213, 0.17142856121063232, 0.2641509473323822, 0.07407406717538834, 0.22641508281230927, 0.11428570747375488, 0.1764705777168274, 0.13636362552642822, 0.22857142984867096, 0.06896550953388214, 0.1818181723356247, 0.2380952388048172, 0.5238094925880432, 0.12765957415103912, 0.09302324801683426, 0.17142856121063232, 0.2790697515010834, 0.17777776718139648, 0.19354838132858276, 0.25, 0.1249999925494194 ]
SkwAEQbAb
true
[ "In this study, we introduce a novel method that relies on SVD to discover the number of latent dimensions." ]
[ "Deep learning models are often sensitive to adversarial attacks, where carefully-designed input samples can cause the system to produce incorrect decisions.", "Here we focus on the problem of detecting attacks, rather than robust classification, since detecting that an attack occurs may be even more important than avoiding misclassification.", "We build on advances in explainability, where activity-map-like explanations are used to justify and validate decisions, by highlighting features that are involved with a classification decision.", "The key observation is that it is hard to create explanations for incorrect decisions.", " We propose EXAID, a novel attack-detection approach, which uses model explainability to identify images whose explanations are inconsistent with the predicted class", ". Specifically, we use SHAP, which uses Shapley values in the space of the input image, to identify which input features contribute to a class decision", ". Interestingly, this approach does not require to modify the attacked model, and it can be applied without modelling a specific attack", ". It can therefore be applied successfully to detect unfamiliar attacks, that were unknown at the time the detection model was designed", ". We evaluate EXAID on two benchmark datasets CIFAR-10 and SVHN, and against three leading attack techniques, FGSM, PGD and C&W.", "We find that EXAID improves over the SoTA detection methods by a large margin across a wide range of noise levels, improving detection from 70% to over 90% for small perturbations.", "Machine learning systems can be tricked to make incorrect decisions, when presented with samples that were slightly perturbed, but in special, adversarial ways (Szegedy et al., 2013) .", "This sensitivity, by now widely studied, can hurt networks regardless of the application domain, and can be applied without knowledge of the model (Papernot et al., 2017) .", "Detecting such adversarial attacks is currently a key problem in machine learning.", "To motivate our approach, consider how most conferences decide on which papers get accepted for publication.", "Human classifiers, known as reviewers, make classification decisions, but unfortunately these are notoriously noisy.", "To verify that their decision are sensible, reviewers are also asked to explain and justify their decision.", "Then, a second classifier, known as an area-chair or an editor, examines the classification, together with the explanation and the paper itself, to verify that the explanation supports the decision.", "If the justification is not valid, the review may be discounted or ignored.", "In this paper, we build on a similar intuition: Explaining a decision can reduce misclassification.", "Clearly, the analogy is not perfect, since unlike human reviewers, for deep models we do not have trustworthy methods to provide high level semantic explanation of decisions.", "Instead, we study below the effect of using the wider concept of explanation on detecting incorrect decisions, and in particular given adversarial samples that are designed to confuse a classifier.", "The key idea is that different classes have different explaining features, and that by probing explanations, one can detect classification decisions that are inconsistent with the explanation.", "For example, if an image is classified as a dog, but has an explanation that gives high weight to a striped pattern, it is more likely that the classification is incorrect.", "We focus here on the problem of detecting adversarial samples, rather 
than developing a system that provides robust classifications under adversarial attacks.", "This is because in many cases we are interested to detect that an attack occurs, even if we cannot automatically correct the decision.", "The key idea in detecting adversarial attacks, is to identify cases where the network behaves differently than when presented with untainted inputs, and previous methods focused on various different aspects of the network to recognize such different behaviours Lee et al. (2018) ; Ma et al. (2018) ; Liang et al. (2018) ; Roth et al. (2019) ; Dong et al. (2019) ; Katzir & Elovici (2018) ; Xu et al. (2017) .", "To detect these differences, here we build on recent work in explainability Lundberg & Lee (2017b) .", "The key intuition is that explainability algorithms are designed to point to input features that are the reason for making a decision.", "Even though leading explainability methods are still mostly based on high-order correlations and not necessarily identify purely causal features, they often yield features that people identify as causal (Lundberg & Lee, 2017a) .", "Explainability therefore operates directly against the aim of adversarial methods, which perturb images in directions that are not causal for a class.", "The result is that detection methods based on explainability holds the promise to work particularly well with adversarial perturbations that lead to nonsensical classification decisions.", "There is second major reason why using explainable features for adversarial detection is promising.", "Explainable features are designed to explain the classification decision of a classifier trained on non-modified (normal) data.", "As a result, they are independent of any specific adversarial attack.", "Some previous methods are based on learning the statistical abnormalities of the added perturbation.", "This makes them sensitive to the specific perturbation characteristics, which change from one attack method to another, or with change of hyperparameters.", "Instead, explainability models can be agnostic of the particular perturbation method.", "The challenge in detecting adversarial attacks becomes more severe when the perturbations of the input samples are small.", "Techniques like C&W Carlini & Wagner (2017b) can adaptively select the noise level for a given input, to reach the smallest perturbation that causes incorrect classification.", "It is therefore particularly important to design detection methods that can operate in the regime of small perturbations.", "Explanation-based detection is inherently less sensitive to the magnitude of the perturbation, because it focuses on those input features that explain a decision for a given class.", "In this paper we describe an EXAID (EXplAIn-then-Detect), an explanation-based method to detect adversarial attacks.", "It is designed to capture low-noise perturbations from unknown attacks, by building an explanation model per-class that can be trained without access to any adversarial samples.", "Our novel contributions are as follows: We describe a new approach to detect adversarial attacks using explainability techniques.", "We study the effect of negative sampling techniques to train such detectors.", "We also study the robustness of this approach in the regime of low-noise (small perturbations).", "Finally, we show that the new detection provides state-of-the-art defense against the three leading attacks (FGSM, PGD, CW) both for known attacks and in the setting of detecting unfamiliar attacks.", "In this 
paper we proposed EXAID, a novel attack-detection approach, which uses model explainability to identify images whose explanations are inconsistent with the predicted class.", "Our method outperforms previous state-of-the-art methods, for three attack methods, and many noise-levels.", "We demonstrated that the attack noise level has a major impact on previous defense methods.", "We hope this will encourage the research community to evaluate future defense methods on a large range of noise-levels." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19512194395065308, 0.04347825422883034, 0.17391303181648254, 0.11764705181121826, 0.7441860437393188, 0.2857142686843872, 0.0952380895614624, 0.1463414579629898, 0, 0.16326530277729034, 0.12244897335767746, 0.04347825422883034, 0.060606054961681366, 0.10810810327529907, 0.05714285373687744, 0.11428570747375488, 0.13333332538604736, 0.060606054961681366, 0, 0.12765957415103912, 0.16326530277729034, 0.17777776718139648, 0.08510638028383255, 0.0952380895614624, 0.1395348757505417, 0.17142856121063232, 0.05405404791235924, 0.19999998807907104, 0.15686273574829102, 0.2790697515010834, 0.3181818127632141, 0.11764705181121826, 0.15789473056793213, 0.1249999925494194, 0.1764705777168274, 0.19512194395065308, 0.1249999925494194, 0.15789473056793213, 0.08695651590824127, 0.20512819290161133, 0.17391303181648254, 0.11428570747375488, 0.08695651590824127, 0.25641024112701416, 0.12121211737394333, 0.05882352590560913, 0.08510638028383255, 0.695652186870575, 0, 0.1111111044883728, 0.14999999105930328 ]
B1xu6yStPH
true
[ "A novel adversarial detection approach, which uses explainability methods to identify images whose explanations are inconsistent with the predicted class. " ]
[ "We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a “latent” space, amounting to a reparameterization.", "This space is equipped with a learned probability model, which is used to impose an entropy penalty on the parameter representation during training, and to compress the representation using a simple arithmetic coder after training.", "Classification accuracy and model compressibility is maximized jointly, with the bitrate--accuracy trade-off specified by a hyperparameter.", "We evaluate the method on the MNIST, CIFAR-10 and ImageNet classification benchmarks using six distinct model architectures.", "Our results show that state-of-the-art model compression can be achieved in a scalable and general way without requiring complex procedures such as multi-stage training.", "Artificial neural networks (ANNs) have proven to be highly successful on a variety of tasks, and as a result, there is an increasing interest in their practical deployment.", "However, ANN parameters tend to require a large amount of space compared to manually designed algorithms.", "This can be problematic, for instance, when deploying models onto devices over the air, where the bottleneck is often network speed, or onto devices holding many stored models, with only few used at a time.", "To make these models more practical, several authors have proposed to compress model parameters (Han et al., 2015; Louizos, Ullrich, et al., 2017; Molchanov et al., 2017; Havasi et al., 2018) .", "While other desiderata often exist, such as minimizing the number of layers or filters of the network, we focus here simply on model compression algorithms that", "1. minimize compressed size while maintaining an acceptable classification accuracy,", "2. are conceptually simple and easy to implement, and", "3. 
can be scaled easily to large models.", "Classic data compression in a Shannon sense (Shannon, 1948) requires discrete-valued data (i.e., the data can only take on a countable number of states) and a probability model on that data known to both sender and receiver.", "Practical compression algorithms are often lossy, and consist of two steps.", "First, the data is subjected to (re-)quantization.", "Then, a Shannon-style entropy coding method such as arithmetic coding (Rissanen and Langdon, 1981 ) is applied to the discrete values, bringing them into a binary representation which can be easily stored or transmitted.", "Shannon's source coding theorem establishes the entropy of the discrete representation as a lower bound on the average length of this binary sequence (the bit rate), and arithmetic coding achieves this bound asymptotically.", "Thus, entropy is an excellent proxy for the expected model size.", "The type of quantization scheme affects both the fidelity of the representation (in this case, the precision of the model parameters, which in turn affects the prediction accuracy) as well as the bit rate, since a reduced number of states coincides with reduced entropy.", "ANN parameters are typically represented as floating point numbers.", "While these technically have a finite (but large) number of states, the best results in terms of both accuracy and bit rate are typically achieved for a significantly reduced number of states.", "Existing approaches to model compression often acknowledge this by quantizing each individual linear filter coefficient in an ANN to a small number of pre-determined values (Louizos, Reisser, et al., 2018; Baskin et al., 2018; F. Li et al., 2016) .", "This is known as scalar quantization (SQ).", "Other methods explore vector quantization (VQ), which is closely related to k-means clustering, in which each vector of filter coefficients is quantized jointly (Chen, J. Wilson, et al., 2015; Ullrich et al., 2017) .", "This is equivalent to enumerating a finite set of representers Figure 1 : Visualization of representers in scalar quantization vs. 
reparameterized quantization.", "The axes represent two different model parameters (e.g., linear filter coefficients).", "Small black dots are samples of the model parameters, red and blue discs are the representers.", "Left: in scalar quantization, the representers must be given by a Kronecker product of scalar representers along the cardinal axes, even though the distribution of samples may be skewed.", "Right: in reparameterized scalar quantization, the representers are still given by a Kronecker product, but in a transformed (here, rotated) space.", "This allows a better adaptation of the representers to the parameter distribution.", "(representable vectors), while in SQ the set of representers is given by the Kronecker product of representable scalar elements.", "VQ is much more general than SQ, in the sense that representers can be placed arbitrarily: if the set of useful filter vectors all live in a subset of the entire space, there is no benefit in having representers outside of that subset, which may be unavoidable with SQ (Figure 1 , left).", "Thus, VQ has the potential to yield better results, but it also suffers from the \"curse of dimensionality\": the number of necessary states grows exponentially with the number of dimensions, making it computationally infeasible to perform VQ for much more than a handful of dimensions.", "One of the key insights leading to this paper is that the strengths of SQ and VQ can be combined by representing the data in a \"latent\" space.", "This space can be an arbitrary rescaling, rotation, or otherwise warping of the original data space.", "SQ in this space, while making quantization computationally feasible, can provide substantially more flexibility in the choice of representers compared to the SQ in the data space (Figure 1, right) .", "This is in analogy to recent image compression methods based on autoencoders (Ballé, Laparra, et al., 2016; Theis et al., 2017) .", "The contribution of this paper is two-fold.", "First, we propose a novel end-to-end trainable model compression method that uses scalar quantization and entropy penalization in a reparameterized space of model parameters.", "The reparameterization allows us to use efficient SQ, while achieving flexibility in representing the model parameters.", "Second, we provide state-of-the-art results on a variety of network architectures on several datasets.", "This demonstrates that more complicated strategies involving pretraining, multi-stage training, sparsification, adaptive coding, etc., as employed by many previous methods, are not necessary to achieve good performance.", "Our method scales to modern large image datasets and neural network architectures such as ResNet-50 on ImageNet.", "Existing model compression methods are typically built on a combination of pruning, quantization, or coding.", "Pruning involves sparsifying the network either by removing individual parameters or higher level structures such as convolutional filters, layers, activations, etc.", "Various strategies for pruning weights include looking at the Hessian (Cun et al., 1990) or just their p norm (Han et al., 2015) .", "Srinivas and Babu (2015) focus on pruning individual units, and H. Li et al. (2016) prunes convolutional filters.", "Louizos, Ullrich, et al. (2017) and Molchanov et al. 
(2017) (Louizos, Ullrich, et al., 2017) 18.2 KB (58x) 1.8% Bayesian Compression (GHS) (Louizos, Ullrich, et al., 2017) 18.0 KB (59x) 2.0% Sparse Variational Dropout (Molchanov et al., 2017) 9.38 KB (113x) 1.8% Our Method (SQ) 8.56 KB (124x) 1.9%", "LeNet5-Caffe (MNIST) Uncompressed 1.72 MB 0.7% Sparse Variational Dropout (Molchanov et al., 2017) 4.71 KB (365x) 1.0% Bayesian Compression (GHS) (Louizos, Ullrich, et al., 2017) 2.23 KB (771x) 1.0% Minimal Random Code Learning (Havasi et al., 2018) 1.52 KB (1110x) 1.0% Our Method (SQ) 2.84 KB (606x) 0.9%", "We describe a simple model compression method built on two ingredients: joint (i.e., end-to-end) optimization of compressibility and classification performance in only a single training stage, and reparameterization of model parameters, which increases the flexibility of the representation over scalar quantization, and is applicable to arbitrary network architectures.", "We demonstrate that stateof-the-art model compression performance can be achieved with this simple framework, outperforming methods that rely on complex, multi-stage training procedures.", "Due to its simplicity, the approach is particularly suitable for larger models, such as VGG and especially ResNets.", "In future work, we may consider the potential benefits of even more flexible (deeper) parameter decoders." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10810810327529907, 0.09302324801683426, 0.27586206793785095, 0.20689654350280762, 0.10810810327529907, 0, 0, 0.08888888359069824, 0.052631575614213943, 0.1621621549129486, 0.08695651590824127, 0, 0, 0.13333332538604736, 0.0833333283662796, 0.09999999403953552, 0.08888888359069824, 0.04999999701976776, 0.3333333432674408, 0.1304347813129425, 0, 0.09756097197532654, 0.08510638028383255, 0, 0.0476190447807312, 0, 0.07692307233810425, 0.14814814925193787, 0.0555555522441864, 0.0624999962747097, 0.0833333283662796, 0.06666666269302368, 0.0714285671710968, 0.0833333283662796, 0.052631575614213943, 0.0714285671710968, 0.052631575614213943, 0.060606054961681366, 0, 0.2857142686843872, 0.13793103396892548, 0, 0, 0.06666666269302368, 0.1428571343421936, 0.05882352590560913, 0.05714285373687744, 0, 0, 0, 0.1428571343421936, 0.17142856121063232, 0.06451612710952759, 0.06896550953388214 ]
HkgxW0EYDS
true
[ "An end-to-end trainable model compression method optimizing accuracy jointly with the expected model size." ]
[ "Neural networks can converge faster with help from a smarter batch selection strategy.", "In this regard, we propose Ada-Boundary, a novel adaptive-batch selection algorithm that constructs an effective mini-batch according to the learning progress of the model.Our key idea is to present confusing samples what the true label is.", "Thus, the samples near the current decision boundary are considered as the most effective to expedite convergence.", "Taking advantage of our design, Ada-Boundary maintains its dominance in various degrees of training difficulty.", "We demonstrate the advantage of Ada-Boundary by extensive experiments using two convolutional neural networks for three benchmark data sets.", "The experiment results show that Ada-Boundary improves the training time by up to 31.7% compared with the state-of-the-art strategy and by up to 33.5% compared with the baseline strategy.", "Deep neural networks (DNNs) have achieved remarkable performance in many fields, especially, in computer vision and natural language processing BID13 BID5 .", "Nevertheless, as the size of data grows very rapidly, the training step via stochastic gradient descent (SGD) based on mini-batches suffers from extremely high computational cost, which is mainly due to slow convergence.", "The common approaches for expediting convergence include some SGD variants BID28 BID11 that maintain individual learning rates for parameters and batch normalization BID9 that stabilizes gradient variance.Recently, in favor of the fact that not all samples have an equal impact on training, many studies have attempted to design sampling schemes based on the sample importance BID25 BID3 (1) at the training accuracy of 60%.", "An easy data set (MNIST) does not have \"too hard\" sample but \"moderately hard\" samples colored in gray, whereas a relatively hard data set (CIFAR-10) has many \"too hard\" samples colored in black.", "(b) shows the result of SGD on a hard batch.", "The moderately hard samples are informative to update a model, but the too hard samples make the model overfit to themselves.", "et al., 2017; BID10 .", "Curriculum learning BID0 ) inspired by human's learning is one of the representative methods to speed up the training step by gradually increasing the difficulty level of training samples.", "In contrast, deep learning studies focus on giving higher weights to harder samples during the entire training process.", "When the model requires a lot of epochs for convergence, it is known to converge faster with the batches of hard samples rather than randomly selected batches BID21 BID17 BID4 .", "There are various criteria for judging the hardness of a sample, e.g., the rank of the loss computed from previous epochs BID17 .Here", ", a natural question arises: Does the \"hard\" batch selection always speed up DNN training? Our", "answer is partially yes: it is helpful only when training an easy data set. According", "to our indepth analysis, as demonstrated in FIG1 (a), the", "hardest samples in a hard data set (e.g., CIFAR-10) were too hard to learn. They are", "highly likely to make the decision boundary bias towards themselves, as shown in FIG1 (b) . On", "the other", "hand, in an easy data set (e.g., MNIST), the hardest samples, though they are just moderately hard, provide useful information for training. 
In practice,", "it was reported that hard batch selection succeeded to speed up only when training the easy MNIST data set BID17 BID4 , and our experiments in Section 4.4 also confirmed the previous findings. This limitation", "calls for a new sampling scheme that supports both easy and hard data sets.In this paper, we propose a novel adaptive batch selection strategy, called Ada-Boundary, that accelerates training and is better generalized to hard data sets. As opposed to existing", "hard batch selection, Ada-Boundary picks up the samples with the most appropriate difficulty, considering the learning progress of the model. The samples near the current", "decision boundary are selected with high probability, as shown in FIG3 (a). Intuitively speaking, the", "samples", "far from the decision boundary are not that helpful since they are either too hard or too easy: those on the incorrect (or correct) side are too hard (or easy). This is the reason why we regard", "the samples around the decision boundary, which are moderately hard, as having the appropriate difficulty at the moment. Overall, the key idea of Ada-Boundary", "is to use the distance of a sample to the decision boundary for the hardness of the sample. The beauty of this design is not to require", "human intervention. The current decision boundary should be directly", "influenced by the learning progress of the model. The decision boundary of a DNN moves towards eliminating", "the incorrect samples as the training step progresses, so the difficulty of the samples near the decision boundary gradually increases as the model is learned. Then, the decision boundary keeps updated to identify the", "confusing samples in the middle of SGD, as illustrated in FIG3 (b) . This approach is able to accelerate the convergence", "speed", "by providing the samples suited to the model at every SGD iteration, while it is less prone to incur an overfitting issue.We have conducted extensive experiments to demonstrate the superiority of Ada-Boundary. Two popular convolutional neural network (CNN) 1 models are", "trained using three benchmark data sets. Compared to random batch selection, Ada-Boundary significantly", "reduces the execution time by 14.0-33.5%. At the same time, it provides a relative improvement of test error", "by 7.34-14.8% in the final epoch. Moreover, compared to the state-of-the-art hard batch selection BID17", ", Ada-Boundary achieves the execution time smaller by 18.0% and the test error smaller by 13.7% in the CIFAR-10 data set.2 Ada-Boundary COMPONENTS The main challenge for Ada-Boundary is to evaluate", "how close a sample is to the decision boundary. 
In this section, we introduce a novel distance measure and present a method", "of computing the sampling probability based on the measure.", "In this paper, we proposed a novel adaptive batch selection algorithm, Ada-Boundary, that presents the most appropriate samples according to the learning progress of the model.", "Toward this goal, we defined the distance from a sample to the decision boundary and introduced a quantization method for selecting the samples near the boundary with high probability.", "We performed extensive experiments using two CNN models for three benchmark data sets.", "The results showed that Ada-Boundary significantly accelerated the training process as well as was better generalized in hard data sets.", "When training an easy data set, Ada-Boundary showed a fast convergence comparable to that of the state-of-the-art algorithm; when training relatively hard data sets, only Ada-Boundary converged significantly faster than random batch selection.The most exciting benefit of Ada-Boundary is to save the time needed for the training of a DNN.", "It becomes more important as the size and complexity of data becomes higher, and can be boosted with recent advance of hardware technologies.", "Our immediate future work is to apply Ada-Boundary to other types of DNNs such as the recurrent neural networks (RNN) BID18 and the long short-term memory (LSTM) BID7 , which have a neural structure completely different from the CNN.", "In addition, we plan to investigate the relationship between the power of a DNN and the improvement of Ada-Boundary." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.27272728085517883, 0.0952380895614624, 0, 0.08695651590824127, 0.1428571343421936, 0.0624999962747097, 0, 0, 0.030303027480840683, 0.05882352590560913, 0.21052631735801697, 0.07692307233810425, 0, 0, 0, 0.0555555522441864, 0.06451612710952759, 0.23999999463558197, 0, 0, 0.07692307233810425, 0, 0, 0.09302325546741486, 0.1818181723356247, 0.13793103396892548, 0, 0, 0.06896550953388214, 0.07407406717538834, 0, 0.0833333283662796, 0, 0, 0.08510638028383255, 0.1818181723356247, 0.0714285671710968, 0.1599999964237213, 0.054054051637649536, 0.06896550953388214, 0, 0.1818181723356247, 0.060606054961681366, 0.09090908616781235, 0.0714285671710968, 0.16326530277729034, 0, 0.09090908616781235, 0.1599999964237213 ]
SyfXKoRqFQ
true
[ "We suggest a smart batch selection technique called Ada-Boundary." ]
[ "State of the art sound event classification relies in neural networks to learn the associations between class labels and audio recordings within a dataset.", "These datasets typically define an ontology to create a structure that relates these sound classes with more abstract super classes.", "Hence, the ontology serves as a source of domain knowledge representation of sounds.", "However, the ontology information is rarely considered, and specially under explored to model neural network architectures.\n", "We propose two ontology-based neural network architectures for sound event classification.", "We defined a framework to design simple network architectures that preserve an ontological structure.", "The networks are trained and evaluated using two of the most common sound event classification datasets.", "Results show an improvement in classification performance demonstrating the benefits of including the ontological information.", "Humans can identify a large number of sounds in their environments e.g., a baby crying, a wailing ambulance siren, microwave bell.", "These sounds can be related to more abstract categories that aid interpretation e.g., humans, emergency vehicles, home.", "These relations and structures can be represented by ontologies BID0 , which are defined for most of the available datasets for sound event classification (SEC).", "However, sound event classification rarely exploits this additional available information.", "Moreover, although neural networks are the state of the art for SEC BID1 BID2 BID3 , they are rarely designed considering such ontologies.An ontology is a formal representation of domain knowledge through categories and relationships that can provide structure to the training data and the neural network architecture.", "The most common type of ontologies are based on abstraction hierarchies defined by linguistics, where a super category represents its subcategories.", "Generally, the taxonomies are defined by either nouns or verbs e.g., animal contains dog and cat, dog contains dog barking and dog howling.", "Examples of datasets are ESC-50 BID4 , UrbanSounds BID5 , DCASE BID6 , AudioSet BID7 .", "Another taxonomy can be defined by interactions between objects and materials, actions and descriptors e.g., contains Scraping, which contains Scraping Rapidly and Scraping a Board BID8 BID9 BID10 .", "Another example of this type is given by physical properties, such as frequency and time patterns BID11 BID12 BID13 .", "There are multiple benefits of considering hierarchical relations in sound event classifiers.", "They can allow the classifier to back-off to more general categories when encountering ambiguity among subcategories.", "They can disambiguate classes that are acoustically similar, but not semantically.", "They can be used to penalize classification differently, where miss classifying sounds from different super classes is worse than within the same super class.", "Lastly, they can be used as domain knowledge to model neural networks.", "In fact, ontological information has been evaluated in computer vision BID14 and music BID15 , but has rarely been used for sound event classification.Ontology-based network architectures have showed improvement in performance along with other benefits.", "Authors in BID16 proposed an ontology-based deep restricted Boltzmann machine for textual topic classification.", "The architecture replicates the tree-like structure adding intermediate layers to model the transformation from a super class to its sub 
classes.", "Authors showed improved performance and reduced overfitting in training data.", "Another example used a perceptron for each node of the hierarchy, which classified whether an image corresponded to such class or not BID17 .", "Authors showed an improvement in performance due to the ability of class disambiguation by comparing predictions of classes and sub classes.", "Motivated by these approaches and by the flexibility to adapt structures in a deep learning model we propose our ontology-based networks detailed in the following section.", "In this paper we proposed a framework to design neural networks for sound event classification using hierarchical ontologies.", "We have shown two methods to add such structure into deep learning models in a simple manner without adding more learnable parameters.", "We used a Feed-forward Network with an ontological layer to relate predictions of different levels in the hierarchy.", "Additionally, we proposed a Siamese neural Network to compute ontology-based embeddings to preserve the ontology in an embedding space.", "The embeddings plots showed clusters of super classes containing different sub classes.", "Our results in the datasets and MSoS challenge improved over the baselines.", "We expect that our results pave the path to further explore ontologies and other relations, which is fundamental for sound event classification due to wide acoustic diversity and limited lexicalized terms to describe sounds." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.24242423474788666, 0.06896550953388214, 0, 0.2222222238779068, 0.8571428656578064, 0.25, 0.23076923191547394, 0.0833333283662796, 0, 0, 0.23529411852359772, 0.29999998211860657, 0.11538461595773697, 0, 0, 0, 0, 0, 0.1818181723356247, 0, 0, 0.060606054961681366, 0.09090908616781235, 0.2790697515010834, 0.25, 0, 0, 0.060606054961681366, 0, 0.060606054961681366, 0.3571428656578064, 0.0624999962747097, 0.0714285671710968, 0.1428571343421936, 0, 0, 0.24390242993831635 ]
HkGv2NMTjQ
true
[ "We present ontology-based neural network architectures for sound event classification." ]
[ "In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters with a finite spatial extent.", "An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image.", "Information concerning absolute position is inherently useful, and it is reasonable to assume that deep CNNs may implicitly learn to encode this information if there is a means to do so.", "In this paper, we test this hypothesis revealing the surprising degree of absolute position information that is encoded in commonly used neural networks.", "A comprehensive set of experiments show the validity of this hypothesis and shed light on how and where this information is represented while offering clues to where positional information is derived from in deep CNNs.", "Convolutional Neural Networks (CNNs) have achieved state-of-the-art results in many computer vision tasks, e.g. object classification (Simonyan & Zisserman, 2014; and detection (Redmon et al., 2015; , face recognition (Taigman et al., 2014) , semantic segmentation (Long et al., 2015; Chen et al., 2018; Noh et al., 2015) and saliency detection (Cornia et al., 2018; Li et al., 2014) .", "However, CNNs have faced some criticism in the context of deep learning for the lack of interpretability (Lipton, 2016) .", "The classic CNN model is considered to be spatially-agnostic and therefore capsule (Sabour et al., 2017) or recurrent networks (Visin et al., 2015) have been utilized to model relative spatial relationships within learned feature layers.", "It is unclear if CNNs capture any absolute spatial information which is important in position-dependent tasks (e.g. semantic segmentation and salient object detection).", "As shown in Fig. 
1 , the regions determined to be most salient (Jia & Bruce, 2018) tend to be near the center of an image.", "While detecting saliency on a cropped version of the images, the most salient region shifts even though the visual features have not been changed.", "This is somewhat surprising, given the limited spatial extent of CNN filters through which the image is interpreted.", "In this paper, we examine the role of absolute position information by performing a series of randomization tests with the hypothesis that CNNs might indeed learn to encode position information as a cue for decision making.", "Our experiments reveal that position information is implicitly learned from the commonly used padding operation (zero-padding).", "Zero-padding is widely used for keeping the same dimensionality when applying convolution.", "However, its hidden effect in representation learning has been long omitted.", "This work helps to better understand the nature of the learned features in CNNs and highlights an important observation and fruitful direction for future investigation.", "Previous works try to visualize learned feature maps to demystify how CNNs work.", "A simple idea is to compute losses and pass these backwards to the input space to generate a pattern image that can maximize the activation of a given unit (Hinton et al., 2006; Erhan et al., 2009) .", "However, it is very difficult to model such relationships when the number of layers grows.", "Recent work (Zeiler & Fergus, 2013 ) presents a non-parametric method for visualization.", "A deconvolutional network (Zeiler et al., 2011 ) is leveraged to map learned features back to the input space and their results reveal what types of patterns a feature map actually learns.", "Another work (Selvaraju et al., 2016) proposes to combine pixel-level gradients with weighted class activation mapping to locate the region which maximizes class-specific activation.", "As an alternative to visualization strategies, an empirical study (Zhang et al., 2016) has shown that a simple network can achieve zero training Cropping results in a shift in position rightward of features relative to the centre.", "It is notable that this has a significant impact on output and decision of regions deemed salient despite no explicit position encoding and a modest change to position in the input.", "loss on noisy labels.", "We share the similar idea of applying a randomization test to study the CNN learned features.", "However, our work differs from existing approaches in that these techniques only present interesting visualizations or understanding, but fail to shed any light on spatial relationships encoded by a CNN model.", "In summary, CNNs have emerged as a way of dealing with the prohibitive number of weights that would come with a fully connected end-to-end network.", "A trade-off resulting from this is that kernels and their learned weights only have visibility of a small subset of the image.", "This would seem to imply solutions where networks rely more on cues such as texture and color rather than shape (Baker et al., 2018) .", "Nevertheless, position information provides a powerful cue for where objects might appear in an image (e.g. 
birds in the sky).", "It is conceivable that networks might rely sufficiently on such cues that they implicitly encode spatial position along with the features they represent.", "It is our hypothesis that deep neural networks succeed in part by learning both what and where things are.", "This paper tests this hypothesis, and provides convincing evidence that CNNs do indeed rely on and learn information about spatial positioning in the image to a much greater extent than one might expect.", "In this paper we explore the hypothesis that absolute position information is implicitly encoded in convolutional neural networks.", "Experiments reveal that positional information is available to a strong degree.", "More detailed experiments show that larger receptive fields or non-linear readout of positional information further augments the readout of absolute position, which is already very strong from a trivial single layer 3 × 3 PosENet.", "Experiments also reveal that this recovery is possible when no semantic cues are present and interference from semantic information suggests joint encoding of what (semantic features) and where (absolute position).", "Results point to zero padding and borders as an anchor from which spatial information is derived and eventually propagated over the whole image as spatial abstraction occurs.", "These results demonstrate a fundamental property of CNNs that was unknown to date, and for which much further exploration is warranted." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.04255318641662598, 0.13333332538604736, 0.19607841968536377, 0.17391303181648254, 0.18867923319339752, 0.1818181723356247, 0.09756097197532654, 0.1071428507566452, 0.3404255211353302, 0.04255318641662598, 0.17391303181648254, 0.09999999403953552, 0.1090909019112587, 0.19999998807907104, 0.1111111044883728, 0.17142856121063232, 0.25531914830207825, 0.0555555522441864, 0.1071428507566452, 0.05128204822540283, 0.1621621549129486, 0.145454540848732, 0.04255318641662598, 0.13793103396892548, 0.19230768084526062, 0, 0.05128204822540283, 0.145454540848732, 0.08695651590824127, 0.13333332538604736, 0.08163265138864517, 0.22727271914482117, 0.08888888359069824, 0.1395348757505417, 0.178571417927742, 0.2380952388048172, 0.22857142984867096, 0.1428571343421936, 0.1538461446762085, 0.1249999925494194, 0.17777776718139648 ]
rJeB36NKvB
true
[ "Our work shows positional information has been implicitly encoded in a network. This information is important for detecting position-dependent features, e.g. semantic and saliency." ]
[ "Semantic parsing which maps a natural language sentence into a formal machine-readable representation of its meaning, is highly constrained by the limited annotated training data.", "Inspired by the idea of coarse-to-fine, we propose a general-to-detailed neural network(GDNN) by incorporating cross-domain sketch(CDS) among utterances and their logic forms.", "For utterances in different domains, the General Network will extract CDS using an encoder-decoder model in a multi-task learning setup.", "Then for some utterances in a specific domain, the Detailed Network will generate the detailed target parts using sequence-to-sequence architecture with advanced attention to both utterance and generated CDS.", "Our experiments show that compared to direct multi-task learning, CDS has improved the performance in semantic parsing task which converts users' requests into meaning representation language(MRL).", "We also use experiments to illustrate that CDS works by adding some constraints to the target decoding process, which further proves the effectiveness and rationality of CDS.", "Recently many natural language processing (NLP) tasks based on the neural network have shown promising results and gained much attention because these studies are purely data-driven without linguistic prior knowledge.", "Semantic parsing task which maps a natural language sentence into a machine-readable representation BID6 ), as a particular translation task, can be treated as a sequence-to-sequence problem BID3 ).", "Lately, a compositional graph-based semantic meaning representation language (MRL) has been introduced BID14 ), which converts utterance into logic form (action-object-attribute), increasing the ability to represent complex requests.", "This work is based on MRL format for semantic parsing task.Semantic parsing highly depends on the amount of annotated data and it is hard to annotate the data in logic forms such as Alexa MRL.", "Several researchers have focused on the area of multi-task learning and transfer learning BID10 , BID6 , BID15 ) with the observation that while these tasks differ in their domains and forms, the structure of language composition repeats across domains BID12 ).", "Compared to the model trained on a single domain only, a multi-task model that shares information across domains can improve both performance and generalization.", "However, there is still a lack of interpretations why the multi-task learning setting works BID26 ) and what the tasks have shared.", "Some NLP studies around language modeling BID18 , BID29 , BID2 ) indicate that implicit commonalities of the sentences including syntax and morphology exist and can share among domains, but these commonalities have not been fully discussed and quantified.To address this problem, in this work, compared to multi-task learning mentioned above which directly use neural networks to learn shared features in an implicit way, we try to define these cross-domain commonalities explicitly as cross-domain sketch (CDS).", "E.g., Search weather in 10 days in domain Weather and Find schedule for films at night in domain ScreeningEvent both have action SearchAction and Attribute time, so that they share a same MRL structure like SearchAction(Type(time@?)), where Type indicates domain and ?", "indicates attribute value which is copying from the original utterance.", "We extract this domain general MRL structure as CDS.", "Inspired by the research of coarse-to-fine BID4 ), we construct a two-level encoder-decoder by using CDS as a middle 
coarse layer.", "We firstly use General Network to get the CDS for every utterance in all domains.", "Then for a single specific domain, based on both utterance and extracted CDS, we decode the final target with advanced attention while CDS can be seen as adding some constraints to this process.", "The first utterance-CDS process can be regarded as a multi-task learning setup since it is suitable for all utterances across the domains.", "This work mainly introducing CDS using multi-task learning has some contributions listed below:", "1) We make an assumption that there exist cross-domain commonalities including syntactic and phrasal similarity for utterances and extract these commonalities as cross-domain sketch (CDS) which for our knowledge is the first time.", "We then define CDS on two different levels (action-level and attribute-level) trying to seek the most appropriate definition of CDS.2) We propose a general-to-detailed neural network by incorporating CDS as a middle coarse layer.", "CDS is not only a high-level extraction of commonalities across all the domains, but also a prior information for fine process helping the final decoding.3) Since CDS is cross-domain, our first-level network General Network which encodes the utterance and decodes CDS can be seen as a multi-task learning setup, capturing the commonalities among utterances expressions from different domains which is exactly the goal of multi-task learning.", "In this paper, we propose the concept of cross-domain sketch (CDS) which extracts some shared information across domains, trying to fully utilize the cross-domain commonalities such as syntactic and phrasal similarity in human expressions.", "We try to define CDS on two levels and give some examples to illustrate our idea.", "We also present a general-to-detailed neural network (GDNN) for converting an utterance into a logic form based on meaning representation language (MRL) form.", "The general network, which is meant to extract cross-domain commonalities, uses an encoderdecoder model to obtain CDS in a multi-task setup.", "Then the detailed network generates the final domain-specific target by exploiting utterance and CDS simultaneously via attention mechanism.", "Our experiments demonstrate the effectiveness of CDS and multi-task learning.CDS is able to generalize over a wide range of tasks since it is an extraction to language expressions.", "Therefore, in the future, we would like to perfect the CDS definition and extend its' ontology to other domains and tasks.", "Besides, in this paper, we use attention mechanism to make use of CDS which is still a indirect way.", "We would like to explore more effective ways such as constraint decoding to further enhance the role of CDS." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10810810327529907, 0.3529411852359772, 0, 0.09756097197532654, 0.10256409645080566, 0.05405404791235924, 0.04651162400841713, 0.052631575614213943, 0.04878048226237297, 0.1395348757505417, 0.04255318641662598, 0, 0, 0.05128204822540283, 0.039215683937072754, 0, 0, 0.0624999962747097, 0.0714285671710968, 0.08695651590824127, 0.05714285373687744, 0, 0.0952380895614624, 0.13636362552642822, 0.03076922707259655, 0.04444444179534912, 0, 0.11764705181121826, 0.060606054961681366, 0.06666666269302368, 0, 0, 0, 0 ]
r1fO8oC9Y7
true
[ "General-to-detailed neural network(GDNN) with Multi-Task Learning by incorporating cross-domain sketch(CDS) for semantic parsing" ]
[ "The learnability of different neural architectures can be characterized directly by computable measures of data complexity.", "In this paper, we reframe the problem of architecture selection as understanding how data determines the most expressive and generalizable architectures suited to that data, beyond inductive bias.", "After suggesting algebraic topology as a measure for data complexity, we show that the power of a network to express the topological complexity of a dataset in its decision boundary is a strictly limiting factor in its ability to generalize.", "We then provide the first empirical characterization of the topological capacity of neural networks.", "Our empirical analysis shows that at every level of dataset complexity, neural networks exhibit topological phase transitions and stratification.", "This observation allowed us to connect existing theory to empirically driven conjectures on the choice of architectures for a single hidden layer neural networks.", "Deep learning has rapidly become one of the most pervasively applied techniques in machine learning.", "From computer vision BID15 ) and reinforcement learning BID18 ) to natural language processing BID27 ) and speech recognition ), the core principles of hierarchical representation and optimization central to deep learning have revolutionized the state of the art; see BID10 .", "In each domain, a major difficulty lies in selecting the architectures of models that most optimally take advantage of structure in the data.", "In computer vision, for example, a large body of work BID24 , BID25 , BID12 , etc.", ") focuses on improving the initial architectural choices of BID15 by developing novel network topologies and optimization schemes specific to vision tasks.", "Despite the success of this approach, there are still not general principles for choosing architectures in arbitrary settings, and in order for deep learning to scale efficiently to new problems and domains without expert architecture designers, the problem of architecture selection must be better understood.Theoretically, substantial analysis has explored how various properties of neural networks, (eg. 
the depth, width, and connectivity) relate to their expressivity and generalization capability , BID6 , BID11 ).", "However, the foregoing theory can only be used to determine an architecture in practice if it is understood how expressive a model need be in order to solve a problem.", "On the other hand, neural architecture search (NAS) views architecture selection as a compositional hyperparameter search BID23 , BID9 , BID31 ).", "As a result NAS ideally yields expressive and powerful architectures, but it is often difficult to interperate the resulting architectures beyond justifying their use from their emperical optimality.We propose a third alternative to the foregoing: data-first architecture selection.", "In practice, experts design architectures with some inductive bias about the data, and more generally, like any hyperparameter selection problem, the most expressive neural architectures for learning on a particular dataset are solely determined by the nature of the true data distribution.", "Therefore, architecture selection can be rephrased as follows: given a learning problem (some dataset), which architectures are suitably regularized and expressive enough to learn and generalize on that problem?A", "natural approach to this question is to develop some objective measure of data complexity, and then characterize neural architectures by their ability to learn subject to that complexity. Then", "given some new dataset, the problem of architecture selection is distilled to computing the data complexity and chosing the appropriate architecture.For example, take the two datasets D 1 and D 2 given in FIG0 (ab) and FIG0 (cd) respectively. The", "first dataset, D 1 , consists of positive examples sampled from two disks and negative examples from their compliment. On", "the right, dataset D 2 consists of positive points sampled from two disks and two rings with hollow centers. Under", "some geometric measure of complexity D 2 appears more 'complicated' than D 1 because it contains more holes and clusters. As one", "trains single layer neural networks of increasing hidden dimension on both datasets, the minimum number of hidden units required to achieve zero testing error is ordered according to this geometric complexity. Visually", "in FIG0 , regardless of initialization no single hidden layer neural network with ≤ 12 units, denoted h ≤12 , can express the two holes and clusters in D 2 . Whereas", "on the simpler D 1 , both h 12 and h 26 can express the decision boundary perfectly. 
Returning", "to architecture selection, one wonders if this characterization can be extrapolated; that is, is it true that for datasets with 'similar' geometric complexity to D 1 , any architecture with ≥ 12 hidden learns perfectly, and likewise for those datasets similar in complexity to D 2 , architectures with ≤ 12 hidden units can never learn to completion?", "Architectural power is deeply related to the algebraic topology of decision boundaries.", "In this work we distilled neural network expressivity into an empirical question of the generalization capabilities of architectures with respect to the homological complexity of learning problems.", "This view allowed us to provide an empirical method for developing tighter characterizations on the the capacity of different architectures in addition to a principled approach to guiding architecture selection by computation of persistent homology on real data.There are several potential avenues of future research in using homological complexity to better understand neural architectures.", "First, a full characterization of neural networks with many layers or convolutional linearities is a crucial next step.", "Our empirical results suggest that the their are exact formulas describing the of power of neural networks to express decision boundaries with certain properties.", "Future theoretical work in determining these forms would significantly increase the efficiency and power of neural architecture search, constraining the search space by the persistent homology of the data.", "Additionally, we intend on studying how the topological complexity of data changes as it is propagated through deeper architectures." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.8484848141670227, 0.2222222238779068, 0.23999999463558197, 0.2666666507720947, 0.1621621549129486, 0.19512194395065308, 0.1249999925494194, 0.07999999821186066, 0.2631579041481018, 0.060606054961681366, 0.14999999105930328, 0.1265822798013687, 0.13636362552642822, 0.10810810327529907, 0.11320754140615463, 0.2142857164144516, 0.17391303181648254, 0.3181818127632141, 0.1599999964237213, 0.0555555522441864, 0.10810810327529907, 0.10526315122842789, 0.1702127605676651, 0.1702127605676651, 0.11428570747375488, 0.16393442451953888, 0.13333332538604736, 0.2380952388048172, 0.25, 0.11428570747375488, 0.19999998807907104, 0.23255813121795654, 0.2702702581882477 ]
H11lAfbCW
true
[ "We show that the learnability of different neural architectures can be characterized directly by computable measures of data complexity." ]
[ "Challenges in natural sciences can often be phrased as optimization problems.", "Machine learning techniques have recently been applied to solve such problems.", "One example in chemistry is the design of tailor-made organic materials and molecules, which requires efficient methods to explore the chemical space.", "We present a genetic algorithm (GA) that is enhanced with a neural network (DNN) based discriminator model to improve the diversity of generated molecules and at the same time steer the GA.", "We show that our algorithm outperforms other generative models in optimization tasks.", "We furthermore present a way to increase interpretability of genetic algorithms, which helped us to derive design principles", "The design of optimal structures under constraints is an important problem spanning multiple domains in the physical sciences.", "Specifically, in chemistry, the design of tailor-made organic materials and molecules requires efficient methods to explore the chemical space.", "Purely experimental approaches are often time consuming and expensive.", "Reliable computational tools can accelerate and guide experimental efforts to find new materials faster.", "We present a genetic algorithm (GA) (Davis, 1991; Devillers, 1996; Sheridan & Kearsley, 1995; Parrill, 1996) for molecular design that is enhanced with two features:", "We presented a hybrid GA and ML-based generative model and demonstrated its application in molecular design.", "The model outperforms literature approaches in generating molecules with desired properties.", "A detailed analysis of the data generated by the genetic algorithm allowed us to interpret the model and learn rules for the design of high performing molecules.", "This human expert design inspired from GA molecules outperformed all molecules created by generative models.", "For computationally more expensive property evaluations, we will extend our approach by the introduction of an on-the-fly trained ML property evaluation method, which will open new ways of solving the inverse design challenge in chemistry and materials sciences.", "Our approach is independent of domain knowledge, thus applicable to design questions in other scientific disciplines beyond chemistry.", "We therefore plan to generalize the GA-D approach to make it a more general concept of generative modelling.", "6 SUPPLEMENTARY INFORMATION Figure S1 shows examples of the molecules optimized in Section 4.4.", "Figure S1 : Molecular modifications resulting in increased penalized logP scores under similarity constraint sim(m, m ) > 0.4, 0.6.", "We show the molecules that resulted in largest score improvement.", "Figures S2-S4 show comparisons between the property distributions observed in molecule data sets such as the ZINC and the GuacaMol data set and property distributions of molecules generated using random SELFIES ( Figure S2 ), GA generated molecules with the penalized logP objective ( Figure S3 ) and GA generated molecules with an objective function which includes logP and QED ( Figure S4 ).", "While the average logP scores of average SELFIES are low, the tail of the distribution reaches to high values, explaining the surprisingly high penalized logP scores shown in Table 1 .", "The QED and weight distributions of molecules optimized using the penalized logP objective significantly differ from the distributions of the ZINC and the GuacaMol data set (see Figure S3 ).", "As soon as the QED score is simultaneously optimized, the distributions of GA generated molecules and molecules 
from the reference data sets become more similar (see Figure S4 ).", "Figure 6 shows that the GA can simultaneously optimize logP and QED.", "Figure S2 : Distributions of", "a) logP,", "b) SA,", "c) QED and", "d) molecular weight for randomly generated SELFIES, molecules from the ZINC data set and molecules from the GuacaMol data set.", "Figure S3 : Distributions of", "a) logP,", "b) SA,", "c) QED and", "d) molecular weight for GA generated SELFIES (penalized logP objective function), molecules from the ZINC data set and molecules from the GuacaMol data set.", "Figure S4 : Distributions of", "a) logP,", "b) SA,", "c) QED and", "d) molecular weight for GA generated SELFIES (logP and QED as objective function), molecules from the ZINC data set and molecules from the GuacaMol data set." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0.060606054961681366, 0.1463414579629898, 0, 0.13793103396892548, 0.06666666269302368, 0.06666666269302368, 0, 0, 0.1621621549129486, 0.07407406717538834, 0.08695651590824127, 0.11428570747375488, 0.07692307233810425, 0.08695651590824127, 0.06666666269302368, 0, 0, 0, 0, 0.03703703358769417, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1lmyRNFvr
true
[ "Tackling inverse design via genetic algorithms augmented with deep neural networks. " ]
[ "Bidirectional Encoder Representations from Transformers (BERT) reach state-of-the-art results in a variety of Natural Language Processing tasks.", "However, understanding of their internal functioning is still insufficient and unsatisfactory.", "In order to better understand BERT and other Transformer-based models, we present a layer-wise analysis of BERT's hidden states.", "Unlike previous research, which mainly focuses on explaining Transformer models by their \\hbox{attention} weights, we argue that hidden states contain equally valuable information.", "Specifically, our analysis focuses on models fine-tuned on the task of Question Answering (QA) as an example of a complex downstream task.", "We inspect how QA models transform token vectors in order to find the correct answer.", "To this end, we apply a set of general and QA-specific probing tasks that reveal the information stored in each representation layer.", "Our qualitative analysis of hidden state visualizations provides additional insights into BERT's reasoning process.", "Our results show that the transformations within BERT go through phases that are related to traditional pipeline tasks.", "The system can therefore implicitly incorporate task-specific information into its token representations.", "Furthermore, our analysis reveals that fine-tuning has little impact on the models' semantic abilities and that prediction errors can be recognized in the vector representations of even early layers.", "In recent months, Transformer models have become more and more prevalent in the field of Natural Language Processing.", "Originally they became popular for their improvements over RNNs in Machine Translation BID36 .", "Now however, with the advent of large models and an equally large amount of pre-training being done, they have Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.", "Copyrights for components of this work owned by others than the author(s) must be honored.", "Abstracting with credit is permitted.", "To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.", "Request permissions from [email protected].", "CIKM '19, November 3rd-7th, 2019, Beijing, China.", "© 2019 Copyright held by the owner/author(s).", "Publication rights licensed to Association for Computing Machinery.", "ACM ISBN 978-x-xxxx-xxxx-x/YY/MM.", ". . $15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn proven adept at solving many of the standard Natural Language Processing tasks. Main subject of this paper is BERT BID8 , arguably the most popular of the recent Transformer models and the first to display significant improvements over previous state-of-the-art models in a number of different benchmarks and tasks.Problem of black box models. Deep Learning models achieve increasingly impressive results across a number of different domains, whereas their application to real-world tasks has been moving somewhat more slowly. One major impediment lies in the lack of transparency, reliability and prediction guarantees in these largely black box models.While Transformers are commonly believed to be moderately interpretable through the inspection of their attention values, current research suggests that this may not always be the case BID15 . 
This paper takes a different approach to the interpretation of said Transformer Networks. Instead of evaluating attention values, our approach examines the hidden states between encoder layers directly. There are multiple questions this paper will address:(1) Do Transformers answer questions decompositionally, in a similar manner to humans? (2) Do specific layers in a multi-layer Transformer network solve different tasks? (3) What influence does fine-tuning have on a network's inner state? (4) Can an evaluation of network layers help come to a conclusion on why and how a network failed to predict a correct answer?We discuss these questions on the basis of fine-tuned models on standard QA datasets. We choose the task of Question Answering as an example of a complex downstream task that, as this paper will show, requires solving a multitude of other Natural Language Processing tasks. Additionally, it has been shown that other NLP tasks can be successfully framed as QA tasks BID22 , therefore our analysis should translate to these tasks as well. While this work focuses on the BERT architecture, we perform preliminary tests on the small GPT-2 model BID28 as well, which yield similar results.", "Training Results.", "TAB3 shows the evaluation results of our best models.", "Accuracy on the SQuAD task is close to human performance, indicating that the model can fulfill all sub-tasks required to answer SQuAD's questions.", "As expected the tasks derived from HotpotQA prove much more challenging, with the distractor setting being the most difficult to solve.", "Unsurprisingly too, bAbI was easily solved by both BERT and GPT-2.", "While GPT-2 performs significantly worse in the more difficult tasks of SQuAD and HotpotQA, it does considerably better on bAbi reducing the validation error to nearly 0.", "Most of BERT's error in the bAbI multi-task setting comes from tasks 17 and 19.", "Both of these tasks require positional or geometric reasoning, thus it is reasonable to assume that this is a skill where GPT-2 improves on BERT's reasoning capabilities.Presentation of Analysis Results.", "The qualitative analysis of vector transformations reveals a range of recurring patterns.", "In the following, we present these patterns by two representative samples from the SQuAD and bAbI task dataset described in TAB1 .", "Examples from HotpotQA can be found in the supplementary material as they require more space due to the larger context.", "Results from probing tasks are displayed in FIG1 .", "We compare results in macro-averaged F1 over all network layers.", "FIG1 shows results from three models of BERT-base with twelve layers: Fine-tuned on SQuAD,on bAbI tasks and without fine-tuning.", "FIG2 reports results of two models based on BERT-large with 24 layers: Fine-tuned on HotpotQA and without fine-tuning.", "Our work reveals important findings about the inner functioning of Transformer networks.", "The impact of these findings and how future work can build upon them is described in the following: CIKM '19, November 3rd-7th, 2019, Beijing, China.Anon.", "Interpretability.", "The qualitative analysis of token vectors reveals that there is indeed interpretable information stored within the hidden states of Transformer models.", "This information can be used to identify misclassified examples and model weaknesses.", "It also provides clues about which parts of the context the model considered important for answering a question -a crucial part of decision legitimisation.", "We leave the development of methods to 
further process this information for future work.Transferability.", "We further show that lower layers might be more applicable to certain problems than later ones.", "For a Transfer Learning task, this means layer depth should be chosen individually depending on the task at hand.", "We also suggest further work regarding skip connections in Transformer layers to examine whether direct information transfer between non-adjacent layers (that solve different tasks) can be of advantage.Modularity.", "Our findings support the hypothesis that not only do different phases exist in Transformer networks, but that specific layers seem to solve different problems.", "This hints at a kind of modularity that can potentially be exploited in the training process.", "For example, it could be beneficial to fit parts of the network to specific tasks in pre-training, instead of using an end-to-end language model task.Our work aims towards revealing some of the internal processes within Transformer-based models.", "We suggest to direct further research at thoroughly understanding state-of-the-art models and the way they solve downstream tasks, in order to improve on them." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13793103396892548, 0.08695651590824127, 0.12903225421905518, 0.11428570747375488, 0.19354838132858276, 0.14814814925193787, 0.11764705181121826, 0.23076923191547394, 0, 0, 0.10256409645080566, 0.20689654350280762, 0.07999999821186066, 0.0312499962747097, 0.07407406717538834, 0, 0, 0, 0, 0, 0, 0, 0.06542056053876877, 0.0952380895614624, 0, 0, 0, 0.10526315122842789, 0.14814814925193787, 0.04878048226237297, 0.08695651590824127, 0.0624999962747097, 0.06451612710952759, 0.09999999403953552, 0.1818181723356247, 0.06451612710952759, 0.06896550953388214, 0.1666666567325592, 0.10526315122842789, 0.1875, 0, 0.05882352590560913, 0.14814814925193787, 0.0714285671710968, 0, 0.20000000298023224, 0.11764705181121826, 0.1428571343421936, 0.08695651590824127, 0.11428570747375488 ]
SygMXE2vAE
true
[ "We investigate hidden state activations of Transformer Models in Question Answering Tasks." ]
[ "We propose a general deep reinforcement learning method and apply it to robot manipulation tasks.", "Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, mainly previously unsolved.", "We train visuomotor policies end-to-end to learn a direct mapping from RGB camera inputs to joint velocities.", "Our experiments indicate that our reinforcement and imitation approach can solve contact-rich robot manipulation tasks that neither the state-of-the-art reinforcement nor imitation learning method can solve alone.", "We also illustrate that these policies achieved zero-shot sim2real transfer by training with large visual and dynamics variations.", "Recent advances in deep reinforcement learning (RL) have performed very well in several challenging domains such as video games BID25 and Go .", "For robotics, RL in combination with powerful function approximators provides a general framework for designing sophisticated controllers that would be hard to handcraft otherwise.", "Yet, despite significant leaps in other domains the application of deep RL to control and robotic manipulation has proven challenging.", "While there have been successful demonstrations of deep RL for manipulation (e.g. BID26 BID31 ) and also noteworthy applications on real robotic hardware (e.g. BID48 there have been very few examples of learned controllers for sophisticated tasks even in simulation.Robotics exhibits several unique challenges.", "These include the need to rely on multi-modal and partial observations from noisy sensors, such as cameras.", "At the same time, realistic tasks often come with a large degree of variation (visual appearance, position, shapes, etc.) posing significant generalization challenges.", "Training on real robotics hardware can be daunting due to constraints on the amount of training data that can be collected in reasonable time.", "This is typically much less than the millions of frames needed by modern algorithms.", "Safety considerations also play an important role, as well as the difficulty of accessing information about the state of the environment (like the position of an object) e.g. 
to define a reward.", "Even in simulation when perfect state information and large amounts of training data are available, exploration can be a significant challenge.", "This is partly due to the often high-dimensional and continuous action space, but also due to the difficulty of designing suitable reward functions.In this paper, we present a general deep reinforcement learning method that addresses these issues and that can solve a wide range of robot arm manipulation tasks directly from pixels, most of which have not been solved previously.", "Our key insight is", "1) to reduce the difficulty of exploration in continuous domains by leveraging a handful of human demonstrations;", "2) several techniques to stabilize the learning of complex manipulation policies from vision; and", "3) to improve generalization by increasing the diversity of the training conditions.", "As a result, the trained policies work well under significant variations of system dynamics, object appearances, task lengths, etc.", "We ground these policies in the real world, demonstrating zero-shot transfer from simulation to real hardware.We develop a new method to combine imitation learning with reinforcement learning.", "Our method requires only a small number of human demonstrations to dramatically simplify the exploration problem.", "It uses demonstration data in two ways: first, it uses a hybrid reward that combines sparse environment reward with imitation reward based on Generative Adversarial Imitation Learning (Ho Figure 1: Our proposal of a principled robot learning pipeline. We used 3D motion controllers to collect human demonstrations of a task. Our reinforcement and imitation learning model leveraged these demonstrations to facilitate learning in a simulated physical engine. We then performed sim2real transfer to deploy the learned visuomotor policy to a real robot.& Ermon, 2016), which produces more robust controllers; second, it uses demonstration as a curriculum to initiate training episodes along demonstration trajectories, which facilitates the agent to reach new states and solve longer tasks.", "As a result, it solves dexterous manipulation tasks that neither the state-of-the-art reinforcement learning nor imitation learning method can solve alone.Previous RL-based robot manipulation policies BID26 BID31 ) largely rely on low-level states as input, or use severely limited action spaces that ignore the arm and instead learn Cartesian control of a simple gripper.", "This limits the ability of these methods to represent and solve more complex tasks (e.g., manipulating arbitrary 3D objects) and to deploy in real environments where the privileged state information is unavailable.", "Our method learns an end-to-end visuomotor policy that maps RGB camera observations to joint space control over the full 9-DoF arm (6 arm joints plus 3 actuated fingers).To", "sidestep the constraints of training on real hardware we embrace the sim2real paradigm which has recently shown promising results BID14 BID35 . Through", "the use of a physics engine and high-throughput RL algorithms, we can simulate parallel copies of a robot arm to perform millions of complex physical interactions in a contact-rich environment while eliminating the practical concerns of robot safety and system reset. 
Furthermore", ", we can, during training, exploit privileged information about the true system state with several new techniques, including learning policy and value in separate modalities, an object-centric GAIL discriminator, and auxiliary tasks for visual modules. These techniques", "stabilize and speed up policy learning from pixels.Finally, we diversify training conditions such as visual appearance as well as e.g. the size and shape of objects. This improves both", "generalization with respect to different task conditions as well as transfer from simulation to reality.To demonstrate our method, we use the same model and the same algorithm for visuomotor control of six diverse robot arm manipulation tasks. Combining reinforcement", "and imitation, our policies solve the tasks that the state-of-the-art reinforcement and imitation learning cannot solve and outperform human demonstrations. Our approach sheds light", "on a principled deep visuomotor learning pipeline illustrated in Fig. 1 , from collecting real-world human demonstration to learning in simulation, and back to real-world deployment via sim2real policy transfer.", "We have shown that combining reinforcement and imitation learning considerably improves the agents' ability to solve challenging dexterous manipulation tasks from pixels.", "Our proposed method sheds light on the three stages of a principled pipeline for robot skill learning: first, we collected a small amount of demonstration data to simplify the exploration problem; second, we relied on physical simulation to perform large-scale distributed robot training; and third, we performed sim2real transfer for real-world deployment.", "In future work, we seek to improve the sample efficiency of the learning method and to leverage real-world experience to close the reality gap for policy transfer." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.5, 0.24242423474788666, 0.13793103396892548, 0.4571428596973419, 0.06451612710952759, 0.1764705777168274, 0.05405404791235924, 0.1818181723356247, 0.11320754140615463, 0.19999998807907104, 0.05405404791235924, 0.05882352590560913, 0, 0.05128204822540283, 0.05882352590560913, 0.27272728085517883, 0, 0.06896550953388214, 0.4444444477558136, 0.0833333283662796, 0, 0.3243243098258972, 0.06896550953388214, 0.1599999964237213, 0.2539682388305664, 0.22727271914482117, 0.04878048226237297, 0, 0.1702127605676651, 0.12244897335767746, 0.19512194395065308, 0.2857142686843872, 0.3636363446712494, 0.20512820780277252, 0.5714285373687744, 0.1071428507566452, 0.1666666567325592 ]
HJWGdbbCW
true
[ "combine reinforcement learning and imitation learning to solve complex robot manipulation tasks from pixels" ]
[ "Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks.", "However, an ensemble's cost for both training and testing increases linearly with the number of networks.\n", "In this paper, we propose BatchEnsemble, an ensemble method whose computational and memory costs are significantly lower than typical ensembles.", "BatchEnsemble achieves this by defining each weight matrix to be the Hadamard product of a shared weight among all ensemble members and a rank-one matrix per member.", "Unlike ensembles, BatchEnsemble is not only parallelizable across devices, where one device trains one member, but also parallelizable within a device, where multiple ensemble members are updated simultaneously for a given mini-batch.", "Across CIFAR-10, CIFAR-100, WMT14 EN-DE/EN-FR translation, and contextual bandits tasks, BatchEnsemble yields competitive accuracy and uncertainties as typical ensembles; the speedup at test time is 3X and memory reduction is 3X at an ensemble of size 4.", "We also apply BatchEnsemble to lifelong learning, where on Split-CIFAR-100, BatchEnsemble yields comparable performance to progressive neural networks while having a much lower computational and memory costs.", "We further show that BatchEnsemble can easily scale up to lifelong learning on Split-ImageNet which involves 100 sequential learning tasks.", "Ensembling is one of the oldest tricks in machine learning literature (Hansen & Salamon, 1990) .", "By combining the outputs of several models, an ensemble can achieve better performance than any of its members.", "Many researchers demonstrate that a good ensemble is one where the ensemble's members are both accurate and make independent errors (Perrone & Cooper, 1992; Maclin & Opitz, 1999) .", "In neural networks, SGD (Bottou, 2003) and its variants (Kingma & Ba, 2014) are the most common optimization algorithm.", "The random noise from sampling mini-batches of data in SGD-like algorithms and random initialization of the deep neural networks, combined with the fact that there is a wide variety of local minima solutions in high dimensional optimization problem (Kawaguchi, 2016; Ge et al., 2015) , results in the following observation: deep neural networks trained with different random seeds can converge to very different local minima although they share similar error rates.", "One of the consequence is that neural networks trained with different random seeds will usually not make all the same errors on the test set, i.e. they may disagree on a prediction given the same input even if the model has converged.", "Ensembles of neural networks benefit from the above observation to achieve better performance by averaging or majority voting on the output of each ensemble member (Xie et al., 2013; Huang et al., 2017) .", "It is shown that ensembles of models perform at least as well as its individual members and diverse ensemble members lead to better performance (Krogh & Vedelsby, 1995) .", "More recently, Lakshminarayanan et al. 
(2017) showed that deep ensembles give reliable predictive uncertainty estimates while remaining simple and scalable.", "A further study confirms that deep ensembles generally achieves the best performance on out-of-distribution uncertainty benchmarks (Ovadia et al., 2019) compared to other methods such as MC-dropout (Gal & Ghahramani, 2015) .", "Despite their success on benchmarks, ensembles in practice are limited due to their expensive computational and memory costs, which increase linearly with the ensemble size in both training and testing.", "Computation-wise, each ensemble member requires a separate neural network forward pass of its inputs.", "Memory-wise, each ensemble member requires an independent copy of neural network weights, each up to millions (sometimes billions) of parameters.", "This memory requirement also makes many tasks beyond supervised learning prohibitive.", "For example, in lifelong learning, a natural idea is to use a separate ensemble member for each task, adaptively growing the total number of parameters by creating a new independent set of weights for each new task.", "No previous work achieves competitive performance on lifelong learning via ensemble methods, as memory is a major bottleneck.", "Our contribution: In this paper, we aim to address the computational and memory bottleneck by building a more parameter efficient ensemble model: BatchEnsemble.", "We achieve this goal by exploiting a novel ensemble weight generation mechanism: the weight of each ensemble member is generated by the Hadamard product between:", "a. one shared weight among all ensemble members.", "b. one rank-one matrix that varies among all members, which we refer to as fast weight in the following sections.", "Figure 1 compares testing and memory cost between BatchEnsemble and naive ensemble.", "Unlike typical ensembles, BatchEnsemble is mini-batch friendly, where it is not only parallelizable across devices like typical ensembles but also parallelizable within a device.", "Moreover, it incurs only minor memory overhead because a large number of weights are shared across ensemble members.", "Empirically, we show that BatchEnsemble has the best trade-off among accuracy, running time, and memory on several deep learning architectures and learning tasks: CIFAR-10/100 classification with ResNet32 (He et al., 2016) and WMT14 EN-DE/EN-FR machine translation with Transformer (Vaswani et al., 2017) .", "Additionally, we show that BatchEnsemble is also effective in uncertainty evaluation on contextual bandits.", "Finally, we show that BatchEnsemble can be successfully applied in lifelong learning and scale up to 100 sequential learning tasks without catastrophic forgetting and the need of memory buffer.", "We introduced BatchEnsemble, an efficient method for ensembling and lifelong learning.", "BatchEnsemble can be used to improve the accuracy and uncertainty of any neural network like typical ensemble methods.", "More importantly, BatchEnsemble removes the computation and memory bottleneck of typical ensemble methods, enabling its successful application to not only faster ensembles but also lifelong learning on up to 100 tasks.", "We believe BatchEnsemble has great potential to improve in lifelong learning.", "Our work may serve as a starting point for a new research area." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.3214285671710968, 0.2222222238779068, 0.25, 0.23076923191547394, 0.0714285671710968, 0.23333333432674408, 0.18867923319339752, 0.25531914830207825, 0.1395348757505417, 0.2666666507720947, 0.1090909019112587, 0.12765957415103912, 0.14117646217346191, 0.0923076868057251, 0.17241378128528595, 0.14814814925193787, 0.0833333283662796, 0.13333332538604736, 0.1818181723356247, 0.1904761791229248, 0.260869562625885, 0.05128204822540283, 0.20338982343673706, 0.1304347813129425, 0.19607841968536377, 0.16326530277729034, 0.0555555522441864, 0.1249999925494194, 0.10256409645080566, 0.08163265138864517, 0.08695651590824127, 0.0923076868057251, 0.0476190447807312, 0.290909081697464, 0.5641025304794312, 0.739130437374115, 0.27586206793785095, 0.25641024112701416, 0.04999999701976776 ]
Sklf1yrYDr
true
[ "We introduced BatchEnsemble, an efficient method for ensembling and lifelong learning which can be used to improve the accuracy and uncertainty of any neural network like typical ensemble methods." ]
[ "Reinforcement learning typically requires carefully designed reward functions in order to learn the desired behavior.", "We present a novel reward estimation method that is based on a finite sample of optimal state trajectories from expert demon- strations and can be used for guiding an agent to mimic the expert behavior.", "The optimal state trajectories are used to learn a generative or predictive model of the “good” states distribution.", "The reward signal is computed by a function of the difference between the actual next state acquired by the agent and the predicted next state given by the learned generative or predictive model.", "With this inferred reward function, we perform standard reinforcement learning in the inner loop to guide the agent to learn the given task.", "Experimental evaluations across a range of tasks demonstrate that the proposed method produces superior performance compared to standard reinforcement learning with both complete or sparse hand engineered rewards.", "Furthermore, we show that our method successfully enables an agent to learn good actions directly from expert player video of games such as the Super Mario Bros and Flappy Bird.", "Reinforcement learning (RL) deals with learning the desired behavior of an agent to accomplish a given task.", "Typically, a scalar reward signal is used to guide the agent's behavior and the agent learns a control policy that maximizes the cumulative reward over a trajectory, based on observations.", "This type of learning is referred to as \"model-free\" RL since the agent does not know apriori or learn the dynamics of the environment.", "Although the ideas of RL have been around for a long time BID24 ), great achievements were obtained recently by successfully incorporating deep models into them with the recent success of deep reinforcement learning.", "Some notable breakthroughs amongst many recent works are, the work from BID12 who approximated a Q-value function using as a deep neural network and trained agents to play Atari games with discrete control; who successfully applied deep RL for continuous control agents achieving state of the art; and BID22 who formulated a method for optimizing control policies with guaranteed monotonic improvement.In most RL methods, it is very critical to choose a well-designed reward function to successfully learn a good action policy for performing the task.", "However, there are cases where the reward function required for RL algorithms is not well-defined or is not available.", "Even for a task for which a reward function initially seems to be easily defined, it is often the case that painful hand-tuning of the reward function has to be done to make the agent converge on an optimal behavior.", "This problem of RL defeats the benefits of automated learning.", "In contrast, humans often can imitate instructor's behaviors, at least to some extent, when accomplishing a certain task in the real world, and can guess what actions or states are good for the eventual accomplishment, without being provided with the detailed reward at each step.", "For example, children can learn how to write letters by imitating demonstrations provided by their teachers or other adults (experts).", "Taking inspiration from such scenarios, various methods collectively referred to as imitation learning or learning from experts' demonstrations have been proposed BID21 ) as a relevant technical branch of RL.", "Using these methods, expert demonstrations can be given as input to the learning algorithm.", 
"Inverse reinforcement learning BID15 ; BID1 ; BID28 ), behavior cloning BID20 ), imitation learning BID6 ; BID5 ), and curiosity-based exploration ) are examples of research in this direction.While most of the prior work using expert demonstrations assumes that the demonstration trajectories contain both the state and action information (τ = {(s t )}) to solve the imitation learning problem, we, however, believe that there are many cases among real world environments where action information is not readily available.", "For example, a human teacher cannot tell the student what amount of force to put on each of the fingers when writing a letter.As such, in this work, we propose a reward estimation method that can estimate the underlying reward based only on the expert demonstrations of state trajectories for accomplishing a given task.", "The estimated reward function can be used in RL algorithms in order to learn a suitable policy for the task.", "The proposed method has the advantage of training agents based only on visual observations of experts performing the task.", "For this purpose, it uses a model of the distribution of the expert state trajectories and defines the reward function in a way that it penalizes the agent's behavior for actions that cause it to deviate from the modeled distribution.", "We present two methods with this motivation; a generative model and a temporal sequence prediction model.", "The latter defines the reward function by the similarity between the state predicted by the temporal sequence model trained based on the expert's demonstrations and the currently observed state.", "We present experimental results of the methods on multiple environments and with varied settings of input and output.", "The primary contribution of this paper is in the estimation of the reward function based on state similarity to expert demonstrations, that can be measured even from raw video input.", "In this paper, we proposed two variations of a reward estimation method via state prediction by using state-only trajectories of the expert; one based on an autoencoder-based generative model and one based on temporal sequence prediction using LSTM.", "Both the models were for calculating similarities between actual states and predicted states.", "We compared the methods with conventional reinforcement learning methods in five various environments.", "As overall trends, we found that the proposed method converged faster than using hand-crafted reward in many cases, especially when the expert trajectories were given by humans, and also that the temporal sequence prediction model had better results than the generative model.", "It was also shown that the method could be applied to the case where the demonstration was given by videos.", "However, detailed trends were different for the different environments depending on the complexity of the tasks.", "Neither model of our proposed method was versatile enough to be applicable to every environment without any changes of the reward definition.", "As we saw in the necessity of the energy term of the reward for the reacher task and in the necessity of special handling of the initial position of Mario, the proposed method has a room of improvements especially in modeling global temporal characteristics of trajectories.", "We would like to tackle these problems as future work." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.052631575614213943, 0, 0, 0, 0, 0.05714285373687744, 0, 0, 0, 0, 0.028169013559818268, 0, 0, 0, 0, 0, 0.0624999962747097, 0, 0, 0, 0, 0, 0.05714285373687744, 0, 0, 0, 0.060606058686971664, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HktXuGb0-
true
[ "Reward Estimation from Game Videos" ]
[ "Deep neural networks have achieved impressive performance in handling complicated semantics in natural language, while mostly treated as black boxes.", "To explain how the model handles compositional semantics of words and phrases, we study the hierarchical explanation problem.", "We highlight the key challenge is to compute non-additive and context-independent importance for individual words and phrases.", "We show some prior efforts on hierarchical explanations, e.g. contextual decomposition, do not satisfy the desired properties mathematically, leading to inconsistent explanation quality in different models.", "In this paper, we propose a formal way to quantify the importance of each word or phrase to generate hierarchical explanations.", "We modify contextual decomposition algorithms according to our formulation, and propose a model-agnostic explanation algorithm with competitive performance.", "Human evaluation and automatic metrics evaluation on both LSTM models and fine-tuned BERT Transformer models on multiple datasets show that our algorithms robustly outperform prior works on hierarchical explanations.", "We show our algorithms help explain compositionality of semantics, extract classification rules, and improve human trust of models.", "Recent advances in deep neural networks have led to impressive results on a range of natural language processing (NLP) tasks, by learning latent, compositional vector representations of text data (Peters et al., 2018; Devlin et al., 2018; Liu et al., 2019b) .", "However, interpretability of the predictions given by these complex, \"black box\" models has always been a limiting factor for use cases that require explanations of the features involved in modeling (e.g., words and phrases) (Guidotti et al., 2018; Ribeiro et al., 2016) .", "Prior efforts on enhancing model interpretability have focused on either constructing models with intrinsically interpretable structures (Bahdanau et al., 2015; Liu et al., 2019a) , or developing post-hoc explanation algorithms which can explain model predictions without elucidating the mechanisms by which model works (Mohseni et al., 2018; Guidotti et al., 2018) .", "Among these work, post-hoc explanation has come to the fore as they can operate over a variety of trained models while not affecting predictive performance of models.", "Towards post-hoc explanation, a major line of work, additive feature attribution methods (Lundberg & Lee, 2017; Ribeiro et al., 2016; Binder et al., 2016; Shrikumar et al., 2017) , explain a model prediction by assigning importance scores to individual input variables.", "However, these methods may not work for explaining compositional semantics in natural language (e.g., phrases or clauses), as the importance of a phrase often is non-linear combination of the importance of the words in the phrase.", "Contextual decomposition (CD) (Murdoch et al., 2018) and its hierarchical extension (Singh et al., 2019) go beyond the additive assumption and compute the contribution solely made by a word/phrase to the model prediction (i.e., individual contribution), by decomposing the output variables of the neural network at each layer.", "Using the individual contribution scores so derived, these algorithms generate hierarchical explanation on how the model captures compositional semantics (e.g., stress or negation) in making predictions (see Figure 1 for example).", "(a) Input occlusion assigns a negative score for the word \"interesting\", as the sentiment of the phrase becomes less negative 
after removing \"interesting\" from the original sentence.", "(b) Additive attributions assign importance scores for words \"not\" and \"interesting\" by linearly distributing contribution score of \"not interesting\", exemplified with Shapley Values (Shapley, 1997) .", "Intuitively, only", "(c) Hierarchical explanations highlight the negative compositional effect between the words \"not\" and \"interesting\".", "However, despite contextual decomposition methods have achieved good results in practice, what reveals extra importance that emerge from combining two phrases has not been well studied.", "As a result, prior lines of work on contextual decomposition have focused on exploring model-specific decompositions based on their performance on visualizations.", "We identify the extra importance from combining two phrases can be quantified by studying how the importance of the combined phrase differs from the sum of the importance of the two component phrases on its own.", "Similar strategies have been studied in game theory for quantifying the surplus from combining two groups of players (Driessen, 2013) .", "Following the definition above, the key challenge is to formulate the importance of a phrase on it own, i.e., context independent importance of a phrase.", "However, while contextual decomposition algorithms try to decompose the individual contributions from given phrases for explanation, we show neither of them satisfy this context independence property mathematically.", "To this end, we propose a formal way to quantify the importance of each individual word/phrase, and develop effective algorithms for generating hierarchical explanations based on the new formulation.", "To mathematically formalize and efficiently approximate context independent importance, we formulate N -context independent importance of a phrase, defined as the difference of model output after masking out the phrase, marginalized over all possible N words surrounding the phrase in the sentence.", "We propose two explanation algorithms according to our formulation, namely the Sampling and Contextual Decomposition algorithm (SCD), which overcomes the weakness of contextual decomposition algorithms, and the Sampling and OCclusion algorithm (SOC), which is simple, model-agnostic, and performs competitively against prior lines of algorithms.", "We experiment with both LSTM and fine-tuned Transformer models to evaluate the proposed methods.", "Quantitative studies involving automatic metrics and human evaluation on sentiment analysis and relation extraction tasks show that our algorithms consistently outperform competitors in the quality of explanations.", "Our algorithms manage to provide hierarchical visualization of compositional semantics captured by models, extract classification rules from models, and help users to trust neural networks predictions.", "In summary, our work makes the following contributions: (1) we identify the key challenges in generating post-hoc hierarchical explanations and propose a mathematically sound way to quantify context independent importance of words and phrases for generating hierarchical explanations; (2) we extend previous post-hoc explanation algorithm based on the new formulation of N -context independent importance and develop two effective hierarchical explanation algorithms; and (3) both experiments using automatic evaluation metrics and human evaluation demonstrate that the proposed explanation algorithms consistently outperform the compared methods (with both LSTM and 
Transformer as base models) over several datasets.", "In this work, we identify two desirable properties for informative hierarchical explanations of predictions, namely the non-additivity and context-independence.", "We propose a formulation to quantify context independent importance of words and phrases that satisfies the properties above.", "We revisit the prior line of works on contextual decomposition algorithms, and propose Sampling and Contextual Decomposition (SCD) algorithm.", "We also propose a simple and model agnostic explanation algorithm, namely the Sampling and Occlusion algorithm (SOC).", "Experiments on multiple datasets and models show that our explanation algorithms generate informative hierarchical explanations, help to extract classification rules from models, and enhance human trust of models.", "Table 2 : Phrase-level classification patterns extracted from models.", "We show the results of SCD and SOC respectively for the SST-2 and the TACRED dataset." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05882352590560913, 0.3125, 0.25806450843811035, 0.1428571343421936, 0.2857142686843872, 0.3030303120613098, 0.1538461446762085, 0.25, 0.07843136787414551, 0.145454540848732, 0.1428571343421936, 0.09999999403953552, 0.11999999731779099, 0.17777776718139648, 0.17543859779834747, 0.25531914830207825, 0.15789473056793213, 0.19999998807907104, 0.0714285671710968, 0.04878048226237297, 0.05882352590560913, 0.20512820780277252, 0.11428570747375488, 0.1666666567325592, 0.1428571343421936, 0.3255814015865326, 0.20000000298023224, 0.2448979616165161, 0.13793103396892548, 0.1463414579629898, 0.307692289352417, 0.17977528274059296, 0.23529411852359772, 0.3030303120613098, 0.24242423474788666, 0.32258063554763794, 0.24390242993831635, 0, 0.2857142686843872 ]
BkxRRkSKwr
true
[ "We propose measurement of phrase importance and algorithms for hierarchical explanation of neural sequence model predictions" ]
[ "Stochastic gradient descent (SGD) with stochastic momentum is popular in nonconvex stochastic optimization and particularly for the training of deep neural networks.", "In standard SGD, parameters are updated by improving along the path of the gradient at the current iterate on a batch of examples, where the addition of a ``momentum'' term biases the update in the direction of the previous change in parameters.", "In non-stochastic convex optimization one can show that a momentum adjustment provably reduces convergence time in many settings, yet such results have been elusive in the stochastic and non-convex settings.", "At the same time, a widely-observed empirical phenomenon is that in training deep networks stochastic momentum appears to significantly improve convergence time, variants of it have flourished in the development of other popular update methods, e.g. ADAM, AMSGrad, etc.", "Yet theoretical justification for the use of stochastic momentum has remained a significant open question.", "In this paper we propose an answer: stochastic momentum improves deep network training because it modifies SGD to escape saddle points faster and, consequently, to more quickly find a second order stationary point.", "Our theoretical results also shed light on the related question of how to choose the ideal momentum parameter--our analysis suggests that $\\beta \\in [0,1)$ should be large (close to 1), which comports with empirical findings.", "We also provide experimental findings that further validate these conclusions.", "SGD with stochastic momentum has been a de facto algorithm in nonconvex optimization and deep learning.", "It has been widely adopted for training machine learning models in various applications.", "Modern techniques in computer vision (e.g. Krizhevsky et al. (2012) ; He et al. (2016) ; Cubuk et al. (2018) ; Gastaldi (2017)), speech recognition (e.g. Amodei et al. (2016) ), natural language processing (e.g. Vaswani et al. (2017) ), and reinforcement learning (e.g. Silver et al. (2017) ) use SGD with stochastic momentum to train models.", "The advantage of SGD with stochastic momentum has been widely observed (Hoffer et al. (2017) ; Loshchilov & Hutter (2019) ; Wilson et al. (2017) ).", "Sutskever et al. (2013) demonstrate that training deep neural nets by SGD with stochastic momentum helps achieving in faster convergence compared with the standard SGD (i.e. without momentum).", "The success of momentum makes it a necessary tool for designing new optimization algorithms in optimization and deep learning.", "For example, all the popular variants of adaptive stochastic gradient methods like Adam (Kingma & Ba (2015) ) or AMSGrad (Reddi et al. (2018b) ) include the use of momentum.", "Despite the wide use of stochastic momentum (Algorithm 1) in practice, justification for the clear empirical improvements has remained elusive, as has any mathematical guidelines for actually setting the momentum parameter-it has been observed that large values (e.g. 
β = 0.9) work well in practice.", "It should be noted that Algorithm 1 is the default momentum-method in popular software packages such as PyTorch and Tensorflow.", "1 In this paper we provide a theoretical analysis for SGD with 1: Required:", "Step size parameter η and momentum parameter β.", "2: Init: w0 ∈ R d and m−1 = 0 ∈ R d .", "3: for t = 0 to T do 4:", "Given current iterate wt, obtain stochastic gradient gt := ∇f (wt; ξt).", "In this paper, we identify three properties that guarantee SGD with momentum in reaching a secondorder stationary point faster by a higher momentum, which justifies the practice of using a large value of momentum parameter β.", "We show that a greater momentum leads to escaping strict saddle points faster due to that SGD with momentum recursively enlarges the projection to an escape direction.", "However, how to make sure that SGD with momentum has the three properties is not very clear.", "It would be interesting to identify conditions that guarantee SGD with momentum to have the properties.", "Perhaps a good starting point is understanding why the properties hold in phase retrieval.", "We believe that our results shed light on understanding the recent success of SGD with momentum in non-convex optimization and deep learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.12903225421905518, 0, 0.05128204822540283, 0.043478257954120636, 0.1599999964237213, 0.190476194024086, 0.04651162400841713, 0, 0.07692307233810425, 0.08695651590824127, 0.03999999538064003, 0.0624999962747097, 0.1621621549129486, 0.1428571343421936, 0.054054051637649536, 0.07999999821186066, 0, 0.0833333283662796, 0.23529411852359772, 0, 0.10526315122842789, 0, 0.1428571343421936, 0.3030303120613098, 0.07407406717538834, 0.07999999821186066, 0, 0.0624999962747097 ]
rkeNfp4tPr
true
[ "Higher momentum parameter $\\beta$ helps for escaping saddle points faster" ]
[ "GANs provide a framework for training generative models which mimic a data distribution.", "However, in many cases we wish to train a generative model to optimize some auxiliary objective function within the data it generates, such as making more aesthetically pleasing images.", "In some cases, these objective functions are difficult to evaluate, e.g. they may require human interaction.", "Here, we develop a system for efficiently training a GAN to increase a generic rate of positive user interactions, for example aesthetic ratings.", "To do this, we build a model of human behavior in the targeted domain from a relatively small set of interactions, and then use this behavioral model as an auxiliary loss function to improve the generative model.", "As a proof of concept, we demonstrate that this system is successful at improving positive interaction rates simulated from a variety of objectives, and characterize s", "Generative image models have improved rapidly in the past few years, in part because of the success of Generative Adversarial Networks, or GANs BID2 .", "GANs attempt to train a \"generator\" to create images which mimic real images, by training it to fool an adversarial \"discriminator,\" which attempts to discern whether images are real or fake.", "This is one solution to the difficult problem of learning when we don't know how to write down an objective function for image quality: take an empirical distribution of \"good\" images, and try to match it.Often, we want to impose additional constraints on our goal distribution besides simply matching empirical data.", "If we can write down an objective which reflects our goals (even approximately), we can often simply incorporate this into the loss function to achieve our goals.", "For example, when trying to generate art, we would like our network to be creative and innovative rather than just imitating previous styles, and including a penalty in the loss for producing recognized styles appears to make GANs more creative BID1 .", "Conditioning on image content class, training the discriminator to classify image content as well as making real/fake judgements, and including a loss term for fooling the discriminator on class both allows for targeted image generation and improves overall performance BID9 .However", ", sometimes it is not easy to write an explicit objective that reflects our goals. Often", "the only effective way to evaluate machine learning systems on complex tasks is by asking humans to determine the quality of their results (Christiano et al., 2017, e.g.) or by actually trying them out in the real world. Can we", "incorporate this kind of feedback to efficiently guide a generative model toward producing better results? Can we", "do so without a prohibitively expensive and slow amount of data collection? In this", "paper, we tackle a specific problem of this kind: generating images that cause more positive user interactions. We imagine", "interactions are measured by a generic Positive Interaction Rate (PIR), which could come from a wide variety of sources.For example, users might be asked to rate how aesthetically pleasing an image is from 1 to 5 stars. The PIR could", "be computed as a weighted sum of how frequently different ratings were chosen. Alternatively", ", these images could be used in the background of web pages. We can assess", "user interactions with a webpage in a variety of ways (time on page, clicks, shares, etc.), and summarize these interactions as the PIR. 
In both of these", "tasks, we don't know exactly what features will affect the PIR, and we certainly don't know how to explicitly compute the PIR for an image. However, we can", "empirically determine the quality of an image by actually showing it to users, and in this paper we show how to use a small amount of this data (results on 1000 images) to efficiently tune a generative model to produce images which increase PIR. In this work we", "focus on simulated PIR values as a proof of concept, but in future work we will investigate PIR values from real interactions.", "Overall, our system appears to be relatively successful.", "It can optimize a generative model to produce images which target a wide variety of objectives, ranging from low-level visual features such as colors and early features of VGG to features computed at the top layers of VGG.", "This success across a wide variety of objective functions allows us to be somewhat confident that our system will be able to achieve success in optimizing for real human interactions.Furthermore, the system did not require an inordinate amount of training data.", "In fact, we were able to successfully estimate many different objective functions from only 1000 images, several orders of magnitude fewer than is typically used to train CNNs for vision tasks.", "Furthermore, these images came from a very biased and narrow distribution (samples from our generative model) which is reflective of neither the images that were used to pre-train the Inception model in the PIR estimator, nor the images the VGG model (which produced the simulated objectives) was trained on.", "Our success from this small amount of data suggests that not only will our system be able to optimize for real human interactions, it will be able to do so from a feasible number of training points.These results are exciting -the model is able to approximate apparently complex objective functions from a small amount of data, even though this data comes from a very biased distribution that is unrelated to most the objectives in question.", "But what is really being learned?", "In the case of the color images, it's clear that the model is doing something close to correct.", "However, for the objectives derived from VGG we have no way to really assess whether the model is making the images better or just more adversarial.", "For instance, when we are optimizing for the logit for \"magpie,\" it's almost certainly the case that the result of this optimization will not look more like a magpie to a human, even if VGG does rate the images as more \"magpie-like.\"", "On the other hand, this is not necessarily a failure of the system -it is accurately capturing the objective function it is given.", "What remains to be seen is whether it can capture how background images influence human behavior as well as it can capture the vagaries of deep vision architectures.We believe there are many domains where a system similar to ours could be useful.", "We mentioned producing better webpage backgrounds and making more aesthetic images above, but there are many potential applications for improving GANs with a limited amount of human feedback.", "For example, a model could be trained to produce better music (e.g. 
song skip rates on streaming generated music could be treated as inverse PIRs).", "We have described a system for efficiently tuning a generative image model according to a slow-toevaluate objective function.", "We have demonstrated the success of this system at targeting a variety of objective functions simulated from different layers of a deep vision model, as well as from low-level visual features of the images, and have shown that it can do so from a small amount of data.", "We have quantified some of the features that affect its performance, including the variability of the training PIR data and the number of zeros it contains.", "Our system's success on a wide variety of objectives suggests that it will be able to improve real user interactions, or other objectives which are slow and expensive to evaluate.", "This may have many exciting applications, such as improving machine-generated images, music, or art.", "A OTHER ANALYSES", "Because L P IR is just the expected value of the PIR, by looking at L P IR before and after tuning the generative model, we can tell how well the system thinks it is doing, i.e. how much it estimates that it improved PIR.", "This comparison reveals the interesting pattern that the system is overly pessimistic about its performance.", "In fact, it tends to underestimate its performance by a factor of more than 1.5 (β = 1.67 when regressing change in mean PIR on predicted change in mean PIR, see FIG3 ).", "However, it does so fairly consistently.", "This effect appears to be driven by the system consistently underestimating the (absolute) PIRs, which is probably caused by our change in the softmax temperature between training the PIR estimator and tuning the generative model (which we empirically found improves performance, as noted above).", "This is in contrast to the possible a priori expectation that the model would systematically overestimate its performance, because it is overfitting to an imperfectly estimated objective function.", "Although decreasing the softmax temperature between training and using the PIR obscures this effect, we do see some evidence of this; the more complex objectives (which the system produced lower effect sizes on) seem to both have lower estimated changes in mean PIR and true changes in PIR which are even lower than the estimated ones (see FIG3 ).", "Thus although the system is somewhat aware of its reduced effectiveness with these objectives (as evidenced by the lower estimates of change in mean PIR), it is not reducing its estimates sufficiently to account for the true difficulty of the objectives (as evidenced by the fact that the true change in PIR is even lower than the estimates).", "However, the system was generally still able to obtain positive results on these objectives (see FIG1 )." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1463414579629898, 0.35087719559669495, 0.08695651590824127, 0.12244897335767746, 0.26229506731033325, 0.037735845893621445, 0.12244897335767746, 0.2222222238779068, 0.10958903282880783, 0.11538460850715637, 0.12121211737394333, 0.16393442451953888, 0.08888888359069824, 0.05882352590560913, 0.17391303181648254, 0.04651162400841713, 0.1666666567325592, 0.21212120354175568, 0.1395348757505417, 0.13636362552642822, 0.07547169178724289, 0.1538461446762085, 0.260869562625885, 0.08163265138864517, 0.054054051637649536, 0.2666666507720947, 0.12121211737394333, 0.06779660284519196, 0.17391303181648254, 0.09411764144897461, 0, 0.08888888359069824, 0.2641509473323822, 0.1818181723356247, 0.0416666604578495, 0.27272728085517883, 0.28070175647735596, 0.19230768084526062, 0.35555556416511536, 0.12121211737394333, 0.07999999821186066, 0.17543859779834747, 0.2790697515010834, 0, 0.0615384578704834, 0, 0.09999999403953552, 0, 0.14705881476402283, 0.14814814925193787, 0.10526315122842789, 0.02985074184834957, 0.043478257954120636 ]
HJr4QJ26W
true
[ "We describe how to improve an image generative model according to a slow- or difficult-to-evaluate objective, such as human feedback, which could have many applications, like making more aesthetic images." ]
[ "Semi-supervised learning (SSL) is a study that efficiently exploits a large amount of unlabeled data to improve performance in conditions of limited labeled data.", "Most of the conventional SSL methods assume that the classes of unlabeled data are included in the set of classes of labeled data.", "In addition, these methods do not sort out useless unlabeled samples and use all the unlabeled data for learning, which is not suitable for realistic situations.", "In this paper, we propose an SSL method called selective self-training (SST), which selectively decides whether to include each unlabeled sample in the training process.", "It is also designed to be applied to a more real situation where classes of unlabeled data are different from the ones of the labeled data.", "For the conventional SSL problems which deal with data where both the labeled and unlabeled samples share the same class categories, the proposed method not only performs comparable to other conventional SSL algorithms but also can be combined with other SSL algorithms.", "While the conventional methods cannot be applied to the new SSL problems where the separated data do not share the classes, our method does not show any performance degradation even if the classes of unlabeled data are different from those of the labeled data.", "Recently, machine learning has achieved a lot of success in various fields and well-refined datasets are considered to be one of the most important factors (Everingham et al., 2010; Krizhevsky et al., 2012; BID6 .", "Since we cannot discover the underlying real distribution of data, we need a lot of samples to estimate it correctly (Nasrabadi, 2007) .", "However, creating a large amount of dataset requires a huge amount of time, cost and manpower BID3 .Semi-supervised", "learning (SSL) is a method relieving the inefficiencies in data collection and annotation process, which lies between the supervised learning and unsupervised learning in that both labeled and unlabeled data are used in the learning process (Chapelle et al., 2009; BID3 . It can efficiently", "learn a model from fewer labeled data using a large amount of unlabeled data BID15 . Accordingly, the significance", "of SSL has been studied extensively in the previous literatures BID18 BID5 Kingma et al., 2014; BID4 BID2 . These results suggest that SSL", "can be a useful approach in cases where the amount of annotated data is insufficient.However, there is a recent research discussing the limitations of conventional SSL methods BID3 . They have pointed out that conventional", "SSL algorithms are difficult to be applied to real applications. Especially, the conventional methods assume", "that all the unlabeled data belong to one of the classes of the training labeled data. Training with unlabeled samples whose class", "distribution is significantly different from that of the labeled data may degrade the performance of traditional SSL methods. Furthermore, whenever a new set of data is", "available, they should be trained from the scratch using all the data including out-of-class 1 data.In this paper, we focus on the classification task and propose a deep neural network based approach named as selective self-training (SST) to solve the limitation mentioned above. Unlike the conventional self-training methods", "in (Chapelle et al., 2009) , our algorithm selectively utilizes the unlabeled data for the training. 
To enable learning to select unlabeled data,", "we propose a selection network, which is based on the deep neural network, that decides whether each sample is to be added or not. Different from BID12 , SST does not use the", "classification results for the data selection. Also, we adopt an ensemble approach which is", "similar to the co-training method BID0 ) that utilizes outputs of multiple classifiers to iteratively build a new training dataset. In our case, instead of using multiple classifiers", ", we apply a temporal ensemble method to the selection network. For each unlabeled instance, two consecutive outputs", "of the selection network are compared to keep our training data clean. In addition, we have found that the balance between", "the number of samples per class is quite important for the performance of our network. We suggest a simple heuristics to balance the number", "of selected samples among the classes. By the proposed selection method, reliable samples can", "be added to the training set and uncertain samples including out-of-class data can be excluded.SST is a self-training framework, which iteratively adopts the newly annotated training data (details in Section 2.1). SST is also suitable for the incremental learning which", "is frequently used in many real applications when we need to handle gradually incoming data. In addition, the proposed SST is suitable for lifelong", "learning which makes use of more knowledge from previously acquired knowledge BID10 Carlson et al., 2010; Chen & Liu, 2018) . Since SSL can be learned with labeled and unlabeled data", ", any algorithm for SSL may seem appropriate for lifelong learning. However, conventional SSL algorithms are inefficient when", "out-of-class samples are included in the additional data. SST only add samples having high relevance in-class data", "and is suitable for lifelong learning. The main contributions of the proposed method can be summarized", "as follows:• For the conventional SSL problems, the proposed SST method not only performs comparable to other conventional SSL algorithms but also can be combined with other algorithms.• For the new SSL problems, the proposed SST does not show any performance", "degradation even with the out-of-class data.• SST requires few hyper-parameters and can be easily implemented.• SST is", "more suitable for lifelong learning compared to other SSL algorithms", ".To prove the effectiveness of our proposed method, first, we conduct experiments comparing the classification errors of SST and several other state-of-the-art SSL methods (Laine & Aila, 2016; BID9 Luo et al., 2017; Miyato et al., 2017) in conventional SSL settings. Second, we propose a new experimental setup to investigate whether our method", "is more applicable to realworld situations. The experimental setup in BID3 samples classes among in-classes and out-classes", ". In the experimental setting in this paper, we sample unlabeled instances evenly", "in all classes. (details in Section 6.6 of the supplementary material). 
We evaluate the performance", "of the proposed SST using three public benchmark datasets", ": CIFAR-10, CIFAR-100 BID8 Hinton, 2009), and SVHN (Netzer et al., 2011) .", "We proposed selective self-training (SST) for semi-supervised learning (SSL) problem.", "Unlike conventional methods, SST selectively samples unlabeled data and trains the model with a subset of the dataset.", "Using selection network, reliable samples can be added to the new training dataset.", "In this paper, we conduct two types of experiments.", "First, we experiment with the assumption that unlabeled data are in-class like conventional SSL problems.", "Then, we experiment how SST performs for out-of-class unlabeled data.For the conventional SSL problems, we achieved competitive results on several datasets and our method could be combined with conventional algorithms to improve performance.", "The accuracy of SST is either saturated or not depending on the dataset.", "Nonetheless, SST has shown performance improvements as a number of data increases.", "In addition, the results of the combined experiments of SST and other algorithms show the possibility of performance improvement.For the new SSL problems, SST did not show any performance degradation even if the model is learned from in-class data and out-of-class unlabeled data.", "Decreasing the threshold of the selection network in new SSL problem, performance degrades.", "However, the output of the selection network shows different trends according to in-class and out-of-class.", "By setting a threshold that does not add out-of-class data, SST has prevented the addition of out-of-class samples to the new training dataset.", "It means that it is possible to prevent the erroneous data from being added to the unlabeled dataset in a real environment.", "6 SUPPLEMENTARY MATERIAL" ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.14999999105930328, 0.22857142984867096, 0.380952388048172, 0.13636362552642822, 0.19512194395065308, 0.23076923191547394, 0.2222222238779068, 0.11764705181121826, 0.1538461446762085, 0.11764705181121826, 0.14814814925193787, 0.22857142984867096, 0.0952380895614624, 0.12244897335767746, 0.060606054961681366, 0.277777761220932, 0.14999999105930328, 0.12903225421905518, 0.29999998211860657, 0.1666666567325592, 0.1818181723356247, 0.09302324801683426, 0.10810810327529907, 0.1538461446762085, 0.1538461446762085, 0.19354838132858276, 0.15094339847564697, 0.1904761791229248, 0.20408162474632263, 0.11428570747375488, 0.11764705181121826, 0.2857142686843872, 0.1666666567325592, 0.1666666567325592, 0.06896550953388214, 0.12121211737394333, 0.0555555522441864, 0.12903225421905518, 0.1875, 0.2142857164144516, 0.0624999962747097, 0.13793103396892548, 0.3333333134651184, 0.0624999962747097, 0.0714285671710968, 0.1764705777168274, 0.19607841968536377, 0.1875, 0.12903225421905518, 0.23076923191547394, 0.12903225421905518, 0.1818181723356247, 0.19999998807907104, 0.20512819290161133, 0 ]
SyzrLjA5FQ
true
[ "Our proposed algorithm does not use all of the unlabeled data for the training, and it rather uses them selectively." ]
[ "Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions.", "Conditional generation enables interactive control, but creating new controls often requires expensive retraining.", "In this paper, we develop a method to condition generation without retraining the model.", "By post-hoc learning latent constraints, value functions identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions.", "Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder.", "Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image.", "Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function.", "Generative modeling of complicated data such as images and audio is a long-standing challenge in machine learning.", "While unconditional sampling is an interesting technical problem, it is arguably of limited practical interest in its own right: if one needs a non-specific image (or sound, song, document, etc.) , one can simply pull something at random from the unfathomably vast media databases on the web.", "But that naive approach may not work for conditional sampling (i.e., generating data to match a set of user-specified attributes), since as more attributes are specified, it becomes exponentially less likely that a satisfactory example can be pulled from a database.", "One might also want to modify some attributes of an object while preserving its core identity.", "These are crucial tasks in creative applications, where the typical user desires fine-grained controls BID0 .One", "can enforce user-specified constraints at training time, either by training on a curated subset of data or with conditioning variables. These", "approaches can be effective if there is enough labeled data available, but they require expensive model retraining for each new set of constraints and may not leverage commonalities between tasks. Deep", "latent-variable models, such as Generative Adversarial Networks (GANs; BID8 and Variational Autoencoders (VAEs; BID15 BID26 , learn to unconditionally generate realistic and varied outputs by sampling from a semantically structured latent space. One", "might hope to leverage that structure in creating new conditional controls for sampling and transformations BID2 .Here", ", we show that new constraints can be enforced post-hoc on pre-trained unsupervised generative models. This", "approach removes the need to retrain the model for each new set of constraints, allowing users to more easily define custom behavior. We separate", "the problem into (1) creating an unsupervised model that learns how to reconstruct data from latent embeddings, and (2) leveraging the latent structure exposed in that embedding space as a source of prior knowledge, upon which we can impose behavioral constraints.Our key contributions are as follows:Figure 1: (a) Diagram of latent constraints for a VAE. 
We use one", "critic D attr to predict which regions of the latent space will generate outputs with desired attributes, and another critic D realism to predict which regions have high mass under the marginal posterior, q(z), of the training data. (b) We begin", "by pretraining a standard VAE, with an emphasis on achieving good reconstructions. (c) To train", "the actor-critic pair we use constraint-satisfaction labels, c, to train D to discriminate between encodings of actual data, z ∼ q(z|x), versus latent vectors z ∼ p(z) sampled from the prior or transformed prior samples G(z ∼ p(z), y). Similar", "to", "a Conditional GAN, both G and D operate on a concatenation of z and a binary attribute vector, y, allowing G to learn conditional mappings in latent space. If G is an", "optimizer, a separate attribute discriminator, D attr is trained and the latent vector is optimized to reduce the cost of both D attr and D realism . (d) To sample", "from the intersection of these regions, we use either gradient-based optimization or an amortized generator, G, to shift latent samples from either the prior (z ∼ p(z), sampling) or from the data (z ∼ q(z|x), transformation).• We show that", "it is possible to generate conditionally from an unconditional model, learning a critic function D(z) in latent space and generating high-value samples with either gradient-based optimization or an amortized actor function G(z), even with a nondifferentiable decoder (e.g., discrete sequences).• Focusing on VAEs", ", we address the tradeoff between reconstruction quality and sample quality (without sacrificing diversity) by enforcing a universal \"realism\" constraint that requires samples in latent space to be indistinguishable from encoded data (rather than prior samples).• Because we start", "from a VAE that can reconstruct inputs well, we are able to apply identitypreserving transformations by making the minimal adjustment in latent space needed to satisfy the desired constraints. For example, when", "we adjust a person's expression or hair, the result is still clearly identifiable as the same person (see Figure 5 ). This contrasts with", "pure GAN-based transformation approaches, which often fail to preserve identity.• Zero-shot conditional", "generation. 
Using samples from the", "VAE to generate exemplars, we can learn an actor-critic pair that satisfies user-specified rule-based constraints in the absence of any labeled data.", "We have demonstrated a new approach to conditional generation by constraining the latent space of an unconditional generative model.", "This approach could be extended in a number of ways.One possibility would be to plug in different architectures, including powerful autoregressive decoders or adversarial decoder costs, as we make no assumptions specific to independent likelihoods.", "While we have considered constraints based on implicit density estimation, we could also estimate the constrained distribution directly with an explicit autoregressive model or another variational autoencoder.", "The efficacy of autoregressive priors in VAEs is promising for this approach BID16 .", "Conditional samples could then be obtained by ancestral sampling, and transformations by using gradient ascent to increase the likelihood under the model.", "Active or semisupervised learning approaches could reduce the sample complexity of learning constraints.", "Real-time constraint learning would also enable new applications; it might be fruitful to extend the reward approximation of Section 6 to incorporate user preferences as in BID4 ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.24242423474788666, 0.13793103396892548, 0.2666666507720947, 0.09090908616781235, 0.24390242993831635, 0.31578946113586426, 0.19512194395065308, 0.060606054961681366, 0.13114753365516663, 0.1428571343421936, 0.1875, 0.0624999962747097, 0.1111111044883728, 0.12765957415103912, 0.1666666567325592, 0.1818181723356247, 0.1249999925494194, 0.31578946113586426, 0.20895521342754364, 0.2083333283662796, 0.12903225421905518, 0.1599999964237213, 0.2790697515010834, 0.20512819290161133, 0.2083333283662796, 0.17241379618644714, 0.18518517911434174, 0.21739129722118378, 0.05128204822540283, 0.13793103396892548, 0.190476194024086, 0.21052631735801697, 0.8571428656578064, 0.12244897335767746, 0.1428571343421936, 0.13793103396892548, 0.2222222238779068, 0.1428571343421936, 0.1904761791229248 ]
Sy8XvGb0-
true
[ "A new approach to conditional generation by constraining the latent space of an unconditional generative model." ]
[ "Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data.", "These models have obtained notable gains in accuracy across many NLP tasks.", "However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption.", "As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware.", "In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP.", "Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.", "Advances in techniques and hardware for training deep neural networks have recently enabled impressive accuracy improvements across many fundamental NLP tasks BID1 BID12 BID8 BID18 , with the most computationally-hungry models obtaining the highest scores BID13 BID7 BID14 BID16 .", "As a result, training a state-of-the-art model now requires substantial computational resources which demand considerable energy, along with the associated financial and environmental costs.", "Research and development of new models multiplies these costs by thousands of times by requiring retraining to experiment with model architectures and hyperparameters.", "Whereas a decade ago most NLP models could be trained and developed on a commodity laptop or server, many now require multiple instances of specialized hardware such as GPUs or TPUs, therefore limiting access to these highly accurate models on the basis of finances.", "Even when these expensive computational resources are available, model training also incurs a substantial cost to the environment due to the energy required to power this hardware for weeks or months at a time.", "Though some of this energy may come from renewable or carbon credit-offset resources, the high energy demands of these models are still a concern since (1) energy is not currently derived from carbon-neural sources in many locations, and (2) when renewable energy is available, it is still limited to the equipment we have to produce and store it, and energy spent training a neural network might better be allocated to heating a family's home.", "It is estimated that we must cut carbon emissions by half over the next decade to deter escalating rates of natural disaster, and based on the estimated CO 2 emissions listed in TAB1 , model training and development likely make up a substantial portion of the greenhouse gas emissions attributed to many NLP researchers.To heighten the awareness of the NLP community to this issue and promote mindful practice and policy, we characterize the dollar cost and carbon emissions that result from training the neural networks at the core of many state-of-the-art NLP models.", "We do this by estimating the kilowatts of energy required to train a variety of popular off-the-shelf NLP models, which can be converted to approximate carbon emissions and electricity costs.", "To estimate the even greater resources required to transfer an existing model to a new task or develop new models, we perform a case study of the full computational resources required for the development 
and tuning of a recent state-of-the-art NLP pipeline BID17 .", "We conclude with recommendations to the community based on our findings, namely: (1) Time to retrain and sensitivity to hyperparameters should be reported for NLP machine learning models; (2) academic researchers need equitable access to computational resources; and (3) researchers should prioritize developing efficient models and hardware." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.260869562625885, 0.1621621549129486, 0.13636362552642822, 0.2857142686843872, 0.4150943458080292, 0.1395348757505417, 0.2857142686843872, 0.1249999925494194, 0.13333332538604736, 0.15625, 0.2181818187236786, 0.2926829159259796, 0.21739129722118378, 0.2641509473323822, 0.16949151456356049, 0.1846153736114502 ]
rJg6Zh5Xer
true
[ "We quantify the energy cost in terms of money (cloud credits) and carbon footprint of training recently successful neural network models for NLP. Costs are high." ]
[ "Many models based on the Variational Autoencoder are proposed to achieve disentangled latent variables in inference.", "However, most current work is focusing on designing powerful disentangling regularizers, while the given number of dimensions for the latent representation at initialization could severely influence the disentanglement.", "Thus, a pruning mechanism is introduced, aiming at automatically seeking for the intrinsic dimension of the data while promoting disentangled representations.", "The proposed method is validated on MPI3D and MNIST to be advancing state-of-the-art methods in disentanglement, reconstruction, and robustness.", "The code is provided on the https://github.com/WeyShi/FYP-of-Disentanglement.", "To advance disentanglement, models based on the Variational Autoencoder (VAE) (Kingma and Welling, 2014) are proposed in terms of additional disentangling regularizers.", "However, in this paper, we introduce an orthogonal mechanism that is applicable to most state-of-theart models, resulting in higher disentanglement and robustness for model configurationsespecially the choice of dimensionality for the latent representation.", "Intuitively, both excessive and deficient latent dimensions can be detrimental to achieving the best disentangled latent representations.", "For excessive dimensions, powerful disentangling regularizers, like the β-VAE (Higgins et al., 2017) , can force information to be split across dimensions, resulting in capturing incomplete features.", "On the other hand, having too few dimensions inevitably leads to an entangled representation, such that each dimension could capture enough information for the subsequent reconstruction.", "A pruning mechanism that is complementary to most current state-of-the-art VAE-based disentangling models is introduced and validated on MPI3D and MNIST.", "The approximated L 0 regularization facilitates the model to capture better-disentangled representations with optimal size and increases the robustness to initialization.", "Moreover, with the same hyperparameters, the model approaches the intrinsic dimension for several datasets including MNIST and MPI3D, even with an extra-large number of dimensions at initialization.", "Even given the intrinsic dimension, the PVAE still outperforms other SOTA methods in terms of disentanglement and reconstruction." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.27586206793785095, 0.10256409645080566, 0.3030303120613098, 0.25806450843811035, 0.1904761791229248, 0.05714285373687744, 0.1395348757505417, 0.13793103396892548, 0.05128204822540283, 0.15789473056793213, 0.1249999925494194, 0.1875, 0.21621620655059814, 0.06666666269302368 ]
HJg8stY2oB
true
[ "The Pruning VAE is proposed to search for disentangled variables with intrinsic dimension." ]
[ "We explore the properties of byte-level recurrent language models.", "When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts.", "Specifically, we find a single unit which performs sentiment analysis.", "These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank.", "They are also very data efficient.", "When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets.", "We also demonstrate the sentiment unit has a direct influence on the generative process of the model.", "Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.", "Representation learning BID1 ) plays a critical role in many modern machine learning systems.", "Representations map raw data to more useful forms and the choice of representation is an important component of any application.", "Broadly speaking, there are two areas of research emphasizing different details of how to learn useful representations.The supervised training of high-capacity models on large labeled datasets is critical to the recent success of deep learning techniques for a wide range of applications such as image classification BID25 ), speech recognition ), and machine translation ).", "Analysis of the task specific representations learned by these models reveals many fascinating properties BID58 ).", "Image classifiers learn a broadly useful hierarchy of feature detectors re-representing raw pixels as edges, textures, and objects BID55 ).", "In the field of computer vision, it is now commonplace to reuse these representations on a broad suite of related tasks -one of the most successful examples of transfer learning to date BID42 ).There", "is also a long history of unsupervised representation learning BID41 ). Much", "of the early research into modern deep learning was developed and validated via this approach BID14 ; BID17 ; BID49 ; BID4 ; BID27 ). Unsupervised", "learning is promising due to its ability to scale beyond the small subsets and subdomains of data that can be cleaned and labeled given resource, privacy, or other constraints. This advantage", "is also its difficulty. While supervised", "approaches have clear objectives that can be directly optimized, unsupervised approaches rely on proxy tasks such as reconstruction, density estimation, or generation, which do not directly encourage useful representations for specific tasks. As a result, much", "work has gone into designing objectives, priors, and architectures meant to encourage the learning of useful representations. We refer readers", "to for a detailed review.Despite these difficulties, there are notable applications of unsupervised learning. Pre-trained word", "vectors are a vital part of many modern NLP systems BID5 ). These representations", ", learned by modeling word co-occurrences, increase the data efficiency and generalization capability of NLP systems BID45 ; BID3 ). Topic modelling can", "also discover factors within a corpus of text which align to human interpretable concepts such as \"art\" or \"education\" BID2 ).How to learn representations", "of phrases, sentences, and documents is an open area of research. 
Inspired by the success of word", "vectors, BID23 propose skip-thought vectors, a method of training a sentence encoder by predicting the preceding and following sentence. The representation learned by this", "objective performs competitively on a broad suite of evaluated tasks. More advanced training techniques", "such as layer normalization BID0 ) further improve results. However, skip-thought vectors are", "still outperformed by supervised models which directly optimize the desired performance metric on a specific dataset. This is the case for both text classification", "tasks, which measure whether a specific concept is well encoded in a representation, and more general semantic similarity tasks. This occurs even when the datasets are relatively", "small by modern standards, often consisting of only a few thousand labeled examples.In contrast to learning a generic representation on one large dataset and then evaluating on other tasks/datasets, BID6 proposed using similar unsupervised objectives such as sequence autoencoding and language modeling to first pretrain a model on a dataset and then finetune it for a given task. This approach outperformed training the same model", "from random initialization and achieved state of the art on several text classification datasets. Combining word-level language modelling of a dataset", "with topic modelling and fitting a small neural network feature extractor on top has also achieved strong results on document level sentiment analysis BID7 ).Considering this, we hypothesize two effects may be combining", "to result in the weaker performance of purely unsupervised approaches. Skip-thought vectors were trained on a corpus of books. But some", "of the classification tasks they are evaluated on, such", "as sentiment analysis of reviews of consumer goods, do not have much overlap with the text of novels. We propose this distributional issue, combined with the limited", "capacity of current models, results in representational underfitting. Current generic distributed sentence representations may be very", "lossy -good at capturing the gist, but poor with the precise semantic or syntactic details which are critical for applications.The experimental and evaluation protocols may be underestimating the quality of unsupervised representation learning for sentences and documents due to certain seemingly insignificant design decisions. BID12 also raises concern about current evaluation tasks in their", "recent work which provides a thorough survey of architectures and objectives for learning unsupervised sentence representations -including the above mentioned skip-thoughts.In this work, we test whether this is the case. We focus in on the task of sentiment analysis and attempt to learn", "an unsupervised representation that accurately contains this concept. BID37 showed that word-level recurrent language modelling supports", "the learning of useful word vectors. We are interested in pushing this line of work to learn representations", "of not just words but arbitrary scales of text with no distinction between sub-word, word, phrase, sentence, or document-level structure. Recent work has shown that traditional NLP task such as Named Entity Recognition", "and Part-of-Speech tagging can be performed this way by processing text as a byte sequence BID10 ). Byte level language modelling is a natural choice due to its simplicity and generality", ". 
We are also interested in evaluating this approach as it is not immediately clear whether", "such a low-level training objective supports the learning of high-level representations. We train on a very large corpus picked to have a similar distribution as our task of interest", ". We also benchmark on a wider range of tasks to quantify the sensitivity of the learned representation", "to various degrees of out-of-domain data and tasks.", "It is an open question why our model recovers the concept of sentiment in such a precise, disentangled, interpretable, and manipulable way.", "It is possible that sentiment as a conditioning feature has strong predictive capability for language modelling.", "This is likely since sentiment is such an important component of a review.", "Previous work analyzing LSTM language models showed the existence of interpretable units that indicate position within a line or presence inside a quotation BID20 ).", "In many ways, the sentiment unit in this model is just a scaled up example of the same phenomena.", "The update equation of an LSTM could play a role.", "The element-wise operation of its gates may encourage axis-aligned representations.", "Models such as word2vec have also been observed to have small subsets of dimensions strongly associated with specific tasks BID28 ).Our", "work highlights the sensitivity of learned representations to the data distribution they are trained on. The", "results make clear that it is unrealistic to expect a model trained on a corpus of books, where the two most common genres are Romance and Fantasy, to learn an encoding which preserves the exact sentiment of a review. Likewise", ", it is unrealistic to expect a model trained on Amazon product reviews to represent the precise semantic content of a caption of an image or a video.There are several promising directions for future work highlighted by our results. The observed", "performance plateau, even on relatively similar domains, suggests improving the representation model both in terms of architecture and size. Since our model", "operates at the byte-level, hierarchical/multitimescale extensions could improve the quality of representations for longer documents. The sensitivity", "of learned representations to their training domain could be addressed by training on a wider mix of datasets with better coverage of target tasks. Finally, our work", "encourages further research into language modelling as it demonstrates that the standard language modelling objective with no modifications is sufficient to learn high-quality representations." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4000000059604645, 0.17142856121063232, 0, 0.06896550953388214, 0, 0.06666666269302368, 0.07692307233810425, 0, 0, 0.06666666269302368, 0.13114753365516663, 0.29629629850387573, 0.12903225421905518, 0.09756097197532654, 0.08695651590824127, 0.05882352590560913, 0.04999999701976776, 0, 0.09090908616781235, 0.12903225421905518, 0.0714285671710968, 0.1599999964237213, 0.05882352590560913, 0.22857142984867096, 0.07692307233810425, 0.06451612710952759, 0.07999999821186066, 0, 0.1764705777168274, 0.054054051637649536, 0.0615384578704834, 0.19354838132858276, 0, 0.06451612710952759, 0.09999999403953552, 0.11764705181121826, 0.14814814925193787, 0.032786883413791656, 0.11999999731779099, 0.1538461446762085, 0.2142857164144516, 0.09302325546741486, 0.09999999403953552, 0, 0.10810810327529907, 0.07692307233810425, 0.10526315122842789, 0.060606054961681366, 0.07407406717538834, 0.08695651590824127, 0.17142856121063232, 0.06896550953388214, 0.0952380895614624, 0.1904761791229248, 0.1249999925494194, 0.1538461446762085, 0.08695651590824127, 0.04081632196903229, 0.0624999962747097, 0.14814814925193787, 0.17142856121063232, 0.24242423474788666 ]
SJ71VXZAZ
true
[ "Byte-level recurrent language models learn high-quality domain specific representations of text." ]
[ " Discrete latent-variable models, while applicable in a variety of settings, can often be difficult to learn.", "Sampling discrete latent variables can result in high-variance gradient estimators for two primary reasons:", "1) branching on the samples within the model, and", "2) the lack of a pathwise derivative for the samples.", "While current state-of-the-art methods employ control-variate schemes for the former and continuous-relaxation methods for the latter, their utility is limited by the complexities of implementing and training effective control-variate schemes and the necessity of evaluating (potentially exponentially) many branch paths in the model.", "Here, we revisit the Reweighted Wake Sleep (RWS; Bornschein and Bengio, 2015) algorithm, and through extensive evaluations, show that it circumvents both these issues, outperforming current state-of-the-art methods in learning discrete latent-variable models.", "Moreover, we observe that, unlike the Importance-weighted Autoencoder, RWS learns better models and inference networks with increasing numbers of particles, and that its benefits extend to continuous latent-variable models as well.", "Our results suggest that RWS is a competitive, often preferable, alternative for learning deep generative models.", "Learning deep generative models with discrete latent variables opens up an avenue for solving a wide range of tasks including tracking and prediction BID28 , clustering BID31 , model structure learning BID0 , speech modeling BID16 , topic modeling BID1 , language modeling BID4 , and concept learning BID17 BID20 .", "Furthermore, recent deep-learning approaches addressing counting BID6 , attention BID37 , adaptive computation time BID9 , and differentiable data structures BID9 BID11 , underscore the importance of models with conditional branching induced by discrete latent variables.Current state-of-the-art methods optimize the evidence lower bound (ELBO) based on the importance weighted autoencoder (IWAE) BID3 by using either reparameterization BID19 BID32 , continuous relaxations of the discrete latents BID15 or the REINFORCE method BID36 with control variates BID24 BID25 BID12 BID35 BID8 .Despite", "the effective large-scale learning made possible by these methods, several challenges remain. First,", "with increasing number of particles, the IWAE ELBO estimator adversely impacts inference-network quality, consequently impeding learning of the generative model . Second", ", using continuous relaxations results in a biased gradient estimator, and in models with stochastic branching, forces evaluation of potentially exponential number of branching paths. For example", ", a continuous relaxation of the cluster identity in a Gaussian mixture model (GMM) (Section 4.3) forces the evaluation of a weighted average of likelihood parameters over all clusters instead of selecting the parameters based on just one. 
Finally, while", "control-variate methods may be employed to reduce variance, their practical efficacy can be somewhat limited as in some cases they involve designing and jointly optimizing a separate neural network which can be difficult to tune (Section 4.3).To address these", "challenges, we revisit the reweighted wake-sleep (RWS) algorithm BID2 , comparing it extensively with state-of-the-art methods for learning discrete latent-variable models, and demonstrate its efficacy in learning better generative models and inference networks, and improving the variance of the gradient estimators, over a range of particle budgets.Going forward, we review the current state-of-the-art methods for learning deep generative models with discrete latent variables (Section 2), revisit RWS (Section 3), and present an extensive evaluation of these methods (Section 4) on (i) the Attend", ", Infer", ", Repeat (AIR) model BID6 to perceive and localise multiple MNIST digits, (ii) a continuous latent-variable", "model on MNIST, and (iii) a pedagogical GMM example,", "exposing a shortcoming of RWS that we fix using defensive importance sampling BID13 . Our experiments confirm that RWS", "is a competitive, often preferable, alternative that unlike IWAE, learns better models and inference networks with increasing particle budgets.", "Our experiments suggest that RWS learns both better generative models and inference networks in models that involve discrete latent variables, while performing just as well as state-of-the-art on continuous-variable models as well.", "The AIR experiment (Section 4.1) shows that the trained inference networks are unusable when trained with high number of particles.", "Moreover, the MNIST experiment (Section 4.2) suggests that RWS is competitive even on models with continuous latent variables, especially for high number of particles where IWAE ELBO starts suffering from worse inference networks.", "The GMM experiment (Section 4.3) illustrates that this is at least at least in part due to a lower variance gradient estimator for the inference network and the fact that for RWSunlike the case of optimizing IWAE ELBO )-increasing number of particles actually improves the inference network.", "In the low-particle regime, the GMM suffers from zeroforcing of the generative model and the inference network, which is ameliorated using defensive RWS.", "Finally, all experiments show that, beyond a certain point, increasing the particle budget starts to affect the quality of the generative model for IWAE ELBO whereas this is not the case for RWS.", "As a consequence of our findings, we recommend reconsidering using RWS for learning deep generative models, especially those containing discrete latent variables that induce branching." ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.06451612710952759, 0.20689654350280762, 0.08695651590824127, 0.1666666567325592, 0.12765957415103912, 0.08510638028383255, 0.22727271914482117, 0.25806450843811035, 0.25, 0.09756097197532654, 0, 0.1764705777168274, 0.25, 0.0416666641831398, 0.038461536169052124, 0.2368420958518982, 0.06451612710952759, 0.0833333283662796, 0.06451612710952759, 0.23529411852359772, 0.19512194395065308, 0.17142856121063232, 0.20408162474632263, 0.18867924809455872, 0.22857142984867096, 0.13636362552642822, 0.19999998807907104 ]
BJzuKiC9KX
true
[ "Empirical analysis and explanation of particle-based gradient estimators for approximate inference with deep generative models." ]
[ "The objective in deep extreme multi-label learning is to jointly learn feature representations and classifiers to automatically tag data points with the most relevant subset of labels from an extremely large label set.", "Unfortunately, state-of-the-art deep extreme classifiers are either not scalable or inaccurate for short text documents.", " This paper develops the DeepXML algorithm which addresses both limitations by introducing a novel architecture that splits training of head and tail labels", ". DeepXML increases accuracy", "by (a) learning word embeddings on head labels and transferring them through a novel residual connection to data impoverished tail labels", "; (b) increasing the amount of negative training data available by extending state-of-the-art negative sub-sampling techniques; and", "(c) re-ranking the set of predicted labels to eliminate the hardest negatives for the original classifier.", "All of these contributions are implemented efficiently by extending the highly scalable Slice algorithm for pretrained embeddings to learn the proposed DeepXML architecture.", "As a result, DeepXML could efficiently scale to problems involving millions of labels that were beyond the pale of state-of-the-art deep extreme classifiers as it could be more than 10x faster at training than XML-CNN and AttentionXML.", "At the same time, DeepXML was also empirically determined to be up to 19% more accurate than leading techniques for matching search engine queries to advertiser bid phrases.", "Objective: This paper develops the DeepXML algorithm for deep extreme multi-label learning applied to short text documents such as web search engine queries.", "DeepXML is demonstrated to be significantly more accurate and an order of magnitude faster to train than state-of-the-art deep extreme classifiers XML-CNN (Liu et al., 2017) and AttentionXML (You et al., 2018) .", "As a result, DeepXML could efficiently train on problems involving millions of labels on a single GPU that were beyond the scaling capabilities of leading deep extreme classifiers.", "This allowed DeepXML to be applied to the problem of matching millions of advertiser bid phrases to a user's query on a popular web search engine where it was found to increase prediction accuracy by more than 19 percentage points as compared to the leading techniques currently in production.", "Deep extreme multi-label learning: The objective in deep extreme multi-label learning is to learn feature representations and classifiers to automatically tag data points with the most relevant subset of labels from an extremely large label set.", "Note that multi-label learning is a generalization of multi-class classification which aims to predict a single mutually exclusive label.", "Notation: Throughout the paper: N refers to number of training points, d refers to representation dimension, and L refers to number of labels.", "Additionaly, Y refers to the label matrix where y ij = 1 if j th label is relevant to i th instance, and 0 otherwise.", "Please note that differences in accuracies are reported in absolute percentage points unless stated otherwise.", "Matching queries to bid phrases: Web search engines allow ads to be served for not just queries bidded on directly by advertisers, referred to as bid phrases, but also for related queries with matching intent.", "Thus matching a query that was just entered by the user to the relevant subset of millions of advertiser bid phrases in milliseconds is an important research application which forms the focus 
of this paper.", "DeepXML reformulates this problem as an extreme multi-label learning task by treating each of the top 3 Million monetizable advertiser bid phrases as a separate label and learning a deep classifier to predict the relevant subset of bid phrases given an input query.", "For example, given the user query \"what is diabetes type 2\" as input, DeepXML predicts that ads corresponding to the bid phrases \"what is type 2 diabetes mellitus\", \"diabetes type 2 definition\", \"do i have type 2 diabetes\", etc. could be relevant to the user.", "Note that other high-impact applications have also been reformulated as the extreme classification of short text documents such as queries, webpage titles, etc.", "For instance, (Jain et al., 2019) applied extreme multi-label learning to recommend the subset of relevant Bing queries that could be asked by a user instead of the original query.", "Similarly, extreme multi-label learning could be used to predict which subset of search engine queries might lead to a click on a webpage from its title alone for scenarios where the webpage content might not be available due to privacy concerns, latency issues in fetching the webpage, etc.", "State-of-the-art extreme classifiers: Unfortunately, state-of-the-art extreme classifiers are either not scalable or inaccurate for queries and other short text documents.", "In particular, leading extreme classifiers based on bag-of-words (BoW) features (Prabhu et al., 2018b) and pretrained embeddings (Jain et al., 2019) are highly scalable but inaccurate for documents having only 3 or 4 words.", "While feature engineering (Arora, 2017; Joulin et al., 2017; Wieting & Kiela, 2019) , including taking sub-word tokens, bigram tokens, etc can ameliorate the problem somewhat, their accuracy still lags that of deep learning methods which learn features specific to the task at hand.", "However, such methods, as exemplified by the state-of-the-art XML-CNN (Liu et al., 2017) and AttentionXML (You et al., 2018) , can have prohibitive training costs and have not been shown to scale beyond a million labels on a single GPU.", "At the same time, there is a lot of scope for improving accuracy as XML-CNN and AttentionXML's architectures have not been specialized for short text documents.", "Tail labels: It is worth noting that all the computational and statistical complexity in extreme classification arises due to the presence of millions of tail labels each having just a few, often a single, training point.", "Such labels can be very hard to learn due to data paucity.", "However, in most applications, predicting such rare tail labels accurately is much more rewarding than predicting common and obvious head labels.", "This motivates DeepXML to have specialized architectures for head and tail labels which lead to accuracy gains not only in standard metrics which assign equal weights to all labels but also in propensity scored metrics designed specifically for long-tail extreme classification.", "DeepXML: DeepXML improved both accuracy and scalability over existing deep extreme classifiers by partitioning all L labels into a small set of head labels, with cardinality less than 0.1L, containing the most frequently occuring labels and a large set of tail labels containing everything else.", "DeepXML first represented a document by the tf-idf weighted linear combination of its word-vector embeddings as this architecture was empirically found to be more suitable for short text documents than the CNN and attention based architectures 
of XML-CNN and AttentionXML respectively.", "The word-vector embeddings of the training documents were learnt on the head labels where there was enough data available to learn a good quality representation of the vocabulary.", "Accuracy was then further boosted by the introduction of a novel residual connection to fine-tune the document representation for head labels.", "This head architecture could be efficiently learnt on a single GPU with a fully connected final output layer due to the small number of labels involved.", "The word-vector embeddings were then transferred to the tail network where there wasn't enough data available to train them from scratch.", "Accuracy gains could potentially be obtained by fine tuning the embeddings but this led to a dramatic increase in the training and prediction costs.", "As an efficient alternative, DeepXML achieved state-of-the-art accuracies by fine tuning only the residual connection based document representation for tail labels.", "A number of modifications were made to the highly scalable Slice classifier (Jain et al., 2019) for pre-trained embeddings to allow it to also train the tail residual connection without sacrificing scalability.", "Finally, instead of learning an expensive ensemble of base classifiers to increase accuracy (Prabhu et al., 2018b; You et al., 2018) , DeepXML improved performance by re-ranking the set of predicted labels to eliminate the hardest negatives for the base classifier with only a 10% increase in training time.", "Results: Experiments on medium scale datasets of short text documents with less than a million labels revealed that DeepXML's accuracy gains over XML-CNN and AttentionXML could be up to 3.92 and 4.32 percentage points respectively in terms of precision@k and up to 5.32 and 4.2 percentage points respectively in terms of propensity-scored precision@k.", "At the same time, DeepXML could be up to 15× and 41× faster to train than XML-CNN and AttentionXML respectively on these datasets using a single GPU.", "Furthermore, XML-CNN and AttentionXML were unable to scale to a proprietary dataset for matching queries to bid phrases containing 3 million labels and 21 million training points on which DeepXML trained in 14 hours on a single GPU.", "On this dataset, DeepXML was found to be at least 19 percentage points more accurate than Slice, Parabel (Prabhu et al., 2018b) , and other leading query bid phrase-matching techniques currently running in production.", "Contributions: This paper makes the following contributions:", "(a) It proposes the DeepXML architecture for short text documents that is more accurate than state-of-the-art extreme classifiers;", "(b) it proposes an efficient training algorithm that allows DeepXML to be an order of magnitude more scalable than leading deep extreme classifiers; and", "(c) it demonstrates that DeepXML could be significantly better at matching user queries to advertiser bid phrases as compared to leading techniques in production on a popular web search engine.", "Source code for DeepXML and the short text document datasets used in this paper can be downloaded from (Anonymous, 2019) .", "This paper developed DeepXML, an algorithm to jointly learn representations for extreme multilabel learning on text data.", "The proposed algorithm addresses the key issues of scalability and low accuracy (especially on tail labels and very short documents) with existing approaches such as Slice, AttentionXML, and XML-CNN, and hence improves on them substantively.", "Experiments revealed that 
DeepXML-RE can lead to a 1.0-4.3 percentage point gain in performance while being 33-42× faster at training than AttentionXML.", "Furthermore, DeepXML was upto 15 percentage points more accurate than leading techniques for matching search engine queries to advertiser bid phrases.", "We note that DeepXML's gains are predominantly seen to be on predicting tail labels (for which very few direct word associations are available at train time) and on short documents (for which very few words are available at test time).", "This indicates that the method is doing especially well, compared to earlier approaches, at learning word representations which allow for richer and denser associations between words -which allow for the words to be well-clustered in a meaningful semantic space, and hence useful and generalisable information about document labels extracted even when the number of direct word co-occurrences observed is very limited.", "In the future we would like to better understand the nature of these representations, and explore their utility for other linguistic tasks.", "Table 5 lists the parameter settings for different data sets.", "Experiments were performed with a random-seed of 22 on a P40 GPU card with CUDA 10, CuDNN 7.4, and Pytorch 1.2 (Paszke et al., 2017) .", "Figure 4: Precision@5 in k(%) most frequent labels Table 5 : Parameter setting for DeepXML on different datasets.", "Dropout with probability 0.5 was used for all datasets.", "Learning rate is decayed by Decay factor after interval of Decay steps.", "For HNSW, values of construction parameter M = 100, ef C = 300 and query parameter, ef S = 300.", "Denoted by '|', DeepXML-h and DeepXML-t might take different values for some parameters.", "Note that DeepXML-t uses a shortlist of size 500 during training.", "However, a shortlist of size 300 queried from ANNS is used at prediction time for both DeepXML-h and DeepXML-t.", "A", "Label set L is divided into two disjoint sets, i.e. L h and L t based on the frequency of the labels.", "Labels with a frequency more than splitting threshold γ are kept in set L h and others in L t .", "The splitting threshold γ is chosen while ensuring that most of the features (or words) are covered in documents that one at least one instances of label in the set L h and |L h | < 0.2M .", "Two components for DeepXML, DeepXML-h and DeepXML-t, are trained on L h and L t .", "Please note that other strategies like clustering of labels, connected components of labels in a graph were also tried, but the above-mentioned strategy provides good results without any additional overhead.", "More sophisticated algorithms for splitting such as label clustering, may yield better results, however at the cost of increased training time.", "DeepXML, DeepXML-RE yields 3 − 4% better accuracy on propensity scored metrics and can be upto 2% more accurate on vanilla metrics.", "Note that PfastreXML outperform DeepXML and DeepXML-RE on AmazonTitles-3M in propensity scored metrics, however suffers a substantial loss of 10% on vanilla precision and nDCG which is unacceptable for real world applications." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.3255814015865326, 0.07692307233810425, 0.1764705777168274, 0, 0.19354838132858276, 0.14814814925193787, 0.1599999964237213, 0.060606054961681366, 0.2222222238779068, 0.054054051637649536, 0.11764705181121826, 0.20000000298023224, 0.2222222238779068, 0.07547169178724289, 0.3181818127632141, 0.20689654350280762, 0.2142857164144516, 0.12121211737394333, 0, 0.04999999701976776, 0.0952380895614624, 0.21739129722118378, 0, 0.060606054961681366, 0.09999999403953552, 0.07692307233810425, 0.06666666269302368, 0.045454543083906174, 0.11538460850715637, 0.08695651590824127, 0.1111111044883728, 0.1818181723356247, 0.09090908616781235, 0.13333332538604736, 0.08888888359069824, 0.20000000298023224, 0.08163265138864517, 0.1111111044883728, 0.12903225421905518, 0.1666666567325592, 0, 0.05882352590560913, 0.0624999962747097, 0.04878048226237297, 0.15686273574829102, 0.15094339847564697, 0.0555555522441864, 0.09302325546741486, 0.08888888359069824, 0, 0.06896550953388214, 0.1764705777168274, 0, 0.06451612710952759, 0.0714285671710968, 0.190476194024086, 0, 0.0624999962747097, 0.09756097197532654, 0.12903225421905518, 0.1249999925494194, 0, 0.1621621549129486, 0.06896550953388214, 0.0952380895614624, 0.09090908616781235, 0.14814814925193787, 0.0833333283662796, 0.09090908616781235, 0.13333332538604736, 0.19354838132858276, 0.13793103396892548, 0.13636362552642822, 0.0833333283662796, 0.09999999403953552, 0.1249999925494194, 0.12903225421905518, 0.09756097197532654 ]
SJlWyerFPS
true
[ "Scalable and accurate deep multi label learning with millions of labels." ]
[ "Robust estimation under Huber's $\\epsilon$-contamination model has become an important topic in statistics and theoretical computer science.", "Rate-optimal procedures such as Tukey's median and other estimators based on statistical depth functions are impractical because of their computational intractability.", "In this paper, we establish an intriguing connection between f-GANs and various depth functions through the lens of f-Learning.", "Similar to the derivation of f-GAN, we show that these depth functions that lead to rate-optimal robust estimators can all be viewed as variational lower bounds of the total variation distance in the framework of f-Learning.", "This connection opens the door of computing robust estimators using tools developed for training GANs.", "In particular, we show that a JS-GAN that uses a neural network discriminator with at least one hidden layer is able to achieve the minimax rate of robust mean estimation under Huber's $\\epsilon$-contamination model.", "Interestingly, the hidden layers of the neural net structure in the discriminator class are shown to be necessary for robust estimation.", "In the setting of Huber's -contamination model (Huber, 1964; 1965) , one has i.i.d observations X 1 , ..., X n ∼ (1 − )P θ + Q,and the goal is to estimate the model parameter θ.", "Under the data generating process (1), each observation has a 1 − probability to be drawn from P θ and the other probability to be drawn from the contamination distribution Q. The presence of an unknown contamination distribution poses both statistical and computational challenges to the problem.", "For example, consider a normal mean estimation problem with P θ = N (θ, I p ).", "Due to the contamination of data, the sample average, which is optimal when = 0, can be arbitrarily far away from the true mean if Q charges a positive probability at infinity.", "Moreover, even robust estimators such as coordinatewise median and geometric median are proved to be suboptimal under the setting of (1) (Chen et al., 2018; Diakonikolas et al., 2016a; Lai et al., 2016) .", "The search for both statistically optimal and computationally feasible procedures has become a fundamental problem in areas including statistics and computer science.For the normal mean estimation problem, it has been shown in Chen et al. (2018) that the minimax rate with respect to the squared 2 loss is p n ∨ 2 , and is achieved by Tukey's median (Tukey, 1975) .", "Despite the statistical optimality of Tukey's median, its computation is not tractable.", "In fact, even an approximate algorithm takes O(e Cp ) in time BID1 Chan, 2004; Rousseeuw & Struyf, 1998) .Recent", "developments in theoretical computer science are focused on the search of computationally tractable algorithms for estimating θ under Huber's -contamination model (1). The success", "of the efforts started from two fundamental papers Diakonikolas et al. (2016a) ; Lai et al. (2016) , where two different but related computational strategies \"iterative filtering\" and \"dimension halving\" were proposed to robustly estimate the normal mean. These algorithms", "can provably achieve the minimax rate p n ∨ 2 up to a poly-logarithmic factor in polynomial time. The main idea behind", "the two methods is a critical fact that a good robust moment estimator can be certified efficiently by higher moments. 
This idea was later", "further extended (Diakonikolas et al., 2017; Du et al., 2017; Diakonikolas et al., 2016b; 2018a; c; b; Kothari et al., 2018) to develop robust and computable procedures for various other problems.However, many of the computationally feasible procedures for robust mean estimation in the literature rely on the knowledge of covariance matrix and sometimes the knowledge of contamination proportion. Even though these assumptions", "can be relaxed, nontrivial modifications of the algorithms are required for such extensions and statistical error rates may also be affected. Compared with these computationally", "feasible procedures proposed in the recent literature for robust estimation, Tukey's median (9) and other depth-based estimators (Rousseeuw & Hubert, 1999; Mizera, 2002; Zhang, 2002; Mizera & Müller, 2004; Paindaveine & Van Bever, 2017) have some indispensable advantages in terms of their statistical properties. First, the depth-based estimators have", "clear objective functions that can be interpreted from the perspective of projection pursuit (Mizera, 2002) . Second, the depth-based procedures are", "adaptive to unknown nuisance parameters in the models such as covariance structures, contamination proportion, and error distributions (Chen et al., 2018; Gao, 2017) . Last but not least, Tukey's depth and", "other depth functions are mostly designed for robust quantile estimation, while the recent advancements in the theoretical computer science literature are all focused on robust moments estimation. Although this is not an issue when it", "comes to normal mean estimation, the difference is fundamental for robust estimation under general settings such as elliptical distributions where moments do not necessarily exist.Given the desirable statistical properties discussed above, this paper is focused on the development of computational strategies of depth-like procedures. Our key observation is that robust estimators", "that are maximizers of depth functions, including halfspace depth, regression depth and covariance matrix depth, can all be derived under the framework of f -GAN (Nowozin et al., 2016) . As a result, these depth-based estimators can", "be viewed as minimizers of variational lower bounds of the total variation distance between the empirical measure and the model distribution (Proposition 2.1). This observation allows us to leverage the recent", "developments in the deep learning literature to compute these variational lower bounds through neural network approximations. Our theoretical results give insights on how to choose", "appropriate neural network classes that lead to minimax optimal robust estimation under Huber's -contamination model. In particular, Theorem 3.1 and 3.2 characterize the networks", "which can robustly estimate the Gaussian mean by TV-GAN and JS-GAN, respectively; Theorem 4.1 is an extension to robust location estimation under the class of elliptical distributions which includes Cauchy distribution whose mean does not exist. Numerical experiments in Section 5 are provided to show the", "success of these GANs." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05128204822540283, 0.1395348757505417, 0.04878048226237297, 0.07692307233810425, 0.10810810327529907, 0.18518517911434174, 0.19512194395065308, 0.07547169178724289, 0.17543859779834747, 0.1538461446762085, 0.11538460850715637, 0.15686273574829102, 0.18421052396297455, 0.11764705181121826, 0, 0.04347825422883034, 0.13793103396892548, 0.09090908616781235, 0.08888888359069824, 0.11594202369451523, 0.17391303181648254, 0.0952380895614624, 0.04878048226237297, 0.07692307233810425, 0.07407406717538834, 0.11764705181121826, 0.1090909019112587, 0.11999999731779099, 0.04347825422883034, 0.12765957415103912, 0.1875, 0.07692307233810425 ]
BJgRDjR9tQ
true
[ "GANs are shown to provide us a new effective robust mean estimate against agnostic contaminations with both statistical optimality and practical tractability." ]
[ "Long Short-Term Memory (LSTM) is one of the most powerful sequence models.", "Despite the strong performance, however, it lacks the nice interpretability as in state space models.", "In this paper, we present a way to combine the best of both worlds by introducing State Space LSTM (SSL), which generalizes the earlier work \\cite{zaheer2017latent} of combining topic models with LSTM.", "However, unlike \\cite{zaheer2017latent}, we do not make any factorization assumptions in our inference algorithm.", "We present an efficient sampler based on sequential Monte Carlo (SMC) method that draws from the joint posterior directly.", "Experimental results confirms the superiority and stability of this SMC inference algorithm on a variety of domains.", "State space models (SSMs), such as hidden Markov models (HMM) and linear dynamical systems (LDS), have been the workhorse of sequence modeling in the past decades From a graphical model perspective, efficient message passing algorithms BID34 BID17 are available in compact closed form thanks to their simple linear Markov structure.", "However, simplicity comes at a cost: real world sequences can have long-range dependencies that cannot be captured by Markov models; and the linearity of transition and emission restricts the flexibility of the model for complex sequences.A popular alternative is the recurrent neural networks (RNN), for instance the Long Short-Term Memory (LSTM) BID14 ) which has become a standard for sequence modeling nowadays.", "Instead of associating the observations with stochastic latent variables, RNN directly defines the distribution of each observation conditioned on the past, parameterized by a neural network.", "The recurrent parameterization not only allows RNN to provide a rich function class, but also permits scalable stochastic optimization such as the backpropagation through time (BPTT) algorithm.", "However, flexibility does not come for free as well: due to the complex form of the transition function, the hidden states of RNN are often hard to interpret.", "Moreover, it can require large amount of parameters for seemingly simple sequence models BID35 .In", "this paper, we propose a new class of models State Space LSTM (SSL) that combines the best of both worlds. We", "show that SSLs can handle nonlinear, non-Markovian dynamics like RNNs, while retaining the probabilistic interpretations of SSMs. The", "intuition, in short, is to separate the state space from the sample space. In", "particular, instead of directly estimating the dynamics from the observed sequence, we focus on modeling the sequence of latent states, which may represent the true underlying dynamics that generated the noisy observations. Unlike", "SSMs, where the same goal is pursued under linearity and Markov assumption, we alleviate the restriction by directly modeling the transition function between states parameterized by a neural network. On the", "other hand, we bridge the state space and the sample space using classical probabilistic relation, which not only brings additional interpretability, but also enables the LSTM to work with more structured representation rather than the noisy observations. Indeed", ", parameter estimation of such models can be nontrivial. Since", "the LSTM is defined over a sequence of latent variables rather than observations, it is not straightforward to apply the usual BPTT algorithm without making variational approximations. 
In BID35", ", which is an instance of SSL, an EM-type approach was employed: the algorithm alternates between imputing the latent states and optimizing the LSTM over the imputed sequences. However,", "as we show below, the inference implicitly assumes the posterior is factorizable through time. This is", "a restrictive assumption since the benefit of rich state transition brought by the LSTM may be neutralized by breaking down the posterior over time.We present a general parameter estimation scheme for the proposed class of models based on sequential Monte Carlo (SMC) BID8 , in particular the Particle Gibbs BID1 . Instead", "of sampling each time point individually, we directly sample from the joint posterior without making limiting factorization assumptions. Through", "extensive experiments we verify that sampling from the full posterior leads to significant improvement in the performance.Related works Enhancing state space models using neural networks is not a new idea. Traditional", "approaches can be traced back to nonlinear extensions of linear dynamical systems, such as extended or unscented Kalman filters (Julier & Uhlmann, 1997), where both state transition and emission are generalized to nonlinear functions. The idea of", "parameterizing them with neural networks can be found in BID12 , as well as many recent works BID22 BID2 BID15 BID23 BID18 thanks to the development of recognition networks BID20 BID32 . Enriching the", "output distribution of RNN has also regain popularity recently. Unlike conventionally", "used multinomial output or mixture density networks BID4 , recent approaches seek for more flexible family of distributions such as restricted Boltzmann machines (RBM) BID6 or variational auto-encoders (VAE) BID13 BID7 .On the flip side, there", "have been studies in introducing stochasticity to recurrent neural networks. For instance, BID30 and", "BID3 incorporated independent latent variables at each time step; while in BID9 the RNN is attached to both latent states and observations. We note that in our approach", "the transition and emission are decoupled, not only for interpretability but also for efficient inference without variational assumptions.On a related note, sequential Monte Carlo methods have recently received attention in approximating the variational objective BID27 BID24 BID29 . Despite the similarity, we emphasize", "that the context is different: we take a stochastic EM approach, where the full expectation in E-step is replaced by the samples from SMC. In contrast, SMC in above works is aimed", "at providing a tighter lower bound for the variational objective.", "In this paper we revisited the problem of posterior inference in Latent LSTM models as introduced in BID35 .", "We generalized their model to accommodate a wide variety of state space models and most importantly we provided a more principled Sequential Monte-Carlo (SMC) algorithm for posterior inference.", "Although the newly proposed inference method can be slower, we showed over a variety of dataset that the new SMC based algorithm is far superior and more stable.", "While computation of the new SMC algorithm scales linearly with the number of particles, this can be naively parallelized.", "In the future we plan to extend our work to incorporate a wider class of dynamically changing structured objects such as time-evolving graphs." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1111111044883728, 0.15789473056793213, 0.2641509473323822, 0.10526315122842789, 0.3720930218696594, 0.29999998211860657, 0.17391303181648254, 0.07792207598686218, 0.12765957415103912, 0.07843136787414551, 0.0416666604578495, 0.10256409645080566, 0.3636363446712494, 0.0476190410554409, 0.1111111044883728, 0.07843136787414551, 0.07843136787414551, 0.13793103396892548, 0.11764705181121826, 0.15686273574829102, 0.20408162474632263, 0.052631575614213943, 0.3478260934352875, 0.04651162400841713, 0.145454540848732, 0.10344827175140381, 0.03703703358769417, 0.05714285373687744, 0.03389830142259598, 0.052631575614213943, 0.07999999821186066, 0.1904761791229248, 0.03999999538064003, 0.05882352590560913, 0.19512194395065308, 0.3529411852359772, 0.23529411852359772, 0.09756097197532654, 0.08695651590824127 ]
r1drp-WCZ
true
[ "We present State Space LSTM models, a combination of state space models and LSTMs, and propose an inference algorithm based on sequential Monte Carlo. " ]
[ "Given a large database of concepts but only one or a few examples of each, can we learn models for each concept that are not only generalisable, but interpretable?", "In this work, we aim to tackle this problem through hierarchical Bayesian program induction.", "We present a novel learning algorithm which can infer concepts as short, generative, stochastic programs, while learning a global prior over programs to improve generalisation and a recognition network for efficient inference.", "Our algorithm, Wake-Sleep-Remember (WSR), combines gradient learning for continuous parameters with neurally-guided search over programs.", "We show that WSR learns compelling latent programs in two tough symbolic domains: cellular automata and Gaussian process kernels.", "We also collect and evaluate on a new dataset, Text-Concepts, for discovering structured patterns in natural text data.", "A grand challenge for building more flexible AI is developing learning algorithms which quickly pick up a concept from just one or a few examples, yet still generalise well to new instances of that concept.", "In order to instill algorithms with the correct inductive biases, research in few-shot learning usually falls on a continuum between model-driven and data-driven approaches.Model-driven approaches place explicit domain-knowledge directly into the learner, often as a stochastic program describing how concepts and their instances are produced.", "For example, we can model handwritten characters with a motor program that composes distinct pen strokes BID13 , or spoken words as sequences of phonemes which obey particular phonotactic constraints.", "Such representationally explicit models are highly interpretable and natural to compose together into larger systems, although it may be difficult to completely pre-specify the required inductive biases.By contrast, data-driven approaches start with only minimal assumptions about a domain, and instead acquire the inductive biases themselves from a large background dataset.", "This is typified by recent work in deep meta-learning, such as the Neural Statistian BID5 ; see also BID9 ), MAML BID6 ; see also BID14 ) and Prototypical Networks BID15 .", "Crucially, these models rely on stochastic gradient descent (SGD) for the meta-learning phase, as it is a highly scalable algorithm that applies easily to datasets with thousands of classes.Ideally these approaches would not be exclusive -for many domains of AI we have access to large volumes of data and also rich domain knowledge, so we would like to utilise both.", "In practice, however, different algorithms are suited to each end of the continuum: SGD requires objectives to be differentiable, but explicit domain knowledge often introduces discrete latent variables, or programs.", "Thus, meta-learning from large datasets is often challenging in more explicit models.In this work, we aim to bridge these two extremes: we learn concepts represented explicitly as stochastic programs, while meta-learning generative parameters and an inductive bias over programs from a large unlabelled dataset.", "We introduce a simple learning algorithm, Wake-Sleep-Remember (WSR), which combines SGD over continuous parameters with neurally-guided search over latent programs to maximize a variational objective, the evidence lower bound (ELBo).In", "evaluating our algorithm, we also release a new dataset for few-shot concept learning in a highlystructured natural domain of short text patterns (see TAB0 ). 
This", "dataset contains 1500 concepts such as phone numbers, dates, email addresses and serial numbers, crawled from public GitHub repositories. Such", "concepts are easy for humans to learn using only a few examples, and are well described as short programs which compose discrete, interpretable parts. Thus", ", we see this as an excellent challenge domain for structured meta-learning and explainable AI. 2 BACKGROUND", ": HELMHOLTZ MACHINES AND VARIATIONAL BAYES Suppose we wish to learn generative models of spoken words unsupervised, using a large set of audio recordings. We may aim", "to include domain knowledge that words are built up from different short phonemes, without defining in advance exactly what the kinds of phoneme are, or exactly which phonemes occur in each recording. This means", "that, in order to learn a good model of words in general, we must also infer the particular latent phoneme sequence that generated each recording.This latent sequence must be re-estimated whenever the global model is updated, which itself can be a hard computational problem. To avoid a", "costly learning 'inner-loop', a longstanding idea in machine learning is to train two distinct models simultaneously: a generative model which describes the joint distribution of latent phonemes and sounds, and a recognition model which allows phonemes to be inferred quickly from data. These two", "models together are often called a Helmholtz Machine BID2 .Formally,", "algorithms for training a Helmholtz Machine are typically motivated by Variational Bayes. Suppose we", "wish to learn a generative model p(z, x), which is a joint distribution over latent variables z and observations x, alongside a recognition model q(z; x), which is a distribution over latent variables conditional on observations. It can be", "shown that the marginal likelihood of each observation is bounded below by DISPLAYFORM0 where D KL [q(z; x)||p(z|x)] is the KL divergence from the true posterior p(z|x) to the recognition model's approximate posterior q(z; x). Learning", "a Helmholtz machine is then framed as maximisation of this evidence lower bound (or ELBo), which provides the shared basis for two historically distinct approaches to learning.", "In this paper, we consider learning interpretable concepts from one or a few examples: a difficult task which gives rise to both inductive and computational challenges.", "Inductively, we aim to achieve strong generalisation by starting with rich domain knowledge and then 'filling in the gaps', using a large amount of background data.", "Computationally, we aim to tackle the challenge of finding high-probability programs by using a neural recognition model to guide search.Putting these pieces together we propose the Wake-Sleep-Remember algorithm, in which a Helmholtz machine is augmented with an persistent memory of discovered latent programs -optimised as a finite variational posterior.", "We demonstrate on several domains that our algorithm can learn generalisable concepts, and comparison with baseline models shows that WSR", "(a) utilises both its recognition model and its memory in order to search for programs effectively, and", "(b) utilises both domain knowledge and extensive background data in order to make strong generalisations." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.14999999105930328, 0.0714285671710968, 0.1818181723356247, 0, 0.11764705181121826, 0.1818181723356247, 0.1666666567325592, 0.10526315122842789, 0, 0.20000000298023224, 0.09302324801683426, 0.17391304671764374, 0.09090908616781235, 0.178571417927742, 0.13636362552642822, 0, 0.11764705181121826, 0.25641024112701416, 0.1249999925494194, 0.19512194395065308, 0.12765957415103912, 0.1111111044883728, 0.20000000298023224, 0.07999999821186066, 0, 0.1428571343421936, 0.13333332538604736, 0.0952380895614624, 0.19999998807907104, 0.1463414579629898, 0.06896551698446274, 0.29411762952804565, 0.13333332538604736, 0.13333332538604736 ]
B1gTE2AcKQ
true
[ "We extend the wake-sleep algorithm and use it to learn to learn structured models from few examples, " ]
[ "The knowledge regarding the function of proteins is necessary as it gives a clear picture of biological processes.", "Nevertheless, there are many protein sequences found and added to the databases but lacks functional annotation.", "The laboratory experiments take a considerable amount of time for annotation of the sequences.", "This arises the need to use computational techniques to classify proteins based on their functions.", "In our work, we have collected the data from Swiss-Prot containing 40433 proteins which is grouped into 30 families.", "We pass it to recurrent neural network(RNN), long short term memory(LSTM) and gated recurrent unit(GRU) model and compare it by applying trigram with deep neural network and shallow neural network on the same dataset.", "Through this approach, we could achieve maximum of around 78% accuracy for the classification of protein families. \n", "Proteins are considered to be essentials of life because it performs a variety of functions to sustain life.", "It performs DNA replication, transportation of molecules from one cell to another cell, accelerates metabolic reactions and several other important functions carried out within an organism.", "Proteins carry out these functions as specified by the informations encoded in the genes.", "Proteins are classified into three classes based on their tertiary structure as globular, membrane and fibrous proteins.", "Many of the globular proteins are soluble enzymes.", "Membrane proteins enables the transportation of electrically charged molecules past the cell membranes by providing channels.", "Fibrous proteins are always structural.", "Collagen which is a fibrous protein forms the major component of connective tissues.", "Escherichia coli cell is partially filled by proteins and 3% and 20% fraction of DNA and RNA respectively contains proteins.", "All of this contributes in making proteomics as a very important field in modern computational biology.", "It is therefore becoming important to predict protein family classification and study their functionalities to better understand the theory behind life cycle.Proteins are polymeric macromolecules consisting of amino acid residue chains joined by peptide bonds.", "And proteome of a particular cell type is a set of proteins that come under the same cell type.", "Proteins is framed using a primary structure represented as a sequence of 20-letter alphabets which is associated with a particular amino acid base subunit of proteins.", "Proteins differ from one another by the arrangement of amino acids intent on nucleotide sequence of their genes.", "This results in the formation of specific 3D structures by protein folding which determines the unique functionality of the proteins.", "The primary structure of proteins is an abstracted version of the complex 3D structure but retains sufficient information for protein family classification and infer the functionality of the families.Protein family consists of a set of proteins that exhibits similar structure at sequence as well as molecular level involving same functions.", "The lack of knowledge of functional information about sequences in spite of the large number of sequences known, led to many works identifying family of proteins based on primary sequences BID0 BID1 BID2 .", "Dayhoff identified the families of numerous proteins BID3 .", "Members of the same protein family can be identified using sequence homology which is defined as the evolutionary relatedness.", "It also exhibits similar secondary structure 
through modular protein domains, which further group protein families into superfamilies BID4 .", "These classifications are listed in databases like SCOP BID5 .", "The protein family database (Pfam) BID6 is an extremely large resource which classifies proteins into family, domain, repeat or motif.", "Protein classification using 3D structure is burdensome and requires complex techniques like X-ray crystallography and NMR spectroscopy.", "This led to works BID7 BID8 BID9 which use only the primary structure for protein family classification.", "In this work we use data from Swiss-Prot for protein family classification and obtain a classification accuracy of about 96%. In", "our work we gathered family information for about 40433 protein sequences in Swiss-Prot from the protein family database (Pfam), covering 30 distinct families. The", "application of Keras embeddings and n-gram techniques has been used with deep learning architectures and traditional machine learning classifiers respectively for text classification problems in cyber security BID33 , BID34 , BID35 , BID36 , BID37 . By", "following this approach, we apply a Keras word embedding and pass it to various deep neural network models such as the recurrent neural network (RNN), long short-term memory (LSTM) and gated recurrent unit (GRU), and then compare their performance against trigram features with deep and shallow neural networks for protein family classification. To", "verify the model used in our work, we test it on a dataset consisting of about 12000 sequences from the same database. The rest of this paper is organized as follows. Section", "2 discusses the related work, Section 3 provides background details of the deep learning architectures, Section 4 discusses the proposed methodology, Section 5 presents results and discussion, and finally conclusions and future work directions are given in Section 6.", "In our work we have analyzed the performance of different recurrent models like RNN, LSTM and GRU after applying word embedding to the sequence data in order to classify the protein sequences into their respective families.", "We have also compared the results with those obtained by applying trigram features to deep and shallow neural networks.", "Neural networks are preferred over traditional machine learning models because they learn optimal feature representations by themselves, taking the primary protein sequences as input, and give a considerably high family classification accuracy of about 96%. Deep", "neural network architectures are very complex; therefore, the background mechanics of a neural network model remain a black box and the internal operation of the network is only partially understood. In", "future work, the internal working of the network can be explored by examining the eigenvalues and eigenvectors across several time steps, obtained by transforming the state of the network to linearized dynamics BID32 ." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0, 0, 0, 0.3720930218696594, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0833333283662796, 0.3333333432674408, 0, 0.04255318641662598, 0.043478257954120636, 0.12903225421905518, 0.07843136787414551, 0.09302324801683426, 0 ]
HygGrD70qm
true
[ "Proteins, amino-acid sequences, machine learning, deep learning, recurrent neural network(RNN), long short term memory(LSTM), gated recurrent unit(GRU), deep neural networks" ]
[ "Spoken term detection (STD) is the task of determining whether and where a given word or phrase appears in a given segment of speech.", "Algorithms for STD are often aimed at maximizing the gap between the scores of positive and negative examples.", "As such they are focused on ensuring that utterances where the term appears are ranked higher than utterances where the term does not appear.", "However, they do not determine a detection threshold between the two.", "In this paper, we propose a new approach for setting an absolute detection threshold for all terms by introducing a new calibrated loss function.", "The advantage of minimizing this loss function during training is that it aims at maximizing not only the relative ranking scores, but also adjusts the system to use a fixed threshold and thus enhances system robustness and maximizes the detection accuracy rates.", "We use the new loss function in the structured prediction setting and extend the discriminative keyword spotting algorithm for learning the spoken term detector with a single threshold for all terms.", "We further demonstrate the effectiveness of the new loss function by applying it on a deep neural Siamese network in a weakly supervised setting for template-based spoken term detection, again with a single fixed threshold.", "Experiments with the TIMIT, WSJ and Switchboard corpora showed that our approach not only improved the accuracy rates when a fixed threshold was used but also obtained higher Area Under Curve (AUC).", "Spoken term detection (STD) refers to the proper detection of any occurrence of a given word or phrase in a speech signal.", "Typically, any such system assigns a confidence score to every term it presumably detects.", "A speech signal is called positive or negative, depending on whether or not it contains the desired term.", "Ideally, an STD system assigns a positive speech input with a score higher than the score it assigns to a negative speech input.During inference, a detection threshold is chosen to determine the point from which a score would be considered positive or negative.", "The choice of the threshold represents a trade-off between different operational settings, as a high value of the threshold could cause an excessive amount of false negatives (instances incorrectly classified as negative), whereas a low value of the threshold could cause additional false positives (instances incorrectly classified as positive).The", "performance of STD systems can be measured by the Receiver Operation Characteristics (ROC) curve, that is, a plot of the true positive (spotting a term correctly) rate as a function of the false positive (mis-spotting a term) rate. Every", "point on the graph corresponds to a specific threshold value. The area", "under the ROC curve (AUC) is the expected performance of the system for all threshold values.A common practice for finding the threshold is to empirically select the desired value using a cross validation procedure. In BID2", ", the threshold was selected using the ROC curve. Similarly", ", in BID7 BID16 and the references therein, the threshold was chosen such that the system maximized the Actual Term Weighted Value (ATWV) score BID14 . Additionally", ", BID18 claims that a global threshold that was chosen for all terms was inferior to using a term specific threshold BID17 .In this paper", "we propose a new method to embed an automatic adjustment of the detection threshold within a learning algorithm, so that it is fixed and known for all terms. 
We present two", "algorithmic implementations of our method: the first is a structured prediction model that is a variant of the discriminative keyword spotting algorithm proposed by BID15 BID20 BID21 , and the second implementation extends the approach used for the structured prediction model on a variant of whole-word Siamese deep network models BID9 BID1 BID13 . Both of these", "approaches in their original form aim to assign positive speech inputs with higher scores than those assigned to negative speech inputs, and were shown to have good results on several datasets. However, maximizing", "the gap between the scores of the positive and negative examples only ensures the correct relative order between those examples, and does not fix a threshold between them; therefore it cannot guarantee a correct detection for a global threshold. Our goal is to train", "a system adjusted to use a global threshold valid for all terms.In this work, we set the threshold to be a fixed value, and adjust the decoding function accordingly. To do so, we propose", "a new loss function that trains the ranking function to separate the positive and negative instances; that is, instead of merely assigning a higher score to the positive examples, it rather fixes the threshold to be a certain constant, and assigns the positive examples with scores greater than the threshold, and the negative examples with scores less than the threshold. Additionally, this loss", "function is a surrogate loss function which extends the hinge loss to penalize misdetected instances, thus enhancing the system's robustness. The new loss function is", "an upper bound to the ranking loss function, hence minimizing the new loss function can lead to minimization of ranking errors, or equivalently to the maximization of the AUC.", "In this work, we introduced a new loss function that can be used to train a spoken term detection system with a fixed desired threshold for all terms.", "We introduced a new discriminative structured prediction model that is based on the Passive-Aggressive algorithm.", "We show that the new loss can be used in training weakly supervised deep network models as well.", "Results suggest that our new loss function yields AUC and accuracy values that are better than previous works' results." ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.13636362552642822, 0.04999999329447746, 0.0476190410554409, 0.11764705181121826, 0.22727271914482117, 0.26229506731033325, 0.3199999928474426, 0.2181818187236786, 0.14814814925193787, 0.1428571343421936, 0.10810810327529907, 0, 0.1111111044883728, 0.07407406717538834, 0.11320754140615463, 0.17142856121063232, 0.15094339847564697, 0.1249999925494194, 0.17391303181648254, 0.22727271914482117, 0.22641508281230927, 0.1846153736114502, 0.07547169178724289, 0.14035087823867798, 0.19230768084526062, 0.25806450843811035, 0.2380952388048172, 0.22727271914482117, 0.2857142686843872, 0.2631579041481018, 0.19512194395065308, 0.2926829159259796 ]
r1etIqwjiX
true
[ "Spoken Term Detection, using structured prediction and deep networks, implementing a new loss function that both maximizes AUC and ranks according to a predefined threshold." ]
[ "Federated learning involves jointly learning over massively distributed partitions of data generated on remote devices.", "Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices.", "In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by resource allocation strategies in wireless networks that encourages a more fair accuracy distribution across devices in federated networks.", "To solve q-FFL, we devise a scalable method, q-FedAvg, that can run in federated networks.", "We validate both the improved fairness and flexibility of q-FFL and the efficiency of q-FedAvg through simulations on federated datasets.", "With the growing prevalence of IoT-type devices, data is frequently collected and processed outside of the data center and directly on distributed devices, such as wearable devices or mobile phones.", "Federated learning is a promising learning paradigm in this setting that pushes statistical model training to the edge (McMahan et al., 2017) .The", "number of devices in federated networks is generally large-ranging from hundreds to millions. While", "one can naturally view federated learning as a multi-task learning problem where each device corresponds to a task (Smith et al., 2017) , the focus is often instead to fit a single global model over these distributed devices/tasks via some empirical risk minimization objective (McMahan et al., 2017) . Naively", "minimizing the average loss via such an objective may disproportionately advantage or disadvantage some of the devices, which is exacerbated by the fact that the data are often heterogeneous across devices both in terms of size and distribution. In this", "work, we therefore ask: Can we devise an efficient optimization method to encourage a more fair distribution of the model performance across devices in federated networks?There has", "been tremendous recent interest in developing fair methods for machine learning. However,", "current methods that could help to improve the fairness of the accuracy distribution in federated networks are typically proposed for a much smaller number of devices, and may be impractical in federated settings due to the number of involved constraints BID5 . Recent work", "that has been proposed specifically for the federated setting has also only been applied at small scales (2-3 groups/devices), and lacks flexibility by optimizing only the performance of the single worst device (Mohri et al., 2019) .In this work", ", we propose q-FFL, a novel optimization objective that addresses fairness issues in federated learning. Inspired by", "work in fair resource allocation for wireless networks, q-FFL minimizes an aggregate reweighted loss parameterized by q such that the devices with higher loss are given higher relative weight to encourage less variance in the accuracy distribution. In addition", ", we propose a lightweight and scalable distributed method, qFedAvg, to efficiently solve q-FFL, which carefully accounts for important characteristics of the federated setting such as communication-efficiency and low participation of devices BID3 McMahan et al., 2017) . We empirically", "demonstrate the fairness, efficiency, and flexibility of q-FFL and q-FedAvg compared with existing baselines. On average, q-FFL", "is able to reduce the variance of accuracies across devices by 45% while maintaining the same overall average accuracy." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0, 0.10256409645080566, 0.35999998450279236, 0.34285715222358704, 0.21621620655059814, 0.04444443807005882, 0.1860465109348297, 0.1764705777168274, 0.12903225421905518, 0.178571417927742, 0.260869562625885, 0.0624999962747097, 0.2545454502105713, 0.1090909019112587, 0.4864864945411682, 0.145454540848732, 0.28070175647735596, 0.05714285373687744, 0.052631575614213943 ]
BJzvEqrj2V
true
[ "We propose a novel optimization objective that encourages fairness in heterogeneous federated networks, and develop a scalable method to solve it." ]
[ "We propose a novel autoencoding model called Pairwise Augmented GANs.", "We train a generator and an encoder jointly and in an adversarial manner.", "The generator network learns to sample realistic objects.", "In turn, the encoder network at the same time is trained to map the true data distribution to the prior in latent space.", "To ensure good reconstructions, we introduce an augmented adversarial reconstruction loss.", "Here we train a discriminator to distinguish two types of pairs: an object with its augmentation and the one with its reconstruction.", "We show that such adversarial loss compares objects based on the content rather than on the exact match.", "We experimentally demonstrate that our model generates samples and reconstructions of quality competitive with state-of-the-art on datasets MNIST, CIFAR10, CelebA and achieves good quantitative results on CIFAR10.", "Deep generative models are a powerful tool to sample complex high dimensional objects from a low dimensional manifold.", "The dominant approaches for learning such generative models are variational autoencoders (VAEs) and generative adversarial networks (GANs) BID12 .", "VAEs allow not only to generate samples from the data distribution, but also to encode the objects into the latent space.", "However, VAE-like models require a careful likelihood choice.", "Misspecifying one may lead to undesirable effects in samples and reconstructions (e.g., blurry images).", "On the contrary, GANs do not rely on an explicit likelihood and utilize more complex loss function provided by a discriminator.", "As a result, they produce higher quality images.", "However, the original formulation of GANs BID12 lacks an important encoding property that allows many practical applications.", "For example, it is used in semi-supervised learning , in a manipulation of object properties using low dimensional manifold BID7 and in an optimization utilizing the known structure of embeddings BID11 .VAE-GAN", "hybrids are of great interest due to their potential ability to learn latent representations like VAEs, while generating high-quality objects like GANs. In such", "generative models with a bidirectional mapping between the data space and the latent space one of the desired properties is to have good reconstructions (x ≈ G(E(x))). In many", "hybrid approaches BID30 BID34 BID36 BID3 BID33 as well as in VAE-like methods it is achieved by minimizing L 1 or L 2 pixel-wise norm between x and G(E(x)). However", ", the main drawback of using these standard reconstruction losses is that they enforce the generative model to recover too many unnecessary details of the source object x. For example", ", to reconstruct a bird picture we do not need an exact position of the bird on an image, but the pixel-wise loss penalizes a lot for shifted reconstructions. Recently,", "improved ALI model BID8 by introducing a reconstruction loss in the form of a discriminator which classifies pairs (x, x) and (x, G(E(x))). However,", "in such approach, the discriminator tends to detect the fake pair (x, G(E(x))) just by checking the identity of x and G(E(x)) which leads to vanishing gradients.In this paper, we propose a novel autoencoding model which matches the distributions in the data space and in the latent space independently as in BID36 . To ensure", "good reconstructions, we introduce an augmented adversarial reconstruction loss as a discriminator which classifies pairs (x, a(x)) and (x, G(E(x))) where a(·) is a stochastic augmentation function. 
This enforces", "the DISPLAYFORM0 discriminator to take into account content invariant to the augmentation, thus making training more robust. We call this", "approach Pairwise Augmented Generative Adversarial Networks (PAGANs).Measuring a reconstruction", "quality of autoencoding models is challenging. A standard reconstruction", "metric RMSE does not perform the content-based comparison. To deal with this problem", "we propose a novel metric Reconstruction Inception Dissimilarity (RID) which is robust to content-preserving transformations (e.g., small shifts of an image). We show qualitative results", "on common datasets such as MNIST BID19 , CIFAR10 BID17 and CelebA BID21 . PAGANs outperform existing", "VAE-GAN hybrids in Inception Score BID31 and Fréchet Inception Distance BID14 except for the recently announced method PD-WGAN BID10 on CIFAR10 dataset.", "In this paper, we proposed a novel framework with an augmented adversarial reconstruction loss.", "We introduced RID to estimate reconstructions quality for images.", "It was empirically shown that this metric could perform content-based comparison of reconstructed images.", "Using RID, we proved the value of augmentation in our experiments.", "We showed that the augmented adversarial loss in this framework plays a key role in getting not only good reconstructions but good generated images.Some open questions are still left for future work.", "More complex architectures may be used to achieve better IS and RID.", "The random shift augmentation may not the only possible choice, and other smart choices are also possible." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.4000000059604645, 0.19354838132858276, 0, 0, 0.25806450843811035, 0.19999998807907104, 0.1666666567325592, 0.2222222238779068, 0.0555555522441864, 0.10810810327529907, 0, 0.0714285671710968, 0.0555555522441864, 0.09756097197532654, 0.0714285671710968, 0.05405404791235924, 0.08163265138864517, 0.0476190410554409, 0.17391303181648254, 0, 0.12765957415103912, 0.21739129722118378, 0.2380952388048172, 0.1904761791229248, 0.21276594698429108, 0.052631575614213943, 0.13333332538604736, 0.20689654350280762, 0.1818181723356247, 0.25531914830207825, 0, 0.04878048226237297, 0.4117647111415863, 0.20689654350280762, 0.1764705777168274, 0.06451612710952759, 0.2745097875595093, 0, 0 ]
BJGfCjA5FX
true
[ "We propose a novel autoencoding model with augmented adversarial reconstruction loss. We intoduce new metric for content-based assessment of reconstructions. " ]
[ "We propose a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables.", "Inspired by dropout, a popular tool for regularization and model ensemble, we assign sparse priors to the weights in deep neural networks (DNN) in order to achieve automatic “dropout” and avoid over-fitting.", "By alternatively sampling from posterior distribution through stochastic gradient Markov Chain Monte Carlo (SG-MCMC) and optimizing latent variables via stochastic approximation (SA), the trajectory of the target weights is proved to converge to the true posterior distribution conditioned on optimal latent variables.", "This ensures a stronger regularization on the over-fitted parameter space and more accurate uncertainty quantification on the decisive variables.", "Simulations from large-p-small-n regressions showcase the robustness of this method when applied to models with latent variables.", "Additionally, its application on the convolutional neural networks (CNN) leads to state-of-the-art performance on MNIST and Fashion MNIST datasets and improved resistance to adversarial attacks.", "Bayesian deep learning, which evolved from Bayesian neural networks (Neal, 1996; BID4 , provides an alternative to point estimation due to its close connection to both Bayesian probability theory and cutting-edge deep learning models.", "It has been shown of the merit to quantify uncertainty BID6 , which not only increases the predictive power of DNN, but also further provides a more robust estimation to enhance AI safety.", "Particularly, BID5 BID3 described dropout (Srivastava et al., 2014) as a variational Bayesian approximation.", "Through enabling dropout in the testing period, the randomly dropped neurons generate some amount of uncertainty with almost no added cost.", "However, the dropout Bayesian approximation is variational inference (VI) based thus it is vulnerable to underestimating uncertainty.MCMC, known for its asymptotically accurate posterior inference, has not been fully investigated in DNN due to its unscalability in dealing with big data and large models.", "Stochastic gradient Langevin dynamics (SGLD) (Welling and Teh, 2011) , the first SG-MCMC algorithm, tackled this issue by adding noise to a standard stochastic gradient optimization, smoothing the transition between optimization and sampling.", "Considering the pathological curvature that causes the SGLD methods inefficient in DNN models, BID15 proposed combining adaptive preconditioners with SGLD (pSGLD) to adapt to the local geometry and obtained state-of-the-art performance on MNIST dataset.", "To avoid SGLD's random-walk behavior, BID3 proposed using stochastic gradient Hamiltonian Monte Carlo (SGHMC), a second-order Langevin dynamics with a large friction term, which was shown to have lower autocorrelation time and faster convergence BID2 .", "Saatci and Wilson (2017) used SGHMC with GANs BID8 ) to achieve a fully probabilistic inference and showed the Bayesian GAN model with only 100 labeled images was able to achieve 99.3% testing accuracy in MNIST dataset.", "Raginsky et al. (2017) ; Zhang et al. (2017) ; Xu et al. 
(2018) provided theoretical interpretations of SGLD from the perspective of non-convex optimization, echoing the empirical fact that SGLD works well in practice. When the number of predictors exceeds the number of observations, applying the spike-and-slab priors is particularly powerful and efficient to avoid over-fitting by assigning less probability mass on", "We propose a mixed sampling-optimization method called SG-MCMC-SA to efficiently sample from complex DNN posteriors with latent variables and prove its convergence.", "By adaptively searching and penalizing the over-fitted parameter space, the proposed method improves the generalizability of deep neural networks.", "This method is less affected by the hyperparameters, achieves higher prediction accuracy than the traditional SG-MCMC methods in both simulated examples and real applications, and shows more robustness towards adversarial attacks. Interesting future directions include applying SG-MCMC-SA to popular large deep learning models such as the residual network BID11 on CIFAR-10 and CIFAR-100, combining active learning and uncertainty quantification to learn from datasets of smaller size, and concretely proving posterior consistency and the consistency of variable selection under various shrinkage priors." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.9285714030265808, 0.1428571343421936, 0.12765957415103912, 0.13333332538604736, 0.2666666507720947, 0.05882352590560913, 0.1904761791229248, 0.1395348757505417, 0.1428571343421936, 0.060606054961681366, 0.11320754140615463, 0.09302324801683426, 0.09302324801683426, 0.12765957415103912, 0.1702127605676651, 0.032786883413791656, 0.4000000059604645, 0.06666666269302368, 0.0731707289814949 ]
S1grRoR9tQ
true
[ "a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables" ]
[ "Activity of populations of sensory neurons carries stimulus information in both the temporal and the spatial dimensions.", "This poses the question of how to compactly represent all the information that the population codes carry across all these dimensions.", "Here, we developed an analytical method to factorize a large number of retinal ganglion cells' spike trains into a robust low-dimensional representation that captures efficiently both their spatial and temporal information.", "In particular, we extended previously used single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity.", "On data recorded from retinal ganglion cells with strong pre-stimulus baseline, we showed that in situations were the stimulus elicits a strong change in firing rate, our extensions yield a boost in stimulus decoding performance.", "Our results thus suggest that taking into account the baseline can be important for finding a compact information-rich representation of neural activity.", "Populations of neurons encode sensory stimuli across the time dimension (temporal variations), the space dimension (different neuron identities), or along combinations of both dimensions BID1 BID16 BID17 BID10 BID19 .", "Consequently, understanding the neural code requires characterizing the firing patterns along these dimensions and linking them to the stimuli BID0 BID9 BID17 BID18 BID12 .", "There are many methods for compactly representing neural activity along their most relevant dimensions.", "These methods include Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Factor Analysis (FA) BID2 BID3 BID13 BID20 .", "Recently, a particularly promising tensor decomposition method was introduced that provides a compact representation of single trial neuronal activity into spatial and temporal dimensions and their combination in the given trial .", "The method is based on non-negative matrix factorization (NMF) BID14 BID6 BID21 which imposes non-negativity constraints on the extracted components leading to a parts-based, low dimensional, though flexible representation of the data, only assuming non-negativity of the model components.", "Though space-by-time NMF yielded robust decoding performance with a small number of parameters and good biological interpretability of its basis functions on data recorded from salamander retinal ganglion cells, the method does have a potential shortcoming: it cannot explicitly discount, and is partly confounded by, baseline activity that is not relevant for the neural response to a sensory stimulus.", "Although these non-negative tensor factorizations performed well on salamander retinal ganglion cells, which have almost non-existent spontaneous activity , it is not clear how well the method would perform on data with considerable spontaneous activity, which might require to explicitly correct for the pre-stimulus baseline.One way to reduce the baseline would be to subtract it from the stimulus-elicited response.", "This, however, would result in negative activities that cannot be modeled using a decomposition with full non-negativity constraints such as space-by-time NMF.", "In this study, we thus propose a variant of space-by-time NMF that discounts the baseline activity by subtracting the pre-stimulus baseline from each trial and then decomposes the baseline-corrected activity using a tri-factorization that finds non-negative spatial and temporal modules, and signed activation 
coefficients.", "We explored the benefits that this method provides on data recorded from mouse and pig retinal ganglion cells and showed that baseline-corrected space-by-time NMF improves decoding performance on data with non-negligible baselines and stimulus response changes.", "Here we introduced a novel computational approach to decompose single trial neural population spike trains into a small set of trial-invariant spatial and temporal firing patterns and into a set of activation coefficients that characterize single trials in terms of the identified patterns.", "To this end, we extended space-by-time non-negative matrix factorization to discount the neuronal pre-stimulus baseline activity.", "Subtraction of the baseline required the introduction of signed activation coefficients into the decomposition algorithm.", "This extension considerable widens the scope of applicability of the algorithm as it opens the possibility to decompose data that are inherently signed.Our method inherits many the advantages of the original space-by-time NMF decomposition such as yielding low-dimensional representations of neural activity that compactly carry stimulus information from both the spatial and temporal dimension.", "Using non-negativity constraints for the spatial and temporal modules, we could also retain the ability of space-by-time NMF to identify a partsbased representation of the concurrent spatial and temporal firing activity of the population.", "The factorization into space and time further still allows the quantification of the relative importance of these different dimensions on a trial-by-trial basis.Recently, introduced another tensor decomposition algorithm with the capacity to factorize signed data.", "Their algorithm differs from ours in that it introduces additional constraints for the spatial and temporal modules (cluster-NMF).", "Our algorithm, on the other hand, introduces no additional constraints, thereby facilitating the comparison with the original space-by-time NMF algorithm.", "In fact, our extension actually relaxes the non-negativity constraint for the activation coefficients without giving up the parts-based representation of the spatial and temporal modules.", "This made it possible to pinpoint the reason for the increase in performance as the introduction of the baseline-correction.While BC-SbT-NMF outperformed SbT-NMF overall on tasks with strong baseline activity, we also found that in a few cases, SbT-NMF performed better than BC-SbT-NMF.", "Previous studies showed that there is an effect of the baseline firing rate on the response BID5 BID8 .", "In these situations, the baseline might have an advantageous effect on the representation of neural responses and could lead to better decoding performance of SbT-NMF that we observed in some cases.", "One possibility to take this effect into account would be to devise a joint factorization-decoding framework that explicitly introduces the baseline into the optimization framework.", "While this is beyond the scope of the current work, we believe that development of such a framework is a promising direction for future research.In order to evaluate decoding performance, we applied LDA classification to the single trial activation coefficients to predict the stimulus identity and also to compare decoding performance of our baseline correction extension with the original space-by-time NMF decomposition.", "Specifically, we could show that our baseline-corrected version of space-by-time NMF increases decoding performance 
significantly when the difference between pre-stimulus baseline activity and stimulus-elicited rate was moderate to high.", "Importantly, this rate-change criterion makes it possible to select the best decomposition method (SbT-NMF vs. BC-SbT-NMF) following a simple data screening by means of the rate change.", "On our data, we obtained a relative difference in decoding performance on the order of 19.18% when picking the right method in this way and comparing to the inferior method. The requirement for such a rate change to perform well can be understood when considering the baseline-corrected activity.", "Without a substantial change from pre-stimulus to stimulus-elicited rate, most of the baseline-corrected activity will be close to zero.", "The Frobenius norm that is at the core of our objective function puts emphasis on high values and will be sensitive to outliers whenever most of the activity is close to zero.", "In this situation, our update rules are strongly affected by noise, thereby decreasing cross-validated decoding performance.", "In practical terms, this new method is expected to improve decoding performance when there is a large sensory-evoked response but the differences in responses across different sensory stimuli are of the order of spontaneous activity.", "In that case, the discounting of the spontaneous levels of firing would help to better discriminate among different stimuli based on neural responses.", "While the original space-by-time NMF algorithm could in principle identify spatial and temporal modules that fully account for the implicit baseline, the performance gain of our extension suggests that in practice, the original method cannot completely do so.", "Additional modules increase the model complexity and the number of parameters the method needs to fit, which lowers decoding performance.", "The discount of the baseline provides an elegant way to avoid this unnecessary complication." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.09302324801683426, 0.1090909019112587, 0.695652186870575, 0.2181818187236786, 0.12765957415103912, 0, 0.04255318641662598, 0.05128204822540283, 0, 0.15094339847564697, 0.20689654350280762, 0.25641024112701416, 0.24657534062862396, 0.1702127605676651, 0.19672130048274994, 0.3928571343421936, 0.06779660284519196, 0.4878048598766327, 0.10810810327529907, 0.17142856121063232, 0.11764705181121826, 0.24137930572032928, 0.04651162400841713, 0.1395348757505417, 0, 0.19354838132858276, 0.1428571343421936, 0.2222222238779068, 0.1304347813129425, 0.21333332359790802, 0.29629629850387573, 0.11764705181121826, 0.1515151411294937, 0.1395348757505417, 0.15094339847564697, 0.09756097197532654, 0.1428571343421936, 0.17391303181648254, 0.10526315122842789, 0.1395348757505417, 0.1538461446762085, 0 ]
Bki1Ct1AW
true
[ "We extended single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity that improves decoding performance on data with non-negligible baselines." ]
[ "We propose a framework for extreme learned image compression based on Generative Adversarial Networks (GANs), obtaining visually pleasing images at significantly lower bitrates than previous methods.", "This is made possible through our GAN formulation of learned compression combined with a generator/decoder which operates on the full-resolution image and is trained in combination with a multi-scale discriminator.", "Additionally, if a semantic label map of the original image is available, our method can fully synthesize unimportant regions in the decoded image such as streets and trees from the label map, therefore only requiring the storage of the preserved region and the semantic label map.", "A user study confirms that for low bitrates, our approach is preferred to state-of-the-art methods, even when they use more than double the bits.", "Image compression systems based on deep neural networks (DNNs), or deep compression systems for short, have become an active area of research recently.", "These systems (e.g. BID6 BID33 ) are often competitive with modern engineered codecs such as WebP (WebP), JPEG2000 BID37 ) and even BPG (Bellard) (the state-of-the-art engineered codec).", "Besides achieving competitive compression rates on natural images, they can be easily adapted to specific target domains such as stereo or medical images, and promise efficient processing and indexing directly from compressed representations BID41 .", "However, deep compression systems are typically optimized for traditional distortion metrics such as peak signal-to-noise ratio (PSNR) or multi-scale structural similarity (MS-SSIM) BID44 .", "For very low bitrates (below 0.1 bits per pixel (bpp)), where preserving the full image content becomes impossible, these distortion metrics lose significance as they favor pixel-wise preservation of local (high-entropy) structure over preserving texture and global structure.", "To further advance deep image compression it is therefore of great importance to develop new training objectives beyond PSNR and MS-SSIM.", "A promising candidate towards this goal are adversarial losses BID13 which were shown recently to capture global semantic information and local texture, yielding powerful generators that produce visually appealing high-resolution images from semantic label maps BID43 .In", "this paper, we propose and study a generative adversarial network (GAN)-based framework for extreme image compression, targeting bitrates below 0.1 bpp. We", "rely on a principled GAN formulation for deep image compression that allows for different degrees of content generation. In", "contrast to prior works on deep image compression which applied adversarial losses to image patches for artifact suppression BID33 BID12 , generation of texture details BID25 , or representation learning for thumbnail images BID35 , our generator/decoder operates on the full-resolution image and is trained with a multi-scale discriminator BID43 .We", "consider two modes of operation (corresponding to unconditional and conditional GANs BID13 BID31 ), namely• generative compression (GC), preserving the overall image content while generating structure of different scales such as leaves of trees or windows in the facade of buildings, and • selective generative compression (SC), completely generating parts of the image from a semantic label map while preserving user-defined regions with a high degree of detail.We emphasize that GC does not require semantic label maps (neither for training, nor for deployment). 
A", "typical use case for GC are bandwidth constrained scenarios, where one wants to preserve the full image as well as possible, while falling back to synthesized content instead of blocky/blurry blobs for regions for which not sufficient bits are available to store the original pixels. SC", "could be applied in a video call scenario where one wants to fully preserve people in the video stream, but a visually pleasing synthesized background serves the purpose as well as the true background.In the GC operation mode the image is transformed into a bitstream and encoded using arithmetic coding. SC", "requires a semantic/instance label map of the original image which can be obtained using off-the-shelf semantic/instance segmentation networks, e.g., PSPNet and Mask R-CNN BID18 , and which is stored as a vector graphic. This", "amounts to a small, image dimension-independent overhead in terms of coding cost. On the", "other hand, the size of the compressed image is reduced proportionally to the area which is generated from the semantic label map, typically leading to a significant overall reduction in storage cost.For GC, a comprehensive user study shows that our compression system yields visually considerably more appealing results than BPG (Bellard) (the current state-of-the-art engineered compression algorithm) and the recently proposed autoencoder-based deep compression (AEDC) system . In particular", ", our GC models trained for compression of general natural images are preferred to BPG when BPG uses up to 95% and 124% more bits than those produced by our models on the Kodak (Kodak) and RAISE1K BID11 data set, respectively. When constraining", "the target domain to the street scene images of the Cityscapes data set BID9 , the reconstructions of our GC models are preferred to BPG even when the latter uses up to 181% more bits. To the best of our", "knowledge, these are the first results showing that a deep compression method outperforms BPG on the Kodak data set in a user study-and by large margins. In the SC operation", "mode, our system seamlessly combines preserved image content with synthesized content, even for regions that cross multiple object boundaries, while faithfully preserving the image semantics. 
By partially generating", "image content we achieve bitrate reductions of over 50% without notably degrading image quality.", "The GC models produce images with much finer detail than BPG, which suffers from smoothed patches and blocking artifacts.", "In particular, the GC models convincingly reconstruct texture in natural objects such as trees, water, and sky, and is most challenged with scenes involving humans.", "AEDC and the MSE baseline both produce blurry images.We see that the gains of our models are maximal at extreme bitrates, with BPG needing 95-181% more bits for the C = 2, 4 models on the three datasets.", "For C = 8 gains are smaller but still very large (BPG needing 21-49% more bits).", "This is expected, since as the bitrate increases the classical compression measures (PSNR/MS-SSIM) become more meaningful-and our system does not employ the full complexity of current state-of-the-art systems, as discussed next.State-of-the-art on Kodak: We give an overview of relevant recent learned compression methods and their differences to our GC method and BPG in Table 1 in the Appendix.", "BID33 also used GANs (albeit a different formulation) and were state-of-the-art in MS-SSIM in 2017, while the concurrent work of is the current state-of-the-art in image compression in terms of classical metrics (PSNR and MS-SSIM) when measured on the Kodak dataset (Kodak).", "Notably, all methods except ours (BPG, Rippel et al., and Minnen et al.) employ adaptive arithmetic coding using context models for improved compression performance.", "Such models could also be implemented for our system, and have led to additional savings of 10% in .", "Since Rippel et al. and Minnen et al. have only released a selection of their decoded images (for 3 and 4, respectively, out of the 24 Kodak images), and at significantly higher bitrates, a comparison with a user study is not meaningful.", "Instead, we try to qualitatively put our results into context with theirs.In Figs. 12-14 in the Appendix, we compare qualitatively to BID33 .", "We can observe that even though BID33 use 29-179% more bits, our models produce images of comparable or better quality.", "In FIG1 , we show a qualitative comparison of our results to the images provided by the concurrent work of , as well as to BPG (Bellard) on those images.", "First, we see that BPG is still visually competitive with the current state-of-the-art, which is consistent with moderate 8.41% bitrate savings being reported by in terms of PSNR.", "Second, even though we use much fewer bits compared to the example images available from , for some of them (Figs. 15 and 16) our method can still produce images of comparable visual quality.Given the dramatic bitrate savings we achieve according to the user study (BPG needing 21-181% more bits), and the competitiveness of BPG to the most recent state-of-the-art , we conclude that our proposed system presents a significant step forward for visually pleasing compression at extreme bitrates.Sampling the compressed representations: In FIG1 we explore the representation learned by our GC models (with C = 4), by sampling the (discrete) latent space ofŵ.", "When we sample uniformly, and decode with our GC model into images, we obtain a \"soup of image patches\" which reflects the domain the models were trained on (e.g. 
street sign and building patches on Cityscapes).", "Note that we should not expect these outputs to look like normal images, since nothing forces the encoder outputŵ to be uniformly distributed over the discrete latent space.However, given the low dimensionality ofŵ (32 × 64 × 4 for 512 × 1024px Cityscape images), it would be interesting to try to learn the true distribution.", "To this end, we perform a simple experiment and train an improved Wasserstein GAN (WGAN-GP) BID14 onŵ extracted from Cityscapes, using default parameters and a ResNet architecture.", "4 By feeding our GC model with samples from the WGAN-GP generator, we easily obtain a powerful generative model, which generates sharp 1024 × 512px images from scratch.", "We think this could be a promising direction for building high-resolution generative models.", "In FIG5 in the Appendix, we show more samples, and samples obtained by feeding the MSE baseline with uniform and learned code samples.", "The latter yields noisier \"patch soups\" and much blurrier image samples than our GC network.", "Figure 6 : Mean IoU as a function of bpp on the Cityscapes validation set for our GC and SC networks, and for the MSE baseline.", "We show both SC modes: RI (inst.), RB (box).", "D + annotates models where instance semantic label maps are fed to the discriminator (only during training); EDG + indicates that semantic label maps are used both for training and deployment.", "The pix2pixHD baseline BID43 was trained from scratch for 50 epochs, using the same downsampled 1024 × 512px training images as for our method.", "The heatmaps in the lower left corners show the synthesized parts in gray.", "We show the bpp of each image as well as the relative savings due to the selective generation.In FIG2 we present example Cityscapes validation images produced by the SC network trained in the RI mode with C = 8, where different semantic classes are preserved.", "More visual results for the SC networks trained on Cityscapes can be found in Appendix F.7, including results obtained for the RB operation mode and by using semantic label maps estimated from the input image via PSPNet .Discussion", ": The quantitative evaluation of the semantic preservation capacity (Fig. 6 ) reveals that the SC networks preserve the semantics somewhat better than pix2pixHD, indicating that the SC networks faithfully generate texture from the label maps and plausibly combine generated with preserved image content. The mIoU", "of BPG, AEDC, and the MSE baseline is considerably lower than that obtained by our SC and GC models, which can arguably be attributed to blurring and blocking artifacts. However,", "it is not surprising as these baseline methods do not use label maps during training and prediction.In the SC operation mode, our networks manage to seamlessly merge preserved and generated image content both when preserving object instances and boxes crossing object boundaries (see Appendix F.7). Further,", "our networks lead to reductions in bpp of 50% and more compared to the same networks without synthesis, while leaving the visual quality essentially unimpaired, when objects with repetitive structure are synthesized (such as trees, streets, and sky). In some", "cases, the visual quality is even better than that of BPG at the same bitrate. The visual", "quality of more complex synthesized objects (e.g. buildings, people) is worse. However, this", "is a limitation of current GAN technology rather than our approach. 
As the visual", "quality of GANs improves further, SC networks will as well. Notably, the", "SC networks can generate entire images from the semantic label map only.Finally, the semantic label map, which requires 0.036 bpp on average for the downscaled 1024 × 512px Cityscapes images, represents a relatively large overhead compared to the storage cost of the preserved image parts. This cost vanishes", "as the image size increases, since the semantic mask can be stored as an image dimension-independent vector graphic.", "We proposed and evaluated a GAN-based framework for learned compression that significantly outperforms prior works for low bitrates in terms of visual quality, for compression of natural images.", "Furthermore, we demonstrated that constraining the application domain to street scene images leads to additional storage savings, and we explored combining synthesized with preserved image content with the potential to achieve even larger savings.", "Interesting directions for future work are to develop a mechanism for controlling spatial allocation of bits for GC (e.g. to achieve better preservation of faces; possibly using semantic label maps), and to combine SC with saliency information to determine what regions to preserve.", "In addition, the sampling experiments presented in Sec. 6.1 indicate that combining our GC compression approach with GANs to (unconditionally) generate compressed representations is a promising avenue to learn high-resolution generative models." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17777776718139648, 0.17391303181648254, 0.15094339847564697, 0.1395348757505417, 0.10256409645080566, 0.04347825422883034, 0.039215680211782455, 0.0476190410554409, 0.178571417927742, 0.14999999105930328, 0, 0.0952380895614624, 0.1621621549129486, 0.1269841194152832, 0.1395348757505417, 0.17241378128528595, 0.09836065024137497, 0.1599999964237213, 0.1818181723356247, 0.1538461446762085, 0.17543859779834747, 0.12244897335767746, 0.1304347813129425, 0.17391303181648254, 0.1875, 0.052631575614213943, 0.04651162400841713, 0.14814814925193787, 0, 0.11594202369451523, 0.18867924809455872, 0.09302324801683426, 0.05405404791235924, 0.07407406717538834, 0.051282044500112534, 0.10256409645080566, 0.09302324801683426, 0.08695651590824127, 0.1538461446762085, 0.11538460850715637, 0.030303025618195534, 0.045454539358615875, 0.04347825422883034, 0, 0.051282044500112534, 0.11764705181121826, 0.0952380895614624, 0, 0.04444443807005882, 0.1428571343421936, 0.06666666269302368, 0.09999999403953552, 0.14814814925193787, 0.14035087823867798, 0.12765957415103912, 0.0952380895614624, 0.1818181723356247, 0.29411762952804565, 0.12121211737394333, 0.24242423474788666, 0.19354838132858276, 0.09677419066429138, 0.11764705181121826, 0.1860465109348297, 0.0833333283662796, 0.1071428507566452, 0.07843136787414551 ]
HygtHnR5tQ
true
[ "GAN-based extreme image compression method using less than half the bits of the SOTA engineered codec while preserving visual quality" ]
[ "In learning to rank, one is interested in optimising the global ordering of a list of items according to their utility for users.", "Popular approaches learn a scoring function that scores items individually (i.e. without the context of other items in the list) by optimising a pointwise, pairwise or listwise loss.", "The list is then sorted in the descending order of the scores.", "Possible interactions between items present in the same list are taken into account in the training phase at the loss level.", "However, during inference, items are scored individually, and possible interactions between them are not considered.", "In this paper, we propose a context-aware neural network model that learns item scores by applying a self-attention mechanism.", "The relevance of a given item is thus determined in the context of all other items present in the list, both in training and in inference.", "Finally, we empirically demonstrate significant performance gains of self-attention based neural architecture over Multi-Layer Perceptron baselines.", "This effect is consistent across popular pointwise, pairwise and listwise losses on datasets with both implicit and explicit relevance feedback.", "Learning to rank (LTR) is an important area of machine learning research, lying at the core of many information retrieval (IR) systems.", "It arises in numerous industrial applications like search engines, recommender systems, question-answering systems, and others.", "A typical machine learning solution to the LTR problem involves learning a scoring function, which assigns real-valued scores to each item of a given list, based on a dataset of item features and human-curated or implicit (e.g. clickthrough logs) relevance labels.", "Items are then sorted in the descending order of scores [19] .", "Performance of the trained scoring function is usually evaluated using an Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.", "Copyrights for third-party components of this work must be honored.", "For all other uses, contact the owner/author(s).", "The Web Conference, April, 2020, Taipei, Taiwan © 2019 Copyright held by the owner/author(s).", "ACM ISBN 978-x-xxxx-xxxx-x/YY/MM.", "https://doi.org/10.1145/nnnnnnn.nnnnnnn IR metric like Mean Reciprocal Rank (MRR) [29] , Normalised Discounted Cumulative Gain (NDCG) [16] or Mean Average Precision (MAP) [4] .", "In contrast to other classic machine learning problems like classification or regression, the main goal of a ranking algorithm is to determine relative preference among a group of items.", "Scoring items individually is a proxy of the actual learning to rank task.", "Users' preference for a given item on a list depends on other items present in the same list: an otherwise preferable item might become less relevant in the presence of other, more relevant items.", "Common learning to rank algorithms attempt to model such inter-item dependencies at the loss level.", "That is, items in a list are still scored individually, but the effect of their interactions on evaluation metrics is accounted for in the loss function, which usually takes a form of a pairwise (RankNet [6] , LambdaLoss [30] ) or a listwise (ListNet [9] , ListMLE [31] ) objective.", "For example, in LambdaMART [8] the gradient of the pairwise loss is rescaled by the change 
in NDCG of the list which would occur if a pair of items was swapped.", "Pointwise objectives, on the other hand, do not take such dependencies into account.", "In this work, we propose a learnable, context-aware, self-attention [27] based scoring function, which allows for modelling of interitem dependencies not only at the loss level but also in the computation of items' scores.", "Self-attention is a mechanism first introduced in the context of natural language processing.", "Unlike RNNs [14] , it does not process the input items sequentially but allows the model to attend to different parts of the input regardless of their distance from the currently processed item.", "We adapt the Transformer [27] , a popular self-attention based neural machine translation architecture, to the ranking task.", "We demonstrate that the obtained ranking model significantly improves performance over Multi-Layer Perceptron (MLP) baselines across a range of pointwise, pairwise and listwise ranking losses.", "Evaluation is conducted on MSLR-WEB30K [24] , the benchmark LTR dataset with multi-level relevance judgements, as well as on clickthrough data coming from Allegro.pl, a large-scale e-commerce search engine.", "We provide an open-source Pytorch [22] implementation of our self-attentive context-aware ranker available at url_removed.", "The rest of the paper is organised as follows.", "In Section 2 we review related work.", "In Section 3 we formulate the problem solved in this work.", "In Section 4 we describe our self-attentive ranking model.", "Experimental results and their discussion are presented in Section 5.", "In Section 6 we conduct an ablation study of various hyperparameters of our model.", "Finally, a summary of our work is given in Section 7.", "In this work, we addressed the problem of constructing a contextaware scoring function for learning to rank.", "We adapted the selfattention based Transformer architecture from the neural machine translation literature to propose a new type of scoring function for LTR.", "We demonstrated considerable performance gains of proposed neural architecture over MLP baselines across different losses and types of data, both in ranking and re-ranking setting.", "These experiments provide strong evidence that the gains are due to the ability of the model to score items simultaneously.", "As a result of our empirical study, we observed the strong performance of models trained to optimise ordinal loss function.", "Such models outperformed models trained with well-studied losses like LambdaLoss or LambdaMART, which were previously shown to provide tight bounds on IR metrics like NDCG.", "On the other hand, we observed the surprisingly poor performance of models trained to optimise RankNet and ListMLE losses.", "In future work, we plan to investigate the reasons for both good and poor performance of the aforementioned losses, in particular, the relation between ordinal loss and NDCG." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1428571343421936, 0.060606058686971664, 0.1111111044883728, 0.07999999821186066, 0, 0, 0.0714285671710968, 0.08695651590824127, 0, 0.2857142686843872, 0, 0.09302325546741486, 0.1111111044883728, 0.11320754140615463, 0, 0.1428571343421936, 0.0952380895614624, 0, 0, 0.12121211737394333, 0.29999998211860657, 0.05882352590560913, 0.2857142686843872, 0.040816325694322586, 0.0624999962747097, 0.09999999403953552, 0.05128204822540283, 0.09999999403953552, 0.11764705926179886, 0.25, 0.06451612710952759, 0.05714285373687744, 0, 0.1249999925494194, 0, 0.1111111044883728, 0, 0, 0, 0, 0.25, 0.27586206793785095, 0.06666666269302368, 0.1666666567325592, 0.1538461446762085, 0.06666666269302368, 0.1599999964237213, 0.125 ]
BJxcAX7iYB
true
[ "Learning to rank using the Transformer architecture." ]
[ "We propose an active learning algorithmic architecture, capable of organizing its learning process in order to achieve a field of complex tasks by learning sequences of primitive motor policies : Socially Guided Intrinsic Motivation with Procedure Babbling (SGIM-PB).", "The learner can generalize over its experience to continuously learn new outcomes, by choosing actively what and how to learn guided by empirical measures of its own progress.", "In this paper, we are considering the learning of a set of interrelated complex outcomes hierarchically organized.\n\n", "We introduce a new framework called \"procedures\", which enables the autonomous discovery of how to combine previously learned skills in order to learn increasingly more complex motor policies (combinations of primitive motor policies).", "Our architecture can actively decide which outcome to focus on and which exploration strategy to apply.", "Those strategies could be autonomous exploration, or active social guidance, where it relies on the expertise of a human teacher providing demonstrations at the learner's request.", "We show on a simulated environment that our new architecture is capable of tackling the learning of complex motor policies, to adapt the complexity of its policies to the task at hand.", "We also show that our \"procedures\" increases the agent's capability to learn complex tasks.", "Recently, efforts in the robotic industry and academic field have been made for integrating robots in previously human only environments.", "In such a context, the ability for service robots to continuously learn new skills, autonomously or guided by their human counterparts, has become necessary.", "They would be needed to carry out multiple tasks, especially in open environments, which is still an ongoing challenge in robotic learning.", "Those tasks can be independent and self-contained but they can also be complex and interrelated, needing to combine learned skills from simpler tasks to be tackled efficiently.The range of tasks those robots need to learn can be wide and even change after the deployment of the robot, we are therefore taking inspiration from the field of developmental psychology to give the robot the ability to learn.", "Taking a developmental robotic approach BID13 , we combine the approaches of active motor skill learning of multiple tasks, interactive learning and strategical learning into a new learning algorithm and we show its capability to learn a mapping between a continuous space of parametrized outcomes (sometimes referred to as tasks) and a space of parametrized motor policies (sometimes referred to as actions).", "With this experiment, we show the capability of SGIM-PB to tackle the learning of a set of multiple interrelated complex tasks.", "It successfully discovers the hierarchy between tasks and uses complex motor policies to learn a wider range of tasks.", "It is capable to correctly choose the most adapted teachers to the target outcome when available.", "Though it is not limited in the size of policies it could execute, the learner shows it could adapt the complexity of its policies to the task at hand.The procedures greatly improved the learning capability of autonomous learners, as shows the difference between IM-PB and SAGG-RIAC .", "Our SGIM-PB shows it is capable to use procedures to discover the task hierarchy and exploit the inverse model of previously learned skills.", "More importantly, it shows it can successfully combine the ability of SGIM-ACTS to progress quickly in the 
beginning (owing to the mimicry teachers) and the ability of IM-PB to progress further on highly hierarchical tasks (owing to the procedure framework).In", "this article, we aimed to enable a robot to learn sequences of actions of undetermined length to achieve a field of outcomes. To", "tackle this high-dimensionality learning between a continuous high-dimensional space of outcomes and a continuous infinite dimensionality space of sequences of actions , we used techniques that have proven efficient in previous studies: goal-babbling, social guidance and strategic learning based on intrinsic motivation. We", "extended them with the procedures framework and proposed SGIM-PB algorithm, allowing the robot to babble in the procedure space and to imitate procedural teachers. We", "showed that SGIM-PB can discover the hierarchy between tasks, learn to reach complex tasks while adapting the complexity of the policy. The", "study shows that :• procedures allow the learner to learn complex tasks, and adapt the length of sequences of actions to the complexity of the task • social guidance bootstraps the learning owing to demonstrations of primitive policy in the beginning, and then to demonstrations of procedures to learn how to compose tasks into sequences of actions • intrinsic motivation can be used as a common criteria for active learning for the robot to choose both its exploration strategy, its goal outcomes and the goal-oriented procedures.However a precise analysis of the impact of each of the different strategies used by our learning algorithm could give us more insight in the roles of the teachers and procedures framework. Also", ", we aim to illustrate the potency of our SGIM-PB learner on a real-world application. We", "are currently designing such an experiment with a real robotic platform.Besides, the procedures are defined as combinations of any number of subtasks but are used in the illustration experiment as only combinations of two subtasks. It", "could be a next step to see if the learning algorithm can handle the curse of dimensionality of a larger procedure space, and explore combinations of any number of subtasks. Moreover", ", the algorithm can be extended to allow the robot learner to decide on how to execute a procedure. In the", "current version, we have proposed the \"refinement process\" to infer the best policy. We could", "make this refinement process more recursive, by allowing the algorithm to select, not only policies, but also lower-level procedures as one of the policy components." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23999999463558197, 0.09999999403953552, 0.3030303120613098, 0.30434781312942505, 0.06666666269302368, 0.1463414579629898, 0.3255814015865326, 0.13333332538604736, 0.05714285373687744, 0.09999999403953552, 0.10810810327529907, 0.12903225421905518, 0.24561403691768646, 0.29411762952804565, 0.3529411852359772, 0.06666666269302368, 0.19607841968536377, 0.10810810327529907, 0.08888888359069824, 0.11764705181121826, 0.1538461446762085, 0.05405404791235924, 0.2222222238779068, 0.1318681240081787, 0.1875, 0.13636362552642822, 0.2380952388048172, 0.1818181723356247, 0.06666666269302368, 0.14999999105930328 ]
HyW0afxKM
true
[ "The paper describes a strategic intrinsically motivated learning algorithm which tackles the learning of complex motor policies." ]
[ "Monte Carlo Tree Search (MCTS) has achieved impressive results on a range of discrete environments, such as Go, Mario and Arcade games, but it has not yet fulfilled its true potential in continuous domains.", "In this work, we introduceTPO, a tree search based policy optimization method for continuous environments.", "TPO takes a hybrid approach to policy optimization. ", "Building the MCTS tree in a continuous action space and updating the policy gradient using off-policy MCTS trajectories are non-trivial.", "To overcome these challenges, we propose limiting tree search branching factor by drawing only few action samples from the policy distribution and define a new loss function based on the trajectories’ mean and standard deviations. ", "Our approach led to some non-intuitive findings. ", "MCTS training generally requires a large number of samples and simulations.", "However, we observed that bootstrappingtree search with a pre-trained policy allows us to achieve high quality results with a low MCTS branching factor and few number of simulations.", "Without the proposed policy bootstrapping, continuous MCTS would require a much larger branching factor and simulation count, rendering it computationally and prohibitively expensive.", "In our experiments, we use PPO as our baseline policy optimization algorithm.", "TPO significantly improves the policy on nearly all of our benchmarks. ", "For example, in complex environments such as Humanoid, we achieve a 2.5×improvement over the baseline algorithm.", "Fueled by advances in neural representation learning, the field of model-free reinforcement learning has rapidly evolved over the past few years.", "These advances are due in part to the advent of algorithms capable of navigating larger action spaces and longer time horizons [2, 32, 42] , as well as the distribution of data collection and training across massive-scale computing resources [42, 38, 16, 32] .", "While learning algorithms have been continuously improving, it is undeniable that tree search methods have played a large role in some of the most successful applications of RL (e.g., AlphaZero [42] , Mario [10] and Arcade games [48] ).", "Tree search methods enable powerful explorations of the action space in a way which is guided by the topology of the search space, focusing on branches (actions) that are more promising.", "Although tree search methods have achieved impressive results on a range of discrete domains, they have not yet fulfilled their true potential in continuous domains.", "Given that the number of actions is inherently unbounded in continuous domains, traditional approaches to building the search tree become intractable from a computational perspective.", "In this paper, we introduce TPO, a Tree Search Policy Optimization for environments with continuous action spaces.", "We address the challenges of building the tree and running simulations by adopting a hybrid method, in which we first train a policy using existing model-free RL methods, and then use the pre-trained policy distribution to draw actions with which to build the tree.", "Once the tree has been constructed, we run simulations to generate experiences using an Upper Confidence Bounds for Trees (UCT) approach [33] .", "Populating the tree with the action samples drawn from a pre-trained policy enables us to perform a computationally feasible search.", "TPO is a variation of the policy iteration method [35, 44, 42, 1] .", "Broadly, in these methods, the behavior of policy is iteratively updated using the 
trajectories generated by an expert policy.", "Then, the newly updated policy in return guides the expert to generate higher quality samples.", "In TPO, we use tree search as an expert to generate high quality trajectories.", "Later, we employ the updated policy to re-populate the tree search.", "For tree search, we use the Monte Carlo Tree Search (MCTS) [5] expansion and selection methods.", "However, it is challenging to directly infer the probability of selected actions for rollout; unlike in discrete domains where all actions can be exhaustively explored, in continuous domains, we cannot sample more than a subset of the effectively innumerable continuous action space.", "Furthermore, to use the trajectories generated by MCTS, we must perform off-policy optimization.", "To address this challenge, we define a new loss function that uses the weighted mean and standard deviation of the tree search statistics to update the pre-trained policy.", "For ease of implementation and scalability, we use Proximal Policy Optimization (PPO) [38] and choose as our policy optimization baseline.", "In phase 1, we perform a policy gradient based optimization training to build a target policy.", "In phase 2, we iteratively build an MCTS tree using the pre-trained target policy and update the target policy using roll-out trajectories from MCTS.", "Both training and data collection are done in a distributed manner.", "Our approach led to some non-intuitive findings.", "MCTS training generally requires a large number of branches and simulations.", "For example, AlphaGo uses 1600 simulations per tree search and a branching factor of up to 362 [40] .", "However, we observed that if we pre-train the policy, we require far fewer simulations to generate high quality trajectories.", "While we do benefit from exploring a greater number of branches, especially for higher dimensional action spaces (e.g. Humanoid), we observed diminishing returns after only a small number of branches (e.g., 32) across all of the evaluated environments.", "Furthermore, performance quickly plateaued as we increased the number of simulations past 32.", "This property did not hold when we initialized tree search with an untrained policy.", "This is a critical advantage of our method as it would otherwise be computationally infeasible to generate high quality trajectories using tree search.", "The main contributions of TPO are summarized as follows:", "1. Tree search policy optimization for continuous action spaces.", "TPO is one of the very first techniques that integrates tree search into policy optimization for continuous action spaces.", "This unique integration of tree search into policy optimization yields a superior performance compared to baseline policy optimization techniques for continuous action spaces.", "2. Policy bootstrapping.", "We propose a policy bootstrapping technique that significantly improves the sample efficiency of the tree search and enables us to discretize continuous action spaces into only a few number of highly probable actions.", "More specifically, TPO only performs 32 tree searches compared to substantially larger number of tree searches (1600, 50× more) in AlphaGo [42] .", "In addition, TPO narrows down the number of tree expansion (actions) compared to discretization techniques such as Tang et al. 
[45] which requires 7-11 bins per action dimension.", "This number of bins translates to a prohibitively large number of actions even in discrete domain for complex environments such as Humanoid which has a 17 dimensional action space.", "In contrast, TPO only samples 32 actions at each simulation step across all the environments.", "3. Infrastructure and results.", "On the infrastructure side, we developed a distributed system (shown in Figure 1 ), in which both policy optimization and data collection are performed on separate distributed platforms.", "The policy optimization is done on a TPU-v2 using multiple cores, and MCTS search is performed on a rack of CPU nodes.", "A synchronous policy update and data collection approach is used to train the policy and generate trajectories.", "TPO readily extends to challenging and high-dimensional tasks, such the Humanoid benchmark [9] .", "Our empirical results indicate that TPO significantly improves the performance of the baseline policy optimization algorithm, achieving up to 2.5× improvement.", "In this paper, we have studied Monte Carlo Tree Search in continuous space for improving the performance of a baseline on-policy algorithm [38] .", "Our results show that MCTS policy optimization can indeed improve the quality of policy in choosing better actions during policy evaluation at the cost of more samples during MCTS rollout.", "We show that bootstrapping tree search with a pretrained policy enables us to achieve high performance with a low MCTS branching factor and few simulations.", "On the other hand, without pre-training, we require a much larger branching factor and simulation count, rendering MCTS computationally infeasible.", "One of the future research direction is to explore techniques for improving the sample efficiency and removing the need for having a reset-able environment.", "To achieve these goals, we can use a trained model of the environment similar to model-based reinforcement learning approaches [20, 23, 6] , instead of interacting directly with the environment in MCTS.", "Recently, MBPO [20] showed that they can train a model of Mujoco environments that is accurate enough for nearly 200-step rollouts in terms of accumulated rewards.", "This level of accuracy horizon is more than enough for the shallow MCTS simulations (32 simulations) that is employed in TPO.", "As mentioned earlier, TPO assumes access to an environment that can be restarted from an arbitrary state for MCTS simulations.", "While this assumption can be readily satisfied in some RL problems such as playing games, it may be harder to achieve for physical RL problems like Robotics.", "This assumption can also be relaxed using a modeled environment to replace the interactions with the real environment during MCTS simulations." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08163265138864517, 0.25806450843811035, 0.23999999463558197, 0.29411762952804565, 0.11999999731779099, 0.0833333283662796, 0.14814814925193787, 0.1904761791229248, 0.21052631735801697, 0.14814814925193787, 0.0714285671710968, 0.060606054961681366, 0, 0.1111111044883728, 0.03703703358769417, 0.09302324801683426, 0.09999999403953552, 0.14999999105930328, 0.3030303120613098, 0.19607841968536377, 0.10526315122842789, 0.23529411852359772, 0.20689654350280762, 0.060606054961681366, 0.13333332538604736, 0.13333332538604736, 0.1538461446762085, 0.0624999962747097, 0.18867924809455872, 0.13793103396892548, 0.1428571343421936, 0.11428570747375488, 0.19999998807907104, 0.11428570747375488, 0.07407406717538834, 0.08695651590824127, 0.14814814925193787, 0.11764705181121826, 0.060606054961681366, 0.1599999964237213, 0, 0.06666666269302368, 0.10256409645080566, 0, 0.4000000059604645, 0.2857142686843872, 0.37837836146354675, 0, 0.30434781312942505, 0.0555555522441864, 0.09090908616781235, 0.1904761791229248, 0, 0, 0.0952380895614624, 0.17142856121063232, 0.12903225421905518, 0.06896550953388214, 0.10810810327529907, 0.1538461446762085, 0.09999999403953552, 0.25641024112701416, 0.1111111044883728, 0.1621621549129486, 0.17777776718139648, 0.09999999403953552, 0.1111111044883728, 0.17142856121063232, 0.09999999403953552, 0.17142856121063232 ]
HJew70NYvH
true
[ "We use MCTS to further optimize a bootstrapped policy for continuous action spaces under a policy iteration setting." ]
[ "The variational autoencoder (VAE) has found success in modelling the manifold of natural images on certain datasets, allowing meaningful images to be generated while interpolating or extrapolating in the latent code space, but it is unclear whether similar capabilities are feasible for text considering its discrete nature.", "In this work, we investigate the reason why unsupervised learning of controllable representations fails for text.", "We find that traditional sequence VAEs can learn disentangled representations through their latent codes to some extent, but they often fail to properly decode when the latent factor is being manipulated, because the manipulated codes often land in holes or vacant regions in the aggregated posterior latent space, which the decoding network is not trained to process.", "Both as a validation of the explanation and as a fix to the problem, we propose to constrain the posterior mean to a learned probability simplex, and performs manipulation within this simplex.", "Our proposed method mitigates the latent vacancy problem and achieves the first success in unsupervised learning of controllable representations for text.", "Empirically, our method significantly outperforms unsupervised baselines and is competitive with strong supervised approaches on text style transfer.", "Furthermore, when switching the latent factor (e.g., topic) during a long sentence generation, our proposed framework can often complete the sentence in a seemingly natural way -- a capability that has never been attempted by previous methods.", "High-dimensional data, such as images and text, are often causally generated through the interaction of many complex factors, such as lighting and pose in images or style and content in texts.", "Recently, VAEs and other unsupervised generative models have found successes in modelling the manifold of natural images (Higgins et al., 2017; Kumar et al., 2017; Chen et al., 2016) .", "These models often discover controllable latent factors that allow manipulation of the images through conditional generation from interpolated or extrapolated latent codes, often with impressive quality.", "On the other hand, while various attributes of text such as sentiment and topic can be discovered in an unsupervised way, manipulating the text by changing these learned factors have not been possible with unsupervised generative models to the best of our knowledge.", "Cífka et al. (2018) ; Zhao et al. 
(2018) observed that text manipulation is generally more challenging compared to images, and the successes of these models cannot be directly transferred to texts.", "Controllable text generation aims at generating realistic text with control over various attributes including sentiment, topic and other high-level properties.", "Besides being a scientific curiosity, the possibility of unsupervised controllable text generation could help in a wide range of application, e.g., dialogues systems (Wen et al., 2016) .", "Existing promising progress (Shen et al., 2017; Fu et al., 2018; Li et al., 2018; Sudhakar et al., 2019) all relies on supervised learning from annotated attributes to generate the text in a controllable fashion.", "The high cost of labelling large training corpora with attributes of interest limits the usage of these models, as pre-existing annotations often do not align with some downstream goal.", "Even if cheap labels are available, for example, review scores as a proxy for sentiment, the control is limited to the variation defined by the attributes.", "In this work, we examine the obstacles that prevent sequence VAEs from performing well in unsupervised controllable text generation.", "We empirically discover that manipulating the latent factors for typical semantic variations often leads to latent codes that reside in some low-density region of the aggregated posterior distribution.", "In other words, there are vacant regions in the latent code space (Makhzani et al., 2015; Rezende & Viola, 2018) not being considered by the decoding network, at least not at convergence.", "As a result, the decoding network is unable to process such manipulated latent codes, yielding unpredictable generation results of low quality.", "In order to mitigate the latent vacancy problem, we propose to constrain the posterior mean to a learned probability simplex and only perform manipulation within the probability simplex.", "Two regularizers are added to the original objective of VAE.", "The first enforces an orthogonal structure of the learned probability simplex; the other encourages this simplex to be filled without holes.", "Besides confirming that latent vacancy is indeed a cause of failure in previous sequence VAEs', it is also the first successful attempt towards unsupervised learning of controllable representations for text to the best of our knowledge.", "Experimental results on text style transfer show that our approach significantly outperforms unsupervised baselines, and is competitive with strong supervised approaches across a wide range of evaluation metrics.", "Our proposed framework also enables finer-grained and more flexible control over text generation.", "In particular, we can switch the topic in the middle of sentence generation, and the model will often still find a way to complete the sentence in a natural way.", "In this work, we investigate latent vacancy as an important problem in unsupervised learning of controllable representations when modelling text with VAEs.", "To mitigate this, we propose to constrain the posterior within a learned probability simplex, achieving the first success towards controlled text generation without supervision." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0.17391303181648254, 0.19512194395065308, 0.14084506034851074, 0.20408162474632263, 0.2666666507720947, 0.09302324801683426, 0.13333332538604736, 0.07999999821186066, 0.07999999821186066, 0.16326530277729034, 0.1269841194152832, 0.18867923319339752, 0.09090908616781235, 0.19230768084526062, 0.2222222238779068, 0.07843136787414551, 0.1666666567325592, 0.27272728085517883, 0.11999999731779099, 0.072727270424366, 0.21739129722118378, 0.1702127605676651, 0.11428570747375488, 0.17777776718139648, 0.31578946113586426, 0.11320754140615463, 0.10526315122842789, 0.1249999925494194, 0.21276594698429108, 0.4583333134651184 ]
Hkex2a4FPr
true
[ "why previous VAEs on text cannot learn controllable latent representation as on images, as well as a fix to enable the first success towards controlled text generation without supervision" ]
[ "In this paper we developed a hierarchical network model, called Hierarchical Prediction Network (HPNet) to understand how spatiotemporal memories might be learned and encoded in a representational hierarchy for predicting future video frames.", "The model is inspired by the feedforward, feedback and lateral recurrent circuits in the mammalian hierarchical visual system.", "It assumes that spatiotemporal memories are encoded in the recurrent connections within each level and between different levels of the hierarchy.", "The model contains a feed-forward path that computes and encodes spatiotemporal features of successive complexity and a feedback path that projects interpretation from a higher level to the level below.", "Within each level, the feed-forward path and the feedback path intersect in a recurrent gated circuit that integrates their signals as well as the circuit's internal memory states to generate a prediction of the incoming signals.", "The network learns by comparing the incoming signals with its prediction, updating its internal model of the world by minimizing the prediction errors at each level of the hierarchy in the style of {\\em predictive self-supervised learning}. The network processes data in blocks of video frames rather than a frame-to-frame basis. ", "This allows it to learn relationships among movement patterns, yielding state-of-the-art performance in long range video sequence predictions in benchmark datasets.", "We observed that hierarchical interaction in the network introduces sensitivity to memories of global movement patterns even in the population representation of the units in the earliest level.", "Finally, we provided neurophysiological evidence, showing that neurons in the early visual cortex of awake monkeys exhibit very similar sensitivity and behaviors.", "These findings suggest that predictive self-supervised learning might be an important principle for representational learning in the visual cortex. ", "While the hippocampus is known to play a critical role in encoding episodic memories, the storage of these memories might ultimately rest in the sensory areas of the neocortex BID27 .", "Indeed, a number of neurophysiological studies suggest that neurons throughout the hierarchical visual cortex, including those in the early visual areas such as V1 and V2, might be encoding memories of object images and of visual sequences in cell assemblies BID54 BID14 BID52 BID2 BID21 .", "As specific priors, these memories, together with the generic statistical priors encoded in receptive fields and connectivity of neurons, serve as internal models of the world for predicting incoming visual experiences.", "In fact, learning to predict incoming visual signals has also been proposed as an objective that drives representation learning in a recurrent neural network in a self-supervised learning paradigm, where the discrepancy between the model's prediction and the incoming signals can be used to train the network using backpropagation, without the need of labeled data BID9 BID46 BID42 BID34 BID21 .In", "computer vision, a number of hierarchical recurrent neural network models, notably PredNet BID24 and PredRNN++ , have been developed for video prediction with state-of-the-art performance. PredNet", ", in particular, was inspired by the neuroscience principle of predictive coding BID31 BID39 BID21 BID7 BID11 . 
It learns", "a LSTM (long short-term memory) model at each level to predict the prediction errors made in an earlier level of the hierarchical visual system. Because the", "error representations are sparse, the computation of PredNet is very efficient. However, the", "model builds a hierarchical representation to model and predict its own errors, rather than learning a hierarchy of features of successive complexities and scales to model the world. The lack of", "a compositional feature hierarchy hampers its ability in long range video predictions.Here, we proposed an alternative hierarchical network architecture. The proposed", "model, HPNet (Hierarchical Prediction Network), contains a fast feedforward path, instantiated currently by a fast deep convolutional neural network (DCNN) that learns a representational hierarchy of features of successive complexity, and a feedback path that brings a higher order interpretation to influence the computation a level below. The two paths", "intersect at each level through a gated recurrent circuit to generate a hypothetical interpretation of the current state of the world and make a prediction to explain the bottom-up input. The gated recurrent", "circuit, currently implemented in the form of LSTM, performs this prediction by integrating top-down, bottom-up, and horizontal information. The discrepancy between", "this prediction and the bottom-up input at each level is called prediction error, which is fed back to influence the interpretation of the gated recurrent circuits at the same level as well as the level above.To facilitate the learning of relationships between movement patterns, HPNet processes data in the unit of a spatiotemporal block that is composed of a sequence of video frames, rather than frame by frame, as in PredNet and PredRNN++. We used a 3D convolutional", "LSTM at each level of the hierarchy to process these spatiotemporal blocks of signals BID1 , which is a key factor underlying HPNet's better performance in long range video prediction.In the paper, we will first demonstrate HPNet's effectiveness in predictive learning and its competency in long range video prediction. Then we will provide neurophysiological", "evidence showing that neurons in the early visual cortex of the primate visual system exhibit the same sensitivity to memories of global movement patterns as units in the lowest modules of HPNet. 
Our results suggest that predictive self-supervised", "learning might indeed be an important strategy for representation learning in the visual cortex, and that HPNet is a viable computational model for understanding the computation in the visual cortical circuits.", "In this paper, we developed a hierarchical prediction network model (HPNet), with a fast DCNN feedforward path, a feedback path and local recurrent LSTM circuits for modeling the counterstream / analysis-by-synthesis architecture of the mammalian hierarchical visual systems.", "HPNet utilizes predictive self-supervised learning as in PredNet and PredRNN++, but integrates additional neural constraints or theoretical neuroscience ideas on spatiotemporal processing, counter-stream architecture, feature hierarchy, prediction evaluation and sparse convolution into a new model that delivers the state-of-the-art performance in long range video prediction.", "Most importantly, we found that the hierarchical interaction in HPNet introduces sensitivity to global movement patterns in the representational units of the earliest module in the network and that real cortical neurons in the early visual cortex of awake monkeys exhibit very similar sensitivity to memories of global movement patterns, despite their very local receptive fields.", "These findings support predictive self-supervised learning as an important principle for representation learning in the visual cortex and suggest that HPNet might be a viable computational model for understanding the cortical circuits in the hierarchical visual system at the functional level.", "Further evaluations are needed to determine definitively whether PredNet or HPNet is a better fit to the biological reality.", "APPENDIX A 3D CONVOLUTIONAL LSTM Because our data are in the unit of spatitemporal block, we have to use a 3D form of the 2D convolutional LSTM.", "3D convolutional LSTM has been used by BID1 in the stereo setting.", "The dimensions of the input video or the various representations (I, E and H) in any module are c×d×h×w, where c is the number of channels, d is the number of adjacent frames, h and w specify the spatial dimensions of the frame.", "The 3D spatiotemporal convolution kernel is m × k × k in size, where m is kernel temporal depth and k is kernel spatial size.", "The spatial stride of the convolution is 1.", "The size of the output with n kernels is n × d × h × w.", "We define the inputs as X 1 , ..., X t , the cell states as C 1 , ..., C t , the outputs as H 1 , ..., H t , and the gates as i t , f t , o t .", "Our 3D convolutional LSTM is specified by the equations below, where the function of 3D convolution is indicated by and the Hadamard product is indicated by •.", "FIG7 of the main text of the paper.", "The figures demonstrate that as more higher order modules are stacked up in the hierarchy, the semantic clustering into the six movement classes become more pronounced even in the early modules, suggesting that the hierarchical interaction has steered the feature representation into semantic clusters even in the early modules.", "Module 4-1 means representation of module 1 in a 4-module network.", "DISPLAYFORM0 We use linear decoding (multi-class SVM) to assess the distinctiveness of the semantiuc clusters in the representation of the different modules in the different networks.", "The decoding results in TAB2 shows that the decoding accuracy based on the reprsentation of module 1 has improved from chance (16%) to 26%, an improvement of 60% between a 1-module HPNet and a 
4-module HPNet, and that the representation of module 4 of a 4-module HPNet can achieve a 63% accuracy in classifying the six movement classes, suggesting that the network only needs to learn to predict unlabelled video sequences, and it automatically learns reasonable semantic representations for recognition.", "For comparison, we also performed decoding on the output representations of each LSTM layer in the PredRNN++ and PredNet to study their representations of the six movement patterns.", "The results shown below indicate that the semantic clustering of the six movements is not very strong in the PredRNN++ hierarchy.", "We realized that this might be because the PredRNN++ behaves essentially like an autoencoder.", "The four-layer network effectively only has two layers of feature abstraction, with layer 2 being the most semantic in the hierarchy and layers 3 and 4 representing the unfolding of the feedback path.", "Decoding results indicate that the hierarchical representation based on the output of the LSTM at every layer in PredNet, which serve to predict errors of prediction errors of the previous layer, does not contain semantic information about the global movement patterns.", "Figure 8 : Results of video sequence learning experiments showing prediction suppression can be observed in E, P , and R units in every module along the hierarchical network.", "The abscissa is time after stimulus onset -where we set each video frame to be 25 ms for comparison with neural data.", "The ordinate is the normalized averaged temporal response of all the units within the center 8×8 hypercolumns, averaged across all neurons and across the 20 movies in the Predicted set (blue) and the Unpredicted set (red) respectively.", "Prediction suppression can be observed in all types of units, though more pronounced in the E and P units." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.22727271914482117, 0.20689654350280762, 0.1249999925494194, 0.1666666567325592, 0.1463414579629898, 0.11538460850715637, 0.0624999962747097, 0.05882352590560913, 0.05882352590560913, 0.06451612710952759, 0.05405404791235924, 0.11999999731779099, 0.09756097197532654, 0.06666666269302368, 0.2702702581882477, 0, 0.17142856121063232, 0, 0.1666666567325592, 0.12121211737394333, 0.03703703358769417, 0.10810810327529907, 0.12121211737394333, 0.11594202369451523, 0.1428571343421936, 0, 0.21621620655059814, 0.21739129722118378, 0.2222222238779068, 0.11320754140615463, 0.21739129722118378, 0, 0.05714285373687744, 0, 0.09302325546741486, 0.13793103396892548, 0, 0, 0.06451612710952759, 0.06451612710952759, 0, 0.043478257954120636, 0, 0, 0.08571428060531616, 0.0555555522441864, 0, 0, 0.05128204822540283, 0.08695651590824127, 0.20000000298023224, 0.11764705181121826, 0.05128204822540283, 0.06666666269302368 ]
BJl_VnR9Km
true
[ "A new hierarchical cortical model for encoding spatiotemporal memory and video prediction" ]
[ "Saliency maps are often used to suggest explanations of the behavior of deep rein- forcement learning (RL) agents.", "However, the explanations derived from saliency maps are often unfalsifiable and can be highly subjective.", "We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and show that explanations suggested by saliency maps are often not supported by experiments.", "Our experiments suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool.", "Saliency map methods are a popular visualization technique that produce heatmap-like output highlighting the importance of different regions of some visual input.", "They are frequently used to explain how deep networks classify images in computer vision applications (Simonyan et al., 2014; Springenberg et al., 2014; Shrikumar et al., 2017; Smilkov et al., 2017; Selvaraju et al., 2017; Zhang et al., 2018; Zeiler & Fergus, 2014; Ribeiro et al., 2016; Dabkowski & Gal, 2017; Fong & Vedaldi, 2017) and to explain how agents choose actions in reinforcement learning (RL) applications (Bogdanovic et al., 2015; Wang et al., 2015; Zahavy et al., 2016; Greydanus et al., 2017; Iyer et al., 2018; Sundar, 2018; Yang et al., 2018; Annasamy & Sycara, 2019) .", "Saliency methods in computer vision and reinforcement learning use similar procedures to generate these maps.", "However, the temporal and interactive nature of RL systems presents a unique set of opportunities and challenges.", "Deep models in reinforcement learning select sequential actions whose effects can interact over long time periods.", "This contrasts strongly with visual classification tasks, in which deep models merely map from images to labels.", "For RL systems, saliency maps are often used to assess an agent's internal representations and behavior over multiple frames in the environment, rather than to assess the importance of specific pixels in classifying images.", "Despite their common use to explain agent behavior, it is unclear whether saliency maps provide useful explanations of the behavior of deep RL agents.", "Some prior work has evaluated the applicability of saliency maps for explaining the behavior of image classifiers (Adebayo et al., 2018; Kindermans et al., 2019; Samek et al., 2016) , but there is not a corresponding literature evaluating the applicability of saliency maps for explaining RL agent behavior.", "In this work, we develop a methodology grounded in counterfactual reasoning to empirically evaluate the explanations generated using saliency maps in deep RL.", "Specifically, we:", "C1 Survey the ways in which saliency maps have been used as evidence in explanations of deep RL agents.", "C2 Describe a new interventional method to evaluate the inferences made from saliency maps.", "C3 Experimentally evaluate how well the pixel-level inferences of saliency maps correspond to the semantic-level inferences of humans.", "(a)", "(b)", "(c) Figure 1 :", "(a) A perturbation saliency map from a frame in Breakout,", "(b) a saliency map from the same model and frame with the brick pattern reflected across the vertical axis, and", "(c) a saliency map from the same model and frame with the ball, paddle and brick pattern reflected across the vertical axis.", "The blue and red regions represent their importance in action selection and reward estimation from the current state, respectively.", "The pattern and intensity of saliency around the channel is not symmetric in 
either reflection intervention.", "Temporal association (e.g. formation of a tunnel followed by higher saliency) does not generally imply causal dependence.", "In this case at least, tunnel formation and salience appear to be confounded by location or, at least, the dependence of these phenomena are highly dependent on location.", "Case Study 2: Amidar Score.", "Amidar is a Pac-Man-like game in which an agent attempts to completely traverse a series of passages while avoiding enemies.", "The yellow sprite that indicates the location of the agent is almost always salient in Amidar.", "Surprisingly, the displayed score is salient as often as the yellow sprite throughout the episode with varying levels of intensity.", "This can lead to multiple hypotheses about the agent's learned representation: (1) the agent has learned to associate increasing score with higher reward; (2) due to the deterministic nature of Amidar, the agent has created a lookup table that associates its score and its actions.", "We can summarize these hypotheses as follows:", "Hypothesis 2: score is salient =⇒ agent has learned to {use score as a guide to traverse the board} resulting in {successfully following similar paths in games}.", "To evaluate hypothesis 2, we designed four interventions on score:", "• intermittent reset: modify the score to 0 every x ∈ [5, 20] timesteps.", "• random varying: modify the score to a random number between [1, 200] [5, 20] timesteps.", "• fixed: select a score from [0, 200] and fix it for the whole game.", "• decremented: modify score to be 3000 initially and decrement score by d ∈ [1, 20] at every timestep.", "Figures 4a and 4b show the result of intervening on displayed score on reward and saliency intensity, measured as the average saliency over a 25x15 bounding box, respectively for the first 1000 timesteps of an episode.", "The mean is calculated over 50 samples.", "If an agent died before 1000 timesteps, the last reward was extended for the remainder of the timesteps and saliency was set to zero.", "Using reward as a summary of agent behavior, different interventions on score produce different agent behavior.", "Total accumulated reward differs over time for all interventions, typically due to early agent death.", "However, salience intensity patterns of all interventions follow the original trajectory very closely.", "Different interventions on displayed score cause differing degrees of degraded performance ( Figure 4a ) despite producing similar saliency maps (Figure 4b ), indicating that agent behavior is underdetermined by salience.", "Specifically, the salience intensity patterns are similar for the Interventions on displayed score result in differing levels of degraded performance but produce similar saliency maps, suggesting that agent behavior as measured by rewards is underdetermined by salience.", "control, fixed, and decremented scores, while the non-ordered score interventions result in degraded performance.", "Figure 4c indicates only very weak correlations between the difference-in-reward and difference-in-saliency-under-intervention as compared to the original trajectory.", "Correlation coefficients range from 0.041 to 0.274, yielding insignificant p-values for all but one intervention.", "See full results in Appendix E.1, Table 6 .", "Similar trends are noted for Jacobian and perturbation saliency methods in Appendix E.1.", "The existence of a high correlation between two processes (e.g., incrementing score and persistence of saliency) does not imply causation.", 
"Interventions can be useful in identifying the common cause leading to the high correlation.", "Case Study 3: Amidar Enemy Distance.", "Enemies are salient in Amidar at varying times.", "From visual inspection, we observe that enemies close to the player tend to have higher saliency.", "Accordingly, we generate the following hypothesis:", "Hypothesis 3: enemy is salient =⇒ agent has learned to {look for enemies close to it} resulting in {successful avoidance of enemy collision}.", "Without directly intervening on the game state, we can first identify whether the player-enemy distance and enemy saliency is correlated using observational data.", "We collect 1000 frames of an episode of Amidar and record the Manhattan distance between the midpoints of the player and enemies, represented by 7x7 bounding boxes, along with the object salience of each enemy.", "Figure 5a shows the distance of each enemy to the player over time with saliency intensity represented by the shaded region.", "Figure 5b shows the correlation between the distance to each enemy and the corresponding saliency.", "Correlation coefficients and significance values are reported in Table 3 .", "It is clear that there is no correlation between saliency and distance of each enemy to the player.", "Given that statistical dependence is almost always a necessary pre-condition for causation, we expect that there will not be any causal dependence.", "To further examine this, we intervene on enemy positions of salient enemies at each timestep by moving the enemy closer and farther away from the player.", "Figure 5c contains these results.", "Given Hypothesis 3, we would expect to see an increasing trend in saliency for enemies closer to the player.", "However, the size of the effect is close to 0 (see Table 3 ).", "In addition, we find no correlation in the enemy distance experiments for the Jacobian or perturbation saliency methods (included in Appendix E.2).", "Conclusion.", "Spurious correlations, or misinterpretations of existing correlation, can occur between two processes (e.g. 
correlation between player-enemy distance and saliency), and human observers are susceptible to identifying spurious correlations (Simon, 1954) .", "Spurious correlations can sometimes be identified from observational analysis without requiring interventional analysis.", "Thinking counterfactually about the explanations generated from saliency maps facilitates empirical evaluation of those explanations.", "The experiments above show some of the difficulties in drawing conclusions from saliency maps.", "These include the tendency of human observers to incorrectly infer association between observed processes, the potential for experimental evidence to contradict seemingly obvious observational conclusions, and the challenges of potential confounding in temporal processes.", "One of the main conclusions from this evaluation is that saliency maps are an exploratory tool rather than an explanatory tool.", "Saliency maps alone cannot be reliably used to infer explanations and instead require other supporting tools.", "This can include combining evidence from saliency maps with other explanation methods or employing a more experimental approach to evaluation of saliency maps such as the approach demonstrated in the case studies above.", "The framework for generating falsifiable hypotheses suggested in Section 4 can assist with designing more specific and falsifiable explanations.", "The distinction between the components of an explanation, particularly the semantic concept set X, learned representation R and observed behavior B, can further assist in experimental evaluation.", "Generalization of Proposed Methodology.", "The methodology presented in this work can be easily extended to other vision-based domains in deep RL.", "Particularly, the framework of the graphical model introduced in Figure 2a applies to all domains where the input to the network is image data.", "An extended version of the model for Breakout can be found in Appendix 7.", "We propose intervention-based experimentation as a primary tool to evaluate the hypotheses generated from saliency maps.", "Yet, alternative methods can identify a false hypothesis even earlier.", "For instance, evaluating statistical dependence alone can help identify some situations in which causation is absent (e.g., Case Study 3).", "We also employ TOYBOX in this work.", "However, limited forms of evaluation may be possible in non-intervenable environments, though they may be more tedious to implement.", "For instance, each of the interventions conducted in Case Study 1 can be produced in an observation-only environment by manipulating the pixel input (Chalupka et al., 2015; Brunelli, 2009 ).", "Developing more experimental systems for evaluating explanations is an open area of research.", "This work analyzes explanations generated from feed-forward deep RL agents.", "Yet, given that the proposed methodology is not model dependent, aspects of the approach will carry over to recurrent deep RL agents.", "The proposed methodology would not work for repeated interventions on recurrent deep RL agents due to their capacity for memorization.", "We conduct a survey of uses of saliency maps, propose a methodology to evaluate saliency maps, and examine the extent to which the agent's learned representations can be inferred from saliency maps.", "We investigate how well the pixel-level inferences of saliency maps correspond to the semantic concept-level inferences of human-level interventions.", "Our results show saliency maps cannot be trusted to reflect causal 
relationships between semantic concepts and agent behavior.", "We recommend saliency maps to be used as an exploratory tool, not explanatory tool." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.277777761220932, 0.23529411852359772, 0.2978723347187042, 0.11428570747375488, 0.09999999403953552, 0.054054051637649536, 0.11764705181121826, 0.1764705777168274, 0, 0.1666666567325592, 0.2448979616165161, 0.380952388048172, 0.2641509473323822, 0.4878048598766327, 0.2702702581882477, 0.4848484694957733, 0.29411762952804565, 0, 0.20689654350280762, 0.2222222238779068, 0.21052631735801697, 0.10810810327529907, 0.11428570747375488, 0.05405404791235924, 0.09090908616781235, 0, 0.15789473056793213, 0.11764705181121826, 0.0555555522441864, 0.2222222238779068, 0.07692307233810425, 0.1860465109348297, 0.06896550953388214, 0.12121211737394333, 0.1764705777168274, 0.1764705777168274, 0.05405404791235924, 0.12244897335767746, 0, 0.19999998807907104, 0.1818181723356247, 0.11764705181121826, 0.0624999962747097, 0.1599999964237213, 0.1538461446762085, 0.060606054961681366, 0.1111111044883728, 0.11428570747375488, 0, 0.060606054961681366, 0.04999999329447746, 0.1249999925494194, 0, 0, 0.1764705777168274, 0.07999999821186066, 0.09999999403953552, 0.09756097197532654, 0.04255318641662598, 0.15789473056793213, 0.1875, 0, 0.1666666567325592, 0.051282044500112534, 0.09302324801683426, 0, 0.1621621549129486, 0.1249999925494194, 0.09999999403953552, 0.0416666604578495, 0.06451612710952759, 0.3636363446712494, 0.24242423474788666, 0.0833333283662796, 0.21052631735801697, 0.11428570747375488, 0.25, 0.05405404791235924, 0.08888888359069824, 0, 0.22857142984867096, 0.10256409645080566, 0.060606054961681366, 0.514285683631897, 0.06896550953388214, 0, 0, 0.0555555522441864, 0.0416666604578495, 0, 0.27586206793785095, 0.25, 0.21052631735801697, 0.3636363446712494, 0.22857142984867096, 0.2702702581882477, 0.1818181723356247 ]
rkl3m1BFDB
true
[ "Proposing a new counterfactual-based methodology to evaluate the hypotheses generated from saliency maps about deep RL agent behavior. " ]
[ "One of the unresolved questions in deep learning is the nature of the solutions that are being discovered.", "We investigate the collection of solutions reached by the same network architecture, with different random initialization of weights and random mini-batches.", "These solutions are shown to be rather similar - more often than not, each train and test example is either classified correctly by all the networks, or by none at all. ", "Surprisingly, all the network instances seem to share the same learning dynamics, whereby initially the same train and test examples are correctly recognized by the learned model, followed by other examples which are learned in roughly the same order.", "When extending the investigation to heterogeneous collections of neural network architectures, once again examples are seen to be learned in the same order irrespective of architecture, although the more powerful architecture may continue to learn and thus achieve higher accuracy.", "This pattern of results remains true even when the composition of classes in the test set is unrelated to the train set, for example, when using out of sample natural images or even artificial images.", "To show the robustness of these phenomena we provide an extensive summary of our empirical study, which includes hundreds of graphs describing tens of thousands of networks with varying NN architectures, hyper-parameters and domains.", "We also discuss cases where this pattern of similarity breaks down, which show that the reported similarity is not an artifact of optimization by gradient descent.", "Rather, the observed pattern of similarity is characteristic of learning complex problems with big networks.", "Finally, we show that this pattern of similarity seems to be strongly correlated with effective generalization.", "The recent success of deep networks in solving a variety of classification problems effectively, in some cases reaching human-level precision, is not well understood.", "One baffling result is the incredible robustness of the learned models: using variants of Stochastic Gradient Descent (SGD), with random weight initialization and random sampling of mini-batches, different solutions are obtained.", "While these solutions typically correspond to different parameter values and possibly different local minima of the loss function, nevertheless they demonstrate similar performance reliably.", "To advance our understating of this issue, we are required to compare different network instances.", "Most comparison approaches (briefly reviewed in Appendix A) are based on deciphering the internal representations of the learned models (see Lenc & Vedaldi, 2015; Alain & Bengio, 2016; Li et al., 2016; Raghu et al., 2017; Wang et al., 2018) .", "We propose a simpler and more direct approachcomparing networks by their classifications of the data.", "To this end, we represent each network instance by 2 binary vectors which capture the train and test classification accuracy.", "Each vector's dimension corresponds to the size of the train/test dataset; each element is assigned 1 if the network classifies the corresponding data point correctly, and 0 otherwise.", "Recall the aforementioned empirical observation -different neural network instances, obtained by repeatedly training the same architecture with SGD while randomly sampling its initial weights, achieve similar accuracy.", "At the very least, this observation predicts that the test-based vector representation of different networks should have similar L 1 /L 2 
norms.", "But there is more: it has been recently shown that features of deep networks capture perceptual similarity reliably and consistently, similarly across different instances and different architectures (Zhang et al., 2018) .", "These results seem to suggest that our proposed representation vectors may not only have a similar norm, but should also be quite similar as individual vectors.", "But similar in what way?", "In this paper, we analyze collections of deep neural networks classifiers, where the only constraint is that the instances are trained on the same classification problem, and investigate the similarity between them.", "Using the representation discussed above, we measure this similarity by two scores, consistency score and consensus score, as defined in §2.", "Like other comparison approaches (see Appendix A), our analysis reveals a high level of similarity between trained networks.", "Interestingly, it reveals a stronger sense of similarity than previously appreciated: not only is the accuracy of all the networks in the collection similar, but so is the pattern of classification.", "Specifically, at each time point during the learning process (or in each epoch), most of the data points in both the train and test sets are either classified correctly by all the networks, or by none at all.", "As shown in §3, these results are independent of choices such as optimization method, hyperparameter values, the detailed architecture, or the particular dataset.", "They can be replicated for a fixed test set even when each instance in the collection sees a different train set, as long as the training data is sampled from the same distribution.", "Moreover, the same pattern of similarity is observed for a wide range of test data, including out-of-sample images of new classes, randomly generated images, or even artificial images generated by StyleGAN (Karras et al., 2019) .", "These results are also reproduce-able across domains, and were reproduced using BiLSTM (Hochreiter & Schmidhuber, 1997) with attention (Bahdanau et al., 2014) for text classification.", "We may therefore conclude that different network instances compute similar classification functions, even when being trained with different training samples.", "It is in the dynamic of learning, where the results of our analysis seem to go significantly beyond what has been shown before, revealing an even more intriguing pattern of similarity between trained NN instances.", "Since deep NNs are almost always trained using gradient descent, each network can be represented by a time series of train-based and test-based representation vectors, one per epoch.", "We find that network instances in the collection do not only show the same pattern of classification at the end of the training, but they also evolve in the same way across time and epochs, gradually learning to correctly or incorrectly classify the same examples in the same order.", "When considering bigger classification problems such as the classification of ImageNet with big modern CNN architectures, a more intricate pattern of dynamics is evident: to begin with, all networks wrongly classify most of the examples, and correctly classify a minority of the examples.", "The learning process is revealed by examples moving from one end (100% false classification) to the other end (100% correct classification), which implies two things:", "(i) the networks learn to correctly classify examples in the same order;", "(ii) the networks agree on the examples they misclassify 
throughout.", "As shown in §4, these results hold regardless of the network' architecture.", "To drive this point home we compare a variety of public domain architectures such as VGG19 (Simonyan & Zisserman, 2014) , AlexNet (Krizhevsky et al., 2012) , DenseNet (Huang et al., 2017) and ResNet-50 (He et al., 2016) .", "In all cases, different architectures may learn at a different pace and achieve different generalization accuracy, but they still learn in the same order.", "Thus all networks start by learning roughly the same examples, but the more powerful networks may continue to learn additional examples as learning proceeds.", "A related phenomenon is observed when extending the analysis to simpler learning paradigms, such as deep linear networks, SVM, and KNN classifiers.", "Our empirical study extends to cases where these robust patterns of similarity break down, see §5.", "For example, when randomly shuffling the labels in a known benchmark (Zhang et al., 2016) , the agreement between different classifiers disappear.", "This stands in agreement with (Morcos et al., 2018) , where it is shown that networks that generalize are more similar than those that memorize.", "Nevertheless, the similarity in learning dynamic is not an artifact of learnability, or the fact that the networks have converged to solutions with similar accuracy.", "To see this we constructed a test case where shallow CNNs are trained to discriminate an artificial dataset of images of Gabor patches (see Appendix C).", "Here it is no longer true that different network instances learn in the same order; rather, each network instance follows its own path while converging to the final model.", "The similarity in learning dynamic is likewise not an artifact of using gradient descent.", "To see this we use SGD to train linear classifiers to discriminate vectors sampled from two largely overlapping Gaussian distributions.", "Once again, each classifier follows its own path while converging to the same optimal solution.", "We empirically show that neural networks learn similar classification functions.", "More surprisingly with respect to earlier work, the learning dynamics is also similar, as they seem to learn similar functions also in intermediate stages of learning, before convergence.", "This is true for a variety of architectures, including different CNN architectures and LSTMs, irrespective of size and other hyper-parameters of the learning algorithms.", "We have verified this pattern of results using many different CNN architectures, including most of those readily available in the public domain, and many of the datasets of natural images which are in common use when evaluating deep learning.", "The similarity of network instances is measured in the way they classify examples, including known (train) and new examples.", "Typically, the similarity over test data is as pronounced as it is over train data, as long as the train and test examples are sampled from the same distribution.", "We show that this similarity extends also to out of sample test data, but it seems to decrease as the gap between the distribution of the train data and the test data is increased.", "This pattern of similarity crosses architectural borders: while different architectures may learn at a different speed, the data is learned in the same order.", "Thus all architectures which reach a certain error rate seem to classify, for the most part, the same examples in the same manner.", "We also see that stronger architectures, which reach a lower 
generalization error, seem to start by first learning the examples that weaker architectures classify correctly, followed by the learning of some more difficult examples.", "This may suggest that the order in which data is learned is an internal property of the data.", "We also discuss cases where this similarity breaks down, indicating that the observed similarity is not an artifact of using stochastic gradient descent.", "Rather, the observed pattern of similarity seems to characterize the learning of complex problems with big networks.", "Curiously, the deeper the network is and the more non-linearities it has, and even though the model has more learning parameters, the progress of learning in different network instances becomes more similar to each other.", "Un-intuitively, this suggests that in a sense the number of degrees of freedom in the learning process is reduced, and that there are fewer ways to learn the data.", "This effect seems to force different networks, as long they are deep enough, to learn the dataset in the same way.", "This counter-intuitive result joins other non-intuitive results, like the theoretical result that a deeper linear neural network converges faster to the global optimum than a shallow network (Arora et al., 2018) .", "We also show that the observed pattern of similarity is strongly correlated with effective generalization.", "What does it tell us about the generalization of neural networks, a question which is considered by many to be poorly understood?", "Neural networks can memorize an almost limitless number of examples, it would seem.", "To achieve generalization, most training protocols employ some regularization mechanism which does not allow for unlimited data memorization.", "As a result, the network fits only the train and test examples it would normally learn first, which are, based on our analysis, also the \"easier\" (or more typical) examples.", "We hypothesize that this may explain why a regularized network discovers robust solutions, with little variability among its likely instances.", "(Cybenko, 1989) , and they can learn any arbitrary complex function (Hornik et al., 1989) .", "This extended capacity can indeed be reached, and neural networks can memorize datasets with randomly assigned labels (Zhang et al., 2016) .", "Nevertheless, the dominant hypothesis today is that in natural datasets they \"prefer\" to learn an easier hypothesis that fits the data rather than memorize it all (Zhang et al., 2016; Arpit et al., 2017) .", "Our work is consistent with a hypothesis which requires fewer assumptions, see Section 6.", "The direct comparison of neural representations is regarded to be a hard problem, due to a large number of parameters and the many underlying symmetries.", "Many non-direct approaches are available in the literature: (Li et al., 2016; Wang et al., 2018) compare subsets of similar features across multiple networks, which span similar low dimensional spaces, and show that while single neurons can vary drastically, some features are reliably learned across networks.", "(Raghu et al., 2017) proposed the SVCCA method, which can compare layers and networks efficiently, with an amalgamation of SVD and CCA.", "They showed that multiple instances of the same converged network are similar to each other and that networks converge in a bottom-up way, from earlier layers to deeper ones.", "Morcos et al. 
(2018) builds on the results of Raghu et al. (2017), further showing that networks which generalize are more similar than ones which memorize, and that similarity grows with the width of the network.", "In various machine learning methods such as curriculum learning (Bengio et al., 2009), self-paced learning (Kumar et al., 2010) and active learning (Schein & Ungar, 2007), examples are presented to the learner in a specific order (Hacohen & Weinshall, 2019; Jiang et al., 2017).", "Although conceptually similar, here we analyze the order in which examples are learned, while the aforementioned methods seek ways to alter it.", "Likewise, the design of effective initialization methods is a thriving research area (Erhan et al., 2010; Glorot & Bengio, 2010; Rumelhart et al., 1988).", "Here we do not seek to improve these methods, but rather analyze the properties of a collection of network instances generated by the same initialization methodology." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.19354838132858276, 0.1764705777168274, 0.08695651590824127, 0.17777776718139648, 0.19607841968536377, 0.13636362552642822, 0.17391303181648254, 0.09999999403953552, 0.2666666507720947, 0.0624999962747097, 0.15789473056793213, 0.09302324801683426, 0.1538461446762085, 0.06451612710952759, 0.11999999731779099, 0.19354838132858276, 0.1111111044883728, 0.09756097197532654, 0.1428571343421936, 0.15789473056793213, 0.1304347813129425, 0, 0, 0.2666666507720947, 0.05405404791235924, 0.11764705181121826, 0.24390242993831635, 0.17391303181648254, 0.10526315122842789, 0.13333332538604736, 0.1666666567325592, 0.0952380895614624, 0.11428570747375488, 0.1249999925494194, 0.045454539358615875, 0.22641508281230927, 0.23529411852359772, 0.10256409645080566, 0.2222222238779068, 0.1599999964237213, 0.1428571343421936, 0.04081632196903229, 0.1621621549129486, 0.2702702581882477, 0.10526315122842789, 0.0624999962747097, 0.05405404791235924, 0.05128204822540283, 0.20512819290161133, 0.04878048226237297, 0.09302324801683426, 0.13333332538604736, 0, 0.12903225421905518, 0.23076923191547394, 0.1904761791229248, 0.21621620655059814, 0.16326530277729034, 0.11428570747375488, 0.1111111044883728, 0.09302324801683426, 0.15789473056793213, 0.1666666567325592, 0.17777776718139648, 0.12903225421905518, 0.10526315122842789, 0.25806450843811035, 0.1904761791229248, 0.14999999105930328, 0.11428570747375488, 0.09090908616781235, 0.12903225421905518, 0.15789473056793213, 0.13793103396892548, 0, 0.04651162400841713, 0, 0, 0.10810810327529907, 0.08695651590824127, 0, 0.15789473056793213, 0.14035087823867798, 0.15789473056793213, 0.1860465109348297, 0.1304347813129425, 0.07407406717538834, 0.05405404791235924, 0.10256409645080566, 0.14999999105930328 ]
HJgub1SKDH
true
[ "Most neural networks approximate the same classification function, even across architectures, through all stages of learning." ]
[ "We propose a unified framework for building unsupervised representations of individual objects or entities (and their compositions), by associating with each object both a distributional as well as a point estimate (vector embedding).", "This is made possible by the use of optimal transport, which allows us to build these associated estimates while harnessing the underlying geometry of the ground space.", "Our method gives a novel perspective for building rich and powerful feature representations that simultaneously capture uncertainty (via a distributional estimate) and interpretability (with the optimal transport map).", "As a guiding example, we formulate unsupervised representations for text, in particular for sentence representation and entailment detection.", "Empirical results show strong advantages gained through the proposed framework.", "This approach can be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data.", "The key tools underlying the framework are Wasserstein distances and Wasserstein barycenters (and, hence the title!).", "One of the main driving factors behind the recent surge of interest and successes in natural language processing and machine learning has been the development of better representation methods for data modalities.", "Examples include continuous vector representations for language (Mikolov et al., 2013; Pennington et al., 2014) , convolutional neural network (CNN) based text representations (Kim, 2014; Kalchbrenner et al., 2014; Severyn and Moschitti, 2015; BID4 , or via other neural architectures such as RNNs, LSTMs BID14 Collobert and Weston, 1 And, hence the title! 2008), all sharing one core idea -to map input entities to dense vector embeddings lying in a lowdimensional latent space where the semantics of the inputs are preserved.While existing methods represent each entity of interest (e.g., a word) as a single point in space (e.g., its embedding vector), we here propose a fundamentally different approach.", "We represent each entity based on the histogram of contexts (cooccurring with it), with the contexts themselves being points in a suitable metric space.", "This allows us to cast the distance between histograms associated with the entities as an instance of the optimal transport problem (Monge, 1781; Kantorovich, 1942; Villani, 2008) .", "For example, in the case of words as entities, the resulting framework then intuitively seeks to minimize the cost of moving the set of contexts of a given word to the contexts of another.", "Note that the contexts here can be words, phrases, sentences, or general entities cooccurring with our objects to be represented, and these objects further could be any type of events extracted from sequence data, including e.g., products such as movies or web-advertisements BID8 , nodes in a graph BID9 , or other entities (Wu et al., 2017) .", "Any co-occurrence structure will allow the construction of the histogram information, which is the crucial building block for our approach.A strong motivation for our proposed approach here comes from the domain of natural language, where the entities (words, phrases or sentences) generally have multiple semantics under which they are present.", "Hence, it is important that we consider representations that are able to effectively capture such inherent uncertainty and polysemy, and we will argue that histograms (or probability distributions) over embeddings allows to capture more of this 
information compared to point-wise embeddings alone.", "We will call the histogram as the distributional estimate of our object of interest, while we refer to the individual embeddings of single contexts as point estimates.Next, for the sake of clarity, we discuss the framework in the concrete use-case of text representations, when the contexts are just words, by employing the well-known Positive Pointwise Mutual Information (PPMI) matrix to compute the histogram information for each word.With the power of optimal transport, we show how this framework can be of significant use for a wide variety of important tasks in NLP, including word and sentence representations as well as hypernymy (entailment) detection, and can be readily employed on top of existing pre-trained embeddings for the contexts.", "The connection to optimal transport at the level of words and contexts paves the way to make better use of its vast toolkit (like Wasserstein distances, barycenters, etc.) for applications in NLP, which in the past has primarily been restricted to document distances (Kusner et al., 2015; BID16 .We", "demonstrate that building the required histograms comes at almost no additional cost, as the co-occurrence counts are obtained in a single pass over the corpus. Thanks", "to the entropic regularization introduced by Cuturi (2013), Optimal Transport distances can be computed efficiently in a parallel and batched manner on GPUs. Lastly", ", the obtained transport map FIG0 ) also provides for interpretability of the suggested framework.", "To sum up, we advocate for associating both a distributional and point estimate as a representation for each entity.", "We show how this allows us to use optimal transport over the set of contexts associated with these entities, in problems with a co-occurrence structure.", "Further, the framework Aitor Gonzalez-Agirre.", "2012.", "Semeval-2012 In particular, when β = 1, we recover the equation for histograms as in Section 5, and β = 0 would imply normalization with respect to cluster sizes." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.08695651590824127, 0.09999999403953552, 0.0476190410554409, 0.060606054961681366, 0, 0, 0.13333332538604736, 0.09302324801683426, 0.1320754736661911, 0.37837836146354675, 0.04878048226237297, 0.14999999105930328, 0.08955223858356476, 0.10344827175140381, 0.11999999731779099, 0.12371134012937546, 0.16393442451953888, 0, 0.09999999403953552, 0.06666666269302368, 0.1818181723356247, 0.09999999403953552, 0, 0.04651162400841713 ]
Bkx2jd4Nx7
true
[ "Represent each entity based on its histogram of contexts and then Wasserstein is all you need!" ]
[ "In this paper, we propose two methods, namely Trace-norm regression (TNR) and Stable Trace-norm Analysis (StaTNA), to improve performances of recommender systems with side information.", "Our trace-norm regression approach extracts low-rank latent factors underlying the side information that drives user preference under different context.", "Furthermore, our novel recommender framework StaTNA not only captures latent low-rank common drivers for user preferences, but also considers idiosyncratic taste for individual users.", "We compare performances of TNR and StaTNA on the MovieLens datasets against state-of-the-art models, and demonstrate that StaTNA and TNR in general outperforms these methods.", "The boom of user activity on e-commerce and social networks has continuously fueled the development of recommender systems to most effectively provide suggestions for items that may potentially match user interest.", "In highlyrated Internet sites such as Amazon.com, YouTube, Netflix, Spotify, LinkedIn, Facebook, Tripadvisor, Last.fm, and IMDb, developing and deploying personalized recommender systems lie at the crux of the services they provide to users and subscribers (Ricci et al., 2015) .", "For example, Youtube, one of the worlds most popular video sites, has deployed a recommender system that updates regularly to deliver personalized sets of videos to users based on their previous or recent activity on site to help users find videos relevant to their interests, potentially keeping users entertained and engaged BID5 .Among", "the vast advancements in deep learning and matrix completion techniques to build recommender systems (Ricci BID21 , one of the most imperative aspect of research in such area is to identify latent (possibly low-rank) commonalities that drive specific types of user behaviour. For example", ", BID6 proposes a deep neural network based matrix factorization approach that uses explicit rating as well as implicit ratings to map user and items into common low-dimensional space. Yet, such", "variety of low-rank methodologies do not address the impact of idiosyncratic behaviour among buyers, which may potentially skew the overall learned commonalities across user groups.In this work, we propose two multi-task learning methods to improve performances of recommender systems using contextual side information. We first", "introduce an approach based on trace-norm regression (TNR) that enables us to extract low-rank latent dimensions underlying the side information that drive user preference according to variations in context, such as item features, user characteristics, time, season, location, etc. This is", "achieved by introducing a nuclear-norm regularization penalty term in the multi-task regression model, and we highlight that such latent dimensions can be thought of as homogeneous behaviour among particular types of user groups. Furthermore", ", we propose a novel recommender framework called Stable Trace-norm Analysis (StaTNA) that not only captures latent low-rank common drivers for user preference, but also considers idiosyncratic taste for individual users. This is achieved", "by, in addition to the low-rank penalty, adding a sparsity regularization term to exploit the sparse nature of heterogeneous behaviour. 
Finally, we test", "the performance of StaTNA on the MovieLens datasets against state-of-the-art models, and demonstrate that StaTNA and TNR in general outperforms these methods.", "As mentioned in earlier sections, we are interested in analyzing particular underlying commonalities in user preferences.", "We achieve this by investigating the principal components of our estimate of the low-rank matrix L, each of which we consider as a common type of user preference.", "Since our estimated L is of rank 6, we conclude that there are 6 major common types of user preferences, whose component scores (i.e. explained variance percentages) are listed in Table 4 , where we observe that the first principal component explains 88.94% of the variability in user ratings.", "Table 5 .", "Top 12 features of highest absolute weights within the first two principal components (PC1 and PC2).", "Details of other principle components are shown in TAB9 in Appendinx C.2.", "Our methodology to solve TNR and StaTNA (i.e. Algorithm 1 in Appendix A.1) may be computationally expensive when the matrix is large since it requires calling a Singular Value Decomposition (SVD) oracle in each iteration of the algorithm.", "Hence we propose two alternative methods, a FW-T algorithm and a nonconvex reformulation of the problem, to avoid using an SVD oracle.", "These are detailed in Appendix A.2.", "Furthermore, our current studies use side information from only one side, namely movie information.", "Our StaTNA framework can be extended to incorporate side information for both movies and users: DISPLAYFORM0 where U and M denotes users and movies respectively.", "Moreover, our StaTNA framework is also compatible with neural networks by including nuclear norm and sparse penalties to the objective.", "We believe that similar formulations will provide us with better performance guarantees, but at the cost of model interpretability.", "In this section, we discuss the methodologies we use to solve TNR and StaTNA.", "As mentioned earlier, we use (Fast) Iterative Shrinkage-Thresholding Algorithm (FISTA, BID2 ) to solve these problems.", "Before we address the detailed applications of these algorithms in our context to solve TNR and StaTNA, we introduce the following optimization oracles.", "We define the proximal mapping of the 1 norm as DISPLAYFORM1 , whose extension to matrices is obtained by applying the scalar operator to each element.", "Moreover, we define the proximal mapping of the nuclear norm BID4 BID13 DISPLAYFORM2 V , and Y = U DV is the SVD of matrix Y .", "Now, using these definitions, we detail the algorithm to solve StaTNA in Algorithm 1.", "Note that one can also initialize L 0 in both Algorithm 1 as DISPLAYFORM3 , where † denotes the pseudo-inverse of a matrix.For StaTNA, we directly apply FISTA to estimate L and S, and the procedures are detailed in Algorithm 1.", "As aforementioned, TNR is a special case for StaTNA, so to solve TNR, we simply take λ S = ∞ in Algorithm 1, which forces all S k andŜ k to 0.", "DISPLAYFORM4" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.19999998807907104, 0.11764705181121826, 0.0624999962747097, 0.20000000298023224, 0.07999999821186066, 0.1090909093618393, 0.07999999821186066, 0.04878048226237297, 0.14814814925193787, 0.20408162474632263, 0.045454543083906174, 0.09090908616781235, 0.0624999962747097, 0.06666666269302368, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.1666666567325592, 0.1818181723356247, 0.06451612710952759, 0.06666666269302368, 0, 0, 0, 0, 0, 0, 0, 0.04999999701976776 ]
HJlNE5rinE
true
[ "Methodologies for recommender systems with side information based on trace-norm regularization" ]
[ "This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.", "Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.", "These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.", "Moreover, they provide a learning environment in which these skills can be acquired through interactions with an environment rather than using fixed corpora. \n", "One aspect that makes these games particularly challenging for learning agents is the combinatorially large action space.\n", "Existing methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions.", "In this work, we propose to use the exploration approach of Go-Explore (Ecoffet et al., 2019) for solving text-based games.", "More specifically, in an initial exploration phase, we first extract trajectories with high rewards, after which we train a policy to solve the game by imitating these trajectories.\n", "Our experiments show that this approach outperforms existing solutions in solving text-based games, and it is more sample efficient in terms of the number of interactions with the environment.", "Moreover, we show that the learned policy can generalize better than existing solutions to unseen games without using any restriction on the action space.", "Text-based games became popular in the mid 80s with the game series Zork (Anderson & Galley, 1985) resulting in many different text-based games being produced and published (Spaceman, 2019) .", "These games use a plain text description of the environment and the player has to interact with them by writing natural-language commands.", "Recently, there has been a growing interest in developing agents that can automatically solve text-based games by interacting with them.", "These settings challenge the ability of an artificial agent to understand natural language, common sense knowledge, and to develop the ability to interact with environments using language (Luketina et al., 2019; Branavan et al., 2012) .", "Since the actions in these games are commands that are in natural language form, the major obstacle is the extremely large action space of the agent, which leads to a combinatorially large exploration problem.", "In fact, with a vocabulary of N words (e.g. 20K) and the possibility of producing sentences with at most m words (e.g. 7 words), the total number of actions is O(N m ) (e.g. 
20K 7 ≈ 1.28e 30 ).", "To avoid this large action space, several existing solutions focus on simpler text-based games with very small vocabularies where the action space is constrained to verb-object pairs (DePristo & Zubek, 2001; Narasimhan et al., 2015; Zelinka, 2018) .", "Moreover, many existing works rely on using predetermined sets of admissible actions (He et al., 2015; Tessler et al., 2019; Zahavy et al., 2018) .", "However, a more ideal, and still under explored, alternative would be an agent that can operate in the full, unconstrained action space of natural language that can systematically generalize to new text-based games with no or few interactions with the environment.", "To address this challenge, we propose to use the idea behind the recently proposed GoExplore (Ecoffet et al., 2019) algorithm.", "Specifically, we propose to first extract high reward trajectories of states and actions in the game using the exploration methodology proposed in Go-Explore and then train a policy using a Seq2Seq (Sutskever et al., 2014) model that maps observations to actions, in an imitation learning fashion.", "To show the effectiveness of our proposed methodology, we first benchmark the exploration ability of Go-Explore on the family of text-based games called CoinCollector .", "Then we use the 4,440 games of \"First TextWorld Problems\" (Côté, 2018) , which are generated using the machinery introduced by , to show the generalization ability of our proposed methodology.", "In the former experiment we show that Go-Explore finds winning trajectories faster than existing solutions, and in the latter, we show that training a Seq2Seq model on the trajectories found by Go-Explore results in stronger generalization, as suggested by the stronger performance on unseen games, compared to existing competitive baselines (He et al., 2015; Narasimhan et al., 2015) .", "Reinforcement Learning Based Approaches for Text-Based Games Among reinforcement learning based efforts to solve text-based games two approaches are prominent.", "The first approach assumes an action as a sentence of a fixed number of words, and associates a separate Qfunction (Watkins, 1989; Mnih et al., 2015) with each word position in this sentence.", "This method was demonstrated with two-word sentences consisting of a verb-object pair (e.g. take apple) (DePristo & Zubek, 2001; Narasimhan et al., 2015; Zelinka, 2018; Fulda et al., 2017) .", "In the second approach, one Q-function that scores all possible actions (i.e. sentences) is learned and used to play the game (He et al., 2015; Tessler et al., 2019; Zahavy et al., 2018) .", "The first approach is quite limiting since a fixed number of words must be selected in advance and no temporal dependency is enforced between words (e.g. lack of language modelling).", "In the second approach, on the other hand, the number of possible actions can become exponentially large if the admissible actions (a predetermined low cardinality set of actions that the agent can take) are not provided to the agent.", "A possible solution to this issue has been proposed by Tao et al. 
(2018) , where a hierarchical pointer-generator is used to first produce the set of admissible actions given the observation, and subsequently one element of this set is chosen as the action for that observation.", "However, in our experiments we show that even in settings where the true set of admissible actions is provided by the environment, a Q-scorer (He et al., 2015) does not generalize well in our setting (Section 5.2 Zero-Shot) and we would expect performance to degrade even further if the admissible actions were generated by a separate model.", "Less common are models that either learn to reduce a large set of actions into a smaller set of admissible actions by eliminating actions (Zahavy et al., 2018) or by compressing them in a latent space (Tessler et al., 2019) .", "Experimental results show that our proposed Go-Explore exploration strategy is a viable methodology for extracting high-performing trajectories in text-based games.", "This method allows us to train supervised models that can outperform existing models in the experimental settings that we study.", "Finally, there are still several challenges and limitations that both our methodology and previous solutions do not fully address yet.", "For instance:", "State Representation The state representation is the main limitation of our proposed imitation learning model.", "In fact, by examining the observations provided in different games, we notice a large overlap in the descriptions (D) of the games.", "This overlap leads to a situation where the policy receives very similar observations, but is expected to imitate two different actions.", "This show especially in the joint setting of CookingWorld, where the 222 games are repeated 20 times with different entities and room maps.", "In this work, we opted for a simple Seq2Seq model for our policy, since our goal is to show the effectiveness of our proposed exploration methods.", "However, a more complex Hierarchical-Seq2Seq model (Sordoni et al., 2015) or a better encoder representation based on knowledge graphs (Ammanabrolu & Riedl, 2019a; b) would likely improve the of performance this approach.", "Language Based Exploration In Go-Explore, the given admissible actions are used during random exploration.", "However, in more complex games, e.g. 
Zork I and in general the Z-Machine games, these admissible actions are not provided.", "In such settings, the action space would explode in size, and thus Go-Explore, even with an appropriate cell representation, would have a hard time finding good trajectories.", "To address this issue, one could leverage general language models to produce a set of grammatically correct actions.", "Alternatively, one could iteratively learn a policy to sample actions while exploring with Go-Explore.", "Both strategies are viable, and a comparison is left to future work.", "It is worth noting that a hand-tailored solution for the CookingWorld games has been proposed in the \"First TextWorld Problems\" competition.", "This solution managed to obtain up to 91.9% of the maximum possible score across the 514 test games on an unpublished dataset.", "However, this solution relies on entity extraction and template filling, which we believe limits its potential for generalization.", "Therefore, this approach should be viewed as complementary rather than a competitor to our approach, as it could potentially be used as an alternative way of obtaining promising trajectories.", "In this paper we presented a novel methodology for solving text-based games which first extracts high-performing trajectories using phase 1 of Go-Explore and then trains a simple Seq2Seq model that maps observations to actions using the extracted trajectories.", "Our experiments show promising results in three settings, with improved generalization and sample efficiency compared to existing methods.", "Finally, we discussed the limitations and possible improvements of our methodology, which lead to new research challenges in text-based games." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.9714285731315613, 0.1621621549129486, 0.1538461446762085, 0.1463414579629898, 0.1111111044883728, 0.1860465109348297, 0.20512819290161133, 0.17777776718139648, 0.1818181723356247, 0.04878048226237297, 0.1818181723356247, 0.1538461446762085, 0.15789473056793213, 0.1666666567325592, 0.17391303181648254, 0.07999999821186066, 0.072727270424366, 0.051282044500112534, 0.2545454502105713, 0, 0.17241379618644714, 0.21052631735801697, 0.08888888359069824, 0.09677419066429138, 0.10526315122842789, 0.1666666567325592, 0.08510638028383255, 0.0416666604578495, 0.1304347813129425, 0.08510638028383255, 0.06896550953388214, 0.12121211737394333, 0.07999999821186066, 0.21052631735801697, 0.1111111044883728, 0.05405404791235924, 0.060606054961681366, 0.1621621549129486, 0.052631575614213943, 0.25, 0.09756097197532654, 0.07999999821186066, 0.0624999962747097, 0.10810810327529907, 0.13636362552642822, 0.0555555522441864, 0, 0.13333332538604736, 0.10526315122842789, 0.19999998807907104, 0.0555555522441864, 0.0952380895614624, 0.15094339847564697, 0.1111111044883728, 0.2631579041481018 ]
BygSXCNFDB
true
[ "This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games. " ]
[ "The recent “Lottery Ticket Hypothesis” paper by Frankle & Carbin showed that a simple approach to creating sparse networks (keep the large weights) results in models that are trainable from scratch, but only when starting from the same initial weights.", "The performance of these networks often exceeds the performance of the non-sparse base model, but for reasons that were not well understood.", "In this paper we study the three critical components of the Lottery Ticket (LT) algorithm, showing that each may be varied significantly without impacting the overall results.", "Ablating these factors leads to new insights for why LT networks perform as well as they do.", "We show why setting weights to zero is important, how signs are all you need to make the re-initialized network train, and why masking behaves like training.", "Finally, we discover the existence of Supermasks, or masks that can be applied to an untrained, randomly initialized network to produce a model with performance far better than chance (86% on MNIST, 41% on CIFAR-10).", "Many neural networks are over-parameterized BID0 BID1 , enabling compression of each layer BID1 BID14 BID4 or of the entire network BID9 .", "Some compression approaches enable more efficient computation by pruning parameters, by factorizing matrices, or via other tricks BID4 BID5 BID8 BID10 BID11 BID12 BID13 BID14 BID15 BID16 .", "A recent work by Frankle & Carbin BID2 presented a simple algorithm for finding sparse subnetworks within larger networks that can meet or exceed the performance of the original network.", "Their approach is as follows: after training a network, set all weights smaller than some threshold to zero BID2 , rewind the rest of the weights to their initial configuration BID3 , and then retrain the network from this starting configuration but with the zero weights frozen (not trained).", "See Section S1 for a more formal description of this algorithm.In this paper we perform ablation studies along the above three dimensions of variability, considering alternate mask criteria (Section 2), alternate mask-1 actions (Section 3), and alternate mask-0 actions (Section 4).", "These studies in aggregate reveal new insights for why lottery ticket networks work as they do.", "Along the way we also discover the existence of Supermasks-masks that produce above-chance performance when applied to untrained networks (Section 5)." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.035087715834379196, 0.051282044500112534, 0.13333332538604736, 0.0555555522441864, 0.31111109256744385, 0.15094339847564697, 0.14999999105930328, 0, 0.12244897335767746, 0.23333333432674408, 0.1090909019112587, 0.0555555522441864, 0.04999999329447746 ]
rkeTDNS3hN
true
[ "In neural network pruning, zeroing pruned weights is important, sign of initialization is key, and masking can be thought of as training." ]
[ "Fine-tuning with pre-trained models has achieved exceptional results for many language tasks.", "In this study, we focused on one such self-attention network model, namely BERT, which has performed well in terms of stacking layers across diverse language-understanding benchmarks.", "However, in many downstream tasks, information between layers is ignored by BERT for fine-tuning.", "In addition, although self-attention networks are well-known for their ability to capture global dependencies, room for improvement remains in terms of emphasizing the importance of local contexts.", "In light of these advantages and disadvantages, this paper proposes SesameBERT, a generalized fine-tuning method that (1) enables the extraction of global information among all layers through Squeeze and Excitation and (2) enriches local information by capturing neighboring contexts via Gaussian blurring.", "Furthermore, we demonstrated the effectiveness of our approach in the HANS dataset, which is used to determine whether models have adopted shallow heuristics instead of learning underlying generalizations.", "The experiments revealed that SesameBERT outperformed BERT with respect to GLUE benchmark and the HANS evaluation set.", "In recent years, unsupervised pretrained models have dominated the field of natural language processing (NLP).", "The construction of a framework for such a model involves two steps: pretraining and fine-tuning.", "During pretraining, an encoder neural network model is trained using large-scale unlabeled data to learn word embeddings; parameters are then fine-tuned with labeled data related to downstream tasks.", "Traditionally, word embeddings are vector representations learned from large quantities of unstructured textual data such as those from Wikipedia corpora (Mikolov et al., 2013) .", "Each word is represented by an independent vector, even though many words are morphologically similar.", "To solve this problem, techniques for contextualized word representation (Peters et al., 2018; Devlin et al., 2019) have been developed; some have proven to be more effective than conventional word-embedding techniques, which extract only local semantic information of individual words.", "By contrast, pretrained contextual representations learn sentence-level information from sentence encoders and can generate multiple word embeddings for a word.", "Pretraining methods related to contextualized word representation, such as BERT (Devlin et al., 2019) , OpenAI GPT (Radford et al., 2018) , and ELMo (Peters et al., 2018) , have attracted considerable attention in the field of NLP and have achieved high accuracy in GLUE tasks such as single-sentence, similarity and paraphrasing, and inference tasks .", "Among the aforementioned pretraining methods, BERT, a state-of-the-art network, is the leading method that applies the architecture of the Transformer encoder, which outperforms other models with respect to the GLUE benchmark.", "BERT's performance suggests that self-attention is highly effective in extracting the latent meanings of sentence embeddings.", "This study aimed to improve contextualized word embeddings, which constitute the output of encoder layers to be fed into a classifier.", "We used the original method of the pretraining stage in the BERT model.", "During the fine-tuning process, we introduced a new architecture known as Squeeze and Excitation alongside Gaussian blurring with symmetrically SAME padding (\"SESAME\" hereafter).", "First, although the developer of the BERT model 
initially presented several options for its use, whether the selective layer approaches involved information contained in all layers was unclear.", "In a previous study, by investigating relationships between layers, we observed that the Squeeze and Excitation method (Hu et al., 2018) is key for focusing on information between layer weights.", "This method enables the network to perform feature recalibration and improves the quality of representations by selectively emphasizing informative features and suppressing redundant ones.", "Second, the self-attention mechanism enables a word to analyze other words in an input sequence; this process can lead to more accurate encoding.", "The main benefit of the self-attention mechanism method is its high ability to capture global dependencies.", "Therefore, this paper proposes the strategy, namely Gaussian blurring, to focus on local contexts.", "We created a Gaussian matrix and performed convolution alongside a fixed window size for sentence embedding.", "Convolution helps a word to focus on not only its own importance but also its relationships with neighboring words.", "Through such focus, each word in a sentence can simultaneously maintain global and local dependencies.", "We conducted experiments with our proposed method to determine whether the trained model could outperform the BERT model.", "We observed that SesameBERT yielded marked improvement across most GLUE tasks.", "In addition, we adopted a new evaluation set called HANS , which was designed to diagnose the use of fallible structural heuristics, namely the lexical overlap heuristic, subsequent heuristic, and constituent heuristic.", "Models that apply these heuristics are guaranteed to fail in the HANS dataset.", "For example, although BERT scores highly in the given test set, it performs poorly in the HANS dataset; BERT may label an example correctly not based on reasoning regarding the meanings of sentences but rather by assuming that the premise entails any hypothesis whose words all appear in the premise (Dasgupta et al., 2018) .", "By contrast, SesameBERT performs well in the HANS dataset; this implies that this model does not merely rely on heuristics.", "In summary, our final model proved to be competitive on multiple downstream tasks.", "This paper proposes a fine-tuning approach named SesameBERT based on the pretraining model BERT to improve the performance of self-attention networks.", "Specifically, we aimed to find highquality attention output layers and then extract information from aspects in all layers through Squeeze and Excitation.", "Additionally, we adopted Gaussian blurring to help capture local contexts.", "Experiments using GLUE datasets revealed that SesameBERT outperformed the BERT baseline model.", "The results also revealed the weight distributions of different layers and the effects of applying different Gaussian-blurring approaches when training the model.", "Finally, we used the HANS dataset to determine whether our models were learning what we wanted them to learn rather than using shallow heuristics.", "We highlighted the use of lexical overlap heuristics as an advantage over the BERT model.", "SesameBERT could be further applied to prevent models from easily adopting shallow heuristics.", "A DESCRIPTIONS OF GLUE DATASETS" ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.0714285671710968, 0.1818181723356247, 0.1818181723356247, 0.8235294222831726, 0.0714285671710968, 0.12765957415103912, 0.08888888359069824, 0.1818181723356247, 0, 0.03703703358769417, 0.04444444179534912, 0.0882352888584137, 0.12244897335767746, 0.08571428060531616, 0.17543859779834747, 0.1304347813129425, 0.1599999964237213, 0.19512194395065308, 0.30188679695129395, 0.178571417927742, 0.29999998211860657, 0.23076923191547394, 0.11538460850715637, 0.17391303181648254, 0.1818181723356247, 0.17777776718139648, 0.0833333283662796, 0.17777776718139648, 0.17391303181648254, 0.09756097197532654, 0.13333332538604736, 0.09302324801683426, 0.12987013161182404, 0.08163265138864517, 0, 0.1599999964237213, 0.2800000011920929, 0.20000000298023224, 0.0952380895614624, 0.1666666567325592, 0.038461532443761826, 0.13636362552642822, 0, 0 ]
H1lac2Vtwr
true
[ "We proposed SesameBERT, a generalized fine-tuning method that enables the extraction of global information among all layers through Squeeze and Excitation and enriches local information by capturing neighboring contexts via Gaussian blurring." ]
[ "A growing number of learning methods are actually differentiable games whose players optimise multiple, interdependent objectives in parallel – from GANs and intrinsic curiosity to multi-agent RL.", "Opponent shaping is a powerful approach to improve learning dynamics in these games, accounting for player influence on others’ updates.", "Learning with Opponent-Learning Awareness (LOLA) is a recent algorithm that exploits this response and leads to cooperation in settings like the Iterated Prisoner’s Dilemma.", "Although experimentally successful, we show that LOLA agents can exhibit ‘arrogant’ behaviour directly at odds with convergence.", "In fact, remarkably few algorithms have theoretical guarantees applying across all (n-player, non-convex) games.", "In this paper we present Stable Opponent Shaping (SOS), a new method that interpolates between LOLA and a stable variant named LookAhead.", "We prove that LookAhead converges locally to equilibria and avoids strict saddles in all differentiable games.", "SOS inherits these essential guarantees, while also shaping the learning of opponents and consistently either matching or outperforming LOLA experimentally.", "Problem Setting.", "While machine learning has traditionally focused on optimising single objectives, generative adversarial nets (GANs) BID9 have showcased the potential of architectures dealing with multiple interacting goals.", "They have since then proliferated substantially, including intrinsic curiosity BID19 , imaginative agents BID20 , synthetic gradients , hierarchical reinforcement learning (RL) BID23 BID22 and multi-agent RL in general BID2 .These", "can effectively be viewed as differentiable games played by cooperating and competing agents -which may simply be different internal components of a single system, like the generator and discriminator in GANs. The difficulty", "is that each loss depends on all parameters, including those of other agents. While gradient", "descent on single functions has been widely successful, converging to local minima under rather mild conditions BID13 , its simultaneous generalisation can fail even in simple two-player, two-parameter zero-sum games. No algorithm has", "yet been shown to converge, even locally, in all differentiable games.Related Work. Convergence has", "widely been studied in convex n-player games, see especially BID21 ; BID5 . However, the recent", "success of non-convex games exemplified by GANs calls for a better understanding of this general class where comparatively little is known. BID14 recently prove", "local convergence of no-regreat learning to variationally stable equilibria, though under a number of regularity assumptions.Conversely, a number of algorithms have been successful in the non-convex setting for restricted classes of games. These include policy", "prediction in two-player two-action bimatrix games BID24 ; WoLF in two-player two-action games BID1 ; AWESOME in repeated games BID3 ; Optimistic Mirror Descent in two-player bilinear zero-sum games BID4 and Consensus Optimisation (CO) in two-player zerosum games BID15 ). An important body of", "work including BID10 ; BID16 has also appeared for the specific case of GANs.Working towards bridging this gap, some of the authors recently proposed Symplectic Gradient Adjustment (SGA), see BID0 . This algorithm is provably", "'attracted' to stable fixed points while 'repelled' from unstable ones in all differentiable games (n-player, non-convex). 
Nonetheless, these results", "are weaker than strict convergence guarantees. Moreover, SGA agents may act", "against their own self-interest by prioritising stability over individual loss. SGA was also discovered independently", "by BID8 , drawing on variational inequalities.In a different direction, Learning with Opponent-Learning Awareness (LOLA) modifies the learning objective by predicting and differentiating through opponent learning steps. This is intuitively appealing and experimentally", "successful, encouraging cooperation in settings like the Iterated Prisoner's Dilemma (IPD) where more stable algorithms like SGA defect. However, LOLA has no guarantees of converging or", "even preserving fixed points of the game.Contribution. We begin by constructing the first explicit tandem", "game where LOLA agents adopt 'arrogant' behaviour and converge to non-fixed points. We pinpoint the cause of failure and show that a natural", "variant named LookAhead (LA), discovered before LOLA by BID24 , successfully preserves fixed points. We then prove that LookAhead locally converges and avoids", "strict saddles in all differentiable games, filling a theoretical gap in multi-agent learning. This is enabled through a unified approach based on fixed-point", "iterations and dynamical systems. These techniques apply equally well to algorithms like CO and SGA", ", though this is not our present focus.While LookAhead is theoretically robust, the shaping component endowing LOLA with a capacity to exploit opponent dynamics is lost. We solve this dilemma with an algorithm named Stable Opponent Shaping", "(SOS), trading between stability and exploitation by interpolating between LookAhead and LOLA. Using an intuitive and theoretically grounded criterion for this interpolation", "parameter, SOS inherits both strong convergence guarantees from LA and opponent shaping from LOLA.On the experimental side, we show that SOS plays tit-for-tat in the IPD on par with LOLA, while all other methods mostly defect. We display the practical consequences of our theoretical guarantees in the tandem", "game, where SOS always outperforms LOLA. Finally we implement a more involved GAN setup, testing for mode collapse and mode", "hopping when learning Gaussian mixture distributions. 
SOS successfully spreads mass across all Gaussians, at least matching dedicated algorithms", "like CO, while LA is significantly slower and simultaneous gradient descent fails entirely.", "We evaluate the performance of SOS in three differentiable games.", "We first showcase opponent shaping and superiority over LA/CO/SGA/NL in the Iterated Prisoner's Dilemma (IPD).", "This leaves SOS and LOLA, which have differed only in theory up to now.", "We bridge this gap by showing that SOS always outperforms LOLA in the tandem game, avoiding arrogant behaviour by decaying p while LOLA overshoots.", "Finally we test SOS on a more involved GAN learning task, with results similar to dedicated methods like Consensus Optimisation.", "IPD: Results are given in FIG1 .", "Parameters in part (A) are the end-run probabilities of cooperating for each memory state, encoded in different colours.", "Only 50 runs are shown for visibility.", "Losses at each step are displayed in part (B), averaged across 300 episodes with shaded deviations.SOS and LOLA mostly succeed in playing tit-for-tat, displayed by the accumulation of points in the correct corners of (A) plots.", "For instance, CC and CD points are mostly in the top right and left corners so agent 2 responds to cooperation with cooperation.", "Agents also cooperate at the start state, represented by ∅ points all hidden in the top right corner.", "Tit-for-tat strategy is further indicated by the losses close to 1 in part (B).", "On the other hand, most points for LA/CO/SGA/NL are accumulated at the bottom left, so agents mostly defect.", "This results in poor losses, demonstrating the limited effectiveness of recent proposals like SGA and CO.", "Finally note that trained parameters and losses for SOS are almost identical to those for LOLA, displaying equal capacity in opponent shaping while also inheriting convergence guarantees and outperforming LOLA in the next experiment.Tandem: Results are given in Figure 3 .", "SOS always succeeds in decreasing p to reach the correct equilibria, with losses averaging at 0.", "LOLA fails to preserve fixed points, overshooting with losses averaging at 4/9.", "The criterion for SOS is shown in action in part (B), decaying p to avoid overshooting.", "This illustrates that purely theoretical guarantees descend into practical outperfor- mance.", "Note that SOS even gets away from the LOLA fixed points if initialised there (not shown), converging to improved losses using the alignment criterion with LookAhead.", "Theoretical results in machine learning have significantly helped understand the causes of success and failure in applications, from optimisation to architecture.", "While gradient descent on single losses has been studied extensively, algorithms dealing with interacting goals are proliferating, with little grasp of the underlying dynamics.", "The analysis behind CO and SGA has been helpful in this respect, though lacking either in generality or convergence guarantees.", "The first contribution of this paper is to provide a unified framework and fill this theoretical gap with robust convergence results for LookAhead in all differentiable games.", "Capturing stable fixed points as the correct solution concept was essential for these techniques to apply.Furthermore, we showed that opponent shaping is both a powerful approach leading to experimental success and cooperative behaviour -while at the same time preventing LOLA from preserving fixed points in general.", "This conundrum is solved through a robust interpolation 
between LookAhead and LOLA, giving birth to SOS through a robust criterion.", "This was partially enabled by choosing to preserve the 'middle' term in LOLA, and using it to inherit stability from LookAhead.", "This results in convergence guarantees stronger than all previous algorithms, but also in practical superiority over LOLA in the tandem game.", "Moreover, SOS fully preserves opponent shaping and outperforms SGA, CO, LA and NL in the IPD by encouraging tit-for-tat policy instead of defecting.", "Finally, SOS convincingly learns Gaussian mixtures on par with the dedicated CO algorithm." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23076923191547394, 0.4000000059604645, 0.2857142686843872, 0.0952380895614624, 0.1538461446762085, 0.1304347813129425, 0.24390242993831635, 0.13333332538604736, 0.07843136787414551, 0.1111111044883728, 0.178571417927742, 0.09999999403953552, 0.17543859779834747, 0.25, 0.04999999701976776, 0.1666666567325592, 0.178571417927742, 0.072727270424366, 0.10344827175140381, 0.22727271914482117, 0.0555555522441864, 0, 0.1428571343421936, 0.07999999821186066, 0, 0.08510638028383255, 0, 0.3478260934352875, 0.05128204822540283, 0.3050847351551056, 0.045454539358615875, 0.23880596458911896, 0.09090908616781235, 0.1395348757505417, 0.052631575614213943, 0.22857142984867096, 0.09999999403953552, 0.1538461446762085, 0.12765957415103912, 0.2222222238779068, 0.06451612710952759, 0.0476190410554409, 0, 0.10526315122842789, 0.1304347813129425, 0.0952380895614624, 0.1538461446762085, 0, 0.04878048226237297, 0.16393442451953888, 0.19512194395065308, 0.10810810327529907, 0.19999998807907104, 0.0555555522441864, 0.11999999731779099, 0.13333332538604736, 0.0416666604578495, 0.13636362552642822, 0.3529411852359772, 0.20588235557079315, 0.1904761791229248, 0.08888888359069824, 0.1818181723356247, 0.12765957415103912, 0.15789473056793213 ]
SyGjjsC5tQ
true
[ "Opponent shaping is a powerful approach to multi-agent learning but can prevent convergence; our SOS algorithm fixes this with strong guarantees in all differentiable games." ]
[ "The rate at which medical questions are asked online significantly exceeds the capacity of qualified people to answer them, leaving many questions unanswered or inadequately answered.", "Many of these questions are not unique, and reliable identification of similar questions would enable more efficient and effective question answering schema.", "While many research efforts have focused on the problem of general question similarity, these approaches do not generalize well to the medical domain, where medical expertise is often required to determine semantic similarity.", "In this paper, we show how a semi-supervised approach of pre-training a neural network on medical question-answer pairs is a particularly useful intermediate task for the ultimate goal of determining medical question similarity.", "While other pre-training tasks yield an accuracy below 78.7% on this task, our model achieves an accuracy of 82.6% with the same number of training examples, an accuracy of 80.0% with a much smaller training set, and an accuracy of 84.5% when the full corpus of medical question-answer data is used.", "With the ubiquity of the Internet and the emergence of medical question-answering websites such as ADAM (www.adam.com), WebMD (www.webmd.com), and HealthTap (www.healthtap. com), people are increasingly searching online for answers to their medical questions.", "However, the number of people asking medical questions online far exceeds the number of qualified experts -i.e doctors -answering them.", "One way to address this imbalance is to build a system that can automatically match unanswered questions with semantically similar answered questions, or mark them as priority if no similar answered questions exist.", "This approach uses doctor time more efficiently, reducing the number of unanswered questions and lowering the cost of providing online care.", "Many of the individuals seeking medical advice online are otherwise reluctant to seek medical help due to cost, convenience, or embarrassment.", "For these patients, an accurate online system is critical because it may be the only medical advice they receive.", "Of course, some medical problems require in-person care, and an online system must indicate that.", "Other patients use the internet in addition to in-person care either to determine when an appointment is needed or to follow up after visits when they have lingering questions.", "For this second group, if the answers they see online do not match those given to them by their doctors, they are less likely to follow the advice of their doctors (Nosta, 2017) , which can have serious consequences.", "Coming up with an accurate algorithm for finding similar medical questions, however, is difficult.", "Simple heuristics such as word-overlap are ineffective because Can a menstrual blood clot travel to your heart or lungs like other blood clots can?", "and Can clots from my period cause a stroke or embolism?", "are similar questions with low overlap, but Is candida retested after treatment and Is Chlamydia retested after treatment?", "are critically different and only one word apart.", "Machine learning is a good candidate for such complex tasks, but requires labeled training data.", "As no widely available data for this particular task exists, we generate and release our own dataset of medical question pairs such as the ones shown in Table 1 .", "Given the recent success of pre-trained bi-directional transformer networks for natural language processing (NLP) outside the medical field (Peters et 
al., 2018; Devlin et al., 2018; Radford et al.; Yang et al., 2019; Liu et al., 2019) , most research efforts in medical NLP have tried to apply general models.", "However, these models are not trained on medical information, and make errors that reflect this.", "In this work, we augment the features in these general language models using the depth of information that is stored within a medical question-answer pair to embed medical knowledge into the model.", "Our models pre-trained on this task outperform models pre-trained on out-of-domain question similarity with high statistical significance, and the results show promise of generalizing to other domains as well.", "The task of question-answer matching was specifically chosen because it is closely related to that of question similarity; one component of whether or not two questions are semantically similar is whether or not the answer to one also answers the other.", "We show that the performance gains achieved by this particular task are not realized by other in-domain tasks, such as medical question categorization and medical answer completion.", "The main contributions of this paper are:", "• We release a dataset of medical question pairs generated and labeled by doctors that is based upon real, patient-asked questions", "• We prove that, particularly for medical NLP, domain matters: pre-training on a different task in the same domain outperforms pre-training on the same task in a different domain", "• We show that the task of question-answer matching embeds relevant medical information for question similarity that is not captured by other in-domain tasks", "2 RELATED WORK 2.1 PRE-TRAINED NETWORKS FOR GENERAL LANGUAGE UNDERSTANDING", "NLP has undergone a transfer learning revolution in the past year, with several large pre-trained models earning state-of-the-art scores across many linguistic tasks.", "Two such models that we use in our own experiments are BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019) .", "These models have been trained on semi-supervised tasks such as predicting a word that has been masked out from a random position in a sentence, and predicting whether or not one sentence is likely to follow another.", "The corpus used to train BERT was exceptionally large (3.3 billion words), but all of the data came from BooksCorpus and Wikipedia.", "Talmor & Berant (2019) recently found that BERT generalizes better to other datasets drawn from Wikipedia than to tasks using other web snippets.", "This is consistent with our finding that pre-training domain makes a big difference.", "To understand the broader applicability of our findings, we apply our approach to a non-medical domain: the AskUbuntu question-answer pairs from Lei et al.
(2016) .", "As before, we avoid making the pre-training task artificially easy by creating negatives from related questions.", "This time, since there are no category labels, we index all of the data with Elasticsearch 1 .", "For the question similarity task, the authors have released a candidate set of pairs that were human labeled as similar or dissimilar.", "Without any pre-training (baseline), we observe an accuracy of 65.3% ± 1.2% on the question similarity task.", "Pre-training on QQP leads to a significant reduction in accuracy to 62.3% ± 2.1% indicating that an out-of-domain pretraining task can actually hurt performance.", "When the QA task is used for intermediate pre-training, the results improve to 66.6% ± 0.9%.", "While this improvement may not be statistically significant, it is consistent with the main premise of our work that related tasks in the same domain can help performance.", "We believe that the low accuracy on this task, as well as the small inter-model performance gains, may be due to the exceptionally long question lengths, some of which are truncated by the models during tokenization.", "In the future, we would explore ways to reduce the length of these questions before feeding them into the model.", "In this work, we release a medical question-pairs dataset and show that the semi-supervised approach of pre-training on in-domain question-answer matching (QA) is particularly useful for the difficult task of duplicate question recognition.", "Although the QA model outperforms the out-of-domain same-task QQP model, there are a few examples where the QQP model seems to have learned information that is missing from the QA model (see Appendix A).", "In the future, we can further explore whether these two models learned independently useful information from their pre-training tasks.", "If they did, then we hope to be able to combine these features into one model with multitask learning.", "An additional benefit of the error analysis is that we have a better understanding of the types of mistakes that even our best model is making.", "It is therefore now easier to use weak supervision and augmentation rules to supplement our datasets to increase the number of training examples in those difficult regions of the data.", "With both of these changes, we expect to be able to bump up accuracy on this task by several more percentage points." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.045454539358615875, 0.10526315122842789, 0.16326530277729034, 0.4583333432674408, 0.19672130048274994, 0.12244897335767746, 0.05405404791235924, 0.1249999925494194, 0.052631575614213943, 0.052631575614213943, 0.10526315122842789, 0.1764705777168274, 0.04444443807005882, 0, 0.1818181723356247, 0.0476190410554409, 0.13333332538604736, 0.05882352590560913, 0.07407406717538834, 0.23529411852359772, 0.2916666567325592, 0.06896550953388214, 0.1764705777168274, 0.2083333283662796, 0.2222222238779068, 0.23529411852359772, 0.2790697515010834, 0, 0.44999998807907104, 0.3684210479259491, 0.3287671208381653, 0.09999999403953552, 0.1538461446762085, 0.0476190410554409, 0.04999999329447746, 0.25, 0.0952380895614624, 0.11428570747375488, 0, 0.19999998807907104, 0.21052631735801697, 0.13636362552642822, 0.1666666567325592, 0.08695651590824127, 0.11764705181121826, 0, 0.6000000238418579, 0.1304347813129425, 0.052631575614213943, 0, 0.14999999105930328, 0.08888888359069824, 0.04999999329447746 ]
Byxn9CNYPr
true
[ "We show that question-answer matching is a particularly good pre-training task for question-similarity and release a dataset for medical question similarity" ]
[ "We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning.", "We leverage the assumption that learning from different tasks, sharing common properties, is helpful to generalize the knowledge of them resulting in a more effective feature extraction compared to learning a single task.", "Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms.", "We prove this by providing theoretical guarantees that highlight the conditions for which is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting.", "In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks showing significant improvements over the single-task counterparts in terms of sample efficiency and performance.", "Multi-Task Learning (MTL) ambitiously aims to learn multiple tasks jointly instead of learning them separately, leveraging the assumption that the considered tasks have common properties which can be exploited by Machine Learning (ML) models to generalize the learning of each of them.", "For instance, the features extracted in the hidden layers of a neural network trained on multiple tasks have the advantage of being a general representation of structures common to each other.", "This translates into an effective way of learning multiple tasks at the same time, but it can also improve the learning of each individual task compared to learning them separately (Caruana, 1997) .", "Furthermore, the learned representation can be used to perform Transfer Learning (TL), i.e. using it as a preliminary knowledge to learn a new similar task resulting in a more effective and faster learning than learning the new task from scratch (Baxter, 2000; Thrun & Pratt, 2012) .", "The same benefits of extraction and exploitation of common features among the tasks achieved in MTL, can be obtained in Multi-Task Reinforcement Learning (MTRL) when training a single agent on multiple Reinforcement Learning (RL) problems with common structures (Taylor & Stone, 2009; Lazaric, 2012) .", "In particular, in MTRL an agent can be trained on multiple tasks in the same domain, e.g. riding a bicycle or cycling while going towards a goal, or on different but similar domains, e.g. balancing a pendulum or balancing a double pendulum 1 .", "Considering recent advances in Deep Reinforcement Learning (DRL) and the resulting increase in the complexity of experimental benchmarks, the use of Deep Learning (DL) models, e.g. 
deep neural networks, has become a popular and effective way to extract common features among tasks in MTRL algorithms (Rusu et al., 2015; Liu et al., 2016; Higgins et al., 2017) .", "However, despite the high representational capacity of DL models, the extraction of good features remains challenging.", "For instance, the performance of the learning process can degrade when unrelated tasks are used together (Caruana, 1997; Baxter, 2000) ; another detrimental issue may occur when the training of a single model is not balanced properly among multiple tasks (Hessel et al., 2018) .", "Recent developments in MTRL achieve significant results in feature extraction by means of algorithms specifically developed to address these issues.", "While some of these works rely on a single deep neural network to model the multi-task agent (Liu et al., 2016; Yang et al., 2017; Hessel et al., 2018; Wulfmeier et al., 2019) , others use multiple deep neural networks, e.g. one for each task and another for the multi-task agent (Rusu et al., 2015; Parisotto et al., 2015; Higgins et al., 2017; Teh et al., 2017) .", "Intuitively, achieving good results in MTRL with a single deep neural network is more desirable than using many of them, since the training time is likely much less and the whole architecture is easier to implement.", "In this paper we study the benefits of shared representations among tasks.", "We theoretically motivate the intuitive effectiveness of our method, deriving theoretical guarantees that exploit the theoretical framework provided by Maurer et al. (2016) , in which the authors present upper bounds on the quality of learning in MTL when extracting features for multiple tasks in a single shared representation.", "The significancy of this result is that the cost of learning the shared representation decreases with a factor O( 1 / √ T ), where T is the number of tasks for many function approximator hypothesis classes.", "The main contribution of this work is twofold.", "As stated in the remarks of Equation (9), the benefit of MTRL is evinced by the second component of the bound, i.e. 
the cost of learning h, which vanishes with the increase of the number of tasks.", "Obviously, adding more tasks requires the shared representation to be large enough to include all of them, undesirably causing the term $\sup_{h,w} h(w(X))$ in the fourth component of the bound to increase.", "This introduces a tradeoff between the number of features and number of tasks; however, for a reasonable number of tasks the number of features used in the single-task case is enough to handle them, as we show in some experiments in Section 5.", "[Figure 1 caption] (a) The architecture of the neural network we propose to learn T tasks simultaneously.", "The w_t block maps each input x_t from task µ_t to a shared set of layers h which extracts a common representation of the tasks.", "Eventually, the shared representation is specialized in block f_t and the output y_t of the network is computed.", "Note that each block can be composed of arbitrarily many layers.", "Notably, since the AVI/API framework provided by Farahmand (2011) provides an easy way to include the approximation error of a generic function approximator, it is easy to show the benefit in MTRL of the bound in Equation (9).", "Despite being just multi-task extensions of previous works, our results are the first to theoretically show the benefit of sharing representations in MTRL.", "Moreover, they serve as a significant theoretical motivation, besides the intuitive ones, for the practical algorithms that we describe in the following sections.", "We have theoretically proved the advantage in RL of using a shared representation to learn multiple tasks w.r.t. learning a single task.", "We have derived our results extending the AVI/API bounds (Farahmand, 2011) to MTRL, leveraging the upper bounds on the approximation error in MTL provided in Maurer et al. (2016) .", "The results of this analysis show that the error propagation during the AVI/API iterations is reduced according to the number of tasks.", "Then, we proposed a practical way of exploiting this theoretical benefit which consists of an effective way of extracting shared representations of multiple tasks by means of deep neural networks.", "To empirically show the advantages of our method, we carried out experiments on challenging RL problems with the introduction of multi-task extensions of FQI, DQN, and DDPG based on the neural network structure we proposed.", "As desired, the favorable empirical results confirm the theoretical benefit we described.", "A PROOFS" ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.5625, 0.19512194395065308, 0.29629629850387573, 0.09756097197532654, 0.25531914830207825, 0.17777776718139648, 0.2631579041481018, 0.09999999403953552, 0.1538461446762085, 0.2745097875595093, 0.1304347813129425, 0.16949152946472168, 0.1538461446762085, 0.07692307233810425, 0.12903225421905518, 0.10344827175140381, 0.13333332538604736, 0.25, 0.18518517911434174, 0.1395348757505417, 0.09999999403953552, 0.20512820780277252, 0.20512820780277252, 0.09999999403953552, 0.1666666567325592, 0.2857142686843872, 0.08695651590824127, 0.17142856121063232, 0.1860465109348297, 0.3529411852359772, 0.1764705777168274, 0.22857142984867096, 0.1621621549129486, 0.12903225421905518, 0.15789473056793213, 0.1463414579629898, 0.17391303181648254 ]
rkgpv2VFvr
true
[ "A study on the benefit of sharing representation in Multi-Task Reinforcement Learning." ]
[ "We present a 3D capsule architecture for processing of point clouds that is equivariant with respect to the SO(3) rotation group, translation and permutation of the unordered input sets.", "The network operates on a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end equivariance through a novel 3D quaternion group capsule layer, including an equivariant dynamic routing procedure.", "The capsule layer enables us to disentangle geometry from pose, paving the way for more informative descriptions and a structured latent space.", "In the process, we theoretically connect the process of dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving iterative re-weighted least squares (IRLS) problems with provable convergence properties, enabling robust pose estimation between capsule layers.", "Due to the sparse equivariant quaternion capsules, our architecture allows joint object classification and orientation estimation, which we validate empirically on common benchmark datasets. \n\n", "It is now well understood that in order to learn a compact representation of the input data, one needs to respect the symmetries in the problem domain (Cohen et al., 2019; Weiler et al., 2018a) .", "Arguably, one of the primary reasons of the success of 2D convolutional neural networks (CNN) is the translation-invariance of the 2D convolution acting on the image grid (Giles & Maxwell, 1987; .", "Recent trends aim to transfer this success into the 3D domain in order to support many applications such as shape retrieval, shape manipulation, pose estimation, 3D object modeling and detection.", "There, the data is naturally represented as sets of 3D points or a point cloud (Qi et al., 2017a; b) .", "Unfortunately, extension of CNN architectures to 3D point clouds is non-trivial due to two reasons:", "1) point clouds are irregular and unstructured,", "2) the group of transformations that we are interested in is more complex as 3D data is often observed under arbitrary non-commutative SO(3) rotations.", "As a result, achieving appropriate embeddings requires 3D networks that work on points to be equivariant to these transformations, while also being invariant to the permutations of the point set.", "In order to fill this important gap, we propose the quaternion equivariant point capsule network or QE-Network that is suited to process point clouds and is equivariant to SO(3) rotations compactly parameterized by quaternions (Fig. 
2) , in addition to preserved translation and permutation equivariance.", "Inspired by the local group equivariance (Cohen et al., 2019) , we efficiently cover SO(3) by restricting ourselves to the sparse set of local reference frames (LRF) that collectively characterize the object orientation.", "The proposed capsule layers (Hinton et al., 2011) deduce equivariant latent representations by robustly combining those local LRFs using the proposed Weiszfeld dynamic routing.", "Hence, our latent features specify to local orientations disentangling the pose from object existence.", "Such explicit storage is unique to our work and allows us to perform rotation estimation jointly with object classification.", "Our final architecture is a hierarchy of QE-networks, where we use classification error as the only training cue and adapt a Siamese version when the relative rotation is to be regressed.", "We neither explicitly supervise the network with pose annotations nor train by augmenting rotations.", "Overall, our contributions are:", "1. We propose a novel, fully SO(3)-equivariant capsule architecture that is tailored for simultaneous classification and pose estimation of 3D point clouds.", "This network produces invariant latent representations while explicitly decoupling the orientation into capsules, thus attaining equivariance.", "[Figure caption fragment] shows the LRFs randomly sampled from (a) and these are inputs to the first layer of our network.", "Subsequently, we obtain a multi-channel LRF that is a set of reference frames per pooling center (d).", "Holistically, our network aggregates the LRFs to arrive at rotation equivariant capsules.", "Note that equivariance results have not been previously achieved regarding the quaternion parameterization of the 3D special orthogonal group.", "2. By utilizing LRFs on points, we reduce the space of orientations that we consider and hence can work sparsely on a subset of the group elements.", "3. We theoretically prove the equivariance properties of our 3D network regarding the quaternion group.", "Moreover, to the best of our knowledge, we for the first time establish a connection between the dynamic routing of Sabour et al. (2017) and Generalized Weiszfeld iterations (Aftab et al., 2015) .", "By that, we theoretically argue for the convergence of the employed dynamic routing.", "4.
We experimentally demonstrate the capabilities of our network on classification and orientation estimation of 3D shapes.", "In this work, we have presented a new framework for achieving permutation invariant and SO(3) equivariant representations on 3D point clouds.", "Proposing a variant of the capsule networks, we operate on a sparse set of rotations specified by the input LRFs thereby circumventing the effort to cover the entire SO(3).", "Our network natively consumes a compact representation of the group of 3D rotations -quaternions, and we have theoretically shown its equivariance.", "We have also established convergence results for our Weiszfeld dynamic routing by making connections to the literature of robust optimization.", "Our network is among the few for having an explicit group-valued latent space and thus naturally estimates the orientation of the input shape, even without a supervision signal.", "Limitations.", "In the current form our performance is severely affected by the shape symmetries.", "The length of the activation vector depends on the number of classes and for achieving sufficiently descriptive latent vectors we need to have a significant number of classes.", "On the other side, this allows us to perform with merit on problems where the number of classes are large.", "Although, we have reported robustness to those, the computation of LRFs are still sensitive to the point density changes and resampling.", "LRFs themselves are also ambiguous and sometimes non-unique.", "Future work.", "Inspired by Cohen et al. (2019) and Poulenard & Ovsjanikov (2018) our feature work will involve establishing invariance to the direction in the tangent plane.", "We also plan to apply our network in the broader context of 3D object detection under arbitrary rotations and look for equivariances among point resampling.", "A PROOF OF PROPOSITION 1", "Before presenting the proof we recall the three individual statements contained in Prop.", "1:", "Operator A is invariant under permutations: A({q σ(1) , . . . , q σ(Q) }, w σ ) = A({q 1 , . . . , q Q }, w).", "3.", "The transformations g ∈ H 1 preserve the geodesic distance δ(·).", "Proof.", "We will prove the propositions in order.", "1. We start by transforming each element and replace q i by (g • q i ) of the cost in Eq (4):", "where M i = w i q i q i and p = G q.", "From orthogonallity of G it follows p = G −1 q =⇒ g • p = q and hence g • A(S, w) = A(g • S, w).", "2. The proof follows trivially from the permutation invariance of the symmetric summation operator over the outer products in Eq (8).", "3. It is sufficient to show that |q 1 q 2 | = |(g • q 1 ) (g • q 2 )| for any g ∈ H 1 :", "where g • q ≡ Gq.", "The result is a direct consequence of the orthonormality of G." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4000000059604645, 0.15686273574829102, 0.14999999105930328, 0.07407406717538834, 0.1860465109348297, 0.1249999925494194, 0, 0.17777776718139648, 0.1538461446762085, 0.3125, 0.3199999928474426, 0.24390242993831635, 0.2666666507720947, 0.25, 0.12765957415103912, 0.0476190410554409, 0.0624999962747097, 0.1111111044883728, 0.1304347813129425, 0, 0, 0.29999998211860657, 0.15789473056793213, 0.05882352590560913, 0.13333332538604736, 0, 0.1111111044883728, 0.09756097197532654, 0.0624999962747097, 0.1304347813129425, 0.06666666269302368, 0.11764705181121826, 0.3589743673801422, 0.0952380895614624, 0.10526315122842789, 0.10526315122842789, 0.09090908616781235, 0, 0.1463414579629898, 0.10810810327529907, 0.21621620655059814, 0.1538461446762085, 0.0952380895614624, 0.23255813121795654, 0, 0, 0.052631575614213943, 0, 0, 0.052631575614213943, 0.07407406717538834, 0.05405404791235924, 0, 0.1428571343421936, 0, 0 ]
B1xtd1HtPS
true
[ "Deep architectures for 3D point clouds that are equivariant to SO(3) rotations, as well as translations and permutations. " ]
[ "Vector semantics, especially sentence vectors, have recently been used successfully in many areas of natural language processing.", "However, relatively little work has explored the internal structure and properties of spaces of sentence vectors.", "In this paper, we will explore the properties of sentence vectors by studying a particular real-world application: Automatic Summarization.", "In particular, we show that cosine similarity between sentence vectors and document vectors is strongly correlated with sentence importance and that vector semantics can identify and correct gaps between the sentences chosen so far and the document.", "In addition, we identify specific dimensions which are linked to effective summaries.", "To our knowledge, this is the first time specific dimensions of sentence embeddings have been connected to sentence properties.", "We also compare the features of different methods of sentence embeddings.", "Many of these insights have applications in uses of sentence embeddings far beyond summarization.", "Vector semantics have been growing in popularity for many other natural language processing applications.", "Vector semantics attempt to represent words as vectors in a high-dimensional space, where vectors which are close to each other have similar meanings.", "Various models of vector semantics have been proposed, such as LSA BID10 , word2vec BID14 , and GLOVE BID17 , and these models have proved to be successful in other natural language processing applications.While these models work well for individual words, producing equivalent vectors for sentences or documents has proven to be more difficult.In recent years, a number of techniques for sentence embeddings have emerged.", "One promising method is paragraph vectors (Also known as Doc2Vec), described by BID12 .", "The model behind paragraph vectors resembles that behind word2vec, except that a classifier uses an additional 'paragraph vector' to predict words in a Skip-Gram model.Another model, skip-thoughts, attempts to extend the word2vec model in a different way BID9 .", "The center of the skip-thought model is an encoder-decoder neural network.", "The result, skip-thought vectors, achieve good performance on a wide variety of natural language tasks.Simpler approaches based on linear combinations of the word vectors have managed to achieve state-of-the-art results for non-domain-specific tasks BID20 .", "Arora et al. 
BID1 offer one particularly promising such approach, which was found to achieve equal or greater performance in some tasks than more complicated supervised learning methods.", "Despite the poor performance of our models compared to the baselines, analyses of the underlying data provide many useful insights into the behavior of vector semantics in real-world tasks.", "We have identified differences in different forms of sentence vectors when applied to real-world tasks.", "In particular, each sentence vector form seems to be more successful when used in a particular way.", "Roughly speaking, Arora's vectors excel at judging the similarity of two sentences while Paragraph Vectors excel at representing document vectors, and at representing features as dimensions of vectors.", "While we do not have enough data to pinpoint the strengths of Skipthought vectors, they seem to work well in specific contexts that our work did not fully explore.", "These differences are extremely significant, and will likely make or break real-world applications.", "Therefore, special care should be taken when selecting the sentence vector method for a real-world task." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12121211737394333, 0.25806450843811035, 0.22857142984867096, 0.13636362552642822, 0, 0.1764705777168274, 0.23076923191547394, 0.20689654350280762, 0, 0, 0.11428570747375488, 0, 0.04255318641662598, 0.14814814925193787, 0.08510638028383255, 0, 0.19999998807907104, 0.19354838132858276, 0.060606054961681366, 0.15789473056793213, 0.0952380895614624, 0.13793103396892548, 0.25 ]
S1347ot3b
true
[ "A comparison and detailed analysis of various sentence embedding models through the real-world task of automatic summarization." ]
[ "We present Value Propagation (VProp), a parameter-efficient differentiable planning module built on Value Iteration which can successfully be trained in a reinforcement learning fashion to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments.", "We evaluate on configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes.", "Furthermore, we show that the module enables to learn to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems.", "Planning is a key component for artificial agents in a variety of domains.", "However, a limit of classical planning algorithms is that one needs to know how to search for an optimal (or good) solution, for each type of plan.", "When the complexity of the planning environment and the diversity of tasks increase, this makes writing planners difficult, cumbersome, or entirely infeasible.", "\"Learning to plan\" has been an active research area to address this shortcoming BID13 BID5 .", "To be useful in practice, we propose that methods for learning to plan should have at least two properties: they should be traces free, i.e. not require traces from an optimal planner, and they should generalize, i.e. learn planners that generalize to plans of the same type but of unseen instance and/or planning horizons.In a Reinforcement Learning (RL) setting, learning to plan can be framed as the problem of finding a policy that maximises the expected reward, where such policy is a greedy function that selects actions that will visit states with a higher value for the agent.", "In such cases, Value Iteration (VI) is a algorithm that is naturally used to learn to estimate the value of states, by propagating the rewards and values until a fixed point is reached.", "When the environment can be represented as an occupancy map (a 2D grid), it is possible to approximate this learning algorithm using a deep convolutional neural network (CNN) to propagate the value on the grid cells.", "This enables one to differentiate directly through the planner steps and perform end-to-end learning.", "One way to train such models is with a supervised loss on the trace from a search/planning algorithm, e.g. 
as seen in the supervised learning section of Value Iteration Networks (VIN) BID17 , in which the model is tasked with reproducing the function to iteratively build values aimed at solving the shortest path task.", "However, this baseline violates our wished trace free property because of the required target values, and it doesn't fully demonstrate the capabilities to deal with interactive and generalized settings.", "That is what we set out to extend and further study.In this work we extend the formalization used in VIN to more accurately represent the structure of gridworld-like scenarios, enabling Value Iteration modules to be naturally used within the reinforcement learning framework, while also removing some of the limitations and underlying assumptions of the model.", "Furthermore we propose hierarchical extensions of such a model that allow agents to do multi-step planning, effectively learning models with the capacity to provide useful path-finding and planning capabilities in relatively complex tasks and comparably large scenarios.", "We show that our models can not only learn to plan and navigate in complex and dynamic environments, but that their hierarchical structure provides a way to generalize to navigation tasks where the required planning and the size of the map are much larger than the ones seen at training time.Our main contributions include: (1) introducing VProp, a network module which successfully learns to solve pathfinding via reinforcement learning, (2) demonstrating the ability to generalize, leading our models to solve large unseen maps by training exclusively on much smaller ones, and (3) showing that our modules can learn to navigate environments with more complex dynamics than a static grid-world.", "Architectures that try to solve the large but structured space of navigation tasks have much to benefit from employing planners that can be learnt from data, however these need to quickly adapt to local environment dynamics so that they can provide a flexible planning horizon without the need to collect new data and training again.", "Value Propagation modules' performances show that, if the problem is carefully formalized, such planners can be successfully learnt via Reinforcement Learning, and that great generalization capabilities can be expected when these models are built on convnets and are correctly applied to 2D path-planning tasks.", "Furthermore, we have demonstrated that our methods can even generalize when the environments are dynamics, enabling them to be employed in complex, interactive tasks.", "In future we expect to test our methods on a variety of tasks that can be embedded as graph-like structures (and for which we have the relevant convolutional operators).", "We also plan to evaluate the effects of plugging VProp into architectures that are employing VI modules (see Section 3), since most of these models could make use of the ability to propagate multiple channels to tackle more complex interactive environments.", "Finally, VProp architectures could be applied to algorithms used in mobile robotics and visual tracking BID2 , as they can learn to propagate arbitrary value functions and model a wide range of potential functions." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.380952388048172, 0.10256409645080566, 0.178571417927742, 0.05405404791235924, 0.12244897335767746, 0.09090908616781235, 0.05128204822540283, 0.1599999964237213, 0.22641508281230927, 0.13793103396892548, 0.20512820780277252, 0.11428570747375488, 0.07692307233810425, 0.08695651590824127, 0.19999998807907104, 0.28037384152412415, 0.22857142984867096, 0.307692289352417, 0.20408162474632263, 0.22641508281230927, 0.13114753365516663, 0.178571417927742 ]
Bya8fGWAZ
true
[ "We propose Value Propagation, a novel end-to-end planner which can learn to solve 2D navigation tasks via Reinforcement Learning, and that generalizes to larger and dynamic environments." ]
[ "Recommendation is a prevalent application of machine learning that affects many users; therefore, it is crucial for recommender models to be accurate and interpretable.", "In this work, we propose a method to both interpret and augment the predictions of black-box recommender systems.", "In particular, we propose to extract feature interaction interpretations from a source recommender model and explicitly encode these interactions in a target recommender model, where both source and target models are black-boxes.", "By not assuming the structure of the recommender system, our approach can be used in general settings. ", "In our experiments, we focus on a prominent use of machine learning recommendation: ad-click prediction.", "We found that our interaction interpretations are both informative and predictive, i.e., significantly outperforming existing recommender models.", "What's more, the same approach to interpreting interactions can provide new insights into domains even beyond recommendation.", "Despite their impact on users, state-of-the-art recommender systems are becoming increasingly inscrutable.", "For example, the models that predict if a user will click on an online advertisement are often based on function approximators that contain complex components in order to achieve optimal recommendation accuracy.", "The complex components come in the form of modules for better learning relationships among features, such as interactions between user and ad features (Cheng et al., 2016; Guo et al., 2017; Wang et al., 2017; Lian et al., 2018; Song et al., 2018) .", "Although efforts have been made to understand the feature relationships, there is still no method that can interpret the feature interactions learned by a generic recommender system, nor is there a strong commercial incentive to do so.", "In this work, we identify and leverage feature interactions that represent how a recommender system generally behaves.", "We propose a novel approach, Global Interaction Detection and Encoding for Recommendation (GLIDER), which detects feature interactions that span globally across multiple data-instances from a source recommender model, then explicitly encodes the interactions in a target recommender model, both of which can be black-boxes.", "GLIDER achieves this by first utilizing feature interaction detection with a data-instance level interpretation method called LIME (Ribeiro et al., 2016 ) over a batch of data samples.", "GLIDER then explicitly encodes the collected global interactions into a target model via sparse feature crossing.", "In our experiments on ad-click recommendation, we found that the interpretations generated by GLIDER are informative, and the detected global interactions can significantly improve the target model's prediction performance, even in a setting where the source and target models are the same.", "Because our interaction interpretation method is very general, we also show that the interpretations are informative in domains outside of recommendation, such as image and text classification.", "Our contributions are as follows:", "1. 
We propose GLIDER to detect and explicitly encode global feature interactions in blackbox recommender systems.", "We proposed GLIDER that detects and explicitly encodes global feature interactions in black-box recommender systems.", "In our experiments, we found that the detected global interactions are informative and that explicitly encoding interactions can improve the accuracy of CTR predictions.", "We further validated interaction interpretations on image, text, and graph classifiers.", "We hope GLIDER encourages investigation into the complex interaction behaviors of recommender models to understand why certain feature interactions are very predictive.", "For future research, we wish to understand how feature interactions play a role in the integrity of automatic recommendations." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23529411852359772, 0.3448275923728943, 0.3684210479259491, 0.0714285671710968, 0.1538461446762085, 0.13333332538604736, 0.1428571343421936, 0, 0.09756097197532654, 0.1304347813129425, 0.2380952388048172, 0.3571428656578064, 0.20408162474632263, 0.20512820780277252, 0.2222222238779068, 0.17391304671764374, 0.21052631735801697, 0, 0.29629629850387573, 0.23076923191547394, 0.1875, 0.1818181723356247, 0.24242423474788666, 0.3333333432674408 ]
BkgnhTEtDS
true
[ "Proposed a method to extract and leverage interpretations of feature interactions" ]
[ "Rectified linear units, or ReLUs, have become a preferred activation function for artificial neural networks.", "In this paper we consider the problem of learning a generative model in the presence of nonlinearity (modeled by the ReLU functions).", "Given a set of signal vectors $\\mathbf{y}^i \\in \\mathbb{R}^d, i =1, 2, \\dots , n$, we aim to learn the network parameters, i.e., the $d\\times k$ matrix $A$, under the model $\\mathbf{y}^i = \\mathrm{ReLU}(A\\mathbf{c}^i +\\mathbf{b})$, where $\\mathbf{b}\\in \\mathbb{R}^d$ is a random bias vector, and {$\\mathbf{c}^i \\in \\mathbb{R}^k$ are arbitrary unknown latent vectors}.", "We show that it is possible to recover the column space of $A$ within an error of $O(d)$ (in Frobenius norm) under certain conditions on the distribution of $\\mathbf{b}$.", "Rectified Linear Unit (ReLU) is a basic nonlinear function defined to be ReLU : R → R + ∪ {0} as ReLU(x) ≡ max(0, x).", "For any matrix X, ReLU(X) denotes the matrix obtained by applying the ReLU function on each of the coordinates of the matrix X. ReLUs are building blocks of many nonlinear data-fitting problems based on deep neural networks (see, e.g., [20] for a good exposition).", "In particular, [7] showed that supervised training of very deep neural networks is much faster if the hidden layers are composed of ReLUs.", "Let Y ⊂ R d be a collection of signal vectors that are of interest to us.", "Depending on the application at hand, the signal vectors, i.e., the constituents of Y, may range from images, speech signals, network access patterns to user-item rating vectors and so on.", "We assume that the signal vectors satisfy a generative model, where each signal vector can be approximated by a map g : R k → R d from the latent space to the ambient space, i.e., for each y ∈ Y, y ≈ g(c) for some c ∈ R k .", "In this paper we consider the following specific model (single layer ReLU-network), with the weight (generator) matrix A ∈ R d×k and bias b ∈ R d :", "The generative model in (2) raises multiple interesting questions that play fundamental role in understanding the underlying data and designing systems and algorithms for information processing.", "Here, we consider the following network parameter learning problem under the specific generative model of (2) .", "Learning the network parameters: Given the n observations {y i } i∈[n] ⊂ R d from the model (cf.", "(2)), recover the parameters of the model, i.e., A ∈ R d×k such that", "with latent vectors {c i } i∈[n] ⊂ R k .", "We assume that the bias vector b is a random vector comprising of i.i.d. 
coordinates with each coordinate distributed according to the probability density function", "p(·).", "This question is closely related to the dictionary-learning problem [16] .", "We also note that this question is different from the usual task of training a model (such as [11] ), in which case the set $\{\mathbf{c}^i\}_{i \in [n]}$ is also known (and possibly chosen accordingly) in addition to $\{\mathbf{y}^i\}_{i \in [n]}$ .", "Related works.", "There has been a recent surge of interest in learning ReLUs, and the above question is of basic interest even for a single-layer network (i.e., nonlinearity comprising a single ReLU function).", "It is conceivable that understanding the behavior of a single-layer network would allow one to use some iterative peeling off technique to develop a theory for the generative models comprising multiple layers.", "To the best of our knowledge, the network parameter learning problem, even for single-layer networks, has not been studied as such, i.e., theoretical guarantees do not exist.", "Only in a very recent paper [22] was the unsupervised problem studied when the latent vectors $\{\mathbf{c}^i\}_{i \in [n]}$ are random Gaussian.", "The principled approaches to solve this unsupervised problem in practice reduce this to the 'training' problem, such as the autoencoders [10] that learn features by extensive end-to-end training of encoder-decoder pairs; or use the recently popular generative adversarial networks (GAN) [9] that utilize a discriminator network to tune the generative network.", "The method that we are going to propose here can be seen as an alternative to using GANs for this purpose, and can be seen as an isolated 'decoder' learning of the autoencoder.", "Note that the problem bears some similarity with matrix completion problems, a fact we greatly exploit.", "In matrix completion, a matrix M is visible only partially, and the task is to recover the unknown entries by exploiting some prior knowledge about M .", "In the case of (3), we are more likely to observe the positive entries of the matrix M , which, unlike a majority of matrix completion literature, creates the dependence between M and the sampling procedure." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05405404791235924, 0.3414634168148041, 0.17391303181648254, 0.4166666567325592, 0.17391303181648254, 0.1666666567325592, 0.1818181723356247, 0.21052631735801697, 0.19607841968536377, 0.2539682388305664, 0.08510638028383255, 0.17391303181648254, 0.21621620655059814, 0.1538461446762085, 0.2702702581882477, 0, 0.24561403691768646, 0.1875, 0.3103448152542114, 0.19607841968536377, 0.2745097875595093, 0.08163265138864517, 0.08888888359069824, 0.21875, 0.16326530277729034, 0.15789473056793213, 0.27272728085517883, 0.1599999964237213 ]
HkxW672q8B
true
[ "We show that it is possible to recover the parameters of a 1-layer ReLU generative model from looking at samples generated by it" ]
[ "Methods that calculate dense vector representations for features in unstructured data—such as words in a document—have proven to be very successful for knowledge representation.", "We study how to estimate dense representations when multiple feature types exist within a dataset for supervised learning where explicit labels are available, as well as for unsupervised learning where there are no labels.", "Feat2Vec calculates embeddings for data with multiple feature types enforcing that all different feature types exist in a common space.", "In the supervised case, we show that our method has advantages over recently proposed methods; such as enabling higher prediction accuracy, and providing a way to avoid the cold-start\n", "problem.", "In the unsupervised case, our experiments suggest that Feat2Vec significantly outperforms existing algorithms that do not leverage the structure of the data.", "We believe that we are the first to propose a method for learning unsuper vised embeddings that leverage the structure of multiple feature types.", "Informally, in machine learning a dense representation, or embedding of a vector x ∈ R n is another vector y ∈ R r that has much lower dimensionality (r n) than the original representation, and can be used to replace the original vector in downstream prediction tasks.", "Embeddings have multiple advantages, as they enable more efficient training BID17 , and unsupervised learning BID25 .", "For example, when applied to text, semantically similar words are mapped to nearby points.We consider two kind of algorithms that use embeddings:", "Embeddings have proven useful in a wide variety of contexts, but they are typically built from datasets with a single feature type as in the case of Word2Vec, or tuned for a single prediction task as in the case of Factorization Machine.", "We believe Feat2Vec is an important step towards generalpurpose methods, because it decouples feature extraction from prediction for datasets with multiple feature types, it is general-purpose, and its embeddings are easily interpretable.In the supervised setting, Feat2Vec is able to calculate embeddings for whole passages of texts, and we show experimental results outperforming an algorithm specifically designed for text-even when using the same feature extraction CNN.", "This suggests that the need for ad-hoc networks should be situated in relationship to the improvements over a general-purpose method.In the unsupervised setting, Feat2Vec's embeddings are able to capture relationships across features that can be twice as better as Word2Vec's CBOW algorithm on some evaluation metrics.", "Feat2Vec exploits the structure of a datasets to learn embeddings in a way that is structurally more sensible than existing methods.", "The sampling method, and loss function that we use have interesting theoretical properties.", "To the extent of our knowledge, Unsupervised Feat2Vec is the first method able to calculate continuous representations of data with arbitrary feature types.Future work could study how to reduce the amount of human knowledge our approach requires; for example by automatically grouping features into entities, or by automatically choosing a feature extraction function.", "These ideas can extend to our codebase that we make available 8 .", "Overall, we evaluate supervised and unsupervised Feat2Vec on 2 datasets each.", "Though further experimentation is necessary, we believe that our results are an encouraging step towards general-purpose embedding models.", "Bag of 
categories 244,241 \"George Johnson\", \"Jack Russell\" Principal cast members (actors) Bag of categories 1,104,280 \"George Clooney\", \"Brad Pitt\", \"Julia Roberts\" A APPENDIXES A.1", "UNSUPERVISED RANKING EXPERIMENT DETAILS For our evaluation, we define a testing set that was not used to tune the parameters of the model.", "For the IMDB dataset, we randomly select a 10% sample of the observations that contain a director that appears at least twice in the database 9 .", "We do this to guarantee that the set of directors in the left-out dataset appear during training at least once, so that each respective algorithm can learn something about the characteristics of these directors.", "For the educational dataset, our testing set only has observations of textbooks and users that appear at least 10 times in training.For both Feat2Vec and CBOW, we perform cross-validation on the loss function, by splitting the 10% of the training data randomly into a validation set, to determine the number of epochs to train, and then train the full training dataset with this number of epochs.10", "While regularization of the embeddings during training is possible, this did not dramatically change results, so we ignore this dimension of hyperparameters.We rank left-out entity pairs in the test dataset using the ordinal ranking of the cosine similarity of target and input embeddings.", "For the IMDB dataset, the target is the director embedding, and the input embedding is the sum of the cast member embeddings.", "For the educational dataset, the target is the textbook embedding, and the input embedding is the user embedding.For training Feat2Vec we set α 1 = α 2 = 3/4 in the IMDB dataset; and α 1 = 0 and α 2 = 0.5 for the educational.", "In each setting, α 2 is set to the same flattening hyperparameter we use for CBOW to negatively sample words in a document.", "We learn r = 50 dimensional embeddings under both algorithms.Below we describe how CBOW is implemented on our datasets for unsupervised experiments and what extraction functions are used to represent features in the IMDB dataset.Word2Vec For every observation in each of the datasets, we create a document that tokenizes the same information that we feed into Feat2Vec.", "We prepend each feature value by its feature name, and we remove spaces from within features.", "In Figure A .2 we show an example document.", "Some features may allow multiple values (e.g., multiple writers, directors).", "To feed these features into the models, for convenience, we constraint the number of values, by truncating each feature to no more than 10 levels (and sometimes less if reasonable).", "This results in retaining the full set of information for well over 95% of the values.", "We pad the sequences with a \"null\" category whenever necessary to maintain a fixed length.", "We do this consistently for both Word2Vec and Feat2Vec.", "We use the CBOW Word2Vec algorithm and set the context window to encompass all other tokens in a document during training, since the text in this application is unordered.", "Here, we explain how we build these functions:• Bag of categories, categorical, and boolean: For all of the categorical variables, we learn a unique r-dimensional embedding for each entity using a linear fully-connected layer (Equation 4).", "We do not require one-hot encodings, and thus we allow multiple categories to be active; resulting in a single embedding for the group that is the sum of the embeddings of the subfeatures.", "This is ordering-invariant: the embedding 
of \"Brad Pitt\" would be the same when he appears in a movie as a principal cast member, regardless whether he was 1st or 2nd star.", "Though, if he were listed as a director it may result in a different embedding.•", "Text: We preprocess the text by removing non alpha-numeric characters, stopwords, and stemming the remaining words. We", "then follow the same approach that we did for categorical variables, summing learned word embeddings to a \"title embedding\" before interacting. It", "would be easy to use more sophisticated methods (e.g, convolutions), but we felt this would not extract further information.• Real-valued", ": For all real-valued features, we pass these features through a 3-layer feedforward fully connected neural network that outputs a vector of dimension r, which we treat as the feature's embedding. Each intermediate", "layer has r units with relu activation functions. These real-valued", "features highlight one of the advantages of the Feat2Vec algorithm: using a numeric value as an input, Feat2Vec can learn a highly nonlinear relation mapping a real number to our high-dimensional embedding space. In contrast, Word2Vec", "would be unable to know ex ante that an IMDB rating of 5.5 is similar to 5.6. Figure A .3 shows the", "full distribution of rankings of the IMDB dataset, rather than summary statistics, in the form of a Cumulative Distribution Function (CDF) of all rankings calculated in the test dataset. The graphic makes it", "apparent for the vast majority of the ranking space, the rank CDF of Feat2Vec is to the left of CBOW, indicating a greater probability of a lower ranking under Feat2Vec. This is not, however", ", the case at the upper tail of ranking space, where it appears CBOW is superior." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.1463414579629898, 0.12903225421905518, 0.04878048226237297, 0.0624999962747097, 0.11428570747375488, 0.19607843458652496, 0.06896550953388214, 0.05714285373687744, 0.13333332538604736, 0.09090908616781235, 0.07407406717538834, 0.1818181723356247, 0.07692307233810425, 0.17241379618644714, 0, 0.1666666567325592, 0, 0.05882352590560913, 0.05714285373687744, 0.11428570747375488, 0.0952380895614624, 0.09375, 0.12244897335767746, 0.13793103396892548, 0.09999999403953552, 0.05714285373687744, 0.1515151411294937, 0.1428571343421936, 0, 0.0833333283662796, 0.0952380895614624, 0.14814814925193787, 0, 0.09090908616781235, 0.10256409645080566, 0.08888888359069824, 0.1428571343421936, 0.09756097197532654, 0.0714285671710968, 0.0714285671710968, 0, 0, 0.13636362552642822, 0, 0.09090908616781235, 0.05882352590560913, 0.10256409645080566, 0.05405404791235924, 0.0714285671710968 ]
rkZzY-lCb
true
[ "Learn dense vector representations of arbitrary types of features in labeled and unlabeled datasets" ]
[ "We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language.", "Specifically, we train a RNN on positive and negative examples from a regular language, and ask if there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton (MDFA) for the language.", "Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an {\\em abstraction} obtained by clustering small sets of MDFA states into ``''superstates''.", "A qualitative analysis reveals that the abstraction often has a simple interpretation.", "Overall, the results suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure. \n", "Recurrent neural networks (RNNs) seem \"unreasonably\" effective at modeling patterns in noisy realworld sequences.", "In particular, they seem effective at recognizing grammatical structure in sequences, as evidenced by their ability to generate structured data, such as source code (C++, LaTeX, etc.) , with few syntactic grammatical errors BID9 .", "The ability of RNNs to recognize formal languages -sets of strings that possess rigorously defined grammatical structure -is less well-studied.", "Furthermore, there remains little systematic understanding of how RNNs recognize rigorous structure.", "We aim to explain this internal algorithm of RNNs through comparison to fundamental concepts in formal languages, namely, finite automata and regular languages.In this paper, we propose a new way of understanding how trained RNNs represent grammatical structure, by comparing them to finite automata that solve the same language recognition task.", "We ask: Can the internal knowledge representations of RNNs trained to recognize formal languages be easily mapped to the states of automata-theoretic models that are traditionally used to define these same formal languages?", "Specifically, we investigate this question for the class of regular languages, or formal languages accepted by finite automata (FA).In", "our experiments, RNNs are trained on a dataset of positive and negative examples of strings randomly generated from a given formal language. Next", ", we ask if there exists a decoding function: an isomorphism that maps the hidden states of the trained RNN to the states of a canonical FA. Since", "there exist infinitely many FA that accept the same language, we focus on the minimal deterministic finite automaton (MDFA) -the deterministic finite automaton (DFA) with the smallest possible number of states -that perfectly recognizes the language.Our experiments, spanning 500 regular languages, suggest that such a decoding function exists and can be understood in terms of a notion of abstraction that is fundamental in classical system theory. An abstraction", "A of a machine M (either finite-state, like an FA, or infinite-state, like a RNN) is a machine obtained by clustering some of the states of M into \"superstates\". Intuitively, an", "abstraction Figure 1: t-SNE plot (Left) of the hidden states of a RNN trained to recognize a regular language specified by a 6-state DFA (Right). Color denotes DFA", "state. 
The trained RNN has", "abstracted DFA states 1(green) and 2(blue) (each independently models the pattern [4-6] * ) into a single state. A loses some of the discerning power of the original machine M, and as such recognizes a superset of the language that M recognizes. We observe that the", "states of an RNN R, trained to recognize a regular language L, commonly exhibit this abstraction behavior in practice. These states can be", "decoded into states of an abstraction A of the MDFA for the language, such that with high probability, A accepts any input string that is accepted by R. Figure 1 shows a t-SNE embedding BID13 of RNN states trained to perform language recognition on strings from the regex [(([4-6] {2}[4-6]+)?)3[4-6]+]. Although", "the MDFA has", "6 states, we observe the RNN abstracting two states into one. Remarkably, a linear", "decoding function suffices to achieve maximal decoding accuracy: allowing nonlinearity in the decoder does not lead to significant gain. Also, we find the abstraction", "has low \"coarseness\", in the sense that only a few of the MDFA states need be clustered, and a qualitative analysis reveals that the abstractions often have simple interpretations.", "We have studied how RNNs trained to recognize regular formal languages represent knowledge in their hidden state.", "Specifically, we have asked if this internal representation can be decoded into the canonical, minimal DFA that exactly recognizes the language, and can therefore be seen to be the \"ground truth\".", "We have shown that a linear function does a remarkably good job at performing such a decoding.", "Critically, however, this decoder maps states of the RNN not to MDFA states, but to states of an abstraction obtained by clustering small sets of MDFA states into \"abstractions\".", "Overall, the results suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure. We see our work as a fundamental step in the larger effort to study how neural networks learn formal logical concepts.", "We intend to explore more complex and richer classes of formal languages, such as context-free languages and recursively enumerable languages, and their neural analogs." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0.07547169178724289, 0.039215680211782455, 0.06451612710952759, 0.1304347813129425, 0, 0, 0.052631575614213943, 0.06451612710952759, 0.0624999962747097, 0.12765957415103912, 0, 0.14999999105930328, 0, 0.07999999821186066, 0, 0.04651162400841713, 0, 0.0363636314868927, 0.09999999403953552, 0.1269841194152832, 0, 0, 0.04999999329447746, 0.13636362552642822, 0.0555555522441864, 0.13333332538604736, 0, 0.0476190410554409, 0.06451612710952759, 0.04999999329447746 ]
H1zeHnA9KX
true
[ "Finite Automata Can be Linearly decoded from Language-Recognizing RNNs using low coarseness abstraction functions and high accuracy decoders. " ]
[ "Designing accurate and efficient convolutional neural architectures for vast amount of hardware is challenging because hardware designs are complex and diverse.", "This paper addresses the hardware diversity challenge in Neural Architecture Search (NAS).", "Unlike previous approaches that apply search algorithms on a small, human-designed search space without considering hardware diversity, we propose HURRICANE that explores the automatic hardware-aware search over a much larger search space and a multistep search scheme in coordinate ascent framework, to generate tailored models for different types of hardware.", "Extensive experiments on ImageNet show that our algorithm consistently achieves a much lower inference latency with a similar or better accuracy than state-of-the-art NAS methods on three types of hardware.", "Remarkably, HURRICANE achieves a 76.63% top-1 accuracy on ImageNet with a inference latency of only 16.5 ms for DSP, which is a 3.4% higher accuracy and a 6.35x inference speedup than FBNet-iPhoneX.", "For VPU, HURRICANE achieves a 0.53% higher top-1 accuracy than Proxyless-mobile with a 1.49x speedup.", "Even for well-studied mobile CPU, HURRICANE achieves a 1.63% higher top-1 accuracy than FBNet-iPhoneX with a comparable inference latency.", "HURRICANE also reduces the training time by 54.7% on average compared to SinglePath-Oneshot.", "Neural Architecture Search (NAS) is a powerful mechanism to automatically generate efficient Convolutional Neural Networks (CNNs) without requiring huge manual efforts of human experts to design good CNN models (Zoph & Le, 2016; Guo et al., 2019; Bender et al., 2017) .", "However, most existing NAS methods focus on searching for a single DNN model of high accuracy but pay less attention on the performance of executing the model on hardware, e.g., inference latency or energy cost.", "Recent NAS methods (Guo et al., 2019; Cai et al., 2019; Stamoulis et al., 2018b; Wu et al., 2019 ) start to consider model-inference performance but they use FLOPs 1 to estimate inference latency or only consider the same type of hardware, e.g., smartphones from different manufacturers but all ARM-based.", "However, the emerging massive smart devices are equipped with very diverse processors, such as CPU, GPU, DSP, FPGA, and various AI accelerators that have fundamentally different hardware designs.", "Such a big hardware diversity makes FLOPs an improper metric to predict model-inference performance and calls for new trade-offs and designs for NAS to generate efficient models for diverse hardware.", "To demonstrate it, we conduct an experiment to measure the performance of a set of widely used neural network operators (a.k.a. 
operations) on three types of mobile processors: Hexagon TM 685 DSP, Snapdragon 845 ARM CPU, and Movidius TM Myriad TM X Vision Processing Unit (VPU).", "Figure 1 shows the results and we make the following key observations.", "First, from Figure 1", "(a), we can see that even the operators have similar FLOPs, the same operator may have very different inference latency on different processors.", "For example, the latency of operator SEP 5 is nearly 12× higher than that of operator Choice 3 on the ARM CPU, but the difference on the VPU is less than 4×.", "Therefore, FLOPs is not the right metric to decide the inference latency on different hardware.", "Second, the relative effectiveness of different operators on different processors is also different.", "For example, operator SEP 3 has the smallest latency on the DSP, but operator Choice 3 has the 1 In this paper, the definition of F LOP s follows , i.e., the number of multiply-adds.", "smallest latency on the VPU.", "Thus, different processors should choose different operators for the best trade-off between model accuracy and inference latency.", "Furthermore, as shown in Figure 1", "(b), the computational complexity and latency of the same operator are also affected by the execution context, such as input feature map shapes, number of channels, etc.", "Such a context is determined by which layer the operator is placed on.", "As a result, even on the same hardware, optimal operators may change at different layers of the network.", "In addition, we observe that the existing NAS methods tends to handle all the layers equally.", "For instance, the uniform sampling in one-shot NAS (Brock et al., 2018; Guo et al., 2019) will give the same sampling opportunities to every layer.", "However, not all the layers are the same: different layers may have different impacts on inference latency and model accuracy.", "Indeed, some previous works (D.Zeiler & Fergus, 2014; Girish et al., 2019) have revealed different behaviors between the earlier layers (close to data input) and the latter layers (close to classification output) in CNN models.", "The earlier layers extract low-level features from inputs (e.g., edges and colors), are computation intensive and demands more data to converge, while the latter layers capture high-level class-specific features but are less computation intensive.", "From these findings, we argue that exploring more architecture selections in the latter layers may help find better architectures with the limited sampling budget, and limiting the latency in the earlier layers is critical to search for low-latency models.", "To this end, it is desirable to explore how to leverage this layer diversity for better architecture sampling in NAS.", "Motivated by these observations, we argue that there is no one-size-fits-all model for different hardware, and thus propose and develop a novel hardware-aware method, called HURRICANE (Hardware aware one-shot neUral aRchitecture seaRch In Coordinate AsceNt framEwork), to tackle the challenge of hardware diversity in NAS.", "Different from the existing hardware-aware NAS methods that use a small set of operators (e.g., 6 or 9) manually selected for a specific hardware platform, HURRICANE is initialized with a large-size (32 in our implementation) candidate operators set to cover the diversity of hardware platforms.", "However, doing so increases the search space by many orders of magnitude and thus leads to unacceptable search and training cost and may even cause non-convergence problem.", "To 
reduce the cost, we propose hardware-aware search space reduction at both operator level and layer level.", "In the operator-level search space reduction, a toolkit is developed to automatically score every layer's candidate operators on target hardware platforms, and choose a sub-set of them with low latency for further utilization.", "In the layer-level search space reduction, we split the layers into two groups, the earlier group and the latter group according to their locations in the network.", "Based on a coordinate ascent framework (Wright, 2015) (Appendix A), we propose a multistep search scheme, which searches the complete architecture by a sequence of simpler searching of sub-networks.", "In each iteration (step), we alternatively fix one group of layers and optimize the other group of layers to maximize the validation accuracy by a one-shot NAS 2 .", "The searching of sub-networks is much easier to complete because of the much smaller size of search space, and the better architectures are reached by a sequence of iterations.", "This layer-level search space reduction is inspired by the layer diversity mentioned above.", "We choose most latencyeffective operators for earlier layers and allocate more sampling opportunities to latter layers.", "As a result, we are able to search for models with both low latency and high accuracy.", "We evaluate the effectiveness of our proposed approach on ImageNet 2012 dataset and a small OUIAdience-Age dataset with the above three mobile hardware platforms (DSP/CPU/VPU).", "Under all the three platforms, HURRICANE consistently achieves the same level (or better) accuracy with much lower inference latency than state-of-the-art hardware-aware NAS methods.", "Remarkably, HURRICANE reduces the inference latency by 6.35× on DSP compared to FBNet-iPhoneX and 1.49× On VPU compared to Proxyless-mobile, respectively.", "Compared to Singlepath-Oneshot, on average HURRICANE reduces the training time by 54.7% on ImageNet.", "In this paper, we propose HURRICANE to address the challenge of hardware diversity in NAS.", "By exploring hardware-aware search space and a multistep search scheme based on coordinate ascent framework, our solution achieves better accuracy and much lower latency on three hardware platforms than state-of-the-art hardware-aware NAS.", "And the searching cost (searching time) is also significantly reduced.", "For future work, we plan to support more diverse hardware and speed up more NAS methods." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.1764705777168274, 0.37037035822868347, 0.2857142686843872, 0.09302324801683426, 0.08695651590824127, 0.06451612710952759, 0.05882352590560913, 0.20689654350280762, 0.07547169178724289, 0.08510638028383255, 0.10526315122842789, 0.09302324801683426, 0.14999999105930328, 0.13793103396892548, 0.07692307233810425, 0, 0.05714285373687744, 0.10256409645080566, 0.20689654350280762, 0.1538461446762085, 0.09302324801683426, 0.09999999403953552, 0.06451612710952759, 0.0952380895614624, 0.10256409645080566, 0.07407406717538834, 0.1249999925494194, 0.13333332538604736, 0.21621620655059814, 0.0624999962747097, 0.1249999925494194, 0.08888888359069824, 0.20408162474632263, 0.24242423474788666, 0.33898305892944336, 0.2545454502105713, 0.20512820780277252, 0.19354838132858276, 0.21276594698429108, 0.21621620655059814, 0.24390242993831635, 0.20512820780277252, 0.20512820780277252, 0.2142857164144516, 0.13333332538604736, 0.1249999925494194, 0.21052631735801697, 0.10526315122842789, 0.1666666567325592, 0.20689654350280762, 0.6666666865348816, 0.09302324801683426, 0.07999999821186066, 0.13333332538604736 ]
BJe6BkHYDB
true
[ "We propose HURRICANE to address the challenge of hardware diversity in one-shot neural architecture search" ]
[ "In this paper, we present Neural Phrase-based Machine Translation (NPMT).", "Our method explicitly models the phrase structures in output sequences using Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence modeling method.", "To mitigate the monotonic alignment requirement of SWAN, we introduce a new layer to perform (soft) local reordering of input sequences.", "Different from existing neural machine translation (NMT) approaches, NPMT does not use attention-based decoding mechanisms. ", "Instead, it directly outputs phrases in a sequential order and can decode in linear time.", "Our experiments show that NPMT achieves superior performances on IWSLT 2014 German-English/English-German and IWSLT 2015 English-Vietnamese machine translation tasks compared with strong NMT baselines.", "We also observe that our method produces meaningful phrases in output languages.", "A word can be considered as a basic unit in languages.", "However, in many cases, we often need a phrase to express a concrete meaning.", "For example, consider understanding the following sentence, \"machine learning is a field of computer science\".", "It may become easier to comprehend if we segment it as \" [machine learning] [is] [a field of] [computer science]\", where the words in the bracket '[]' are regarded as \"phrases\".", "These phrases have their own meanings, and can often be reused in other contexts.The goal of this paper is to explore the use of phrase structures aforementioned for neural networkbased machine translation systems BID22 BID0 .", "To this end, we develop a neural machine translation method that explicitly models phrases in target language sequences.", "Traditional phrase-based statistical machine translation (SMT) approaches have been shown to consistently outperform word-based ones (Koehn et al., 2003; Koehn, 2009; BID15 .", "However, modern neural machine translation (NMT) methods BID22 BID0 do not have an explicit treatment on phrases, but they still work surprisingly well and have been deployed to industrial systems BID31 BID28 .", "The proposed Neural Phrase-based Machine Translation (NPMT) method tries to explore the advantages from both kingdoms.", "It builds upon Sleep-WAke Networks (SWAN), a segmentation-based sequence modeling technique described in BID25 , where segments (or phrases) are automatically discovered given the data.", "However, SWAN requires monotonic alignments between inputs and outputs.", "This is often not an appropriate assumption in many language pairs.", "To mitigate this issue, we introduce a new layer to perform (soft) local reordering on input sequences.", "Experimental results show that NPMT outperforms attention-based NMT baselines in terms of the BLEU score BID19 on IWSLT 2014 German-English/English-German and IWSLT 2015 English-Vietnamese translation tasks.", "We believe our method is one step towards the full integration of the advantages from neural machine translation and phrase-based SMT.", "This paper is organized as follows.", "Section 2 presents the neural phrase-based machine translation model.", "Section 3 demonstrates the usefulness of our approach on several language pairs.", "We conclude our work with some discussions in Section 4." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.1111111044883728, 0, 0, 0.25, 0.1818181723356247, 0.19354838132858276, 0, 0, 0, 0, 0, 0.09302325546741486, 0.1538461446762085, 0.19354838132858276, 0.10256409645080566, 0.0833333283662796, 0, 0, 0, 0, 0.060606058686971664, 0.2142857164144516, 0, 0.3529411852359772, 0, 0.1111111044883728 ]
HktJec1RZ
true
[ "Neural phrase-based machine translation with linear decoding time" ]
[ "Generative Adversarial Networks (GANs) have shown impressive results in modeling distributions over complicated manifolds such as those of natural images.", "However, GANs often suffer from mode collapse, which means they are prone to characterize only a single or a few modes of the data distribution.", "In order to address this problem, we propose a novel framework called LDMGAN.", "We first introduce Latent Distribution Matching (LDM) constraint which regularizes the generator by aligning distribution of generated samples with that of real samples in latent space.", "To make use of such latent space, we propose a regularized AutoEncoder (AE) that maps the data distribution to prior distribution in encoded space.", "Extensive experiments on synthetic data and real world datasets show that our proposed framework significantly improves GAN’s stability and diversity.", "Generative models (Smolensky, 1986; Salakhutdinov & Hinton, 2009; Hinton & Salakhutdinov, 2006; Hinton, 2007; Kingma & Welling, 2013; Rezende et al., 2014; Goodfellow et al., 2014) provide powerful tools for unsupervised learning of probability distributions over difficult manifolds such as those of natural images.", "Among these models, instead of requiring explicit parametric specification of the model distribution and a likelihood function, Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) only have a generating procedure.", "They generate samples that are sharp and compelling, which have gained great successes on image generation tasks (Denton et al., 2015; Radford et al., 2015; Karras et al., 2017; Zhang et al., 2018) recently.", "GANs are composed of two types of deep neural networks that compete with each other: a generator and a discriminator.", "The generator tries to map noise sampled from simple prior distribution which is usually a multivariate gaussian to data space with the aim of fooling the discriminator, while the discriminator learns to determine whether a sample comes from the real dataset or generated samples.", "In practice, however, GANs are fragile and in general notoriously hard to train.", "On the one hand, they are sensitive to architectures and hyper-parameters (Goodfellow et al., 2014) .", "For example, the imbalance between discriminator and generator capacities often leads to convergence issues.", "On the other hand, there is a common failure issue in GANs called mode collapse.", "The generator tends to produce only a single sample or a few very similar samples in that case, which means GANs put large volumes of probability mass onto a few modes.", "We conjecture the mode missing issue in GANs is probably because GANs lack a regularization term that can lead the generator to produce diverse samples.", "To remedy this problem, in this work, we first propose a regularization constraint called Latent Distribution Matching.", "It suppresses the mode collapse issue in GANs by aligning the distributions between true data and generated data in encoded space.", "To obtain such encoded space, we introduce a regularized autoencoder which maps data distribution to a simple prior distribution, eg.", ", a gaussian.", "As shown in Figure 1 , we collapse the decoder of the regularized AE and generator of GAN into one and propose LDMGAN.", "Our framework can stabilize GAN's training and reduce mode collapse issue in GANs.", "Compared to other AE-based methods on 2D synthetic, MNIST, Stacked-MNIST, CIFAR-10 and CelebA datasets, our method obtains better stability, diversity and competitive 
standard scores." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ 0.06451612710952759, 0.11428570747375488, 0.0833333283662796, 0.17142856121063232, 0.1764705777168274, 0.06666666269302368, 0, 0, 0.05128204822540283, 0.13793103396892548, 0, 0.1666666567325592, 0, 0, 0.307692289352417, 0.1538461446762085, 0.29411762952804565, 0.14814814925193787, 0.27586206793785095, 0, 0, 0.25806450843811035, 0.3333333432674408, 0.05882352590560913 ]
HygHbTVYPB
true
[ "We propose an AE-based GAN that alleviates mode collapse in GANs." ]
[ "Achieving faster execution with shorter compilation time can foster further diversity and innovation in neural networks.", "However, the current paradigm of executing neural networks either relies on hand-optimized libraries, traditional compilation heuristics, or very recently genetic algorithms and other stochastic methods.", "These methods suffer from frequent costly hardware measurements rendering them not only too time consuming but also suboptimal.", "As such, we devise a solution that can learn to quickly adapt to a previously unseen design space for code optimization, both accelerating the search and improving the output performance.", "This solution dubbed CHAMELEON leverages reinforcement learning whose solution takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses on the costly samples (real hardware measurements) on representative points but also uses a domain knowledge inspired logic to improve the samples itself.", "Experimentation with real hardware shows that CHAMELEON provides 4.45×speed up in optimization time over AutoTVM, while also improving inference time of the modern deep networks by 5.6%.", "The enormous computational intensity of DNNs have resulted in developing either hand-optimized kernels, such as NVIDIA cuDNN or Intel MKL that serve as backend for a variety of programming environment such as TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2017) .", "However, the complexity of the tensor operations in DNNs and the volatility of algorithms, which has led to unprecedented rate of innovation (LeCun, 2019) , calls for developing automated compilation frameworks.", "To imitate or even surpass the success of hand-optimized libraries, recent research has developed stochastic optimization passes for general code: STOKE (Schkufza et al., 2013) , and neural network code, TVM (Chen et al., 2018a) and TensorComprehensions (Vasilache et al., 2018) .", "TVM and TensorComprehensions are based on random or genetic algorithms to search the space of optimized code for neural networks.", "AutoTVM (Chen et al., 2018b ) builds on top of TVM and leverage boosted trees (Chen & Guestrin, 2016) as part of the search cost model to avoid measuring the fitness of each solution (optimized candidate neural network code), and instead predict its fitness.", "However, even with these innovations the optimizing compilation time can be around 10 hours for ResNet-18 (He et al., 2016) , and even more for deeper or wider networks.", "Since the general objective is to unleash new possibilities by developing automatic optimization passes, long compilation time hinders innovation and could put the current solutions in a position of questionable utility.", "To solve this problem, we first question the very statistical guarantees which the aforementioned optimization passes rely on.", "The current approaches are oblivious to the patterns in the design space of schedules that are available for exploitation, and causes inefficient search or even converges to solutions that may even be suboptimal.", "Also, we notice that current approaches rely on greedy sampling that neglects the distribution of the candidate solutions (configurations).", "While greedy sampling that passively filter samples based on the fitness estimations from the cost models work, many of their hardware measurements (required for optimization) tend to be redundant and wasteful.", "Moreover, we found that current solutions that rely on greedy sampling lead to significant fractions 
of the candidate configurations being redundant over iterations, and that any optimizing compiler are prone to invalid configurations which significantly prolongs the optimization time.", "As such, this work sets out to present an Adaptive approach to significantly reduce the compilation time and offer automation while avoiding dependence to hand-optimization, enabling far more diverse tensor operations in the next generation DNNs.", "We tackle this challenge from two fronts with the following contributions:", "(1) Devising an Adaptive Exploration module that utilizes reinforcement learning to adapt to unseen design space of new networks to reduce search time yet achieve better performance.", "(2) Proposing an Adaptive Sampling algorithm that utilizes clustering to adaptively reduce the number of costly hardware measurements, and devising a domain-knowledge inspired Sample Synthesis to find configurations that would potentially yield better performance.", "Real hardware experimentation with modern DNNs (AlexNet, VGG-16, and ResNet-18) on a highend GPU (Titan Xp), shows that the combination of these two innovations, dubbed CHAMELEON, yields 4.45×speedup over the leading framework, AutoTVM.", "CHAMELEON is anonymously available in https://github.com/anony-sub/chameleon, which will be made public.", "We present CHAMELEON to allow optimizing compilers to adapt to unseen design spaces of code schedules to reduce the optimization time.", "This paper is also an initial effort to bring Reinforcement Learning to the realm of optimizing compilers for neural networks, and we also develop an Adaptive Sampling with domain-knowledge inspired Sample Synthesis to not only reduce the number of samples required to navigate the design space but also augment its quality in terms of fitness.", "Experimentation with real-world deep models shows that CHAMELEON not only reduces the time for compilation significantly, but also improves the quality of the code.", "This encouraging result suggests a significant potential for various learning techniques to optimizing deep learning models." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.0714285671710968, 0.10810810327529907, 0, 0.10256409645080566, 0.07407406717538834, 0.04999999701976776, 0.11999999731779099, 0.1538461446762085, 0.1249999925494194, 0.1875, 0.07843136787414551, 0.10256409645080566, 0.0952380895614624, 0, 0.14999999105930328, 0.06896550953388214, 0.1428571343421936, 0.08695651590824127, 0.08888888359069824, 0, 0.1621621549129486, 0.1818181723356247, 0.08888888359069824, 0, 0.06666666269302368, 0.21052631735801697, 0.11764705181121826, 0.14814814925193787 ]
rygG4AVFvH
true
[ "Reinforcement learning and Adaptive Sampling for Optimized Compilation of Deep Neural Networks." ]
[ "In this paper, we propose a differentiable adversarial grammar model for future prediction.", "The objective is to model a formal grammar in terms of differentiable functions and latent representations, so that their learning is possible through standard backpropagation.", "Learning a formal grammar represented with latent terminals, non-terminals, and productions rules allows capturing sequential structures with multiple possibilities from data.\n\n", "The adversarial grammar is designed so that it can learn stochastic production rules from the data distribution.", "Being able to select multiple production rules leads to different predicted outcomes, thus efficiently modeling many plausible futures. ", "We confirm the benefit of the adversarial grammar on two diverse tasks: future 3D human pose prediction and future activity prediction.", "For all settings, the proposed adversarial grammar outperforms the state-of-the-art approaches, being able to predict much more accurately and further in the future, than prior work.", "Future prediction in videos is one of the most challenging visual tasks.", "Being able to accurately predict future activities, human or object pose has many important implications, most notably for robot action planning.", "Prediction is particularly hard because it is not a deterministic process as multiple potential 'futures' are possible, and in the case of human pose, predicting real-valued output vectors is further challenging.", "Given these challenges, we address the long standing questions: how should the sequential dependencies in the data be modeled and how can multiple possible long-term future outcomes be predicted at any given time.", "To address these challenges, we propose an adversarial grammar model for future prediction.", "The model is a differentiable form of a regular grammar trained with adversarial sampling of various possible futures, which is able to output real-valued predictions (e.g., 3D human pose) or semantic prediction (e.g., activity classes).", "Learning sequences of actions or other sequential processes with the imposed rules of a grammar is valuable, as it imposes temporal structural dependencies and captures relationships between states (e.g., activities).", "At the same time, the use of adversarial sampling when learning the grammar rules is essential, as this adversarial process is able to produce multiple candidate future sequences that follow a similar distribution to sequences seen in the data.", "More importantly, a traditional grammar will need to enumerate all possible rules (exponential growth in time) to learn multiple futures.", "This adversarial stochastic sampling process allows for much more memory-efficient learning without enumeration.", "Additionally, unlike other techniques for future generation (e.g., autoregressive RNNs), we show the adversarial grammar is able to learn long sequences, can handle multi-label settings, and predict much further into the future.", "The proposed approach is driven entirely by the structure imposed from learning grammar rules and their relationships to the terminal symbols of the data and by the adversarial losses which help model the data distribution over long sequences.", "To our knowledge this is the first approach of adversarial grammar learning and the first to be able to successfully produce multiple feasible long-term future predictions for high dimensional outputs.", "The approach outperforms previous state-of-the-art methods, including RNN/LSTM and memory based methods.", 
"We evaluate future prediction on high dimensional data and are able to predict much further in the future than prior work.", "The proposed approach is also general -it is applied to diverse future prediction tasks: 3D human pose prediction and multi-class and multi-label activity forecasting, and on three challenging datasets: Charades, MultiTHUMOS, and Human3.6M.", "We propose a novel differentiable adversarial grammar and apply it to several diverse future prediction and generation tasks.", "Because of the structure we impose for learning grammar-like rules for sequences and learning in adversarial fashion, we are able to generate multiple sequences that follow the distribution seen in data.", "Our work outperforms prior approaches on all tasks and is able to generate sequences much further in the future.", "We plan to release the code." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.32258063554763794, 0.3333333432674408, 0.1538461446762085, 0.2857142686843872, 0.0555555522441864, 0.3333333134651184, 0.2380952388048172, 0.19999998807907104, 0.10256409645080566, 0.21276594698429108, 0.12765957415103912, 0.32258063554763794, 0.23529411852359772, 0.20408162474632263, 0.3199999928474426, 0.21621620655059814, 0.06451612710952759, 0.23999999463558197, 0.20408162474632263, 0.2666666507720947, 0.06666666269302368, 0.31578946113586426, 0.21276594698429108, 0.5714285373687744, 0.23255813121795654, 0.2702702581882477, 0.1666666567325592 ]
Syl5mRNtvr
true
[ "We design a grammar that is learned in an adversarial setting and apply it to future prediction in video." ]
[ "We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions).", "Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence.", "This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions.", "If carried out naively, Janossy pooling can be computationally prohibitive.", "To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations.", "Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions.", "We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.", "Pooling is a fundamental operation in deep learning architectures BID23 .", "The role of pooling is to merge a collection of related features into a single, possibly vector-valued, summary feature.", "A prototypical example is in convolutional neural networks (CNNs) BID22 , where linear activations of features in neighborhoods of image locations are pooled together to construct more abstract features.", "A more modern example is in neural networks for graphs, where each layer pools together embeddings of neighbors of a vertex to form a new embedding for that vertex, see for instance, BID20 BID0 BID15 Velickovic et al., 2017; BID28 Xu et al., 2018; BID26 BID25 van den Berg et al., 2017; BID12 BID13 Ying et al., 2018; Xu et al., 2019) .A", "common requirement of a pooling operator is invariance to the ordering of the input features. In", "CNNs for images, pooling allows invariance to translations and rotations, while for graphs, it allows invariance to graph isomorphisms. Existing", "pooling operators are mostly limited to predefined heuristics such as max-pool, min-pool, sum, or average. Another", "desirable characteristic of pooling layers is the ability to take variable-size inputs. This is", "less important in images, where neighborhoods are usually fixed a priori. However", "in applications involving graphs, the number of neighbors of different vertices can vary widely. Our goal", "is to design flexible and learnable pooling operators satisfying these two desiderata.Abstractly, we will view pooling as a permutation-invariant (or symmetric) function acting on finite but arbitrary length sequences h. All elements", "h i of the sequences are features lying in some space H (which itself could be a high-dimensional Euclidean space R d or some subset thereof). The sequences", "h are themselves elements of the union of products of the H-space: h ∈ ∞ j=0 H j ≡ H ∪ . Throughout the", "paper, we will use Π n to represent the set of all permutations of the integers 1 to n, where n will often be clear from the context. In addition, h", "π , π ∈ Π |h| , will represent a reordering of the elements of a sequence h according to π, where |h| is the length of the sequence h. We will use the", "double bar superscript f to indicate that a function is permutation-invariant, returning the same value no matter the order of its arguments: f (h) = f (h π ), ∀π ∈ Π |h| . We will use the", "arrow superscript f to indicate general functions on sequences h which may or may not be permutationinvariant 1 . 
Functions f without", "any markers are 'simple' functions, acting on elements in H, scalars or any other argument that is not a sequence of elements in H.Our goal in this paper is to model and learn permutation-sensitive functions f that can be used to construct flexible and learnable permutation-invariant neural networks. A recent step in this", "direction is work on DeepSets by Zaheer et al. (2017) , who argued for learning permutation-invariant functions through the following composition: DISPLAYFORM0 f (|h|, h; θ (f ) ) = |h| j=1 f (h j ; θ (f ) ) and h ≡ h(x; θ (h) ).Here, (a) x ∈ X is one", "observation", "in the training data (X itself may contain variable-length sequences), h ∈ H is the embedding (output) of the data given by the lower layers h : X × R a → H ∪ , a > 0 with parameters θ (h) ∈ R a ; (b) f : H × R b → F is a middle-layer", "embedding function with parameters θ (f ) ∈ R b , b > 0, and F is the embedding space of f ; and (c) ρ : F × R c → Y is a neural network", "with parameters θ (ρ) ∈ R c , c > 0, that maps to the final output space Y. Typically H and F are high-dimensional real-valued spaces; Y is often R d in d-dimensional regression problems or the simplex in classification problems. Effectively, the neural network f learns", "an embedding for each element in H, and given a sequence h, its component embeddings are added together before a second neural network transformation ρ is applied. Note that the function h may be the identity", "mapping h(x; ·) = x that makes f act directly on the input data. Zaheer et al. (2017) argue that if ρ is a universal", "function approximator, the above architecture is capable of approximating any symmetric function on h-sequences, which justifies the widespread use of average (sum) pooling to make neural networks permutation-invariant in BID12 , BID15 , BID20 , BID0 , among other works. We note that Zaheer et al. (2017) focus on functions", "of sets but the work was extended to functions of multisets by Xu et al. (2019) and that Janossy pooling can be used to represent multiset functions. The embedding h is permuted in all |h|! possible ways", ", and for each permutation h π , f (|h|, h π ; θ (f ) ) is computed. These are summed and passed to a second function ρ(·", "; θ (ρ) ) which gives the final permutation-invariant output y(x; θ (ρ) , θ (f ) , θ (h) ); the gray rectangle represents Janossy pooling. We discuss how this can be made computationally tractable", ".In practice, there is a gap between flexibility and learnability. While the architecture of equations 1 and 2 is a universal", "approximator to permutationinvariant functions, it does not easily encode structural knowledge about y.Consider trying to learn the permutation-invariant function y(x) = max i,j≤|x| |x i − x j |. With higherorder interactions between the elements of h, the", "functions f of equation 2 cannot capture any useful intermediate representations towards the final output, with the burden shifted entirely to the function ρ. Learning ρ means learning to undo mixing performed by the summation", "layer f (|h|, h; θ (f ) ) = |h| j=1 f (h j ; θ (f ) ). As we show in our experiments, in many applications this is too much", "to ask of ρ.Contributions. We investigate a learnable permutation-invariant pooling layer for variable-size", "inputs inspired by the Janossy density framework, widely used in the theory of point processes (Daley & Vere-Jones, 2003, Chapter 7) . 
This approach, which we call Janossy pooling, directly allows the user to model", "what higher-order dependencies in h are relevant in the pooling. FIG0 summarizes a neural network with a single Janossy pooling layer f (detailed", "in Definition 2.1 below): given an input embedding h, we apply a learnable (permutation-sensitive) function f to every permutation h π of the input sequence h. These outputs are added together, and fed to the second function ρ. Examples of", "function f include feedforward and recurrent neural networks (RNNs)", ". We call the operation used to construct f from f the Janossy pooling. Definition", "2.1 gives a more detailed description. We will detail three broad strategies", "for making this computation tractable and discuss", "how existing methods can be seen as tractability strategies under the Janossy pooling framework.Thus, we propose a framework and tractability strategies that unify and extend existing methods in the literature. We contribute the following analysis: (a) We show DeepSets (Zaheer et al., 2017) is a special", "case of Janossy pooling where the function", "f depends only on the first element of the sequence h π . In the most general form of Janossy pooling (as described above), f depends on its entire input sequence", "h π . This naturally raises the possibility of intermediate choices of f that allow practitioners to trade between", "flexibility and tractability. We will show that functions f that depend on their first k arguments of h π allow the Janossy pooling layer", "to capture up to k-ary dependencies in h. (b) We show Janossy pooling can be used to learn permutation-invariant neural networks y(x) by sampling a random", "permutation of h during training, and then modeling this permuted sequence using a sequence model such as a recurrent neural network (LSTMs BID17 , GRUs BID6 ) or a vector model such as a feedforward network. We call this permutation-sampling learning algorithm π-SGD (π-Stochastic Gradient Descent). Our analysis explains", "why this seemingly unsound procedure is theoretically justified, which sheds light on the recent", "puzzling success of permutation sampling and LSTMs in relational models BID29 BID15 . We show that this property relates to randomized model ensemble techniques. (c) In Zaheer et al. (2017) , the authors", "describe a connection between DeepSets and infinite de Finetti exchangeabilty", ". We provide a probabilistic connection between Janossy pooling and finite de Finetti exchangeabilty BID11 .", "Our approach of permutation-invariance through Janossy pooling unifies a number of existing approaches, and opens up avenues to develop both new methodological extensions, as well as better theory.", "Our paper focused on two main approaches: k-ary interactions and random permutations.", "The former involves exact Janossy pooling for a restricted class of functions f .", "Adding an additional neural network ρ can recover lost model capacity and capture additional higher-order interactions, but hurts tractability and identifiability.", "Placing restrictions on ρ (convexity, Lipschitz continuity etc.) can allow a more refined control of this trade-off, allowing theoretical and empirical work to shed light on the compromises involved.", "The second was a random permutation approach which conversely involves no clear trade-offs between model capacity and computation when ρ is made more complex, instead it modifies the relationship between the tractable approximate loss J and the original Janossy loss L. 
While there is a difference between J and L, we saw the strongest empirical performance coming from this approach in our experiments (shown in the last row of TAB0 ; future work is required to identify which problems π-SGD is best suited for and when its conver-gence criteria are satisfied.", "Further, a better understanding how the loss-functions L and J relate to each other can shed light on the slightly black-box nature of this procedure.", "It is also important to understand the relationship between the random permutation optimization to canonical ordering and how one might be used to improve the other.", "Finally, it is important to apply our methodology to a wider range of applications.", "Two immediate domains are more challenging tasks involving graphs and tasks involving non-Poisson point processes.", "is now a summation over only |h|!/(|h|", "− k)! terms. We can", "conclude that" ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21276594698429108, 0.37037035822868347, 0.16326530277729034, 0.0476190447807312, 0.1818181723356247, 0.12244897335767746, 0.08510638028383255, 0.1428571343421936, 0.08163265138864517, 0.03448275476694107, 0.07499999552965164, 0.17391303181648254, 0.1249999925494194, 0.1249999925494194, 0.08888888359069824, 0.045454543083906174, 0.04255318641662598, 0.15625, 0.07017543166875839, 0.04081632196903229, 0.10526315122842789, 0.18518517911434174, 0.1230769157409668, 0.07843136787414551, 0.1315789371728897, 0.12987013161182404, 0.054794516414403915, 0.0952380895614624, 0.08219177275896072, 0.1515151411294937, 0.1071428507566452, 0.10666666179895401, 0.1515151411294937, 0.178571417927742, 0.13114753365516663, 0.11999999731779099, 0.05882352590560913, 0.12903225421905518, 0.035087715834379196, 0.17391303181648254, 0.1904761791229248, 0.11538460850715637, 0.23880596458911896, 0.04878048598766327, 0.22727271914482117, 0.13333332538604736, 0.1538461446762085, 0.2571428418159485, 0.10256409645080566, 0.145454540848732, 0.08163265138864517, 0.1818181723356247, 0.1428571343421936, 0.25, 0.08695651590824127, 0.15625, 0.0952380895614624, 0.17391303181648254, 0.17241378128528595, 0.045454543083906174, 0.17777776718139648, 0.039215680211782455, 0.13114753365516663, 0.17307691276073456, 0.178571417927742, 0.18518517911434174, 0.08888888359069824, 0.04444444179534912, 0.05128204822540283, 0.054054051637649536 ]
BJluy2RcFm
true
[ "We propose Janossy pooling, a method for learning deep permutation invariant functions designed to exploit relationships within the input sequence and tractable inference strategies such as a stochastic optimization procedure we call piSGD" ]
[ "While tasks could come with varying the number of instances and classes in realistic settings, the existing meta-learning approaches for few-shot classification assume that number of instances per task and class is fixed.", "Due to such restriction, they learn to equally utilize the meta-knowledge across all the tasks, even when the number of instances per task and class largely varies.", "Moreover, they do not consider distributional difference in unseen tasks, on which the meta-knowledge may have less usefulness depending on the task relatedness.", "To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning within each task.", "Through the learning of the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or task-specific learning.", "We formulate this objective into a Bayesian inference framework and tackle it using variational inference.", "We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on two realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches.", "Further ablation study confirms the effectiveness of each balancing component and the Bayesian learning framework.", "Despite the success of deep learning in many real-world tasks such as visual recognition and machine translation, such good performances are achievable at the availability of large training data, and many fail to generalize well in small data regimes.", "To overcome this limitation of conventional deep learning, recently, researchers have explored meta-learning (Schmidhuber, 1987; Thrun & Pratt, 1998) approaches, whose goal is to learn a model that generalizes well over distribution of tasks, rather than instances from a single task, in order to utilize the obtained meta-knowledge across tasks to compensate for the lack of training data for each task.", "However, so far, most existing meta-learning approaches (Santoro et al., 2016; Vinyals et al., 2016; Snell et al., 2017; Ravi & Larochelle, 2017; Finn et al., 2017; have only targeted an artificial scenario where all tasks participating in the multi-class classification problem have equal number of training instances per class.", "Yet, this is a highly restrictive setting, as in real-world scenarios, tasks that arrive at the model may have different training instances (task imbalance), and within each task, the number of training instances per class may largely vary (class imbalance).", "Moreover, the new task may come from a distribution that is different from the task distribution the model has been trained on (out-of-distribution task) (See (a) of Figure 1 ).", "Under such a realistic setting, the meta-knowledge may have a varying degree of utility to each task.", "Tasks with small number of training data, or close to the tasks trained in meta-training step may want to rely mostly on meta-knowledge obtained over other tasks, whereas tasks that are out-of-distribution or come with more number of training data may obtain better solutions when trained in a task-specific manner.", "Furthermore, for multi-class classification, we may want to treat the learning for each class differently to handle class imbalance.", "Thus, to optimally leverage meta-learning under various imbalances, it would be beneficial for the model to task-and class-adaptively decide how much to use from the meta-learner, and how much to learn specifically for each task and 
class.", "We propose Bayesian TAML that learns to balance the effect of meta-learning and task-adaptive learning, to consider meta-learning under a more realistic task distribution where each task and class can have varying number of instances.", "Specifically, we encode the dataset for each task into hierarchical set-of-sets representations, and use it to generate attention mask for the original parameter, learning rate decay, and the class-specific learning rate.", "We use a Bayesian framework to infer the posterior of these balancing variables, and propose an effective variational inference framework to solve for them.", "Our model outperforms existing meta-learning methods when validated on imbalanced few-shot classification tasks.", "Further analysis of each balancing variable shows that each variable effectively handles task imbalance, class imbalance, and out-of-distribution tasks respectively.", "We believe that our work makes a meaningful step toward application of meta-learning to real-world problems.", "A EXPERIMENTAL SETUP A.1", "BASELINES AND NETWORK ARCHITECTURE.", "We describe baseline models and our task-adaptive learning to balance model.", "Note that all gradientbased models can be extended to take K inner-gradient steps for both meta-training and meta-testing.", "1) Meta-Learner LSTM.", "A meta-learner that learns optimization algorithm with LSTM (Ravi & Larochelle, 2017) .", "The model performs few-shot classification using cosine similarities between the embeddings generated from a shared convolutional network.", "2) Prototypical Networks.", "A metric-based few-shot classification model proposed by (Snell et al., 2017) .", "The model learns the metric space based on Euclidean distance between class prototypes and query embeddings.", "3) MAML.", "The Model-Agnostic Meta-Learning (MAML) model by (Finn et al., 2017) , which aims to learn the global initial model parameter, from which we can take a few gradient steps to get task-specific predictors." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25531914830207825, 0.1860465109348297, 0.09999999403953552, 0.7317073345184326, 0.20512819290161133, 0.060606054961681366, 0.09756097197532654, 0.3030303120613098, 0.1538461446762085, 0.21917808055877686, 0.09999999403953552, 0.2545454502105713, 0.22727271914482117, 0.22857142984867096, 0.1355932205915451, 0.17142856121063232, 0.25, 0.36734694242477417, 0.27272728085517883, 0.1463414579629898, 0.1249999925494194, 0.277777761220932, 0.17142856121063232, 0.08695651590824127, 0, 0.19999998807907104, 0.10810810327529907, 0, 0.12903225421905518, 0.1111111044883728, 0, 0.12903225421905518, 0.17142856121063232, 0.12244897335767746 ]
rkeZIJBYvr
true
[ "A novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning, and also class-specific learning within each task." ]
[ "Many tasks in artificial intelligence require the collaboration of multiple agents.", "We exam deep reinforcement learning for multi-agent domains.", "Recent research efforts often take the form of two seemingly conflicting perspectives, the decentralized perspective, where each agent is supposed to have its own controller; and the centralized perspective, where one assumes there is a larger model controlling all agents.", "In this regard, we revisit the idea of the master-slave architecture by incorporating both perspectives within one framework.", "Such a hierarchical structure naturally leverages advantages from one another.", "The idea of combining both perspective is intuitive and can be well motivated from many real world systems, however, out of a variety of possible realizations, we highlights three key ingredients, i.e. composed action representation, learnable communication and independent reasoning.", "With network designs to facilitate these explicitly, our proposal consistently outperforms latest competing methods both in synthetics experiments and when applied to challenging StarCraft micromanagement tasks.", "Reinforcement learning (RL) provides a formal framework concerned with how an agent takes actions in one environment so as to maximize some notion of cumulative reward.", "Recent years have witnessed successful application of RL technologies to many challenging problems, ranging from game playing [17; 21] to robotics BID8 and other important artificial intelligence (AI) related fields such as BID19 etc.", "Most of these works have been studying the problem of a single agent.However, many important tasks require the collaboration of multiple agents, for example, the coordination of autonomous vehicles BID1 , multi-robot control BID12 , network packet delivery BID31 and multi-player games BID24 to name a few.", "Although multi-agent reinforcement learning (MARL) methods have historically been applied in many settings [1; 31] , they were often restricted to simple environments and tabular methods.Motivated from the success of (single agent) deep RL, where value/policy approximators were implemented via deep neural networks, recent research efforts on MARL also embrace deep networks and target at more complicated environments and complex tasks, e.g. [23; 19; 4; 12] etc.", "Regardless though, it remains an open challenge how deep RL can be effectively scaled to more agents in various situations.", "Deep RL is notoriously difficult to train.", "Moreover, the essential state-action space of multiple agents becomes geometrically large, which further exacerbates the difficulty of training for multi-agent deep reinforcement learning (deep MARL for short).From", "the viewpoint of multi-agent systems, recent methods often take the form of one of two perspectives. That", "is, the decentralized perspective where each agent has its own controller; and the centralized perspective where there exists a larger model controlling all agents. As a", "consequence, learning can be challenging in the decentralized settings due to local viewpoints of agents, which perceive non-stationary environment due to concurrently exploring teammates. On the", "other hand, under a centralized perspective, one needs to directly deal with parameter search within the geometrically large state-action space originated from the combination of multiple agents. 
BID0 StarCraft", "and its expansion StarCraft: Brood War are trademarks of Blizzard Entertainment TM In this regard, we revisit the idea of master-slave architecture to combine both perspectives in a complementary manner. The master-slave", "architecture is a canonical communication architecture which often effectively breaks down the original challenges of multiple agents. Such architectures", "have been well explored in multi-agent tasks [18; 28; 15; 16] . Although our designs", "vary from these works, we have inherited the spirit of leveraging agent hierarchy in a master-slave manner. That is, the master", "agent tends to plan in a global manner without focusing on potentially distracting details from each slave agent and meanwhile the slave agents often locally optimize their actions with respect to both their local state and the guidance coming from the master agent. Such idea can be well", "motivated from many real world systems. One can consider the", "master agent as the central control of some organized traffic systems and the slave agents as each actual vehicles. Another instantiation", "of this idea is to consider the coach and the players in a football/basketball team. However, although the", "idea is clear and intuitive, we notice that our work is among the first to explicitly design master-slave architecture for deep MARL.Specifically, we instantiate our idea with policy-based RL methods and propose a multi-agent policy network constructed with the master-slave agent hierarchy. For both each slave agent", "and the master agent, the policy approximators are realized using recurrent neural networks (RNN). At each time step, we can", "view the hidden states/representations of the recurrent cells as the \"thoughts\" of the agents. Therefore each agent has", "its own thinking/reasoning of the situation. While each slave agent takes", "local states as its input, the master agent takes both the global states and the messages from all slave agents as its input. The final action output of each", "slave agent is composed of contributions from both the corresponding slave agent and the master agent. This is implemented via a gated", "composition module (GCM) to process and transform \"thoughts\" from both agents to the final action.We test our proposal (named MS-MARL) using both synthetic experiments and challenging StarCraft micromanagement tasks. Our method consistently outperforms", "recent competing MARL methods by a clear margin. We also provide analysis to showcase", "the effectiveness of the learned policies, many of which illustrate interesting phenomena related to our specific designs.In the rest of this paper, we first discuss some related works in Section 2. In Section 3, we introduce the detailed", "proposals to realize our master-slave multi-agent RL solution. Next, we move on to demonstrate the effectiveness", "of our proposal using challenging synthetic and real multi-agent tasks in Section 4. And finally Section 5 concludes this paper with discussions", "on our findings. Before proceeding, we summarize our major contributions as", "follows• We revisit the idea of master-slave architecture for deep MARL. 
The proposed instantiation effectively combines both the centralized", "and decentralized perspectives of MARL.• Our observations highlight and verify that composable action representation", ", independent master/slave reasoning and learnable communication in-between are key factors to be successful in MS-MARL.• Our proposal empirically outperforms recent state-of-the-art methods on both", "synthetic experiments and challenging StarCraft micromanagement tasks, rendering it a novel competitive MARL solution in general.", "As stated above, our MS-MARL proposal can leverage advantages from both the centralized perspective and the decentralized perspective.", "Comparing with the latter, we would like to argue that, not only does our design facilitate regular communication channels between slave agents as in previous works, we also explicitly formulate an independent master agent reasoning based on all slave agents' messages and its own state.", "Later we empirically verify that, even when the overall information revealed does not increase per se, an independent master agent tend to absorb the same information within a big picture and effectively helps to make decisions in a global manner.", "Therefore compared with pure in-between-agent communications, MS-MARL is more efficient in reasoning and planning once trained.On the other hand, when compared with methods taking a regular centralized perspective, we realize that our master-slave architecture explicitly explores the large action space in a hierarchical way.", "This is so in the sense that if the action space is very large, the master agent can potentially start searching at a coarse scale and leave the slave agents focus their efforts in a more fine-grained domain.", "This not only makes training more efficient but also more stable in a similar spirit as the dueling Q-network design BID29 , where the master agent works as base estimation while leaving the slave agents focus on estimating the advantages.", "And of course, in the perspective of applying hierarchy, we can extend master-slave to master-master-slave architectures etc.", "In this paper, we revisit the master-slave architecture for deep MARL where we make an initial stab to explicitly combine a centralized master agent with distributed slave agents to leverage their individual contributions.", "With the proposed designs, the master agent effectively learns to give high-level instructions while the local agents try to achieve fine-grained optimality.", "We empirically demonstrate the superiority of our proposal against existing MARL methods in several challenging mutli-agent tasks.", "Moreover, the idea of master-slave architecture should not be limited to any specific RL algorithms, although we instantiate this idea with a policy gradient method, more existing RL algorithms can also benefit from applying similar schemes." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.23076923191547394, 0.43478259444236755, 0.11999999731779099, 0.375, 0, 0.11320754140615463, 0.14999999105930328, 0.1463414579629898, 0.0833333283662796, 0.1071428507566452, 0.20779220759868622, 0.11428570747375488, 0, 0.29999998211860657, 0.20689654350280762, 0.10810810327529907, 0.21052631735801697, 0.09302324801683426, 0.35555556416511536, 0.1818181723356247, 0.13793103396892548, 0.22857142984867096, 0.14814814925193787, 0.07999999821186066, 0.1764705777168274, 0.32258063554763794, 0.25925925374031067, 0.11764705181121826, 0.13793103396892548, 0.1538461446762085, 0.1538461446762085, 0.1875, 0.17391303181648254, 0.06896550953388214, 0.13333332538604736, 0.19999998807907104, 0.2222222238779068, 0, 0.4848484694957733, 0.13793103396892548, 0.1463414579629898, 0.12903225421905518, 0.12903225421905518, 0.10344827175140381, 0.11764705181121826, 0.1818181723356247, 0.12765957415103912, 0.07999999821186066, 0.25806450843811035, 0.21739129722118378, 0.05882352590560913, 0.25, 0.20408162474632263 ]
B1KFAGWAZ
true
[ "We revisit the idea of the master-slave architecture in multi-agent deep reinforcement learning and outperforms state-of-the-arts." ]
[ "We study the implicit bias of gradient descent methods in solving a binary classification problem over a linearly separable dataset.", "The classifier is described by a nonlinear ReLU model and the objective function adopts the exponential loss function.", "We first characterize the landscape of the loss function and show that there can exist spurious asymptotic local minima besides asymptotic global minima.", "We then show that gradient descent (GD) can converge to either a global or a local max-margin direction, or may diverge from the desired max-margin direction in a general context.", "For stochastic gradient descent (SGD), we show that it converges in expectation to either the global or the local max-margin direction if SGD converges.", "We further explore the implicit bias of these algorithms in learning a multi-neuron network under certain stationary conditions, and show that the learned classifier maximizes the margins of each sample pattern partition under the ReLU activation.", "It has been observed in various machine learning problems recently that the gradient descent (GD) algorithm and the stochastic gradient descent (SGD) algorithm converge to solutions with certain properties even without explicit regularization in the objective function.", "Correspondingly, theoretical analysis has been developed to explain such implicit regularization property.", "For example, it has been shown in Gunasekar et al. (2018; 2017) that GD converges to the solution with the minimum norm under certain initialization for regression problems, even without an explicit norm constraint.Another type of implicit regularization, where GD converges to the max-margin classifier, has been recently studied in Gunasekar et al. (2018) ; Ji & Telgarsky (2018) ; Nacson et al. (2018a) ; Soudry et al. (2017; 2018) for classification problems as we describe below.", "Given a set of training samples z i = (x i , y i ) for i = 1, . . . , n, where x i denotes a feature vector and y i ∈ {−1, +1} denotes the corresponding label, the goal is to find a desirable linear model (i.e., a classifier) by solving the following empirical risk minimization problem It has been shown in Nacson et al. (2018a) ; Soudry et al. (2017; 2018) that if the loss function (·) is monotonically strictly decreasing and satisfies proper tail conditions (e.g., the exponential loss), and the data are linearly separable, then GD converges to the solution w with infinite norm and the maximum margin direction of the data, although there is no explicit regularization towards the maxmargin direction in the objective function.", "Such a phenomenon is referred to as the implicit bias of GD, and can help to explain some experimental results.", "For example, even when the training error achieves zero (i.e., the resulting model enters into the linearly separable region that correctly classifies the data), the testing error continues to decrease, because the direction of the model parameter continues to have an improved margin.", "Such a study has been further generalized to hold for various other types of gradient-based algorithms Gunasekar et al. (2018) .", "Moreover, Ji & Telgarsky (2018) analyzed the convergence of GD with no assumption on the data separability, and characterized the implicit regularization to be in a subspace-based form.The focus of this paper is on the following two fundamental issues, which have not been well addressed by existing studies.•", "Existing studies so far focused only on the linear classifier model. 
An", "important question one naturally asks is what happens for the more general nonlinear leaky ReLU and ReLU models. Will", "GD still converge, and if so will it converge to the max-margin direction? Our", "study here provides new insights for the ReLU model that have not been observed for the linear model in the previous studies.• Existing", "studies mainly analyzed the convergence of GD with the only exceptions Ji & Telgarsky (2018) ; Nacson et al. (2018b) on SGD. However, Ji", "& Telgarsky (2018) did not establish the convergence to the max-margin direction for SGD, and Nacson et al. (2018b) established the convergence to the max-margin solution only epochwisely for cyclic SGD (not iterationwise for SGD under random sampling with replacement). Moreover, both", "studies considered only the linear model. Here, our interest", "is to explore the iterationwise convergence of SGD under random sampling with replacement to the max-margin direction, and our result can shed insights for online SGD. Furthermore, our study", "provides new understanding for the nonlinear ReLU and leaky ReLU models.", "In this paper, we study the problem of learning a ReLU neural network via gradient descent methods, and establish the corresponding risk and parameter convergence under the exponential loss function.In particular, we show that due to the possible existence of spurious asymptotic local minima, GD and SGD can converge either to the global or local max-margin direction, which in the nature of convergence is very different from that under the linear model in the previous studies.", "We also discuss the extensions of our analysis to the more general leaky ReLU model and multi-neuron networks.", "In the future, it is worthy to explore the implicit bias of GD and SGD in learning multilayer neural network models and under more general (not necessarily linearly separable) datasets." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.7567567229270935, 0.23529411852359772, 0.15789473056793213, 0.22727271914482117, 0.14999999105930328, 0.3265306055545807, 0.16326530277729034, 0.06666666269302368, 0.15789473056793213, 0.11965811997652054, 0.2702702581882477, 0.07547169178724289, 0.15789473056793213, 0.1904761791229248, 0.06666666269302368, 0.2222222238779068, 0.0624999962747097, 0.21052631735801697, 0.14999999105930328, 0.07843136787414551, 0.07407406717538834, 0.1860465109348297, 0.2857142686843872, 0.21052631735801697, 0.22857142984867096, 0.260869562625885 ]
Hygv0sC5F7
true
[ "We study the implicit bias of gradient methods in solving a binary classification problem with nonlinear ReLU models." ]
[ "Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions in resource-limited scenarios.", "A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time.", "In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs) that does not critically rely on this assumption.", "Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computationally difficult and not-always-useful task of making high-dimensional tensors of CNN structured sparse.", "Our approach takes two stages: first to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels to be constant, and then to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned.", "Our approach is mathematically appealing from an optimization perspective and easy to reproduce.", "We experimented our approach through several image learning benchmarks and demonstrate its interest- ing aspects and competitive performance.", "Not all computations in a deep neural network are of equal importance.", "In a typical deep learning pipeline, an expert crafts a neural architecture, which is trained using a prepared dataset.", "The success of training a deep model often requires trial and error, and such loop usually has little control on prioritizing the computations happening in the neural network.", "Recently researchers started to develop model-simplification methods for convolutional neural networks (CNNs), bearing in mind that some computations are indeed non-critical or redundant and hence can be safely removed from a trained model without substantially degrading the model's performance.", "Such methods not only accelerate computational efficiency but also possibly alleviate the model's overfitting effects.Discovering which subsets of the computations of a trained CNN are more reasonable to prune, however, is nontrivial.", "Existing methods can be categorized from either the learning perspective or from the computational perspective.", "From the learning perspective, some methods use a dataindependent approach where the training data does not assist in determining which part of a trained CNN should be pruned, e.g. 
BID7 and , while others use a datadependent approach through typically a joint optimization in generating pruning decisions, e.g., BID4 and BID1 .", "From the computational perspective, while most approaches focus on setting the dense weights of convolutions or linear maps to be structured sparse, we propose here a method adopting a new conception to achieve in effect the same goal.Instead of regarding the computations of a CNN as a collection of separate computations sitting at different layers, we view it as a network flow that delivers information from the input to the output through different channels across different layers.", "We believe saving computations of a CNN is not only about reducing what are calculated in an individual layer, but perhaps more importantly also about understanding how each channel is contributing to the entire information flow in the underlying passing graph as well as removing channels that are less responsible to such process.", "Inspired by this new conception, we propose to design a \"gate\" at each channel of a CNN, controlling whether its received information is actually sent out to other channels after processing.", "If a channel \"gate\" closes, its output will always be a constant.", "In fact, each designed \"gate\" will have a prior intention to close, unless it has a \"strong\" duty in sending some of its received information from the input to subsequent layers.", "We find that implementing this idea in pruning CNNs is unsophisticated, as will be detailed in Sec 4.Our method neither introduces any extra parameters to the existing CNN, nor changes its computation graph.", "In fact, it only introduces marginal overheads to existing gradient training of CNNs.", "It also possess an attractive feature that one can successively build multiple compact models with different inference performances in a single round of resource-intensive training (as in our experiments).", "This eases the process to choose a balanced model to deploy in production.", "Probably, the only applicability constraint of our method is that all convolutional layers and fully-connected layer (except the last layer) in the CNN should be batch normalized BID9 .", "Given batch normalization has becomes a widely adopted ingredient in designing state-of-the-art deep learning models, and many successful CNN models are using it, we believe our approach has a wide scope of potential impacts.", "We proposed a model pruning technique that focuses on simplifying the computation graph of a deep convolutional neural network.", "Our approach adopts ISTA to update the γ parameter in batch normalization operator embedded in each convolution.", "To accelerate the progress of model pruning, we use a γ-W rescaling trick before and after stochastic training.", "Our method cleverly avoids some possible numerical difficulties such as mentioned in other regularization-based related work, hence is easier to apply for practitioners.", "We empirically validated our method through several benchmarks and showed its usefulness and competitiveness in building compact CNN models.", "Figure 1 : Visualization of the number of pruned channels at each convolution in the inception branch.", "Colored regions represents the number of channels kept.", "The height of each bar represents the size of feature map, and the width of each bar represents the size of channels.", "It is observed that most of channels in the bottom layers are kept while most of channels in the top layers are pruned." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1860465109348297, 0.09999999403953552, 0.08888888359069824, 0.13333332538604736, 0.15625, 0.1249999925494194, 0.0555555522441864, 0.12903225421905518, 0.0555555522441864, 0.17777776718139648, 0.13793103396892548, 0.11999999731779099, 0, 0.16129031777381897, 0.12820512056350708, 0.1230769157409668, 0.0833333283662796, 0, 0.1249999925494194, 0.19230768084526062, 0.1249999925494194, 0.08510638028383255, 0.19354838132858276, 0.2666666507720947, 0.2745097875595093, 0.1621621549129486, 0.2857142686843872, 0.2702702581882477, 0.1428571343421936, 0.21621620655059814, 0.11764705181121826, 0.07407406717538834, 0.1249999925494194, 0.11428570747375488 ]
HJ94fqApW
true
[ "A CNN model pruning method using ISTA and rescaling trick to enforce sparsity of scaling parameters in batch normalization." ]
[ "Stochastic gradient descent (SGD) has been the dominant optimization method for training deep neural networks due to its many desirable properties.", "One of the more remarkable and least understood quality of SGD is that it generalizes relatively well\n", "on unseen data even when the neural network has millions of parameters.", "We hypothesize that in certain cases it is desirable to relax its intrinsic generalization properties and introduce an extension of SGD called deep gradient boosting (DGB).", "The key idea of DGB is that back-propagated gradients inferred using the chain rule can be viewed as pseudo-residual targets of a gradient boosting problem.", "Thus at each layer of a neural network the weight update is calculated by solving the corresponding boosting problem using a linear base learner.", "The resulting weight update formula can also be viewed as a normalization procedure of the data that arrives at each layer during the forward pass.", "When implemented as a separate input normalization layer (INN) the new architecture shows improved performance on image recognition tasks when compared to the same architecture without normalization layers.", "As opposed to batch normalization (BN), INN has no learnable parameters however it matches its performance on CIFAR10 and ImageNet classification tasks.", "Boosting, along side deep learning, has been a very successful machine learning technique that consistently outperforms other methods on numerous data science challenges.", "In a nutshell, the basic idea of boosting is to sequentially combine many simple predictors in such a way that their combined performance is better than each individual predictor.", "Frequently, these so called weak learners are implemented as simple decision trees and one of the first successful embodiment of this idea was AdaBoost proposed by Freund & Schapire (1997) .", "No too long after this, Breiman et al. (1998) and Friedman (2001) made the important observation that AdaBoost performs in fact a gradient descent in functional space and re-derived it as such.", "Friedman (2001) went on to define a general statistical framework for training boosting-like classifiers and regressors using arbitrary loss functions.", "Together with Mason et al. 
(2000) they showed that boosting minimizes a loss function by iteratively choosing a weak learner that approximately points in the negative gradient direction of a functional space.", "Neural networks, in particular deep neural nets with many layers, are also trained using a form of gradient descent.", "Stochastic gradient descent (SGD) (Robbins & Monro, 1951) has been the main optimization method for deep neural nets due to its many desirable properties like good generalization error and ability to scale well with large data sets.", "At a basic level, neural networks are composed of stacked linear layers with differentiable non-linearities in between.", "The output of the last layer is then compared to a target value using a differentiable loss function.", "Training such a model using SGD involves updating the network parameters in the direction of the negative gradient of the loss function.", "The crucial step of this algorithm is calculating the parameter gradients and this is efficiently done by the backpropagation algorithm (Rumelhart et al., 1988; Werbos, 1974) .", "Backpropagation has many variations that try to achieve either faster convergence or better generalization through some form of regularization.", "However, despite superior training outcomes, accelerated optimization methods such as Adam (Kingma & Ba, 2015) , Adagrad (Duchi et al., 2011) or RMSprop (Graves, 2013) have been found to generalize poorly compared to stochastic gradient descent (Wilson et al., 2017) .", "Therefore even before using an explicit regularization method, like dropout (Srivastava et al., 2014) or batch normalization (Ioffe & Szegedy, 2015) , SGD shows very good performance on validation data sets when compared to other methods.", "The prevalent explanation for this empirical observation has been that SGD prefers \"flat\" over \"sharp\" minima, which in turn makes these states robust to perturbations.", "Despite its intuitive appeal, recent work by (Dinh et al., 2017) cast doubt on this explanation.", "This work introduces a simple extension of SGD by combining backpropagation with gradient boosting.", "We propose that each iteration of the backpropagation algorithm can be reinterpreted by solving, at each layer, a regularized linear regression problem where the independent variables are the layer inputs and the dependent variables are gradients at the output of each layer, before non-linearity is applied.", "We call this approach deep gradient boosting (DGB), since it is effectively a layer-wise boosting approach where the typical decision trees are replaced by linear regressors.", "Under this model, SGD naturally emerges as an extreme case where the network weights are highly regularized, in the L2 norm sense.", "We hypothesize that for some learning problems the regularization criteria doesn't need to be too strict.", "These could be cases where the data domain is more restricted or the learning task is posed as a matrix decomposition or another coding problem.", "Based on this idea we further introduce INN, a novel layer normalization method free of learnable parameters and show that it achieves competitive results on benchmark image recognition problems when compared to batch normalization (BN).", "This work introduces Deep Gradient Boosting (DGB), a simple extension of Stochastic Gradient Descent (SGD) that allows for finer control over the intrinsic generalization properties of SGD.", "We empirically show how DGB can outperform SGD in certain cases among a variety of classification and 
regression tasks.", "We then propose a faster approximation of DGB and extend it to convolutional layers (FDGB).", "Finally, we reinterpret DGB as a layer-wise algebraic manipulation of the input data and implement it as a separate normalization layer (INN).", "We then test INN on image classification tasks where its performance proves to be on par with batch normalization without the need for additional parameters.", "A APPENDIX Table A4 : Performance on the Air data set measured as root mean squared error.", "has singular values of the form", "Let X = U ΣV T be the singular value decomposition of X then:", "values of the form", "Let X = U ΣV T be the singular value decomposition of X then:", "For this experiment we used a version of the VGG11 network introduced by Simonyan & Zisserman (2014) that has 8 convolutional layers followed by a linear layer with 512 ReLU nodes, a dropout layer with probability 0.5 and then a final softmax layer for assigning the classification probabilities.", "A second version of this architecture (VGG11 BN) has batch normalization applied at the output of each convolutional layer, before the ReLU activation as recommended by Ioffe & Szegedy (2015) We modified this architecture by first removing all the batch normalization and dropout layers.", "We then either replaced all convolutional and linear layers with ones that implement the fast version of DGB for the FDGB(l) architecture or added INN(l) layers in front of each of the original convolutional and linear layers.", "Both FDGB(l) and INN(l) models implement input normalization based on the left pseudo-inverse (see Eq. 12 & 16) in order to take advantage of its regularization effect.", "All weights were initialized according to Simonyan & Zisserman (2014) and were trained using stochastic gradient descent with momentum 0.9 and batch size 128.", "For the FDGB(l) model the gradients were calculated according to Eq. 12 for linear and 13 for convolutional layers.", "Training was started with learning rate 0.1 and reduced to 0.01 after 250 epochs and continued for 350 epochs.", "All experiments were repeated 10 times with different random seeds and performance was reported on the validation set as mean accuracy ± standard deviation." ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21052631735801697, 0, 0.06896550953388214, 0.09302324801683426, 0.24390242993831635, 0.25641024112701416, 0.24390242993831635, 0.1428571343421936, 0, 0.04999999329447746, 0.13636362552642822, 0.043478257954120636, 0.12765957415103912, 0.10810810327529907, 0.1304347813129425, 0.1666666567325592, 0.07547169178724289, 0.1764705777168274, 0.11764705181121826, 0.11428570747375488, 0, 0, 0.1111111044883728, 0, 0, 0, 0.19354838132858276, 0.1538461446762085, 0.1463414579629898, 0.052631575614213943, 0, 0.10256409645080566, 0.11999999731779099, 0.0476190410554409, 0.1111111044883728, 0.0624999962747097, 0.21621620655059814, 0, 0.05882352590560913, 0, 0, 0, 0, 0.10344827175140381, 0.07547169178724289, 0.04444443807005882, 0, 0.04999999329447746, 0, 0, 0.04878048226237297 ]
BkxzsT4Yvr
true
[ "What can we learn about training neural networks if we treat each layer as a gradient boosting problem?" ]
[ "We propose to extend existing deep reinforcement learning (Deep RL) algorithms by allowing them to additionally choose sequences of actions as a part of their policy. ", "This modification forces the network to anticipate the reward of action sequences, which, as we show, improves the exploration leading to better convergence.", "Our proposal is simple, flexible, and can be easily incorporated into any Deep RL framework.", "We show the power of our scheme by consistently outperforming the state-of-the-art GA3C algorithm on several popular Atari Games.", "Basic reinforcement learning has an environment and an agent.", "The agent interacts with the environment by taking some actions and observing some states and rewards.", "At each time step t, the agent observes a state s t and performs an action a t based on a policy π(a t |s t ; θ).", "In return to the action, the environment provides a reward r t and the next state s t+1 .", "This process goes on until the agent reaches a terminal state.", "The learning goal is to find a policy that gives the best overall reward.", "The main challenges here are that the agent does not have information about the reward and the next state until the action is performed.", "Also, a certain action may yield low instant reward, but it may pave the way for a good reward in the future.Deep Reinforcement Learning BID6 has taken the success of deep supervised learning a step further.", "Prior work on reinforcement learning suffered from myopic handcrafted designs.", "The introduction of Deep Q-Learning Networks (DQN) was the major advancement in showing that Deep Neural Networks (DNNs) can approximate value and policy functions.", "By storing the agent's data in an experience replay memory, the data can be batched BID8 BID9 or randomly sampled BID4 BID12 from different time-steps and learning the deep network becomes a standard supervised learning task with several input-output pairs to train the parameters.", "As a consequence, several video games could be played by directly observing raw image pixels BID1 and demonstrating super-human performance on the ancient board game Go .In", "order to solve the problem of heavy computational requirements in training DQN, several followups have emerged leading to useful changes in training formulations and DNN architectures. Methods", "that increase parallelism while decreasing the computational cost and memory footprint were also proposed BID7 BID6 , which showed impressive performance.A breakthrough was shown in BID6 , where the authors propose a novel lightweight and parallel method called Asynchronous Advantage Actor-Critic (A3C). A3C achieves", "the stateof-the-art results on many gaming tasks. When the proper", "learning rate is used, A3C learns to play an Atari game from raw screen inputs more quickly and efficiently than previous methods. In a remarkable", "followup to A3C, BID0 proposed a careful implementation of A3C on GPUs(called GA3C) and showed the A3C can accelerated significantly over GPUs, leading to the best publicly available Deep RL implementation, known till date.Slow Progress with Deep RL: However, even for very simple Atari games, existing methods take several hours to reach good performance. There is still", "a major fundamental barrier in the current Deep RL algorithms, which is slow progress due to poor exploration. During the early", "phases, when the network is just initialized, the policy is nearly random. 
Thus, the initial", "experience are primarily several random sequences of actions with very low rewards. Once, we observe", "sequences which gives high rewards, the network starts to observe actions and associate them with positive rewards and starts learning. Unfortunately, finding", "a good sequence via network exploration can take a significantly long time, especially when the network is far from convergence and the taken actions are near random. The problem becomes more", "severe if there are only very rare sequence of actions which gives high rewards, while most others give on low or zero rewards. The exploration can take", "a significantly long time to hit on those rare combinations of good moves.In this work, we show that there is an unusual, and surprising, opportunity of improving the convergence of deep reinforcement learning. In particular, we show that", "instead of learning to map the reward over a basic action space A for each state, we should force the network to anticipate the rewards over an enlarged action space A + = K k=1 A k which contains sequential actions like (a 1 , a 2 , ..., a k ). Our proposal is a strict generalization", "of existing Deep RL framework where we allow to take a premeditated sequence of action at a given state s t , rather than only taking a single action and re-deciding the next action based on the outcome of the first action and so on. Thus the algorithm can pre-decide on a", "sequence of actions, instead of just the next best action, if the anticipated reward of the sequence is good enough.Our experiments shows that by simply making the network anticipate the reward for a sequence of action, instead of just the next best actions, the network shows significantly better convergence behavior consistently. We even outperform the fastest known implementation", ", the GPU accelerated version of A3C (GA3C). The most exciting part is that that anticipation can", "be naturally incorporated in any existing implementation, including Deep Q Network and A3C. We simply have to extend the action set to also include", "extra sequences of actions and calculate rewards with them for training, which is quite straightforward.", "We propose a simple yet effective technique of adding anticipatory actions to the state-of-the-art GA3C method for reinforcement learning and achieve significant improvements in convergence and overall scores on several popular Atari-2600 games.", "We also identify issues that challenge the sustainability of our approach and propose simple workarounds to leverage most of the information from higher-order action space.There is scope for even higher order actions.", "However, the action space grows exponentially with the order of anticipation.", "Addressing large action space, therefore, remains a pressing concern for future work.", "We believe human behavior information will help us select the best higher order actions." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.25, 0.2222222238779068, 0, 0.07999999821186066, 0.2666666507720947, 0, 0, 0, 0, 0.0952380895614624, 0, 0.1538461446762085, 0.23529411852359772, 0.06896551698446274, 0.08695651590824127, 0, 0.06451612710952759, 0, 0, 0.0624999962747097, 0.03333333134651184, 0, 0, 0.09090908616781235, 0.07407406717538834, 0.05714285373687744, 0.05882352590560913, 0.25, 0.07843136787414551, 0.04444444179534912, 0.09090908616781235, 0.08695651590824127, 0, 0.09090908616781235, 0.20512820780277252, 0.052631575614213943, 0.11764705181121826, 0, 0 ]
rylfg-DNM
true
[ "Anticipation improves convergence of deep reinforcement learning." ]
[ "To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs.", "Since a change in the parameters might serve as a poor proxy for the change in the function, it is of some concern that primacy is given to parameters but that the correspondence has not been tested.", "Here, we show that it is simple and computationally feasible to calculate distances between functions in a $L^2$ Hilbert space.", "We examine how typical networks behave in this space, and compare how parameter $\\ell^2$ distances compare to function $L^2$ distances between various points of an optimization trajectory.", "We find that the two distances are nontrivially related.", "In particular, the $L^2/\\ell^2$ ratio decreases throughout optimization, reaching a steady value around when test error plateaus.", "We then investigate how the $L^2$ distance could be applied directly to optimization.", "We first propose that in multitask learning, one can avoid catastrophic forgetting by directly limiting how much the input/output function changes between tasks.", "Secondly, we propose a new learning rule that constrains the distance a network can travel through $L^2$-space in any one update.", "This allows new examples to be learned in a way that minimally interferes with what has previously been learned.", "These applications demonstrate how one can measure and regularize function distances directly, without relying on parameters or local approximations like loss curvature.", "A neural network's parameters collectively encode a function that maps inputs to outputs.", "The goal of learning is to converge upon a good input/output function.", "In analysis, then, a researcher should ideally consider how a network's input/output function changes relative to the space of possible functions.", "However, since this space is not often considered tractable, most techniques and analyses consider the parameters of neural networks.", "Most regularization techniques, for example, act directly on the parameters (e.g. weight decay, or the implicit constraints stochastic gradient descent (SGD) places upon movement).", "These techniques are valuable to the extent that parameter space can be taken as a proxy for function space.", "Since the two might not always be easily related, and since we ultimately care most about the input/output function, it is important to develop metrics that are directly applicable in function space.In this work we show that it is relatively straightforward to measure the distance between two networks in function space, at least if one chooses the right space.", "Here we examine L 2 -space, which is a Hilbert space.", "Distance in L 2 space is simply the expected 2 distance between the outputs of two functions when given the same inputs.", "This computation relies only on function inference.Using this idea of function space, we first focus on characterizing how networks move in function space during optimization with SGD.", "Do random initializations track similar trajectories?", "What happens in the overfitting regime?", "We are particularly interested in the relationship between trajectories in function space and parameter space.", "If the two are tightly coupled, then parameter change can be taken as a proxy for function change.", "This common assumption (e.g. 
Lipschitz bounds) might not always be the case.Next, we demonstrate two possibilities as to how a function space metric could assist optimization.", "In the first setting we consider multitask learning, and the phenomenon of catastrophic forgetting that makes it difficult.", "Many well-known methods prevent forgetting by regularizing how much the parameters are allowed to shift due to retraining (usually scaled by a precision matrix calculated on previous tasks).", "We show that one can instead directly regularize changes in the input/output function of early tasks.", "Though this requires a \"working memory\" of earlier examples, this scheme turns out to be quite data-efficient (and more so than actually retraining on examples from old tasks).In", "the second setting we propose a learning rule for supervised learning that constrains how much a network's function can change any one update. This", "rule, which we call Hilbert-constrained gradient descent (HCGD), penalizes each step of SGD to reduce the magnitude of the resulting step in L 2 -space. This", "learning rule thus changes the course of learning to track a shorter path in function space. If SGD", "generalizes in part because large changes to the function are prohibited, then this rule will have advantages over SGD. Interestingly", ", HCGD is conceptually related to the natural gradient. As we derive", "in §3.2.1, the natural gradient can be viewed as resulting from constrains changes in a function space measured by the Kullbeck-Leibler divergence.", "Neural networks encode functions, and it is important that analyses discuss the empirical relationship between function space and the more direct parameter space.", "Here, we argued that the L 2 Hilbert space defined over an input distribution is a tractable and useful space for analysis.", "We found that networks traverse this function space qualitatively differently than they do parameter space.", "Depending on the situation, a distance of parameters cannot be taken to represent a proportional distance between functions.We proposed two possibilities for how the L 2 distance could be used directly in applications.", "The first addresses multitask learning.", "By remembering enough examples in a working memory to accurately Figure 6 : Results of a singlelayer LSTM with 128 hidden units trained on the sequential MNIST task with permuted pixels.", "Shown are the traces for SGD and Adam (both with learning rate 0.01).", "We then take variants of the HCGD algorithm in which the first proposed step is taken to be an SGD step (SGD+HC) or an Adam step (Adam+HC).", "For SGD+HC we also show the effect of introducing more iterations n in the SGD+HC step.estimate an L 2 distance, we can ensure that the function (as defined on old tasks) does not change as a new task is learned.", "This regularization term is agnostic to the architecture or parameterization of the network.", "We found that this scheme outperforms simply retraining on the same number of stored examples.", "For large networks with millions of parameters, this approach may be more appealing than comparable methods like EWC and SI, which require storing large diagonal matrices.We also proposed a learning rule that reduces movement in function space during single-task optimization.", "Hilbert-constrained gradient descent (HCGD) constrains the change in L 2 space between successive updates.", "This approach limits the movement of the encoded function in a similar way as gradient descent limits movement of the parameters.", "It also 
carries a similar intuition as the forgetting application: to learn from current examples only in ways that will not affect what has already been learned from other examples.", "HCGD can increase test performance at image classification in recurrent situations, indicating both that the locality of function movement is important to SGD and that it can be improved upon.", "However, HCGD did not always improve results, indicating either that SGD is stable in those regimes or that other principles are more important to generalization.", "This is by no means the only possibility for using an L 2 norm to improve optimization.", "It may be possible, for example, to use the norm to regularize the confidence of the output function (e.g. BID18 ).", "We are particularly interested in exploring if more implicit, architectural methods, like normalization layers, could be designed with the L 2 norm in mind.It interesting to ask if there is support in neuroscience for learning rules that diminish the size of changes when that change would have a large effect on other tasks.", "One otherwise perplexing finding is that behavioral learning rates in motor tasks are dependent on the direction of an error but independent of the magnitude of that error BID4 .", "This result is not expected by most models of gradient descent, but would be expected if the size of the change in the output distribution (i.e. behavior) were regulated to be constant.", "Regularization upon behavioral change (rather than synaptic change) would predict that neurons central to many actions, like neurons in motor pools of the spinal cord, would learn very slowly after early development, despite the fact that their gradient to the error on any one task (if indeed it is calculated) is likely to be quite large.", "Given our general resistance to overfitting during learning, and the great variety of roles of neurons, it is likely that some type of regularization of behavioral and perceptual change is at play.", "Figure A.5: Same as above, but for a network trained without Batch Normalization and also without weight decay.", "Weight decay has a strong effect.", "The main effect is that decreases the 2 distance traveled at all three scales (from last update, last epoch, and initialization), especially at late optimization.", "This explains the left column, and some of the middle and right columns.", "(It is helpful to look at the \"white point\" on the color scale, which indicates the point halfway through training. Note that parameter distances continue to change after the white point when WD is not used).", "An additional and counterintuitive property is that the L 2 distance from the last epoch increases in scale during optimization when WD is not used, but decreases if it is.", "These comparisons show that WD has a strong effect on the L 2 / 2 ratio, but that this ratio still changes considerable throughout training.", "This is in line with this paper's motivation to consider L 2 distances directly.", "Figure B .6: Here we reproduce the results of FIG0 and FIG4 for the MNIST task, again using a CNN with batch normalization trained with SGD with momentum.", "It can be seen first that the majority of function space movement occurs very early in optimization, mostly within the first epoch." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17777776718139648, 0.20408162474632263, 0.24390242993831635, 0.31111109256744385, 0.13333332538604736, 0.052631575614213943, 0.1764705777168274, 0.1818181723356247, 0.19512194395065308, 0.20512819290161133, 0.04651162400841713, 0.1764705777168274, 0.24242423474788666, 0.19512194395065308, 0.19999998807907104, 0, 0.25641024112701416, 0.2028985470533371, 0.1875, 0.14999999105930328, 0.260869562625885, 0, 0.07407406717538834, 0.29411762952804565, 0.15789473056793213, 0.2448979616165161, 0, 0.08510638028383255, 0.1621621549129486, 0.12244897335767746, 0.1395348757505417, 0.09090908616781235, 0.2631579041481018, 0.19512194395065308, 0.12121211737394333, 0.1860465109348297, 0.19512194395065308, 0.1428571343421936, 0.2857142686843872, 0.19999998807907104, 0, 0.11999999731779099, 0, 0.1818181723356247, 0.20689654350280762, 0.12121211737394333, 0.1111111044883728, 0.2950819730758667, 0.11428570747375488, 0.21621620655059814, 0.16326530277729034, 0.20408162474632263, 0.17777776718139648, 0.15789473056793213, 0.09999999403953552, 0.1428571343421936, 0.08888888359069824, 0.16326530277729034, 0.08695651590824127, 0.1249999925494194, 0.051282044500112534, 0.07407406717538834, 0.09090908616781235, 0, 0.15686273574829102, 0.2083333283662796, 0.09090908616781235, 0.22857142984867096, 0.04347825422883034, 0.19512194395065308 ]
SkMwpiR9Y7
true
[ "We find movement in function space is not proportional to movement in parameter space during optimization. We propose a new natural-gradient style optimizer to address this." ]
[ "The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem which maximizes the information the representation has about the task, I(T;Y), while ensuring that a minimum level of compression r is achieved (i.e., I(X;T) <= r).", "For practical reasons the problem is usually solved by maximizing the IB Lagrangian for many values of the Lagrange multiplier, therefore drawing the IB curve (i.e., the curve of maximal I(T;Y) for a given I(X;Y)) and selecting the representation of desired predictability and compression.", "It is known when Y is a deterministic function of X, the IB curve cannot be explored and other Lagrangians have been proposed to tackle this problem (e.g., the squared IB Lagrangian).", "In this paper we", "(i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios;", "(ii) prove that if these Lagrangians are used, there is a one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes, hence, freeing from the burden of solving the optimization problem for many values of the Lagrange multiplier.", "Let X and Y be two statistically dependent random variables with joint distribution p(x, y).", "The information bottleneck (IB) (Tishby et al., 2000) investigates the problem of extracting the relevant information from X for the task of predicting Y .", "For this purpose, the IB defines a bottleneck variable T obeying the Markov chain Y ↔ X ↔ T so that T acts as a representation of X. Tishby et al. (2000) define the relevant information as the information the representation keeps from Y after the compression of X (i.e., I(T ; Y )), provided a minimum level of compression (i.e, I(X; T ) ≤ r).", "Therefore, we select the representation which yields the value of the IB curve that best fits our requirements.", "Definition 1 (IB functional).", "Let X and Y be statistically dependent variables.", "Let ∆ be the set of random variables T obeying the Markov condition Y ↔ X ↔ T .", "Then the IB functional is F IB,max (r) = max T ∈∆ {I(T ; Y )} s.t. I(X; T ) ≤ r, ∀r ∈ [0, ∞).", "(1)", "Definition 2 (IB curve).", "The IB curve is the set of points defined by the solutions of F IB,max (r) for varying values of r ∈ [0, ∞).", "Definition 3 (Information plane).", "The plane is defined by the axes I(T ; Y ) and I(X; T ).", "In practice, solving a constrained optimization problem such as the IB functional is difficult.", "Thus, in order to avoid the non-linear constraints from the IB functional the IB Lagrangian is defined.", "Definition 4 (IB Lagrangian).", "Let X and Y be statistically dependent variables.", "Let ∆ be the set of random variables T obeying the Markov condition Y ↔ X ↔ T .", "Then we define the IB Lagrangian as L β IB (T ) = I(T ; Y ) − βI(X; T ).", "Here β ∈ [0, 1] is the Lagrange multiplier which controls the trade-off between the information of Y retained and the compression of X. 
Note we consider β ∈ [0, 1] because", "(i) for β ≤ 0 many uncompressed solutions such as T = X maximizes L β IB , and", "(ii) for β ≥ 1 the IB Lagrangian is non-positive due to the data processing inequality (DPI) (Theorem 2.8.1 from Cover & Thomas (2012) ) and trivial solutions like T = const are maximizers with L β IB = 0 (Kolchinsky et al., 2019) .", "We know the solutions of the IB Lagrangian optimization (if existent) are solutions of the IB functional by the Lagrange's sufficiency theorem (Theorem 5 in Appendix A of Courcoubetis (2003) ).", "Moreover, since the IB functional is concave (Lemma 5 of Gilad-Bachrach et al. (2003) ) we know they exist (Theorem 6 in Appendix A of Courcoubetis (2003) ).", "Therefore, the problem is usually solved by maximizing the IB Lagrangian with adaptations of the Blahut-Arimoto algorithm (Tishby et al., 2000) , deterministic annealing approaches (Tishby & Slonim, 2001 ) or a bottom-up greedy agglomerative clustering (Slonim & Tishby, 2000) or its improved sequential counterpart (Slonim et al., 2002) .", "However, when provided with high-dimensional random variables X such as images, these algorithms do not scale well and deep learning based techniques, where the IB Lagrangian is used as the objective function, prevailed (Alemi et al., 2017; Chalk et al., 2016; .", "Note the IB Lagrangian optimization yields a representation T with a given performance (I(X; T ), I(T ; Y )) for a given β.", "However there is no one-to-one mapping between β and I(X; T ).", "Hence, we cannot directly optimize for a desired compression level r but we need to perform several optimizations for different values of β and select the representation with the desired performance (e.g., Alemi et al. (2017) ).", "The Lagrange multiplier selection is important since", "(i) sometimes even choices of β < 1 lead to trivial representations such that p T |X (t|x) = p T (t), and", "(ii) there exist some discontinuities on the performance level w.r.t. the values of β (Wu et al., 2019).", "Moreover, recently Kolchinsky et al. (2019) showed how in deterministic scenarios (such as many classification problems where an input x i belongs to a single particluar class y i ) the IB Lagrangian could not explore the IB curve.", "Particularly, they showed that multiple β yielded the same performance level and that a single value of β could result in different performance levels.", "To solve this issue, they introduced the squared IB Lagrangian, L βsq sq-IB = I(T ; Y ) − β sq I(X; T ) 2 , which is able to explore the IB curve in any scenario by optimizing for different values of β sq .", "However, even though they realized a one-to-one mapping between β s q existed, they did not find such mapping.", "Hence, multiple optimizations of the Lagrangian were still required to fing the best traded-off solution.", "The main contributions of this article are:", "1. We introduce a general family of Lagrangians (the convex IB Lagrangians) which are able to explore the IB curve in any scenario for which the squared IB Lagrangian (Kolchinsky et al., 2019 ) is a particular case of.", "More importantly, the analysis made for deriving this family of Lagrangians can serve as inspiration for obtaining new Lagrangian families which solve other objective functions with intrinsic trade-off such as the IB Lagrangian.", "2. 
We show that in deterministic scenarios (and other scenarios where the IB curve shape is known) one can use the convex IB Lagrangian to obtain a desired level of performance with a single optimization.", "That is, there is a one-to-one mapping between the Lagrange multiplier used for the optimization and the level of compression and informativeness obtained, and we know such a mapping.", "This eliminates the need for multiple optimizations to select a suitable representation.", "Furthermore, we provide some insight into why there are discontinuities in the performance levels w.r.t. the values of the Lagrange multipliers.", "In a classification setting, we connect those discontinuities with the intrinsic clustering of the representations when optimizing the IB objective.", "The structure of the article is the following: in Section 2 we motivate the usage of the IB in supervised learning settings.", "Then, in Section 3 we outline the important results used about the IB curve in deterministic scenarios.", "Later, in Section 4 we introduce the convex IB Lagrangian and explain some of its properties.", "After that, we support our (proven) claims with some empirical evidence on the MNIST dataset (LeCun et al., 1998) in Section 5.", "The reader can download the PyTorch (Paszke et al., 2017) implementation at https://gofile.io/?c=G9Dl1L", ".", "The information bottleneck is a widely used and studied technique.", "However, it is known that the IB Lagrangian cannot be used to achieve varying levels of performance in deterministic scenarios.", "Moreover, in order to achieve a particular level of performance, multiple optimizations with different Lagrange multipliers must be done to draw the IB curve and select the best traded-off representation.", "In this article we introduced a general family of Lagrangians which allow us to", "(i) achieve varying levels of performance in any scenario, and", "(ii) pinpoint a specific Lagrange multiplier β_h to optimize for a specific performance level in known IB curve scenarios (e.g., deterministic).", "Furthermore, we showed the β_h domain when the IB curve is known and a β_h domain bound for exploring the IB curve when it is unknown.", "This way we can reduce and/or avoid multiple optimizations and, hence, reduce the computational effort for finding well traded-off representations.", "Finally,", "(iii) we provided some insight into the discontinuities in the performance levels w.r.t. the Lagrange multipliers by connecting them with the intrinsic clustering of the bottleneck variable." ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.17499999701976776, 0.2461538463830948, 0.26229506731033325, 0, 0.5106382966041565, 0.38805970549583435, 0.04444444179534912, 0.11764705181121826, 0.15584415197372437, 0.21739129722118378, 0, 0.052631575614213943, 0.08888888359069824, 0.1071428507566452, 0, 0.23529411852359772, 0, 0.13636362552642822, 0.1818181723356247, 0.1818181723356247, 0, 0.052631575614213943, 0.08888888359069824, 0.0833333283662796, 0.14814814925193787, 0.1249999925494194, 0.1666666567325592, 0.2222222238779068, 0.1818181723356247, 0.1428571343421936, 0.14705881476402283, 0.1599999964237213, 0.0952380895614624, 0.2461538463830948, 0.054054051637649536, 0.11764705181121826, 0.11999999731779099, 0.1818181723356247, 0.2745097875595093, 0.19999998807907104, 0.04255318641662598, 0.09090908616781235, 0.054054051637649536, 0.4375, 0.23728813230991364, 0.4262295067310333, 0.2641509473323822, 0.1428571343421936, 0.19230768084526062, 0.16326530277729034, 0.21276594698429108, 0.2222222238779068, 0.260869562625885, 0.07547169178724289, 0.08888888359069824, 0.14999999105930328, 0.2800000011920929, 0.27586206793785095, 0.2790697515010834, 0.14999999105930328, 0.307692289352417, 0.3265306055545807, 0.12244897335767746, 0.072727270424366 ]
SkxhS6EYvH
true
[ "We introduce a general family of Lagrangians that allow exploring the IB curve in all scenarios. When these are used, and the IB curve is known, one can optimize directly for a performance/compression level directly." ]
[ "We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning.", "NLMs exploit the power of both neural networks---as function approximators, and logic programming---as a symbolic processor for objects with properties, relations, logic connectives, and quantifiers. ", "After being trained on small-scale tasks (such as sorting short arrays), NLMs can recover lifted rules, and generalize to large-scale tasks (such as sorting longer arrays).", "In our experiments, NLMs achieve perfect generalization in a number of tasks, from relational reasoning tasks on the family tree and general graphs, to decision making tasks including sorting arrays, finding shortest paths, and playing the blocks world.", "Most of these tasks are hard to accomplish for neural networks or inductive logic programming alone.", "Deep learning has achieved great success in various applications such as speech recognition (Hinton et al., 2012) , image classification (Krizhevsky et al., 2012; He et al., 2016) , machine translation BID21 Bahdanau et al., 2015; Wu et al., 2016; Vaswani et al., 2017) , and game playing (Mnih et al., 2015; BID17 .", "Starting from Fodor & Pylyshyn (1988) , however, there has been a debate over the problem of systematicity (such as understanding recursive systems) in connectionist models (Fodor & McLaughlin, 1990; Hadley, 1994; Jansen & Watter, 2012) .Logic", "systems can naturally process symbolic rules in language understanding and reasoning. Inductive", "logic programming (ILP) BID7 BID8 Friedman et al., 1999) has been developed for learning logic rules from examples. Roughly speaking", ", given a collection of positive and negative examples, ILP systems learn a set of rules (with uncertainty) that entails all of the positive examples but none of the negative examples. Combining both", "symbols and probabilities, many problems arose from high-level cognitive abilities, such as systematicity, can be naturally resolved. However, due to", "an exponentially large searching space of the compositional rules, it is difficult for ILP to scale beyond small-sized rule sets (Dantsin et al., 2001; Lin et al., 2014; Evans & Grefenstette, 2018) .To make the discussion", "concrete, let us consider the classic blocks world problem BID10 Gupta & Nau, 1992) . As shown in Figure 1 ,", "we are given a set of blocks on the ground. We can move a block x", "and place it on the top of another block y or the ground, as long as x is moveable and y is placeable. We call this operation", "Move(x, y). A block is said to be", "moveable or placeable if there are no other blocks on it. The ground is always", "placeable, implying that we can place all blocks on the ground. Given an initial configuration", "of blocks world, our goal is to transform it into a target configuration by taking a sequence of Move operations.Although the blocks world problem may appear simple at first glance, four major challenges exist in building a learning system to automatically accomplish this task:1. The learning system should recover", "a set of lifted rules (i.e., rules that apply to objects uniformly instead of being tied with specific ones) and generalize to blocks worlds which contain more blocks than those encountered during training. To get an intuition on this, we refer", "the readers who are not familiar with the blocks world domain to the task of learning to sort arrays (e.g.,In this paper, we propose Neural Logic Machines (NLMs) to address the aforementioned challenges. 
In a nutshell, NLMs offer a neural-symbolic", "architecture which realizes Horn clauses (Horn, 1951) in first-order logic (FOL). The key intuition behind NLMs is that logic", "operations such as logical ANDs and ORs can be efficiently approximated by neural networks, and the wiring among neural modules can realize the logic quantifiers.The rest of the paper is organized as follows. We first revisit some useful definitions in", "symbolic logic systems and define our neural implementation of a rule induction system in Section 2. As a supplementary, we refer interested readers", "to Appendix A for implementation details. In Section 3 we evaluate the effectiveness of NLM", "on a broad set of tasks ranging from relational reasoning to decision making. We discuss related works in Section 4, and conclude", "the paper in Section 5.", "ILP and relational reasoning.", "Inductive logic programming (ILP) BID7 BID8 Friedman et al., 1999 ) is a paradigm for learning logic rules derived from a limited set of rule templates from examples.", "Being a powerful way of reasoning over discrete symbols, it is successfully applied to various language-related problems, and has been integrated into modern learning frameworks (Kersting et al., 2000; BID14 Kimmig et al., 2012) .", "Recently, Evans & Grefenstette (2018) introduces a differentiable implementation of ILP which works with connectionist models such as CNNs.", "Sharing a similar spirit, BID15 introduces an end-to-end differentiable logic proving system for knowledge base (KB) reasoning.", "A major challenge of these approaches is to scale up to a large number of complex rules.", "Searching a rule as complex as our ShouldMove example in Appendix E from scratch is beyond the scope of most systems that use weighted symbolic rules generated from templates.As shown in Section 2.4, both computational complexity and parameter size of the NLM grow polynomially w.r.t. the number of allowed predicates (in contrast to the exponential dependence in ∂ILP (Evans & Grefenstette, 2018)), but factorially w.r.t. the breadth (max arity, same as ∂ILP).", "Therefore, our method can deal with more complex tasks such as the blocks world which requires using a large number of intermediate predicates, while ∂ILP fails to search in such a large space.Our paper also differs from existing approaches on using neural networks to augment symbolic rule induction BID1 BID3 .", "Specifically, we have no rule designed by humans as the input or the knowledge base for the model.", "NLMs are general neural architectures for learning lifted rules from only input-output pairs.Our work is also related to symbolic relational reasoning, which has a wide application in processing discrete data structures such as knowledge graphs and social graphs (Zhu et al., 2014; Kipf & Welling, 2017; Zeng et al., 2017; Yang et al., 2017) .", "Most symbolic relational reasoning approaches (e.g., Yang et al., 2017; BID15 are developed for KB reasoning, in which the predicates on both sides of a rule is known in the KB.", "Otherwise, the complexity grows exponentially in the number of used rules for a conclusion, which is the case in the blocks world.", "Moreover, Yang et al. FORMULA2 considers rues of the form query(Y, X) ← R n (Y, Z n ) ∧ · · · ∧ R 1 (Z 1 , X), which is not for general reasoning.", "The key of BID15 and Campero et al. 
(2018) is to learn subsymbolic embeddings of entities and predicates for efficient KB completion, which differs from our focus.", "While NLMs can scale up to complex rules, the number of objects/entities or relations should be bounded as a small value (e.g., < 1000), since all predicates are represented as tensors.", "This is, to some extent, in contrast with the systems developed for knowledge base reasoning.", "We leave the scalability of NLMs to large entity sets as future works.Besides, modular networks BID0 BID4 are proposed for the reasoning over subsymbolic data such as images and natural language question answering.", "BID16 implements a visual reasoning system based on \"virtual\" objects brought by receptive fields in CNNs.", "Wu et al. (2017) tackles the problem of deriving structured representation from raw pixel-level inputs.", "Dai et al. (2018) combines structured visual representation and theorem proving.Graph neural networks and relational inductive bias.", "Graph convolution networks (GCNs) (Bruna et al., 2014; Li et al., 2016; Defferrard et al., 2016; Kipf & Welling, 2017 ) is a family of neural architectures working on graphs.", "As a representative, Gilmer et al. FORMULA2 proposes a message passing modeling for unifying various graph neural networks and graph convolution networks.", "GCNs achieved great success in tasks with intrinsic relational structures.", "However, most of the GCNs operate on pre-defined graphs with only nodes and binary connections.", "This restricts the expressive power of models in general-purpose reasoning tasks (Li et al., 2016) .In", "contrast, this work removes such restrictions and introduces a neural architecture to capture lifted rules defined on any set of objects. Quantitative", "results support the effectiveness of the proposed model in a broad set of tasks ranging from relational reasoning to modeling general algorithms (as decision-making process). Moreover, being", "fully differentiable, NLMs can be plugged into existing convolutional or recurrent neural architectures for logic reasoning.Relational decision making. Logic-driven decision", "making is also related to Relational RL (Van Otterlo, 2009 ), which models the environment as a collection of objects and their relations. State transition and", "policies are both defined over objects and their interactions. Examples include OO-MDP", "(Diuk et al., 2008; Kansky et al., 2017) , symbolic models for learning in interactive domains BID12 , structured task definition by object-oriented instructions (Denil et al., 2017) , and structured policy learning (Garnelo et al., 2016) . General planning methods", "solve these tasks via planning based on rules (Hu & De Giacomo, 2011; BID18 Jiménez et al., 2019) . The goal of our paper is", "to introduce a neural architecture which learns lifted rules and handle relational data with multiple orders. We leave its application", "in other RL and planning tasks as future work.Neural abstraction machines and program induction. Neural Turing Machine (NTM", ") (Graves et al., 2014; enables general-purpose neural problem solving such as sorting by introducing an external memory that mimics the execution of Turing Machine. Neural program induction", "and synthesis BID9 BID13 Kaiser & Sutskever, 2016; BID11 Devlin et al., 2017; Bunel et al., 2018; BID20 are recently introduced to solve problems by synthesizing computer programs with neural augmentations. 
Some works tackle the issue", "of the systematical generalization by introducing extra supervision (Cai et al., 2017) . In Chen et al. (2018) , more", "complex programs such as language parsing are studied. However, the neural programming", "and program induction approaches are usually hard to optimize in an end-to-end manner, and often require strong supervisions (such as ground-truth programs).", "In this paper, we propose a novel neural-symbolic architecture called Neural Logic Machines (NLMs) which can conduct first-order logic deduction.", "Our model is fully differentiable, and can be trained in an end-to-end fashion.", "Empirical evaluations show that our method is able to learn the underlying logical rules from small-scale tasks, and generalize to large-scale tasks.The promising results open the door for several research directions.", "First, the maximum depth of the NLMs is a hyperparameter to be specified for individual problems.", "Future works may investigate how to extend the model, so that it can adaptively select the right depth for the problem at hand.", "Second, it is interesting to extend NLMs to handle vector inputs with real-valued components.", "Currently, NLM requires symbolic input that may not be easily available in applications like health care where many inputs (e.g., blood pressure) are real numbers.", "Third, training NLMs remains nontrivial, and techniques like curriculum learning have to be used.", "It is important to find an effective yet simpler alternative to optimize NLMs.", "Last but not least, unlike ILP methods that learn a set of rules in an explainable format, the learned rules of NLMs are implicitly encoded as weights of the neural networks.", "Extracting human-readable rules from NLMs would be a meaningful future direction.", "We cannot directly prove the accuracy of NLM by looking at the induced rules as in traditional ILP systems.", "Alternatively, we take an empirical way to estimate its accuracy by sampling testing examples.", "Throughout the experiments section, all accuracy statistics are reported in 1000 random generated data.To show the confidence of this result, we test a specific trained model of Blocks World task with 100,000 samples.", "We get no fail cases in the testing.", "According to the multiplicative form of Chernoff Bound 6 , We are 99.7% confident that the accuracy is at least 99.98%." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 1, 0.2926829159259796, 0.05128204822540283, 0.1538461446762085, 0.1818181723356247, 0.072727270424366, 0.11538460850715637, 0.13793103396892548, 0.1621621549129486, 0.1904761791229248, 0.05405404791235924, 0.07843136787414551, 0.05405404791235924, 0.1875, 0.1538461446762085, 0, 0, 0.0624999962747097, 0.09999999403953552, 0.0714285671710968, 0.2641509473323822, 0.11428570747375488, 0.1538461446762085, 0.1538461446762085, 0.1249999925494194, 0.20512819290161133, 0.09090908616781235, 0.190476194024086, 0.1860465109348297, 0.1599999964237213, 0.0555555522441864, 0.23529411852359772, 0.0624999962747097, 0.09876542538404465, 0.0634920597076416, 0.12121211737394333, 0.11940298229455948, 0.21276594698429108, 0.17142856121063232, 0.12765957415103912, 0.0952380895614624, 0.08163265138864517, 0.1875, 0.20408162474632263, 0.12121211737394333, 0.0624999962747097, 0.11764705181121826, 0.04651162400841713, 0.1666666567325592, 0, 0.1249999925494194, 0.11764705181121826, 0.1538461446762085, 0.1428571343421936, 0.1621621549129486, 0.1428571343421936, 0.13793103396892548, 0.1304347813129425, 0, 0.21621620655059814, 0.1764705777168274, 0.1304347813129425, 0.07692307233810425, 0.05882352590560913, 0.06896550953388214, 0.052631575614213943, 0.37837836146354675, 0.06666666269302368, 0.12765957415103912, 0.1875, 0.10526315122842789, 0, 0, 0.12903225421905518, 0, 0.09090908616781235, 0.0714285671710968, 0.11428570747375488, 0, 0.08163265138864517, 0.1599999964237213, 0.10526315122842789 ]
B1xY-hRctX
true
[ "We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning." ]
[ "Sequence-to-sequence (seq2seq) neural models have been actively investigated for abstractive summarization.", "Nevertheless, existing neural abstractive systems frequently generate factually incorrect summaries and are vulnerable to adversarial information, suggesting a crucial lack of semantic understanding.", "In this paper, we propose a novel semantic-aware neural abstractive summarization model that learns to generate high quality summaries through semantic interpretation over salient content.", "A novel evaluation scheme with adversarial samples is introduced to measure how well a model identifies off-topic information, where our model yields significantly better performance than the popular pointer-generator summarizer.", "Human evaluation also confirms that our system summaries are uniformly more informative and faithful as well as less redundant than the seq2seq model.", "Automatic text summarization holds the promise of alleviating the information overload problem BID13 .", "Considerable progress has been made over decades, but existing summarization systems are still largely extractive-important sentences or phrases are identified from the original text for inclusion in the output BID22 .", "Extractive summaries thus unavoidably suffer from redundancy and incoherence, leading to the need for abstractive summarization methods.", "Built on the success of sequence-to-sequence (seq2seq) learning models BID35 , there has been a growing interest in utilizing a neural framework for abstractive summarization BID28 BID20 BID41 BID36 BID5 .Although", "current state-of-the-art neural models naturally excel at generating grammatically correct sentences, the model structure and learning objectives have intrinsic difficulty in acquiring semantic interpretation of the input text, which is crucial for summarization. Importantly", ", the lack of semantic understanding causes existing systems to produce unfaithful generations. BID3 report", "that about 30% of the summaries generated from a seq2seq model contain fabricated or nonsensical information.Furthermore, current neural summarization systems can be easily fooled by off-topic information. For instance", ", FIG0 shows one example where irrelevant sentences are added into an article about \"David Collenette's resignation\". Both the seq2seq", "attentional model BID20 and the popular pointer-generator model BID31 are particularly susceptible to unfaithful generation, partially because these models tend to rely on sentences at the beginning of the articles for summarization while being ignorant about their content. Therefore, we design", "a novel adversarial evaluation metric to measure the robustness of each summarizer against small amounts of randomly inserted topic-irrelevant information. The intuition is that", "if a summarization system truly understands the salient entities and events, it would ignore unrelated content.32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada.Article Snippet: For years Joe DiMaggio was always introduced at Yankee Stadium as \"baseball's greatest living player.\" But with his memory joining", "those of Babe Ruth, Lou Gehrig, Mickey Mantle and Miller Huggins. Canada's Minister of Defense", "resigned today, a day after an army official testified that top military officials had altered documents to cover up responsibility for the beating death of a Somali teen-ager at the hands of Canadian peacekeeping troops in 1993. 
Defense minister David Collenette", "insisted that his resignation had nothing to do with the Somalia scandal. Ted Williams was the first name to", "come to mind, and he's the greatest living hitter. ... Seq2seq: George Vecsey sports", "of", "The Times column on New York State's naming of late baseball legend Joe DiMaggio as \"baseball's greatest living player,\" but with his memory joining those of Babe Ruth, Lou Gehrig, Mickey Mantle and Miller dens. Pointer-generator: Joe DiMaggio", "is first name to come to mind, and Ted Williams is first name to come to mind, and he's greatest living hitter; he will be replaced by human resources minister, Doug Young, and will keep his Parliament seat for governing Liberal Party. Our Model: Former Canadian Defense", "Min David Collenette resigns day after army official testifies that top military officials altered documents to cover up responsibility for beating death of Somali teen-ager at hands of Canadian peacekeeping troops in 1993. To address the above issues, we propose", "a novel semantic-aware abstractive summarization model, inspired by the human process of writing summaries-important events and entities are first identified, and then used for summary construction. Concretely, taking an article as input,", "our model first generates a set of summary-worthy semantic structures consisting of predicates and corresponding arguments (as in semantic parsing), then constructs a fluent summary reflecting the semantic information. Both tasks are learned under an encoder-decoder", "architecture with new learning objectives. A dual attention mechanism for summary decoding", "is designed to consider information from both the input article and the generated predicate-argument structures. We further present a novel decoder with a segment-based", "reranking strategy to produce diverse hypotheses and reduce redundancy under the guidance of generated semantic information.Evaluation against adversarial samples shows that while performance by the seq2seq attentional model and the pointer-generator model is impacted severely by even a small addition of topic-irrelevant information to the input, our model is significantly more robust and consistently produces more on-topic summaries (i.e. higher ROUGE and METEOR scores for standard automatic evaluation). Our model also achieves significantly better ROUGE and", "METEOR scores than both models on the benchmark dataset CNN/Daily Mail BID11 . Specifically, our model's summaries use substantially", "fewer and shorter extractive fragments than the comparisons and have less redundancy, alleviating another common problem for the seq2seq framework. 
Human evaluation demonstrates that our model generates", "more informative and faithful summaries than the seq2seq model.", "Usage of Semantic Roles in Summaries.", "We examine the utility of the generated semantic roles.", "Across all models, approximately 44% of the generated predicates are part of the reference summary, indicating the adequacy of our semantic decoder.", "Furthermore, across all models, approximately 65% of the generated predicates are reused by the generated summary, and approximately 53% of the SRL structures are reused by the system using a strict matching constraint, in which the predicate and head words for all arguments must match in the summary.", "When gold-standard semantic roles are used for dual attention in place of our system generations, ROUGE scores increase by about half a point, indicating that improving semantic decoder in future work will further enhance the summaries.Coverage.", "We also conduct experiments using a coverage mechanism similar to the one used in BID31 .", "We apply our coverage in two places: (1) over the input to handle redundancy, and (2) over the generated semantics to promote its reuse in the summary.", "However, no significant difference is observed.", "Our proposed reranker handles both issues in a more explicit way, and does not require the additional training time used to learn coverage parameters.Alternative Semantic Representation.", "Our summarization model can be trained with other types of semantic information.", "For example, in addition to using the salient semantic roles from the input article, we also explore using SRL parses of the reference abstracts as training signals, but the higher level of abstraction required for semantic generation hurts performance by two ROUGE points for almost all models, indicating the type of semantic structure matters greatly for the ultimate summarization task.For future work, other semantic representation along with novel model architecture will be explored.", "For instance, other forms of semantic representation can be considered, such as frame semantics BID1 or Abstract Meaning Representation (AMR) BID2 .", "Although previous work by has shown that seq2seq models are able to successfully generate linearized tree structures, we may also consider generating semantic roles with a hierarchical semantic decoder BID34 .", "We presented a novel semantic-aware neural abstractive summarization model that jointly learns summarization and semantic parsing.", "A novel dual attention mechanism was designed to better capture the semantic information for summarization.", "A reranking-based decoder was proposed to promote the content coverage.", "Our proposed adversarial evaluation demonstrated that our model was more adept at handling irrelevant information compared to popular neural summarization models.", "Experiments on two large-scale news corpora showed that our model yielded significantly more informative, less redundant, and less extractive summaries.", "Human evaluation further confirmed that our summaries were more informative and faithful than comparisons." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.1764705777168274, 0.21739129722118378, 0.375, 0.42307692766189575, 0.2222222238779068, 0.11428570747375488, 0.07843136787414551, 0.19999998807907104, 0.15094339847564697, 0.1428571343421936, 0, 0.307692289352417, 0, 0.09999999403953552, 0.260869562625885, 0.08219178020954132, 0.05405404791235924, 0.06557376682758331, 0.04999999329447746, 0.05405404791235924, 0.03448275476694107, 0.03389830142259598, 0.06451612710952759, 0.2222222238779068, 0.145454540848732, 0, 0.2666666507720947, 0.1927710771560669, 0, 0.1666666567325592, 0.1249999925494194, 0, 0.06451612710952759, 0, 0.07017543166875839, 0.06896550953388214, 0.10526315122842789, 0.08888888359069824, 0, 0.07999999821186066, 0.17142856121063232, 0.09638553857803345, 0, 0.07692307233810425, 0.5263158082962036, 0.15789473056793213, 0, 0.3181818127632141, 0.1428571343421936, 0.1621621549129486 ]
rkgsd5ebjQ
true
[ "We propose a semantic-aware neural abstractive summarization model and a novel automatic summarization evaluation scheme that measures how well a model identifies off-topic information from adversarial samples." ]
[ "The recent work of Super Characters method using two-dimensional word embedding achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach.", "This paper borrows the idea of Super Characters method and two-dimensional embedding, and proposes a method of generating conversational response for open domain dialogues.", "The experimental results on a public dataset shows that the proposed SuperChat method generates high quality responses.", "An interactive demo is ready to show at the workshop.", "And code will be available at github soon.", "Dialogue systems are important to enable machine to communicate with human through natural language.", "Given an input sentence, the dialogue system outputs the response sentence in a natural way which reads like humantalking.", "Previous work adopts an encoder-decoder architecture BID9 , and also the improved architectures with attention scheme added BID0 BID10 BID11 .", "In architectures with attention, the input sentence are encoded into vectors first, and then the encoded vectors are weighted by the attention score to get the context vector.", "The concatenation of the context vector and the previous output vector of the decoder, is fed into the decoder to predict the next words iteratively.", "Generally, the encoded vectors, the context vector, and the decoder output vector are all one-dimensional embedding, i.e. an array of real-valued numbers.", "The models used in decoder and encoder usually adopt RNN networks, such as bidirectional GRU BID0 Preliminary work.", "Under review by the International Conference on Machine Learning (ICML).", "Do not distribute.", "BID3 , LSTM BID6 , and bidirectional LSTM BID11 .", "However, the time complexity of the encoding part is very expensive.The recent work of Super Characters method has obtained state-of-the-art result for text classification on benchmark datasets in different languages, including English, Chinese, Japanese, and Korean.", "The Super Characters method is a two-step method.", "In the first step, the characters of the input text are drawn onto a blank image.", "Each character is represented by the two-dimensional embedding, i.e. an matrix of real-valued numbers.", "And the resulting image is called a Super Characters image.", "In the second step, Super Characters images are fed into a twodimensional CNN models for classification.", "Examples of two-dimensional CNN models are used in Computer Vison (CV) tasks, such as VGG BID7 , ResNet BID4 , SE-net BID5 and etc. in ImageNet BID1 .In", "this paper, we propose the SuperChat method for dialogue generation using the two-dimensional embedding. It", "has no encoding phase, but only has the decoding phase. The", "decoder is fine-tuned from the pretrained two-dimensional CNN models in the ImageNet competition. 
For", "each iteration of the decoding, the image of text through two-dimensional embedding of both the input sentence and the partial response sentence is directly fed into the decoder, without any compression into a concatenated vector as done in the previous work.", "In this paper, we propose the SuperChat method for dialogue response generation.", "It has no encoding, but only decodes the two-dimensional embedding of the input sentence and partial response sentence to predict the next response word iteratively.", "The pretrained two-dimensional CNN model is fine-tuned with the generated SuperChat images.", "The experimental results shows high quality response.", "An interactive demonstration is to show at the workshop TAB0 Submission and Formatting Instructions for ICML 2019 TAB0 .", "Sample response sentences generated by the SuperChat method on the Simsimi data set." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ 0.09302324801683426, 0.14999999105930328, 0.0555555522441864, 0.13793103396892548, 0, 0.0624999962747097, 0.2702702581882477, 0.1538461446762085, 0.24390242993831635, 0.2631579041481018, 0.14999999105930328, 0.05405404791235924, 0.06896550953388214, 0, 0.07692307233810425, 0.07407406717538834, 0, 0.24242423474788666, 0.11764705181121826, 0.1428571343421936, 0.11428570747375488, 0.13333332538604736, 0.060606054961681366, 0.06896550953388214, 0.25, 0.23529411852359772, 0.12903225421905518, 0.44999998807907104, 0.25806450843811035, 0.07692307233810425, 0.1666666567325592, 0.12903225421905518 ]
HyebHcBs3V
true
[ "Print the input sentence and current response sentence onto an image and use fine-tuned ImageNet CNN model to predict the next response word." ]
[ "Convolutional neural networks (CNNs) have been generally acknowledged as one of the driving forces for the advancement of computer vision.", "Despite their promising performances on many tasks, CNNs still face major obstacles on the road to achieving ideal machine intelligence.", "One is that CNNs are complex and hard to interpret.", "Another is that standard CNNs require large amounts of annotated data, which is sometimes very hard to obtain, and it is desirable to be able to learn them from few examples.", "In this work, we address these limitations of CNNs by developing novel, simple, and interpretable models for few-shot learn- ing. Our models are based on the idea of encoding objects in terms of visual concepts, which are interpretable visual cues represented by the feature vectors within CNNs.", "We first adapt the learning of visual concepts to the few-shot setting, and then uncover two key properties of feature encoding using visual concepts, which we call category sensitivity and spatial pattern.", "Motivated by these properties, we present two intuitive models for the problem of few-shot learning.", "Experiments show that our models achieve competitive performances, while being much more flexible and interpretable than alternative state-of-the-art few-shot learning methods.", "We conclude that using visual concepts helps expose the natural capability of CNNs for few-shot learning.", "After their debut BID13 BID13 have played an ever increasing role in computer vision, particularly after their triumph BID11 on the ImageNet challenge BID3 .", "Some researchers have even claimed that CNNs have surpassed human-level performance BID8 , although other work suggests otherwise .", "Recent studies also show that CNNs are vulnerable to adversarial attacks BID6 .", "Nevertheless, the successes of CNNs have inspired the computer vision community to develop more sophisticated models BID9 BID21 .But", "despite the impressive achievements of CNNs we only have limited insights into why CNNs are effective. The", "ever-increasing depth and complicated structures of CNNs makes them very difficult to interpret while the non-linear nature of CNNs makes it very hard to perform theoretical analysis. In", "addition, CNNs traditionally require large annotated datasets which is problematic for many real world applications. We", "argue that the ability to learn from a few examples, or few-shot learning, is a characteristic of human intelligence and is strongly desirable for an ideal machine learning system. The", "goal of this paper is to develop an approach to few-shot learning which builds on the successes of CNNs but which is simple and easy to interpret. We", "start from the intuition that objects can be represented in terms of spatial patterns of parts which implies that new objects can be learned from a few examples if they are built from parts that are already known, or which can be learned from a few examples. We", "recall that previous researchers have argued that object parts are represented by the convolutional layers of CNNs BID29 BID14 provided the CNNs are trained for object detection. More", "specifically, we will build on recent work BID23 which learns a dictionary of Visual Concepts (VCs) from CNNs representing object parts, see FIG0 . 
It has", "been shown that these VCs can be combined to detect semantic parts BID24 and, in work in preparation, can be used to represent objects using VC-Encoding (where In general, these patches roughly correspond to semantic parts of objects, e.g., the cushion of a sofa (a), the", "side windows of trains (b) and", "the wheels of bicycles (c). All", "VCs", "are referred to by their indices (e.g., VC 139). We stress", "that VCs are learned in an unsupervised manner and terms like\"sofa cushion\" are inferred by observing the closest image patches and are used to describe them informally.objects are represented by binary codes of VCs). This suggests", "that we can use VCs to represent new objects in terms of parts hence enabling few-shot learning.But it is not obvious that VCs, as described in BID24 , can be applied to few-shot learning. Firstly, these", "VCs were learned independently for each object category (e.g., for cars or for airplanes) using deep network features from CNNs which had already been trained on data which included these categories. Secondly, the", "VCs were learned using large numbers of examples of the object category, ranging from hundreds to thousands. By contrast,", "for few-shot learning we have to learn the VCs from a much smaller number of examples (by an order of magnitude or more). Moreover, we", "can only use deep network features which are trained on datasets which do not include the new object categories which we hope to learn. This means that", "although we will extract VCs using very similar algorithms to those in BID23 our motivation and problem domain are very different. To summarize, in", "this paper we use VCs to learn models of new object categories from existing models of other categories, while BID23 uses VCs to help understand CNNs and to perform unsupervised part detection.In Section 3, we will review VCs in detail. Briefly speaking", ", VCs are extracted by clustering intermediate-level raw features of CNNs, e.g., features produced by the Pool-4 layer of VGG16 BID19 . Serving as the", "cluster centers in feature space, VCs divide intermediate-level deep network features into a discrete dictionary. We show that VCs", "can be learned in the few-shot learning setting and they have two desirable properties when used for image encoding, which we call category sensitivity and spatial patterns.More specifically, we develop an approach to few-shot learning which is simple, interpretable, and flexible. We learn a dictionary", "of VCs as described above which enables us to represent novel objects by their VC-Encoding. Then we propose two intuitive", "models: (i) nearest neighbor and (ii)", "a factorizable likelihood", "model based on the VC-Encoding. The nearest neighbor model uses", "a similarity measure to capture the difference between two VC-Encodings. The factorizable likelihood model", "learns a likelihood function of the VC-Encoding which, by assuming spatial independence, can be learned form a few examples. We emphasize that both these models", "are very flexible, in the sense that they can be applied directly to any few-shot learning scenarios. This differs from other approaches", "which are trained specifically for scenarios such as 5-way 5-shot (where there are 5 new object categories with 5 examples of each). This flexibility is attractive for", "real world applications where the numbers of new object categories, and the number of examples of each category, will be variable. 
Despite their simplicity, these models", "achieve comparable results to the state-of-theart few-shot learning methods (using only the simplest versions of our approach), such as learning a metric and learning to learn. From a deeper perspective, our results", "show that CNNs have the potential for few-shot learning on novel categories but to achieve this potential requires studying the internal structures of CNNs to re-express them in simpler and more interpretable terms.Overall, our major contributions are two-fold:(1) We show that VCs can be learned in the few-shot setting using CNNs trained on other object categories. By encoding images using VCs, we observe", "two desirable properties, i.e., category sensitivity and spatial patterns. (2) Based on these properties, we present", "two simple, interpretable, and flexible models for fewshot learning. These models yield competitive results compared", "to the state-of-the-art methods on specific few-shot learning tasks and can also be applied directly, without additional training, to other few-shot scenarios.", "In this paper we address the challenge of developing simple interpretable models for few-shot learning exploiting the internal representations of CNNs.", "We are motivated by VCs BID23 which enable us to represent objects in terms of VC-Encodings.", "We show that VCs can be adapted to the few-shot learning setting where the VCs are extracted from a small set of images of novel object categories using features from CNNs trained on other object categories.", "We observe two properties of VC-Encoding, namely category sensitivity and spatial pattern, which leads us to propose two novel, but closely related, methods for few-shot learning which are simple, interpretable, and flexible.", "Our methods show comparable performances to the current state-of-the-art methods which are specialized for specific few-shot learning scenarios.", "We demonstrate the flexibility of our two models by showing that they can be applied to a range of different few-shot scenarios with minimal re-training.", "In summary, we show that VCs and VC-Encodings enable ordinary CNNs to perform few-shot learning.", "We emphasize that in this paper we have concentrated on developing the core ideas of our two few-shot learning models and that we have not explored variants of our ideas which could lead to better performance by exploiting standard performance enhancing tricks, or by specializing to specific few-shot challenges.", "Future work includes improving the quality of the extracted VCs and extending our approach to few-shot detection." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.05714285373687744, 0.0555555522441864, 0.14814814925193787, 0.09090908616781235, 0.3636363446712494, 0.2666666507720947, 0.25, 0.15789473056793213, 0.42424240708351135, 0, 0.05882352590560913, 0.13793103396892548, 0.05714285373687744, 0.12121211737394333, 0.04999999329447746, 0.24242423474788666, 0.13333332538604736, 0.25, 0.12765957415103912, 0.19999998807907104, 0.0952380895614624, 0, 0, 0, 0.19999998807907104, 0.08163265138864517, 0.0833333283662796, 0.12244897335767746, 0, 0.14999999105930328, 0.0952380895614624, 0.05128204822540283, 0.037735845893621445, 0.10526315122842789, 0.05714285373687744, 0.17543859779834747, 0.10810810327529907, 0, 0, 0, 0, 0.09756097197532654, 0.1538461446762085, 0.1463414579629898, 0, 0.0952380895614624, 0.20588235557079315, 0, 0.12903225421905518, 0.10810810327529907, 0.3333333134651184, 0.3030303120613098, 0.21276594698429108, 0.260869562625885, 0.29411762952804565, 0.1463414579629898, 0.3125, 0.2142857164144516, 0.060606054961681366 ]
BJ_QxP1AZ
true
[ "We enable ordinary CNNs for few-shot learning by exploiting visual concepts which are interpretable visual cues learnt within CNNs." ]
[ "Recently various neural networks have been proposed for irregularly structured data such as graphs and manifolds.", "To our knowledge, all existing graph networks have discrete depth.", "Inspired by neural ordinary differential equation (NODE) for data in the Euclidean domain, we extend the idea of continuous-depth models to graph data, and propose graph ordinary differential equation (GODE).", "The derivative of hidden node states are parameterized with a graph neural network, and the output states are the solution to this ordinary differential equation.", "We demonstrate two end-to-end methods for efficient training of GODE: (1) indirect back-propagation with the adjoint method; (2) direct back-propagation through the ODE solver, which accurately computes the gradient.", "We demonstrate that direct backprop outperforms the adjoint method in experiments.", "We then introduce a family of bijective blocks, which enables $\\mathcal{O}(1)$ memory consumption.", "We demonstrate that GODE can be easily adapted to different existing graph neural networks and improve accuracy.", "We validate the performance of GODE in both semi-supervised node classification tasks and graph classification tasks.", "Our GODE model achieves a continuous model in time, memory efficiency, accurate gradient estimation, and generalizability with different graph networks.", "Convolutional neural networks (CNN) have achieved great success in various tasks, such as image classification (He et al., 2016) and segmentation (Long et al., 2015) , video processing (Deng et al., 2014) and machine translation (Sutskever et al., 2014) .", "However, CNNs are limited to data that can be represented by a grid in the Euclidean domain, such as images (2D grid) and text (1D grid), which hinders their application in irregularly structured datasets.", "A graph data structure represents objects as nodes and relations between objects as edges.", "Graphs are widely used to model irregularly structured data, such as social networks (Kipf & Welling, 2016) , protein interaction networks (Fout et al., 2017) , citation and knowledge graphs (Hamaguchi et al., 2017) .", "Early works use traditional methods such as random walk (Lovász et al., 1993) , independent component analysis (ICA) (Hyvärinen & Oja, 2000) and graph embedding (Yan et al., 2006) to model graphs, however their performance is inferior due to the low expressive capacity.", "Recently a new class of models called graph neural networks (GNN) (Scarselli et al., 2008) were proposed.", "Inspired by the success of CNNs, researchers generalize convolution operations to graphs to capture the local information.", "There are mainly two types of methods to perform convolution on a graph: spectral methods and non-spectral methods.", "Spectral methods typically first compute the graph Laplacian, then perform filtering in the spectral domain (Bruna et al., 2013) .", "Other methods aim to approximate the filters without computing the graph Laplacian for faster speed (Defferrard et al., 2016) .", "For non-spectral methods, the convolution operation is directly performed in the graph domain, aggregating information only from the neighbors of a node (Duvenaud et al., 2015; Atwood & Towsley, 2016) .", "The recently proposed GraphSAGE (Hamilton et al., 2017 ) learns a convolution kernel in an inductive manner.", "To our knowledge, all existing GNN models mentioned above have a structure of discrete layers.", "The discrete structure makes it hard for the GNN to model continuous diffusion processes 
(Freidlin & Wentzell, 1993; Kondor & Lafferty, 2002) in graphs.", "The recently proposed neural ordinary differential equation (NODE) ) views a neural network as an ordinary differential equation (ODE), whose derivative is parameterized by the network, and the output is the solution to this ODE.", "We extend NODE from the Euclidean domain to graphs and propose graph ordinary differential equations (GODE), where the message propagation on a graph is modeled as an ODE.", "NODEs are typically trained with adjoint method.", "NODEs have the advantages of adaptive evaluation, accuracy-speed control by changing error tolerance, and are free-form continuous invertible models Grathwohl et al., 2018) .", "However, to our knowledge, in benchmark image classification tasks, NODEs are significantly inferior to state-of-the-art discrete-layer models (error rate: 19% for NODE vs 7% for ResNet18 on CIFAR10) (Dupont et al., 2019; Gholami et al., 2019) .", "In this work, we show this is caused by error in gradient estimation during training of NODE, and propose a memory-efficient framework for accurate gradient estimation.", "We demonstrate our framework for free-form ODEs generalizes to various model structures, and achieves high accuracy for both NODE and GODE in benchmark tasks.", "Our contribution can be summarized as follows:", "1. We propose a framework for free-form NODEs to accurately estimate the gradient, which is fundamental to deep-learning models.", "Our method significantly improves the performance on benchmark classification (reduces test error from 19% to 5% on CIFAR10).", "2. Our framework is memory-efficient for free-form ODEs.", "When applied to restricted-form invertible blocks, the model achieves constant memory usage.", "3. We generalize ODE to graph data and propose GODE models.", "4. We demonstrate improved performance on different graph models and various datasets.", "We propose GODE, which enables us to model continuous diffusion process on graphs.", "We propose a memory-efficient direct back-propagation method to accurately determine the gradient for general free-form NODEs, and validate its superior performance on both image classification tasks and graph data.", "Furthermore, we related the over-smoothing of GNN to asymptotic stability of ODE.", "Our paper tackles the fundamental problem of gradient estimation for NODE; to our knowledge, it's the first paper to improve accuracy on benchmark tasks to comparable with state-of-the-art discrete layer models.", "It's an important step to apply NODE from theory to practice.", "A DATASETS", "We perform experiments on various datasets, including citation networks (Cora, CiteSeer, PubMed), social networks (COLLAB, IMDB-BINARY, REDDIT-BINARY), and bioinformatics datasets (MUTAG, PROTEINS).", "Details of each dataset are summarized in Table 1 .", "We explain the structure and conduct experiments for the invertible block here.", "Structure of invertible blocks Structure of invertible blocks are shown in Fig. 1 .", "We follow the work of Gomez et al. (2017) with two important modifications: (1) We generalize to a family of bijective blocks with different ψ in Eq. 8 in the main paper, while Gomez et al. 
(2017) restrict the form of ψ to be a sum.", "(2) We propose a parameter state checkpoint method, which enables bijective blocks to be called more than once, while still generating accurate inversions.", "The algorithm is summarized in Algo.", "2.", "We write the pseudo-code for the forward and backward functions as in PyTorch.", "Note that we use \"inversion\" to represent reconstructing the input from the output, and use \"backward\" to denote calculation of the gradient.", "To reduce memory consumption, in the forward function we only keep the outputs y_1, y_2 and delete all other variables and computation graphs.", "In the backward function, we first \"invert\" the block to calculate x_1, x_2 from y_1, y_2, then perform a local forward pass and calculate the gradient with respect to [x_1, x_2]." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1599999964237213, 0.10526315122842789, 0.29411762952804565, 0.25806450843811035, 0, 0, 0, 0.07692307233810425, 0.08695651590824127, 0.1428571343421936, 0, 0.0952380895614624, 0.1904761791229248, 0.10526315122842789, 0.08163265138864517, 0.07407406717538834, 0, 0.07999999821186066, 0.0714285671710968, 0.0714285671710968, 0.052631575614213943, 0, 0, 0.0624999962747097, 0.1621621549129486, 0.22857142984867096, 0, 0, 0.0476190447807312, 0, 0.06451612710952759, 0, 0, 0.07692307233810425, 0, 0.0952380895614624, 0.19999998807907104, 0.1904761791229248, 0.1818181723356247, 0.1621621549129486, 0, 0.0555555522441864, 0, 0.06666666269302368, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJg9z6VFDr
true
[ "Apply ordinary differential equation model on graph structured data" ]
[ "Unsupervised image-to-image translation is a recently proposed task of translating an image to a different style or domain given only unpaired image examples at training time.", "In this paper, we formulate a new task of unsupervised video-to-video translation, which poses its own unique challenges.", "Translating video implies learning not only the appearance of objects and scenes but also realistic motion and transitions between consecutive frames.", "We investigate the performance of per-frame video-to-video translation using existing image-to-image translation networks, and propose a spatio-temporal 3D translator as an alternative solution to this problem.", "We evaluate our 3D method on multiple synthetic datasets, such as moving colorized digits, as well as the realistic segmentation-to-video GTA dataset and a new CT-to-MRI volumetric images translation dataset.", "Our results show that frame-wise translation produces realistic results on a single frame level but underperforms significantly on the scale of the whole video compared to our three-dimensional translation approach, which is better able to learn the complex structure of video and motion and continuity of object appearance.", "Recent work on unsupervised image-to-image translation BID10 BID12 has shown astonishing results on tasks like style transfer, aerial photo to map translation, day-to-night photo translation, unsupervised semantic image segmentation and others.", "Such methods learn from unpaired examples, avoiding tedious data alignment by humans.", "In this paper, we propose a new task of unsupervised video-to-video translation, i.e. learning a mapping from one video domain to another while preserving high-level semantic information of the original video using large numbers of unpaired videos from both domains.", "Many computer vision tasks can be formulated as video-to-video translation, e.g., semantic segmentation, video colorization or quality enhancement, or translating between MRI and CT volumetric data (illustrated in FIG0 ).", "Moreover, motion-centered tasks such as action recognition and tracking can greatly benefit from the development of robust unsupervised video-to-video translation methods that can be used out-of-the-box for domain adaptation.Since a video can be viewed as a sequence of images, one natural approach is to use an image-toimage translation method on each frame, e.g., applying a state-of-art method such as CycleGAN , CoGAN BID10 or UNIT BID12 .", "Unfortunately, these methods cannot preserve continuity and consistency of a video when applied frame-wise.", "For example, colorization of an object may have multiple correct solutions for a single input frame, since some objects such as cars can have different colors.", "Therefore, there is no guarantee that an object would preserve its color if translation is performed on the frame level frame.In this paper, we propose to translate an entire video as a three-dimensional tensor to preserve its cross-frame consistency and spatio-temporal structure.", "We employ multiple datasets and metrics to evaluate the performance of our proposed video-to-video translation model.", "Our synthetic datasets include videos of moving digits of different colors and volumetric images of digits imitating medical scans.", "We also perform more realistic segmentation-to-RGB and colorization experiments on the GTA dataset BID14 , and propose a new MRI-to-CT dataset for medical volumetric image translation, which to our knowledge is the first open medical dataset for 
unsupervised volumeto-volume translation.", "We propose the task of unsupervised video-to-video translation.", "Left: Results of MR-to-CT translation.", "Right: moving MNIST digits colorization.", "Rows show per-frame CycleGAN (2D) and our spatio-temporal extension (3D).", "Since CycleGAN takes into account information only from the current image, it produces reasonable results on the image level but fails to preserve the shape and color of an object throughout the video.", "Best viewed in color.Figure 2: Results of GTA video colorization show that per-frame translation of videos does not preserve constant colours of objects within the whole sequence.", "We provide more results and videos in the supplementary video: https://bit.ly/2R5aGgo.", "Best viewed in color.Our extensive experiments show that the proposed 3D convolutional model provides more accurate and stable video-to-video translation compared to framewise translation with various settings.", "We also investigate how the structure of individual batches affects the training of framewise translation models, and find that structure of a batch is very important for stable translation contrary to an established practice of shuffling training data to avoid overfitting in deep models BID3 .To", "summarize, we make the following main contributions: 1)", "a new unsupervised video-to-video translation task together with both realistic and synthetic proof-of-concept datasets; 2)", "a spatiotemporal video translation model based on a 3D convnet that outperforms per-frame methods in Figure 3 : Our model consists of two generator networks (F and G) that learn to translate input volumetric images from one domain to another, and two discriminator networks (D A and D B ) that aim to distinguish between real and fake inputs. Additional", "cycle consistency property requires that the result of translation to the other domain and back is equal to the input video, DISPLAYFORM0 all experiments, according to human and automatic metrics, and 3) an additional", "analysis of how performance of per-frame methods depends on the structure of training batches.", "We proposed a new computer vision task of unsupervised video-to-video translation as well as datasets, metrics and multiple baselines: multiple approaches to framewise translation using imageto-image CycleGAN and its spatio-temporal extension 3D CycleGAN.", "The results of exhaustive experiments show that per-frame approaches cannot capture the essential properties of videos, such as global motion patterns and shape and texture consistency of translated objects.", "However, contrary to the previous practice, sequential batch selection helps to reduce motion artifacts." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0, 0.052631575614213943, 0.04999999329447746, 0.08888888359069824, 0.12765957415103912, 0.10344827175140381, 0.04255318641662598, 0.0624999962747097, 0.0357142798602581, 0.07843136787414551, 0.07792207598686218, 0.11764705181121826, 0.04444443807005882, 0.10526315122842789, 0.1111111044883728, 0.1111111044883728, 0.07407406717538834, 0, 0, 0, 0.13333332538604736, 0.11999999731779099, 0.04347825422883034, 0.1249999925494194, 0.12765957415103912, 0.13793103396892548, 0, 0.11428570747375488, 0.11594202369451523, 0.0416666604578495, 0.1249999925494194, 0.1666666567325592, 0.08695651590824127, 0.060606054961681366 ]
SkgKzh0cY7
true
[ "Proposed new task, datasets and baselines; 3D Conv CycleGAN preserves object properties across frames; batch structure in frame-level methods matters." ]
[ "Capsule networks are constrained by the parameter-expensive nature of their layers, and the general lack of provable equivariance guarantees.", "We present a variation of capsule networks that aims to remedy this.", "We identify that learning all pair-wise part-whole relationships between capsules of successive layers is inefficient.", "Further, we also realise that the choice of prediction networks and the routing mechanism are both key to equivariance.", "Based on these, we propose an alternative framework for capsule networks that learns to projectively encode the manifold of pose-variations, termed the space-of-variation (SOV), for every capsule-type of each layer.", "This is done using a trainable, equivariant function defined over a grid of group-transformations.", "Thus, the prediction-phase of routing involves projection into the SOV of a deeper capsule using the corresponding function.", "As a specific instantiation of this idea, and also in order to reap the benefits of increased parameter-sharing, we use type-homogeneous group-equivariant convolutions of shallower capsules in this phase.", "We also introduce an equivariant routing mechanism based on degree-centrality.", "We show that this particular instance of our general model is equivariant, and hence preserves the compositional representation of an input under transformations.", "We conduct several experiments on standard object-classification datasets that showcase the increased transformation-robustness, as well as general performance, of our model to several capsule baselines.", "The hierarchical component-structure of visual objects motivates their description as instances of class-dependent spatial grammars.", "The production-rules of such grammars specify this structure by laying out valid type-combinations for components of an object, their inter-geometry, as well as the behaviour of these with respect to transformations on the input.", "A system that aims to truly understand a visual scene must accurately learn such grammars for all constituent objects -in effect, learning their aggregational structures.", "One means of doing so is to have the internal representation of a model serve as a component-parsing of an input across several semantic resolutions.", "Further, in order to mimic latent compositionalities in objects, such a representation must be reflective of detected strengths of possible spatial relationships.", "A natural structure for such a representation is a parse-tree whose nodes denote components, and whose weighted parent-child edges denote the strengths of detected aggregational relationships.", "Capsule networks (Hinton et al., 2011) , (Sabour et al., 2017) are a family of deep neural networks that aim to build such distributed, spatially-aware representations in a multi-class setting.", "Each layer of a capsule network represents and detects instances of a set of components (of a visual scene) at a particular semantic resolution.", "It does this by using vector-valued activations, termed 'capsules'.", "Each capsule is meant to be interpreted as being representative of a set of generalised pose-coordinates for a visual object.", "Each layer consists of capsules of several types that may be instantiated at all spatial locations depending on the nature of the image.", "Thus, given an image, a capsule network provides a description of its components at various 'levels' of semantics.", "In order that this distributed representation across layers be an accurate component-parsing of a visual scene, and 
capture meaningful and inherent spatial relationships, deeper capsules are constructed from shallower capsules using a mechanism that combines backpropagation-based learning, and consensus-based heuristics.", "Briefly, the mechanism of creating deeper capsules from a set of shallower capsules is as follows.", "Each deeper capsule of a particular type receives a set of predictions for its pose from a local pool of shallower capsules.", "This happens by using a set of trainable neural networks that the shallower capsules are given as input into.", "These networks can be interpreted as aiming to capture possible part-whole relationships between the corresponding deeper and shallower capsules.", "The predictions thus obtained are then combined in a manner that ensures that the result reflects agreement among them.", "This is so that capsules are activated only when their component-capsules are in the right spatial relationship to form an instance of the object-type it represents.", "The agreement-based aggregation described just now is termed 'routing'.", "Multiple routing algorithms exist, for example dynamic routing (Sabour et al., 2017) , EM-routing (Hinton et al., 2018) , SVD-based routing (Bahadori, 2018) , and routing based on a clustering-like objective function (Wang & Liu, 2018) .", "Based on their explicit learning of compositional structures, capsule networks can be seen as an alternative (to CNNs) for better learning of compositional representations.", "Indeed, CNN-based models do not have an inherent mechanism to explicitly learn or use spatial relationships in a visual scene.", "Further, the common use of layers that enforce local transformation-invariance, such as pooling, further limit their ability to accurately detect compositional structures by allowing for relaxations in otherwise strict spatial relations (Hinton et al., 2011) .", "Thus, despite some manner of hierarchical learning -as seen in their layers capturing simpler to more complex features as a function of depth -CNNs do not form the ideal representational model we seek.", "It is our belief that capsule-based models may serve us better in this regard.", "This much said, research in capsule networks is still in its infancy, and several issues have to be overcome before capsule networks can become universally applicable like CNNs.", "We focus on two of these that we consider as fundamental to building better capsule network models.", "First, most capsule-network models, in their current form, do not scale well to deep architectures.", "A significant factor is the fact that all pair-wise relationships between capsules of two layers (upto a local pool) are explicitly modelled by a unique neural network.", "Thus, for a 'convolutional capsule' layer -the number of trainable neural networks depends on the product of the spatial extent of the windowing and the product of the number of capsule-types of each the two layers.", "We argue that this design is not only expensive, but also inefficient.", "Given two successive capsule-layers, not all pairs of capsule-types have significant relationships.", "This is due to them either representing object-components that are part of different classes, or being just incompatible in compositional structures.", "The consequences of this inefficiency go beyond poor scalability.", "For example, due to the large number of prediction-networks in this design, only simple functions -often just matrices -are used to model part-whole relationships.", "While building deep capsule networks, such 
a linear inductive bias can be inaccurate in layers where complex objects are represented.", "Thus, for the purpose of building deeper architectures, as well as more expressive layers, this inefficiency in the prediction phase must be handled.", "The second issue with capsule networks is more theoretical, but nonetheless has implications in practice.", "This is the lack, in general, of theoretical guarantees on equivariance.", "Most capsule networks only use intuitive heuristics to learn transformation-robust spatial relations among components.", "This is acceptable, but not ideal.", "A capsule network model that can detect compositionalities in a provablyinvariant manner are more useful, and more in line with the basic motivations for capsules.", "Both of the above issues are remedied in the following description of our model.", "First, instead of learning pair-wise relationships among capsules, we learn to projectively encode a description of each capsule-type for every layer.", "This we do by associating each capsule-type with a vector-valued function, given by a trainable neural network.", "This network assumes the role of the prediction mechanism in capsule networks.", "We interpret the role of this network as a means of encoding the manifold of legal pose-variations for its associated capsule-type.", "It is expected that, given proper training, shallower capsules that have no relationship with a particular capsule-type will project themselves to a vector of low activation (for example, 2-norm), when input to the corresponding network.", "As an aside, it is this mechanism that gives the name to our model.", "We term this manifold the 'space-of-variation' of a capsule-type.", "Since, we attempt to learn such spaces at each layer, we name our model 'space-of-variation' networks (SOVNET).", "In this design, the number of trainable networks for a given layer depend on the number of capsule-types of that layer.", "As mentioned earlier, the choice of prediction networks and routing algorithm is important to having guarantees on learning transformation-invariant compositional relationships.", "Thus, in order to ensure equivariance, which we show is sufficient for the above, we use group-equivariant convolutions (GCNN) (Cohen & Welling, 2016) in the prediction phase.", "Thus, shallower capsules of a fixed type are input to a GCNN associated with a deeper capsule-type to obtain predictions for it.", "Apart from ensuring equivariance to transformations, GCNNs also allow for greater parameter-sharing (across a set of transformations), resulting in greater awareness of local object-structures.", "We argue that this could potentially improve the quality of predictions when compared to isolated predictions made by convolutional capsule layers, such as those of (Hinton et al., 2018) .", "The last contribution of this paper is an equivariant degree-centrality based routing algorithm.", "The main idea of this method is to treat each prediction for a capsule as a vertex of a graph, whose weighted edges are given by a similarity measure on the predictions themselves.", "Our method uses the softmaxed values of the degree scores of the affinity matrix of this graph as a set of weights for aggregating predictions.", "The key idea being that predictions that agree with a majority of other predictions for the same capsule get a larger weight -following the principle of routing-by-agreement.", "While this method is only heuristic in the sense of optimality, it is provably equivariant and preserves the 
capsule-decomposition of an input.", "We summarise the contributions of this paper in the following:", "1. A general framework for a scalable capsule-network model.", "A number of insights can be drawn from an observation of the accuracies obtained from the experiments.", "First, the most obvious, is that SOVNET is significantly more robust to train and test-time geometric transformations of the input.", "Indeed, SOVNET learns to use even extreme transformations of the training data and generalises better to test-time transformations in a majority of the cases.", "However, in certain splits, some baselines perform better than SOVNET.", "These cases are briefly discussed below.", "On the CIFAR-10 experiments, DeepCaps performs significantly better than SOVNET on the untransformed case -generalising to test-time transformations better.", "However, SOVNET learns from train-time transformations better than DeepCaps -outperforming it in a large majority of the other cases.", "We hypothesize that the first observation is due to the increased (almost double) number of parameters of DeepCaps that allows it to learn features that generalise better to transformations.", "Further, as p4-convolutions (the prediction-mechanisms used) are equivariant only to rotations in multiples of 90°, its performance is significantly lower for test-time transformations of 30°and 60°for the untransformed case.", "However, the equivariance of SOVNET allows it to learn better from train-time geometric transforms than DeepCaps, explaining the second observation.", "The second case is that GCaps outperforms SOVNET on generalising to extreme transformations on (mainly) MNIST, and once on FashionMNIST, under mild train-time conditions.", "However, it is unable to sustain this under more extreme train-time perturbations.", "We infer that this is caused largely by the explicit geometric parameterisation of capsules in G-Caps.", "While under mild-tomoderate train-time conditions, and on simple datasets, this approach could yield better results, this parameterisation, especially with very simple prediction-mechanisms, can prove detrimental.", "Thus, the convolutional nature of the prediction-mechanisms, which can capture more complex features, and also the greater depth of SOVNET allows it to learn better from more complex training scenarios.", "This makes the case for deeper models with more expressive and equivariant prediction-mechanisms.", "A related point of interest is that G-Caps performs very poorly on the CIFAR-10 dataset -achieving the least accuracy on most cases on this dataset -despite provable guarantees on equivariance.", "We argue that this is significantly due to the nature of the capsules of this model itself.", "In GCaps, each capsule is explicitly modelled as an element of a Lie group.", "Thus, capsules capture exclusively geometric information, and use only this information for routing.", "In contrast, other capsule models have no such parameterisation.", "In the case of CIFAR-10, where non-geometric features such as texture are important, we see that purely spatio-geometric based routing is not effective.", "This observation allows us to make a more general hypothesis that could deal with the fundamentals of capsule networks.", "We propose a trade-off in capsule networks, based on the notion of equivariance.", "To appreciate this, some background is necessary on both equivariance and capsule networks.", "As the body of literature concerning equivariance is quite vast, we only mention a relevant selection 
of papers.", "Equivariance can be seen as a desirable, if not fundamental, inductive bias for neural networks used in computer vision.", "Indeed, the fact that AlexNet (Krizhevsky et al., 2012) automatically learns representation that are equivariant to flips, rotation and scaling shows the importance of equivariance as well as its natural necessity (Lenc & Vedaldi, 2015) .", "Thus, a neural network model that can formally guarantee this property is essential.", "An early work in this regard is the group-equivariant convolution proposed in (Cohen & Welling, 2016) .", "There, the authors proposed a generalisation of the 2-D spatial convolution operation to act on a general group of symmetry transforms -increasing the parameter-sharing and, thereby, improving performance.", "Since then, several other models exhibiting equivariance to certain groups of transformations have been proposed, for example (Cohen et al., 2018b) , where a spherical correlation operator that exhibits rotationequivariance was introduced; (Carlos Esteves & Daniilidis, 2017) , where a network equivariant to rotation and scale, but invariant to translations was presented, and Worrall & Brostow (2018) , where a model equivariant to translations and 3D right-angled rotations was developed.", "A general theory of equivariant CNNs was developed in (Cohen et al., 2018a) .", "In their paper, they show that convolutions with equivariant kernels are the most general class of equivariant maps between feature spaces.", "A fundamental issue with group-equivariant convolutional networks is the fact that the grid the convolution works with increases exponentially with the type of the transformations considered.", "This was pointed out in (Sabour et al., 2017) ; capsules were proposed as an efficient alternative.", "In a general capsule network model, each capsule is supposed to represent the pose-coordinates of an object-component.", "Thus, to increase the scope of equivariance, only a linear increase in the dimension of each capsule is necessary.", "This was however not formalised in most capsule architectures, which focused on other aspects such as routing (Hinton et al., 2018) , (Bahadori, 2018) , (Wang & Liu, 2018) ; general architecture , (Deliège et al., 2018) , (Rawlinson et al., 2018) , Jeong et al. (2019) , (Phaye et al., 2018) , Rosario et al. (2019) ; or application Afshar et al. 
(2018) .", "It was only in group-equivariant capsules (Lenssen et al., 2018 ) that this idea of efficient equivariance was formalised.", "Indeed, in that paper, equivariance changed from preserving the action of a group on a vector space to preserving the group-transformation on an element.", "While such models scale well to larger transformation groups in the sense of preserving equivariance guarantees, we argue that they cannot efficiently handle compositionalities that involve more than spatial geometry.", "The direct use of capsules as geometric pose-coordinates could lead to exponential representational inefficiencies in the number of capsules.", "This is the tradeoff we referred to.", "We do not attempt a formalisation of this, and instead make the observation given next.", "While SOVNET (using GCNNs) lacks in transformational efficiency, the use of convolutions allows it to capture non-geometric structures well.", "Further, SOVNET still retains the advantage of learning compositional structures better than CNN models due to the use of routing, placing it in a favourable position between two extremes.", "We presented a scalable, equivariant model for capsule networks that uses group-equivariant convolutions and degree-centrality routing.", "We proved that the model preserves detected compositionalities under transformations.", "We presented the results of experiments on affine variations of various classification datasets, and showed that our model performs better than several capsule network baselines.", "A second set of experiments showed that our model performs comparably to convolutional baselines on two other datasets.", "We also discussed a possible tradeoff between efficiency in the transformational sense and efficiency in the representation of non-geometric compositional relations.", "As future work, we aim at understanding the role of the routing algorithm in the optimality of the capsule-decomposition graph, and various other properties of interest based on it.", "We also note that SOVNET allows other equivariant prediction mechanisms -each of which could result in a wider application of SOVNET to different domains.", "Consider Algorithm 1, which is given below for convenience.", "The role of the GetW eights and Agreement procedures is to evaluate the relative importances of predictions for a deeper capsule, and the extent of consensus among them, respectively.", "The second of these is interpreted as a measure of the activation of the corresponding deeper capsule.", "A formalisation of these concepts to a general framework for even summation-based routing so as to cover all possible notions of relative importance, and consensus is not within the scope of this paper.", "Indeed, to the best of our knowledge, such a formalisation has not been successfully completed.", "Thus, instead of a formal description of a general routing procedure, we provide examples to better understand the role of these two functions.", "We first explain GetW eights, and then Agreement.", "Algorithm A general weighted-summation routing algorithm for SOVNET.", "The first example of GetW eights we provide is from the proposed degree-centrality based routing.", "The algorithm is given below, again.", "In this case, GetW eights is instantiated by the DegreeScore procedure, which assigns weights to predictions based on their normalised degree centrality scores.", "Thus, a prediction that agrees with a significant number of its peers obtains a higher importance than one that does not.", "This scheme 
follows the principle of routing-by-agreement, that aims to activate a deeper capsule only when its predicting shallower, component-capsules are in an acceptable spatial configuration (Hinton et al., 2011) .", "The above form for the summation-based routing procedure generalises for several existing routing algorithms.", "As an example, we present the dynamic routing algorithm of (Sabour et al., 2017) .", "This differs with our proposed algorithm in that it is a \"attention-based\", rather than \"agreement-based\" routing algorithm.", "That is, the relative importance of a prediction with respect to a fixed deeper capsule is not a direct measure of the extent of its consensus with its peers, but rather a measure of the relative attention it offers to the deeper capsule.", "Thus, the weight associated with a prediction for a fixed deeper capsule by a fixed shallower capsule depends on other deeper capsules.", "In order to accomodate such methods into a general procedure, we modify our formalism by having GetW eights take all the predictions as parameters, and return all the routing weights.", "This modified general procedure is given in Algorithm 5.", "Consider the dynamic routing algorithm of (Sabour et al., 2017) , given in Algorithm 6 -modified to our notation and also the use of group-equivariant convolutions.", "The procedure DynamicRouting is the instantiation for GetW eights.", "Note that the weights c ij (g) depend on the routing weights for the deeper capsules.", "Due to the formulation of capsules in our paper, as in (Sabour et al., 2017) , we use the 2-norm of a capsule to denote its activation.", "Thus, our degree-centrality based procedure, and also dynamic routing, do not use a separate value for this.", "However, examples of algorithms that use a separate activation value exist; for example, spectral routing (Bahadori, 2018) computes the activation score from the sigmoid of the first singular value of the matrix of stacked predictions." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.10256409645080566, 0.23529411852359772, 0.10810810327529907, 0.19999998807907104, 0.20408162474632263, 0.05714285373687744, 0.05405404791235924, 0.12765957415103912, 0, 0.27272728085517883, 0.17777776718139648, 0, 0.07692307233810425, 0.1702127605676651, 0.13636362552642822, 0.0476190410554409, 0.17777776718139648, 0.1249999925494194, 0.1463414579629898, 0, 0.19999998807907104, 0.0476190410554409, 0.10526315122842789, 0.07017543166875839, 0.0555555522441864, 0.09999999403953552, 0.09756097197532654, 0.1463414579629898, 0.04999999329447746, 0.1304347813129425, 0.06451612710952759, 0.08163265138864517, 0.1395348757505417, 0.0952380895614624, 0.10344827175140381, 0.1111111044883728, 0.1666666567325592, 0.21276594698429108, 0.25641024112701416, 0.05405404791235924, 0.1666666567325592, 0.1304347813129425, 0.11764705181121826, 0, 0.1395348757505417, 0, 0.08888888359069824, 0.0476190410554409, 0.09302324801683426, 0.21621620655059814, 0.060606054961681366, 0.2222222238779068, 0.0714285671710968, 0.35555556416511536, 0.05882352590560913, 0.0952380895614624, 0.05405404791235924, 0.1818181723356247, 0.09999999403953552, 0.145454540848732, 0.2222222238779068, 0, 0.15789473056793213, 0.15789473056793213, 0.1860465109348297, 0.17391303181648254, 0.09756097197532654, 0.13636362552642822, 0.11999999731779099, 0.05714285373687744, 0.15686273574829102, 0.0476190410554409, 0.13636362552642822, 0.1463414579629898, 0, 0.19354838132858276, 0.0555555522441864, 0.25, 0.0952380895614624, 0, 0, 0.05128204822540283, 0, 0.13333332538604736, 0.11999999731779099, 0.04878048226237297, 0.22727271914482117, 0.23529411852359772, 0.10526315122842789, 0.08888888359069824, 0.12765957415103912, 0.22857142984867096, 0.12765957415103912, 0.2222222238779068, 0.1111111044883728, 0.11428570747375488, 0.12903225421905518, 0.08888888359069824, 0.24390242993831635, 0.05714285373687744, 0.22857142984867096, 0.05128204822540283, 0.09756097197532654, 0.1090909019112587, 0.22857142984867096, 0.10810810327529907, 0.04347825422883034, 0.18421052396297455, 0.0555555522441864, 0.0476190410554409, 0.2380952388048172, 0, 0.21052631735801697, 0.15789473056793213, 0.03278687968850136, 0.09756097197532654, 0.0952380895614624, 0.15686273574829102, 0.05128204822540283, 0.13793103396892548, 0.05405404791235924, 0.04878048226237297, 0.08163265138864517, 0.42105263471603394, 0.25, 0.21739129722118378, 0.19999998807907104, 0.04999999329447746, 0.04347825422883034, 0.09090908616781235, 0.12903225421905518, 0.17391303181648254, 0.1111111044883728, 0.19230768084526062, 0.05405404791235924, 0.0476190410554409, 0.06666666269302368, 0.13333332538604736, 0.05405404791235924, 0.0714285671710968, 0.08888888359069824, 0.04999999329447746, 0.11320754140615463, 0.05882352590560913, 0, 0.10526315122842789, 0.12244897335767746, 0.10256409645080566, 0.07999999821186066, 0.06451612710952759, 0.1304347813129425, 0.12903225421905518, 0.11428570747375488, 0.08888888359069824, 0.10256409645080566, 0.08163265138864517 ]
BJgNJgSFPS
true
[ "A new scalable, group-equivariant model for capsule networks that preserves compositionality under transformations, and is empirically more transformation-robust to older capsule network models." ]
[ "Recently deep neural networks have shown their capacity to memorize training data, even with noisy labels, which hurts generalization performance.", "To mitigate this issue, we propose a simple but effective method that is robust to noisy labels, even with severe noise. ", "Our objective involves a variance regularization term that implicitly penalizes the Jacobian norm of the neural network on the whole training set (including the noisy-labeled data), which encourages generalization and prevents overfitting to the corrupted labels.", "Experiments on noisy benchmarks demonstrate that our approach achieves state-of-the-art performance with a high tolerance to severe noise.", "Recently deep neural networks (DNNs) have achieved remarkable performance on many tasks, such as speech recognition Amodei et al. (2016) , image classification He et al. (2016) , object detection Ren et al. (2015) .", "However, DNNs usually need a large-scale training dataset to generalize well.", "Such large-scale datasets can be collected by crowd-sourcing, web crawling and machine generation with a relative low price, but the labeling may contain errors.", "Recent studies Zhang et al. (2016) ; Arpit et al. (2017) reveal that mislabeled examples hurt generalization.", "Even worse, DNNs can memorize the training data with completely randomly-flipped labels, which indicates that DNNs are prone to overfit noisy training data.", "Therefore, it is crucial to develop algorithms robust to various amounts of label noise that still obtain good generalization.To address the degraded generalization of training with noisy labels, one direct approach is to reweigh training examples Ren et al. (2018); Jiang et al. (2017) ; Han et al. (2018) ; Ma et al. (2018) , which is related to curriculum learning.", "The general idea is to assign important weights to examples with a high chance of being correct.", "However, there are two major limitations of existing methods.", "First, imagine an ideal weighting mechanism.", "It will only focus on the selected clean examples.", "For those incorrectly labeled data samples, the weights should be near zero.", "If a dataset is under 80% noise corruption, an ideal weighting mechanism assigns nonzero weights to only 20% examples and abandons the information in a large amount of 80% examples.", "This leads to an insufficient usage of training data.", "Second, previous methods usually need some prior knowledge on the noise ratio or the availability of an additional clean unbiased validation dataset.", "But it is usually impractical to get this extra information in real applications.", "Another approach is correction-based, estimating the noisy corruption matrix and correcting the labels Patrini et al. (2017) ; Reed et al. (2014) ; Goldberger & Ben-Reuven (2017) .", "But it is often difficult to estimate the underlying noise corruption matrix when the number of classes is large.", "Further, there may not be an underlying ground truth corruption process but an open set of noisy labels in the real world.", "Although many complex approaches Jiang et al. (2017); Ren et al. (2018); Han et al. 
(2018) have been proposed to deal with label noise, we find that a simple yet effective baseline can achieve surprisingly good performance compared to the strong competing methods. In this paper, we first analyze the conditions for good generalization.", "A model with a simpler hypothesis and smoother decision boundaries can generalize better.", "Then we propose a new algorithm which can satisfy the conditions and take advantage of the whole dataset, including the noisy examples, to improve generalization. Our main contributions are: • We build a connection between the generalization of models trained with noisy labels and the smoothness of solutions, which is related to the subspace dimensionality. •", "We propose a novel approach for training with noisy labels, which greatly mitigates overfitting. Our", "method is simple yet effective and can be applied to any neural network architecture. Additional", "knowledge on the clean validation dataset is not required. • A thorough", "empirical evaluation on various datasets (CIFAR-10, CIFAR-100) is conducted and demonstrates a significant improvement over the competing strong baselines.", "We propose a simple but effective algorithm for robust deep learning with noisy labels.", "Our method builds upon a variance regularizer that prevents the model from overfitting to the corrupted labels.", "Extensive experiments given in the paper show that the generalization performance of DNNs trained with corrupted labels can be improved significantly using our method, which can serve as a strong baseline for deep learning with noisy labels." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.12121211737394333, 0.2857142686843872, 0.08888888359069824, 0.19354838132858276, 0, 0.0833333283662796, 0.10810810327529907, 0, 0.12121211737394333, 0.10169491171836853, 0.20689654350280762, 0, 0, 0, 0, 0.04999999701976776, 0, 0, 0, 0.11428570747375488, 0, 0.11764705181121826, 0.2711864411830902, 0.07999999821186066, 0.145454540848732, 0.2857142686843872, 0.2142857164144516, 0, 0.0624999962747097, 0.5925925970077515, 0.13793103396892548, 0.3478260934352875 ]
S1xnKi5BOV
true
[ "The paper proposed a simple yet effective baseline for learning with noisy labels." ]
[ "Recent research suggests that neural machine translation achieves parity with professional human translation on the WMT Chinese--English news translation task.", "We empirically test this claim with alternative evaluation protocols, contrasting the evaluation of single sentences and entire documents.", "In a pairwise ranking experiment, human raters assessing adequacy and fluency show a stronger preference for human over machine translation when evaluating documents as compared to isolated sentences.", "Our findings emphasise the need to shift towards document-level evaluation as machine translation improves to the degree that errors which are hard or impossible to spot at the sentence-level become decisive in discriminating quality of different translation outputs.", "Neural machine translation (Kalchbrenner and Blunsom, 2013; BID10 BID0 has become the de-facto standard in machine translation, outperforming earlier phrasebased approaches in many data settings and shared translation tasks BID7 BID9 Cromieres et al., 2016) .", "Some recent results suggest that neural machine translation \"approaches the accuracy achieved by average bilingual human translators [on some test sets]\" (Wu et al., 2016) , or even that its \"translation quality is at human parity when compared to professional human translators\" (Hassan et al., 2018) .", "Claims of human parity in machine translation are certainly extraordinary, and require extraordinary evidence.", "1 Laudably, Hassan et al. (2018) have released their data publicly to allow external validation of their claims.", "Their claims are further strengthened by the fact that they follow best practices in human machine translation evaluation, using evaluation protocols and tools that are also used at the yearly Conference on Machine Translation (WMT) BID2 , and take great care in guarding against some confounds such as test set selection and rater inconsistency.However, the implications of a statistical tie between two machine translation systems in a shared translation task are less severe than that of a statistical tie between a machine translation system and a professional human translator, so we consider the results worthy of further scrutiny.", "We perform an independent evaluation of the professional translation and best machine translation system that were found to be of equal quality by Hassan et al. (2018) .", "Our main interest lies in the evaluation protocol, and we empirically investigate if the lack of document-level context could explain the inability of human raters to find a quality difference between human and machine translations.", "We test the following hypothesis:A professional translator who is asked to rank the quality of two candidate translations on the document level will prefer a professional human translation over a machine translation.Note that our hypothesis is slightly different from that tested by Hassan et al. (2018) , which could be phrased as follows:A bilingual crowd worker who is asked to directly assess the quality of candidate translations on the sentence level will prefer a professional human translation over a machine translation.As such, our evaluation is not a direct replication of that by Hassan et al. 
(2018), and a failure to reproduce their findings does not imply an error on either our or their part.", "Rather, we hope to indirectly assess the accuracy of different evaluation protocols.", "Our underlying assumption is that professional human translation is still superior to neural machine translation, but that the sensitivity of human raters to these quality differences depends on the evaluation protocol.", "Our results emphasise the need for suprasentential context in human evaluation of machine translation.", "Starting with Hassan et al.'s (2018) finding of no statistically significant difference in translation quality between HUMAN and MT for their Chinese-English test set, we set out to test this result with an alternative evaluation protocol which we expected to strengthen the ability of raters to judge translation quality.", "We employed professional translators instead of crowd workers, and pairwise ranking instead of direct assessment, but in a sentence-level evaluation of adequacy, raters still found it hard to discriminate between HUMAN and MT: they did not show a statistically significant preference for either of them. Conversely, we observe a tendency to rate HUMAN more favourably on the document level than on the sentence level, even within single raters.", "Adequacy raters show a statistically significant preference for HUMAN when evaluating entire documents.", "We hypothesise that document-level evaluation unveils errors such as mistranslation of an ambiguous word, or errors related to textual cohesion and coherence, which remain hard or impossible to spot in a sentence-level evaluation.", "For a subset of articles, we elicited both sentence-level and document-level judgements, and inspected articles for which sentence-level judgements were mixed, but where HUMAN was strongly preferred in document-level evaluation.", "In these articles, we do indeed observe the hypothesised phenomena.", "We find an example of lexical coherence in a 6-sentence article about a new app \"微信挪车\", which HUMAN consistently translates into \"WeChat Move the Car\".", "In MT, we find three different translations in the same article: \"Twitter Move Car\", \"WeChat mobile\", and \"WeChat Move\".", "Other observations include the use of more appropriate discourse connectives in HUMAN, a more detailed investigation of which we leave to future work. To our surprise, fluency raters show a stronger preference for HUMAN than adequacy raters (FIG0).", "The main strength of neural machine translation in comparison to previous statistical approaches was found to be increased fluency, while adequacy improvements were less clear (Bojar et al., 2016b; Castilho et al., 2017b), and we expected a similar pattern in our evaluation.", "Does this indicate that adequacy is in fact a strength of MT, not fluency?", "We are wary of jumping to this conclusion.", "An alternative interpretation is that MT, which tends to be more literal than HUMAN, is judged more favourably by raters in the bilingual condition, where the majority of raters are native speakers of the source language, because of L1 interference.", "We note that the availability of document-level context still has a strong impact in the fluency condition (Section 3).", "In response to recent claims of parity between human and machine translation, we have empirically tested the impact of sentence- and document-level context on human assessment of machine translation.", "Raters showed a markedly stronger preference for human translations when evaluating at the
level of documents, as compared to an evaluation of single, isolated sentences.We believe that our findings have several implications for machine translation research.", "Most importantly, if we accept our interpretation that human translation is indeed of higher quality in the dataset we tested, this points to a failure of current best practices in machine translation evaluation.", "As machine translation quality improves, translations will become harder to discriminate in terms of quality, and it may be time to shift towards document-level evaluation, which gives raters more context to understand the original text and its translation, and also exposes translation errors related to discourse phenomena which remain invisible in a sentence-level evaluation.Our evaluation protocol was designed with the aim of providing maximal validity, which is why we chose to use professional translators and pairwise ranking.", "For future work, it would be of high practical relevance to test whether we can also elicit accurate quality judgements on the document-level via crowdsourcing and direct assessment, or via alternative evaluation protocols.", "The data released by Hassan et al. (2018) could serve as a test bed to this end.", "One reason why document-level evaluation widens the quality gap between machine translation and human translation is that the machine translation system we tested still operates on the sentence level, ignoring wider context.", "It will be interesting to explore to what extent existing and future techniques for document-level machine translation can narrow this gap.", "We expect that this will require further efforts in creating document-level training data, designing appropriate models, and supporting research with discourse-aware automatic metrics.", "TAB1 shows detailed results, including those of individual raters, for all four experimental conditions.", "Raters choose between three labels for each item: MT is better than HUMAN", "(a), HUMAN is better than MT", "(b), or tie (t).", "TAB3 lists interrater agreement.", "Besides percent agreement (same label), we calculate Cohen's kappa coefficient DISPLAYFORM0 where P (A) is the proportion of times that two raters agree, and P (E) the likelihood of agreement by chance.", "We calculate Cohen's kappa, and specifically P (E), as in WMT (Bojar et al., 2016b, Section 3.3) , on the basis of all pairwise ratings across all raters.In pairwise rankings of machine translation outputs, κ coefficients typically centre around 0.3 (Bojar et al., 2016b) .", "We observe lower inter-rater agreement in three out of four conditions, and attribute this to two reasons.", "Firstly, the quality of the machine translations produced by Hassan et al. 
FORMULA0 is high, making it difficult to discriminate from professional translation particularly at the sentence level.", "Secondly, we do not provide guidelines detailing error severity and thus assume that raters have differing interpretations of what constitutes a \"better\" or \"worse\" translation.", "Confusion matrices in TAB4 indicate that raters handle ties very differently: in document-level adequacy, for example, rater E assigns no ties at all, while rater F rates 15 out of 50 items as ties (Table 4g).", "The assignment of ties is more uniform in documents assessed for fluency TAB1 , leading to higher κ in this condition TAB3 .Despite", "low inter-annotator agreement, the quality control we apply shows that raters assess items carefully: they only miss 1 out of 40 and 5 out of 128 spam items in the document-and sentence-level conditions overall, respectively, a very low number compared to crowdsourced work BID5 . All of", "these misses are ties (i. e., not marking spam items as \"better\", but rather equally bad as their counterpart), and 5 out of 9 raters (A, B1, B2, D, F) do not miss a single spam item.A common procedure in situations where interrater agreement is low is to aggregate ratings of different annotators BID2 . As shown", "in TAB2" ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1764705777168274, 0.1818181723356247, 0.380952388048172, 0.12244897335767746, 0.1249999925494194, 0.14035087823867798, 0.2666666507720947, 0, 0.09195402264595032, 0.09756097197532654, 0.1304347813129425, 0.13793103396892548, 0, 0.1904761791229248, 0.2666666507720947, 0.06896550953388214, 0.10958904027938843, 0.20689654350280762, 0.04444443807005882, 0.09302324801683426, 0, 0.04878048226237297, 0.05882352590560913, 0.07999999821186066, 0.145454540848732, 0.19999998807907104, 0, 0.04081632196903229, 0.05882352590560913, 0.1463414579629898, 0.31372547149658203, 0.17777776718139648, 0.07499999552965164, 0, 0, 0.1395348757505417, 0.1111111044883728, 0.05128204822540283, 0, 0.06896550953388214, 0, 0, 0, 0, 0.1071428507566452, 0.060606054961681366, 0.0952380895614624, 0.09756097197532654, 0.0416666604578495, 0.052631575614213943, 0.0357142798602581, 0.11594202369451523 ]
Hygfmc5U-7
true
[ "Raters prefer adequacy in human over machine translation when evaluating entire documents, but not when evaluating single sentences." ]
[ "Imitation learning aims to inversely learn a policy from expert demonstrations, which has been extensively studied in the literature for both single-agent setting with Markov decision process (MDP) model, and multi-agent setting with Markov game (MG) model.", "However, existing approaches for general multi-agent Markov games are not applicable to multi-agent extensive Markov games, where agents make asynchronous decisions following a certain order, rather than simultaneous decisions.", "We propose a novel framework for asynchronous multi-agent generative adversarial imitation learning (AMAGAIL) under general extensive Markov game settings, and the learned expert policies are proven to guarantee subgame perfect equilibrium (SPE), a more general and stronger equilibrium than Nash equilibrium (NE).", "The experiment results demonstrate that compared to state-of-the-art baselines, our AMAGAIL model can better infer the policy of each expert agent using their demonstration data collected from asynchronous decision-making scenarios (i.e., extensive Markov games).", "Imitation learning (IL) also known as learning from demonstrations allows agents to imitate expert demonstrations to make optimal decisions without direct interactions with the environment.", "Especially, inverse reinforcement learning (IRL) (Ng et al. (2000) ) recovers a reward function of an expert from collected demonstrations, where it assumes that the demonstrator follows an (near-)optimal policy that maximizes the underlying reward.", "However, IRL is an ill-posed problem, because a number of reward functions match the demonstrated data (Ziebart et al. (2008; ; Ho & Ermon (2016) ; Boularias et al. (2011) ), where various principles, including maximum entropy, maximum causal entropy, and relative entropy principles, are employed to solve this ambiguity (Ziebart et al. (2008; ; Boularias et al. (2011); Ho & Ermon (2016) ; Zhang et al. (2019) ).", "Going beyond imitation learning with single agents discussed above, recent works including Song et al. (2018) , Yu et al. (2019) , have investigated a more general and challenging scenario with demonstration data from multiple interacting agents.", "Such interactions are modeled by extending Markov decision processes on individual agents to multi-agent Markov games (MGs) (Littman & Szepesvári (1996) ).", "However, these works only work for synchronous MGs, with all agents making simultaneous decisions in each turn, and do not work for general MGs, allowing agents to make asynchronous decisions in different turns, which is common in many real world scenarios.", "For example, in multiplayer games (Knutsson et al. 
(2004)), such as the Go game and many card games, players take turns to play, thus influencing each other's decisions.", "The order in which agents make decisions has a significant impact on the game equilibrium.", "In this paper, we propose a novel framework, asynchronous multi-agent generative adversarial imitation learning (AMAGAIL): a group of experts provide demonstration data when playing a Markov game (MG) with an asynchronous decision-making process, and AMAGAIL inversely learns each expert's decision-making policy.", "We introduce a player function governed by the environment to capture the participation order and dependency of agents when making decisions.", "The participation order could be deterministic (i.e., agents take turns to act) or stochastic (i.e., agents need to take actions by chance).", "A player function of an agent is a probability function: given the perfectly known agent participation history (i.e., which agent(s) participated at each previous round), it provides the probability of the agent participating in the next round.", "With the general MG model, our framework generalizes MAGAIL (Song et al. (2018)) from synchronous Markov games to (asynchronous) Markov games, and the learned expert policies are proven to guarantee subgame perfect equilibrium (SPE) (Fudenberg & Levine (1983)), a stronger equilibrium than the Nash equilibrium (NE) (guaranteed in MAGAIL, Song et al. (2018)).", "The experiment results demonstrate that compared to GAIL (Ho & Ermon (2016)) and MAGAIL (Song et al. (2018)), our AMAGAIL model can better infer the policy of each expert agent using their demonstration data collected from asynchronous decision-making scenarios.", "Imitation learning (IL) aims to learn a policy from expert demonstrations, which has been extensively studied in the literature for single-agent scenarios (Finn et al. (2016); Ho & Ermon (2016)).", "Behavioral cloning (BC) uses the observed demonstrations to directly learn a policy (Pomerleau (1991); Torabi et al. (2018)).", "Apprenticeship learning and inverse reinforcement learning (IRL) (Ng et al. (2000); Syed & Schapire (2008); Ziebart et al. (2008); Boularias et al. (2011)) seek to recover the underlying reward based on expert trajectories in order to further learn a good policy via reinforcement learning.", "The assumption is that expert trajectories generated by the optimal policy maximize the unknown reward.", "Generative adversarial imitation learning (GAIL) and conditional GAIL (cGAIL) incorporate maximum causal entropy IRL (Ziebart et al. (2010)) and generative adversarial networks (Goodfellow et al. (2014)) to simultaneously learn non-linear policy and reward functions (Ho & Ermon (2016); Zhang et al. (2019); Baram et al. (2017)).", "A few recent studies on multi-agent imitation learning, such as MAGAIL (Song et al. (2018)) and MAAIRL (Yu et al. (2019)), model the interactions among agents as synchronous Markov games, where all agents make simultaneous actions at each step t.", "These works fail to characterize a more general and practical interaction scenario, i.e., Markov games including turn-based games (Chatterjee et al.
(2004)), where agents make asynchronous decisions over steps.", "In this paper, we make the first attempt to propose an asynchronous multi-agent generative adversarial imitation learning (AMAGAIL) framework, which models the asynchronous decision-making process as a Markov game and develops a player function to capture the participation dynamics of agents.", "Experimental results demonstrate that our proposed AMAGAIL can accurately learn the experts' policies from their asynchronous trajectory data, compared to state-of-the-art baselines.", "Beyond capturing the dynamics of participation vs. no-participation (as only two participation choices), our proposed player function Y (and the AMAGAIL framework) can also capture a more general case, where Y determines how the agent participates in a particular round, i.e., which action set A" ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 0.21276594698429108, 0.20512820780277252, 0.3199999928474426, 0.12244897335767746, 0.17142856121063232, 0.09090908616781235, 0.06557376682758331, 0.08888888359069824, 0.23529411852359772, 0.04255318641662598, 0.09756097197532654, 0.0714285671710968, 0.23529411852359772, 0.12121211737394333, 0.060606054961681366, 0.043478257954120636, 0.13793103396892548, 0.07407406717538834, 0.13333332538604736, 0.1249999925494194, 0.11538460850715637, 0.07407406717538834, 0.22641509771347046, 0.1599999964237213, 0.13636362552642822, 0.3265306055545807, 0.11428570747375488, 0.0357142835855484 ]
Syx33erYwH
true
[ "This paper extends the multi-agent generative adversarial imitation learning to extensive-form Markov games." ]
[ "Self-training is one of the earliest and simplest semi-supervised methods.", "The key idea is to augment the original labeled dataset with unlabeled data paired with the model’s prediction.", "Self-training has mostly been well-studied to classification problems.", "However, in complex sequence generation tasks such as machine translation, it is still not clear how self-training woks due to the compositionality of the target space.", "In this work, we first show that it is not only possible but recommended to apply self-training in sequence generation.", "Through careful examination of the performance gains, we find that the noise added on the hidden states (e.g. dropout) is critical to the success of self-training, as this acts like a regularizer which forces the model to yield similar predictions for similar inputs from unlabeled data.", "To further encourage this mechanism, we propose to inject noise to the input space, resulting in a “noisy” version of self-training.", "Empirical study on standard benchmarks across machine translation and text summarization tasks under different resource settings shows that noisy self-training is able to effectively utilize unlabeled data and improve the baseline performance by large margin.", "Deep neural networks often require large amounts of labeled data to achieve good performance.", "However, acquiring labels is a costly process, which motivates research on methods that can effectively utilize unlabeled data to improve performance.", "Towards this goal, semi-supervised learning (Chapelle et al., 2009 ) methods that take advantage of both labeled and unlabeled data are a natural starting point.", "In the context of sequence generation problems, semi-supervised approaches have been shown to work well in some cases.", "For example, back-translation (Sennrich et al., 2015) makes use of the monolingual data on the target side to improve machine translation systems, latent variable models are employed to incorporate unlabeled source data to facilitate sentence compression (Miao & Blunsom, 2016) or code generation (Yin et al., 2018) .", "In this work, we revisit a much older and simpler semi-supervised method, self-training (ST, Scudder (1965) ), where a base model trained with labeled data acts as a \"teacher\" to label the unannotated data, which is then used to augment the original small training set.", "Then, a \"student\" model is trained with this new training set to yield the final model.", "Originally designed for classification problems, common wisdom suggests that this method may be effective only when a good fraction of the predictions on unlabeled samples are correct, otherwise mistakes are going to be reinforced (Zhu & Goldberg, 2009 ).", "In the field of natural language processing, some early work have successfully applied self-training to word sense disambiguation (Yarowsky, 1995) and parsing (McClosky et al., 2006; Reichart & Rappoport, 2007; Huang & Harper, 2009 ).", "However, self-training has not been studied extensively when the target output is natural language.", "This is partially because in language generation applications (e.g. 
machine translation) hypotheses are often very far from the ground-truth target, especially in low-resource settings.", "It is natural to ask whether self-training can be useful at all in this case.", "While Ueffing (2006) and Zhang & Zong (2016) ... Apply f_θ to the unlabeled instances U", "In this paper we revisit self-training for neural sequence generation, and show that it can be an effective method to improve generalization, particularly when labeled data is scarce.", "Through a comprehensive ablation analysis and synthetic experiments, we identify that noise injected during self-training plays a critical role in its success due to its smoothing effect.", "To encourage this behaviour, we explicitly perturb the input to obtain a new variant of self-training, dubbed noisy self-training.", "Experiments on machine translation and text summarization demonstrate the effectiveness of this approach in both low and high resource settings." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0.12121211737394333, 0.05128204822540283, 0, 0.1666666567325592, 0.23255813121795654, 0.158730149269104, 0.1395348757505417, 0.10526315122842789, 0.05405404791235924, 0.13636362552642822, 0.20408162474632263, 0.1463414579629898, 0.03076922707259655, 0.21875, 0.10526315122842789, 0.1666666567325592, 0.06896550953388214, 0.05405404791235924, 0.04255318641662598, 0.15789473056793213, 0.05128204822540283, 0.4313725531101227, 0.2916666567325592, 0.0476190410554409, 0.0476190410554409 ]
SJgdnAVKDH
true
[ "We revisit self-training as a semi-supervised learning method for neural sequence generation problem, and show that self-training can be quite successful with injected noise." ]
[ "We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them.", "Inspired by Kanerva's sparse distributed memory, it has a robust distributed reading and writing mechanism.", "The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule.", "We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution.", "Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation.", "Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets.", "Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train.", "Recent work in machine learning has examined a variety of novel ways to augment neural networks with fast memory stores.", "However, the basic problem of how to most efficiently use memory remains an open question.", "For instance, the slot-based external memory in models like Differentiable Neural Computers (DNCs BID10 ) often collapses reading and writing into single slots, even though the neural network controller can in principle learn more distributed strategies.", "As as result, information is not shared across memory slots, and additional slots have to be recruited for new inputs, even if they are redundant with existing memories.", "Similarly, Matching Networks BID25 BID4 and the Neural Episodic Controller BID21 directly store embeddings of data.", "They therefore require the volume of memory to increase with the number of samples stored.", "In contrast, the Neural Statistician BID7 summarises a dataset by averaging over their embeddings.", "The resulting \"statistics\" are conveniently small, but a large amount of information may be dropped by the averaging process, which is at odds with the desire to have large memories that can capture details of past experience.Historically developed associative memory architectures provide insight into how to design efficient memory structures that store data in overlapping representations.", "For example, the Hopfield Net BID14 pioneered the idea of storing patterns in low-energy states in a dynamic system.", "This type of model is robust, but its capacity is limited by the number of recurrent connections, which is in turn constrained by the dimensionality of the input patterns.", "The Boltzmann Machine BID1 lifts this constraint by introducing latent variables, but at the cost of requiring slow reading and writing mechanisms (i.e. via Gibbs sampling).", "This issue is resolved by Kanerva's sparse distributed memory model BID15 , which affords fast reads and writes and dissociates capacity from the dimensionality of input by introducing addressing into a distributed memory store whose size is independent of the dimension of the data 1 .In", "this paper, we present a conditional generative memory model inspired by Kanerva's sparse distributed memory. We", "generalise Kanerva's original model through learnable addresses and reparametrised latent variables BID23 BID17 BID5 . We", "solve the challenging problem of learning an effective memory writing operation by exploiting the analytic tractability of our memory model -we derive a Bayesian memory update rule that optimally trades-off preserving old content and storing new content. 
The", "resulting hierarchical generative model has a memory dependent prior that quickly adapts to new data, providing top-down knowledge in addition to bottom-up perception from the encoder to form the latent code representing data. As", "a generative model, our proposal provides a novel way of enriching the often over-simplified priors in VAE-like models BID22 ) through a adaptive memory. As", "a memory system, our proposal offers an effective way to learn online distributed writing which provides effective compression and storage of complex data.", "In this paper, we present the Kanerva Machine, a novel memory model that combines slow-learning neural networks and a fast-adapting linear Gaussian model as memory.", "While our architecture is inspired by Kanerva's seminal model, we have removed the assumption of a uniform data distribution by training a generative model that flexibly learns the observed data distribution.", "By implementing memory as a generative model, we can retrieve unseen patterns from the memory through sampling.", "This phenomenon is consistent with the observation of constructive memory neuroscience experiments BID12 .Probabilistic", "interpretations of Kanerva's model have been developed in previous works: Anderson (1989) explored a conditional probability interpretation of Kanerva's sparse distributed memory, and generalised binary data to discrete data with more than two values. BID0 provides", "an approximate Bayesian interpretation based on importance sampling. To our knowledge", ", our model is the first to generalise Kanerva's memory model to continuous, non-uniform data while maintaining an analytic form of Bayesian inference. Moreover, we demonstrate", "its potential in modern machine learning through integration with deep neural networks.Other models have combined memory mechanisms with neural networks in a generative setting. For example, BID19 used", "attention to retrieve information from a set of trainable parameters in a memory matrix. Notably, the memory in", "this model is not updated following learning. As a result, the memory", "does not quickly adapt to new data as in our model, and so is not suited to the kind of episode-based learning explored here. BID5 used discrete (categorical", ") random variables to address an external memory, and train the addressing mechanism, together with the rest of the generative model, though a variational objective. However, the memory in their model", "is populated by storing images in the form of raw pixels. Although this provides a mechanism", "for fast adaptation, the cost of storing raw pixels may be overwhelming for large data sets. Our model learns to to store information", "in a compressed form by taking advantage of statistical regularity in the images via the encoder at the perceptual level, the learned addresses, and Bayes' rule for memory updates.Central to an effective memory model is the efficient updating of memory. While various approaches to learning such", "updating mechanisms have been examined recently BID10 BID7 BID24 , we designed our model to employ an exact Bayes' update-rule without compromising the flexibility and expressive power of neural networks. The compelling performance of our model and", "its scalable architecture suggests combining classical statistical models and neural networks may be a promising direction for novel memory models in machine learning." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1818181723356247, 0.13793103396892548, 0.13793103396892548, 0.25806450843811035, 0.13333332538604736, 0.24242423474788666, 0.1666666567325592, 0.22857142984867096, 0.06666666269302368, 0.12244897335767746, 0.1395348757505417, 0.06451612710952759, 0.0714285671710968, 0.06896550953388214, 0.09090908616781235, 0.0624999962747097, 0.05405404791235924, 0.0476190447807312, 0.1538461446762085, 0.2666666507720947, 0.13333332538604736, 0.2083333283662796, 0.21739129722118378, 0.15789473056793213, 0.1621621549129486, 0.7027027010917664, 0.19512194395065308, 0.25806450843811035, 0.06896550953388214, 0.1249999925494194, 0, 0.10256409645080566, 0.25, 0.13333332538604736, 0.2222222238779068, 0.09756097197532654, 0.23255813121795654, 0.06451612710952759, 0.0555555522441864, 0.15094339847564697, 0.1666666567325592, 0.2702702581882477 ]
S1HlA-ZAZ
true
[ "A generative memory model that combines slow-learning neural networks and a fast-adapting linear Gaussian model as memory." ]
[ "Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity.", "In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility.", "In this work, we present a new approach that prunes a given network once at initialization prior to training.", "To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task.", "This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations.", "After pruning, the sparse network is trained in the standard way.", "Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks.", "Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.", "Despite the success of deep neural networks in machine learning, they are often found to be highly overparametrized making them computationally expensive with excessive memory requirements.", "Pruning such large networks with minimal loss in performance is appealing for real-time applications, especially on resource-limited devices.", "In addition, compressed neural networks utilize the model capacity efficiently, and this interpretation can be used to derive better generalization bounds for neural networks BID0 ).In", "network pruning, given a large reference neural network, the goal is to learn a much smaller subnetwork that mimics the performance of the reference network. The", "majority of existing methods in the literature attempt to find a subset of weights from the pretrained reference network either based on a saliency criterion BID29 ; BID22 ; BID11 ) or utilizing sparsity enforcing penalties BID3 ; BID2 ). Unfortunately", ", since pruning is included as a part of an iterative optimization procedure, all these methods require many expensive prune -retrain cycles and heuristic design choices with additional hyperparameters, making them non-trivial to extend to new architectures and tasks.In this work, we introduce a saliency criterion that identifies connections in the network that are important to the given task in a data-dependent way before training. Specifically", ", we discover important connections based on their influence on the loss function at a variance scaling initialization, which we call connection sensitivity. Given the desired", "sparsity level, redundant connections are pruned once prior to training (i.e., single-shot), and then the sparse pruned network is trained in the standard way. Our approach has", "several attractive properties:• Simplicity. Since the network", "is pruned once prior to training, there is no need for pretraining and complex pruning schedules. Our method has no", "additional hyperparameters and once pruned, training of the sparse network is performed in the standard way.• Versatility. Since", "our saliency criterion", "chooses structurally important connections, it is robust to architecture variations. 
Therefore our method can be", "applied to various architectures including convolutional, residual and recurrent networks with no modifications.• Interpretability. Our method", "determines important", "connections with a mini-batch of data at single-shot. By varying this mini-batch used", "for pruning, our method enables us to verify that the retained connections are indeed essential for the given task.We evaluate our method on MNIST, CIFAR-10, and Tiny-ImageNet classification datasets with widely varying architectures. Despite being the simplest, our", "method obtains extremely sparse networks with virtually the same accuracy as the existing baselines across all tested architectures. Furthermore, we investigate the", "relevance of the retained connections as well as the effect of the network initialization and the dataset on the saliency score.", "In this work, we have presented a new approach, SNIP, that is simple, versatile and interpretable; it prunes irrelevant connections for a given task at single-shot prior to training and is applicable to a variety of neural network models without modifications.", "While SNIP results in extremely sparse models, we find that our connection sensitivity measure itself is noteworthy in that it diagnoses important connections in the network from a purely untrained network.", "We believe that this opens up new possibilities beyond pruning in the topics of understanding of neural network architectures, multi-task transfer learning and structural regularization, to name a few.", "In addition to these potential directions, we intend to explore the generalization capabilities of sparse networks.", "Notably, compared to the case of using SNIP FIG1 , the results are different: Firstly, the results on the original (Fashion-)MNIST (i.e.,", "(a) and", "(c) above) are not the same as the ones using SNIP (i.e.,", "(a) and", "(b) in FIG1 .", "Moreover, the pruning patterns are inconsistent with different sparsity levels, either intra-class or inter-class.", "Furthermore, using ∂L/∂w results in different pruning patterns between the original and inverted data in some cases (e.g., the 2 nd columns between", "(c) and ( Figure 7: The effect of varying sparsity levels (κ).", "The lowerκ becomes, the lower training loss is recorded, meaning that a network with more parameters is more vulnerable to fitting random labels.", "Recall, however, that all pruned models are able to learn to perform the classification task without losing much accuracy (see Figure 1 ).", "This potentially indicates that the pruned network does not have sufficient capacity to fit the random labels, but it is capable of performing the classification.C TINY-IMAGENET Table 4 : Pruning results of SNIP on Tiny-ImageNet (before → after).", "Tiny-ImageNet 2 is a subset of the full ImageNet: there are 200 classes in total, each class has 500 and 50 images for training and validation respectively, and each image has the spatial resolution of 64×64.", "Compared to CIFAR-10, the resolution is doubled, and to deal with this, the stride of the first convolution in all architectures is doubled, following the standard practice for this dataset.", "In general, the Tiny-ImageNet classification task is considered much more complex than MNIST or CIFAR-10.", "Even on Tiny-ImageNet, however, SNIP is still able to prune a large amount of parameters with minimal loss in performance.", "AlexNet models lose more accuracies than VGGs, which may be attributed to the fact that the first convolution stride for AlexNet is set to 
be 4 (by its design of no pooling) which is too large and could lead to high loss of information when pruned." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.15686273574829102, 0.03703703358769417, 0.4399999976158142, 0.2545454502105713, 0.1599999964237213, 0.0952380895614624, 0.158730149269104, 0.19999998807907104, 0.10344827175140381, 0.07999999821186066, 0.1428571343421936, 0.30188679695129395, 0.11940298229455948, 0.2637362480163574, 0.1090909019112587, 0.23728813230991364, 0.05128204822540283, 0.19999998807907104, 0.19999998807907104, 0, 0.12765957415103912, 0.12244897335767746, 0.22727271914482117, 0.2461538463830948, 0, 0.1702127605676651, 0.8823529481887817, 0.20338982343673706, 0.29999998211860657, 0.08510638028383255, 0.07843136787414551, 0, 0, 0, 0.03703703358769417, 0.09090908616781235, 0.22641508281230927, 0.18518517911434174, 0.1764705777168274, 0.19354838132858276, 0.178571417927742, 0.08510638028383255, 0.1538461446762085, 0.19999998807907104 ]
B1VZqjAcYX
true
[ "We present a new approach, SNIP, that is simple, versatile and interpretable; it prunes irrelevant connections for a given task at single-shot prior to training and is applicable to a variety of neural network models without modifications." ]
[ "Transfer learning uses trained weights from a source model as the initial weightsfor the training of a target dataset. ", "A well chosen source with a large numberof labeled data leads to significant improvement in accuracy. ", "We demonstrate atechnique that automatically labels large unlabeled datasets so that they can trainsource models for transfer learning.", "We experimentally evaluate this method, usinga baseline dataset of human-annotated ImageNet1K labels, against five variationsof this technique. ", "We show that the performance of these automatically trainedmodels come within 17% of baseline on average.", "In many domains, the task performance of deep learning techniques is heavily dependent on the number of labeled examples, which are difficult and expensive to acquire.", "This demand for large labeled datasets has inspired alternative techniques, such as weak supervision or automated labeling, whose algorithms create plausible labels to be used to guide supervised training on other tasks.In this work, we develop a content-aware model-selection technique for transfer learning.", "We take an unlabeled data point (here, an unlabeled image), and compute its distance to the average response of a number of specialized deep learning models, such as those trained for \"animal\", \"person\", or \"sport\".", "We then create a \"pseudolabel\" for the point, consisting of a short ordered sequence of the most appropriate model names, like \"animal-plant-building\".", "We use these synthetic labels to augment the ground truth labels.", "We validate the technique by applying it to the ImageNet1K dataset, as well as on a number of other large, unlabeled datasets.", "We have shown that generation of content-aware pseudolabels can provide transfer performance approaching that of human labels, and that models trained on psuedolabels can be used as source models for transfer learning.", "The automated approach presented here suggests that the internal representations of content models trained on specialized datasets contain some descriptive features of those datasets.", "By treating each of these specialized representations as a \"word\" in a longer \"sentence\" that describes a category of images, we can create labels such as a \"music-weapon-person\" to describe a suit of armor, or a \"tree-animal-fungus\" to describe an elephant.", "These rich labels capture features of these objects such as visual information about the materials they are made out of, that better describe the contents than reliance on a single label would produce.", "Using multiple, content-aware models to achieve greater descriptive power may be a valuable future avenue of research." ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.14999999105930328, 0.20512819290161133, 0.6153846383094788, 0.10256409645080566, 0.10810810327529907, 0.08695651590824127, 0.1875, 0.18518517911434174, 0.04878048226237297, 0, 0.1428571343421936, 0.3333333432674408, 0.13636362552642822, 0.07547169178724289, 0.07407406717538834, 0.05128204822540283 ]
rkxJgoRN_V
true
[ "A technique for automatically labeling large unlabeled datasets so that they can train source models for transfer learning and its experimental evaluation. " ]
[ "Recent studies in attention modules have enabled higher performance in computer vision tasks by capturing global contexts and accordingly attending important features.", "In this paper, we propose a simple and highly parametrically efficient module named Tree-structured Attention Module (TAM) which recursively encourages neighboring channels to collaborate in order to produce a spatial attention map as an output.", "Unlike other attention modules which try to capture long-range dependencies at each channel, our module focuses on imposing non-linearities be- tween channels by utilizing point-wise group convolution.", "This module not only strengthens representational power of a model but also acts as a gate which controls signal flow.", "Our module allows a model to achieve higher performance in a highly parameter-efficient manner.", "We empirically validate the effectiveness of our module with extensive experiments on CIFAR-10/100 and SVHN datasets.", "With our proposed attention module employed, ResNet50 and ResNet101 models gain 2.3% and 1.2% accuracy improvement with less than 1.5% parameter over- head.", "Our PyTorch implementation code is publicly available.", "Advancements in attention modules have boosted up the performance where they are employed over broad fields in deep learning such as machine translation, image generation, image and video classification, object detection, segmentation, etc (Vaswani et al., 2017; Hu et al., 2018a; b; c; Wang et al., 2018; Cao et al., 2019; Zhang et al., 2019) .", "In the fields of computer vision tasks, numerous attention modules have been proposed in a way that one can attach it to a backbone network obtaining an efficient trade-off between additional parameters of the attached attention module and the model's performance.", "SENet (Hu et al., 2018b) encodes global spatial information using global average pooling and captures channel-wise dependencies using two fully-connected layers over the previously encoded values at each channel.", "Input feature maps of the SE module are recalibrated with output values corresponding to each channel after applying a sigmoid activation function to produce output feature maps of the module.", "In this manner, the model can distinguish which channels to attend than others.", "GENet (Hu et al., 2018a) shows simply gathering spatial information with depth-wise strided convolution and redistributing each gathered value across all positions with nearest neighbor upsampling can significantly help a network to understand global feature context.", "NLNet (Wang et al., 2018) aggregates query-specific global context and adds values to each corresponding channel.", "GCNet (Cao et al., 2019) simplifies NLNet in a computationally efficient way using the fact that a non-local block used in the NLNet tends to produce attention map independent of query position.", "BAM efficiently enhances backbone networks by placing attention modules in bottleneck regions, which requires few increase in both parameters and computation.", "CBAM incorporates channel and spatial attentions and employs a max descriptor as well as an average descriptor for more precise attention.", "It is clear that proposed modules in aforementioned studies have brought remarkable results, most of their main focus has been on how to capture long-range dependencies across spatial dimension.", "That is, they mainly focus on contextual modeling rather than capturing inter-channel relations both of which are regarded indispensable for an attention module as 
depicted in Cao et al. (2019) .", "In this work, we propose a module which strengthens model representational power by imposing nonlinearities between neighboring channels in a parameter efficient manner.", "While this work deviates Figure 1 : An instance of our proposed module with group size 2.", "f p denotes a point-wise convolution followed by an activation function which combines neighboring channels.", "C m n denotes a n-th channel after applying m point-wise group convolutions to the input feature map.", "One channel attention map followed by a sigmoid σ is produced.", "A color refers to information a channel contains.", "The repetition of point-wise group convolution yields a tree-like structure.", "from the current trend of capturing long-range dependencies within spatial dimension, we argue that taking consideration of inter-channel relations can also achieve highly competitive results even without capturing any kind of spatial dependencies.", "Our module incorporates all channels to produce a single meaningful attention map as an output whereas most previous studies restore the input channel dimension in order to attend important channels and to suppress less meaningful ones.", "For this, we repeatedly apply light-weight point-wise group convolution with a fixed group size to an input feature map until the number of channels becomes one.", "While the increased parameters and computation are almost negligible, we find this simple design remarkably boosts up the performance of various backbone networks.", "As we see in section 3, the module performance is highly competitive to other attention modules and enhances baseline models with few additional parameter overhead.", "This gives one a clue to another notion for attention deviating from the current trend of taking global context.", "Our contributions are two-fold:", "• we propose Tree-structured Attention Module (TAM) which allows the network to learn inter-channel relationships using light-weight point-wise group convolutions.", "This treestructure enables convolution filters in the mid and later phase of a network to have a higher variance so that it can have more presentation power.", "• by proving validity of TAM with extensive experiments, we highlight the potential importance of inter-channel relations.", "In this paper, we propose Tree-structure Attention module which enables a network to learn interchannel relationships which deviates from the current trend of capturing long-range dependencies in attention literature.", "TAM adopts light-weight point-wise group convolutions to allow communication between neighboring channels.", "Once trained, TAM acts as a static gate controlling signal at a certain location which does not depend on input feature but on the location where it is placed.", "Moreover, TAM permits higher variances in filter weights in the early and mid phase and helps the filters to focus on important ones at the last phase before classifier.", "On top of that, TAM produces favorable performance gains with only a few additional parameters to a backbone network.", "These advantages of TAM shed a light on a new way to attend features." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.2083333283662796, 0.1428571343421936, 0.11764705181121826, 0.2142857164144516, 0.12903225421905518, 0.1538461446762085, 0.09090908616781235, 0.09836065024137497, 0.19607843458652496, 0.09302324801683426, 0.052631575614213943, 0.0714285671710968, 0.039215683937072754, 0.0624999962747097, 0.045454539358615875, 0.17142856121063232, 0.1818181723356247, 0, 0.2222222238779068, 0.10810810327529907, 0.0624999962747097, 0.13333332538604736, 0, 0.07692307233810425, 0, 0, 0.04651162400841713, 0.21276594698429108, 0.04999999701976776, 0.10810810327529907, 0.19999998807907104, 0.05882352590560913, 0.10526315122842789, 0.17142856121063232, 0.04999999701976776, 0.06451612710952759, 0.1860465109348297, 0, 0.04878048226237297, 0.05128204822540283, 0.12121211737394333, 0 ]
r1xBoxBYDH
true
[ "Our paper proposes an attention module which captures inter-channel relationships and offers large performance gains." ]
[ "A significant challenge for the practical application of reinforcement learning toreal world problems is the need to specify an oracle reward function that correctly defines a task.", "Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. ", "While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door).", "Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function.", "In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a \"prior\" that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. ", "We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.", "Reinforcement learning (RL) algorithms have the potential to automate a wide range of decisionmaking and control tasks across a variety of different domains, as demonstrated by successful recent applications ranging from robotic control BID19 BID23 to game playing BID28 BID44 .", "A key assumption of the RL problem statement is the availability of a reward function that accurately describes the desired tasks.", "For many real world tasks, reward functions can be challenging to manually specify, while being crucial to good performance BID1 .", "Most real world tasks are multifaceted and require reasoning over multiple factors in a task (e.g. 
an autonomous vehicle navigating a city at night), while simultaneously providing appropriate reward shaping to make the task feasible with tractable exploration BID32 .", "These challenges are compounded by the inherent difficulty of specifying rewards for tasks with high-dimensional observation spaces such as images.Inverse reinforcement learning (IRL) is an approach that aims to address this problem by instead inferring the reward function from demonstrations of the task BID31 .", "This has the appealing benefit of taking a data-driven approach to reward specification in place of hand engineering.", "In practice however, rewards functions are rarely learned as it can be prohibitively expensive to provide demonstrations that cover the variability common in real world tasks (e.g., collecting demonstrations of opening every type of door knob).", "In addition, while learning a complex function from high dimensional observations might make an expressive function approximator seem like a reasonable modelling assumption, in the \"few-shot\" domain it is notoriously difficult to unambiguously recover a good reward function with expressive function approximators.", "Prior solutions have thus instead relied on low-dimensional linear models with handcrafted features that effectively encode a strong prior on the relevant features of a task.", "This requires engineering a set of features by hand that work well for a specific problem.", "In this work, we propose an approach that instead explicitly learns expressive features that are robust even when learning with limited demonstrations.Our approach relies on the key observation that related tasks share common structure that we can leverage when learning new tasks.", "To illustrate, considering a robot navigating through a home.", "While the exact reward function we provide to the robot may differ depending on the task, there is a structure amid the space of useful behaviours, such as navigating to a series of landmarks, and there are certain behaviors we always want to encourage or discourage, such as avoiding obstacles or staying a reasonable distance from humans.", "This notion agrees with our understanding of why humans can easily infer the intents and goals (i.e., reward functions) of even abstract agents from just one or a few demonstrations BID4 , as humans have access to strong priors about how other humans accomplish similar tasks accrued over many years.", "Similarly, our objective is to discover the common structure among different tasks, and encode the structure in a way that can be used to infer reward functions from a few demonstrations.Figure 1: A diagram of our meta-inverse RL approach.", "Our approach attempts to remedy over-fitting in few-shot IRL by learning a \"prior\" that constraints the set of possible reward functions to lie within a few steps of gradient descent.", "Standard IRL attempts to recover the reward function directly from the available demonstrations.", "The shortcoming of this approach is that there is little reason to expect generalization as it is analogous to training a density model with only a few examples.More specifically, in this work we assume access to a set of tasks, along with demonstrations of the desired behaviors for those tasks, which we refer to as the meta-training set.", "From these tasks, we then learn a reward function parameterization that enables effective few-shot learning when used to initialize IRL in a novel task.", "Our method is summarized in Fig. 
1 .", "Our key contribution is an algorithm that enables efficient learning of new reward functions by using meta-training to build a rich \"prior\" for goal inference.", "Using our proposed approach, we show that we can learn deep neural network reward functions from raw pixel observations with substantially better data efficiency than existing methods and standard baselines.", "In this work, we present an approach that enables few-shot learning for reward functions of new tasks.", "We achieve this through a novel formulation of inverse reinforcement learning that learns to encode common structure across tasks.", "Using our meta-IRL approach, we show that we can leverage data from previous tasks to effectively learn deep neural network reward functions from raw pixel observations for new tasks, from only a handful of demonstrations.", "Our work paves the way for futures work that considers environments with unknown dynamics, or more fully probabilistic approaches to reward and goal inference." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2641509473323822, 0.30434781312942505, 0.11320754140615463, 0.07692307233810425, 0.2222222238779068, 0.15094339847564697, 0.158730149269104, 0.13333332538604736, 0.04347825422883034, 0.0923076868057251, 0.2647058665752411, 0.13636362552642822, 0.1269841194152832, 0.1269841194152832, 0.1599999964237213, 0.0952380895614624, 0.09677419066429138, 0.05714285373687744, 0.11267605423927307, 0.07999999821186066, 0.12903225421905518, 0.18518517911434174, 0.10256409645080566, 0.17142856121063232, 0.11999999731779099, 0.05882352590560913, 0.19230768084526062, 0, 0.13636362552642822, 0.30434781312942505, 0.06779660284519196, 0.07999999821186066 ]
SyeLno09Fm
true
[ "The applicability of inverse reinforcement learning is often hampered by the expense of collecting expert demonstrations; this paper seeks to broaden its applicability by incorporating prior task information through meta-learning." ]
[ "Recent work has focused on combining kernel methods and deep learning.", "With this in mind, we introduce Deepström networks -- a new architecture of neural networks which we use to replace top dense layers of standard convolutional architectures with an approximation of a kernel function by relying on the Nyström approximation. \n", "Our approach is easy highly flexible.", "It is compatible with any kernel function and it allows exploiting multiple kernels. \n", "We show that Deepström networks reach state-of-the-art performance on standard datasets like SVHN and CIFAR100.", "One benefit of the method lies in its limited number of learnable parameters which make it particularly suited for small training set sizes, e.g. from 5 to 20 samples per class.", "Finally we illustrate two ways of using multiple kernels, including a multiple Deepström setting, that exploits a kernel on each feature map output by the convolutional part of the model. ", "Kernel machines and deep learning have mostly been investigated separately.", "Both have strengths and weaknesses and appear as complementary family of methods with respect to the settings where they are most relevant.", "Deep learning methods may learn from scratch relevant features from data and may work with huge quantities of data.", "Yet they actually require large amount of data to fully exploit their potential and may not perform well with limited training datasets.", "Moreover deep networks are complex and difficult to design and require lots of computing and memory resources both for training and for inference.", "Kernel machines are powerful tools for learning nonlinear relations in data and are well suited for problems with limited training sets.", "Their power comes from their ability to extend linear methods to nonlinear ones with theoretical guarantees.", "However, they do not scale well to the size of the training datasets and do not learn features from the data.", "They usually require a prior choice of a relevant kernel amongst the well known ones, or even require defining an appropriate kernel for the data at hand.Although most research in the field of deep learning seems to have evolved as a \"parallel learning strategy\" to the field of kernel methods, there are a number of studies at the interface of the two domains which investigated how some concepts can be transferred from one field to another.", "Mainly, there are two types of approaches that have been investigated to mix deep learning and kernels.", "Few works explored the design of deep kernels that would allow working with a hierarchy of representations as the one that has been popularized with deep learning (2; 14; 7; 6; 20; 23) .", "Other studies focused on various ways to plug kernels into deep networks (13; 24; 5; 12; 25) .", "This paper follows this latter line of research, it focuses on convolutional networks.", "Specifically, we propose Deepström networks which are built by replacing dense layers of a convolutional neural network by an adaptive approximation of a kernel function.", "Our work is inspired from Deep Fried Convnets (24) which brings together convolutional neural networks and kernels via Fastfood (9), a kernel approximation technique based on random feature maps.", "We revisit this concept in the context of Nyström kernel approximation BID21 .", "One key advantage of our method is its flexibility that enables the use of any kernel function.", "Indeed, since the Nyström approximation uses an explicit feature map from the data kernel matrix, it is not 
restricted to a specific kernel function and not limited only to RBF kernels, as in Fastfood approximation.", "This is particularly useful when one wants to use or learn multiple different kernels instead of a single kernel function, as we demonstrate here.", "In particular we investigate two different ways of using multiple kernels, one is a straightforward extension to using multiple kernels while the second is a multiple Deepström variant that exploits a Nyström kernel approximation for each of the feature map output by the convolutional part of the neural network.Furthermore the specific nature of our architecture makes it use only a limited number of parameters, which favours learning with small training sets as we demonstrate on targeted experiments.Our experiments on four datasets (MNIST, SVHN, CIFAR10 and CIFAR100) highlight three important features of our method.", "First our approach compares well to standard approaches in standard settings (using ful training sets) while requiring a reduced number of parameters compared to full deep networks and of the same order of magnitude as Deep Fried Convnets.", "This specific feature of our proposal makes it suitable for dealing with limited training set sizes as we show by considering experiments with tens or even fewer training samples per class.", "Finally the method may exploit multiple kernels, providing a new tool with which to approach the problem of multiple kernel learning (MKL) (4), and enabling taking into account the rich information in multiple feature maps of convolution networks through multiple Deepström layers.The rest of the paper is organized as follows.", "We provide background on kernel approximation via the Nyström and the random Fourier features methods and describe the Deep Fried Convnet architecture in Section", "2. The detailed configuration of the proposed Deepström network is described in Section", "3. We also show in Section 3 how Deepström networks can be used with multiple kernels.", "Section 4 reports experimental results on MNIST, SVHN, CIFAR10 and CIFAR100 datasets to first provide a deeper understanding of the behaviour of our method with respect to the choice of the kernels and the combination of these, and second to compare it to state of the art baselines on classification tasks with respect to accuracy and to complexity issues, in particular in the small training set size setting.", "We proposed Deepström, a new hybrid architecture that mixes deep networks and kernel methods.", "It is based on the Nyström approximation that allow considering any kind of kernel function in contrast to Deep Fried Convnets.", "Our proposal allows reaching state of the art results while significantly reducing the number of parameters on various datasets, enabling in particular learning from few samples.", "Moreover the method allows to easily deal with multiple kernels and with multiple Deepström architectures.", "FIG5 plots the 2-dimensional φ nys representations of some CIFAR10 test samples obtained with a subsample of size equal to 2 (while the number of classes is 10) and two different kernels.", "One may see here that the 10 classes are already significantly well separated in this low dimensional representation space, illustrating that a very small sized subsammple is already powerfull.", "Beside, we experienced that designing Deepström Convnets on lower level features output by lower level convolution blocks may yield state-of-the-art performance as well while requiring larger subsamples." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1111111044883728, 0.699999988079071, 0, 0.1538461446762085, 0.09999999403953552, 0.0714285671710968, 0.26923075318336487, 0, 0.21739129722118378, 0.09756097197532654, 0.08510638028383255, 0.09090908616781235, 0.09090908616781235, 0.04999999701976776, 0.0952380895614624, 0.14814814925193787, 0.0952380895614624, 0.15094339847564697, 0.0476190410554409, 0.15789473056793213, 0.5106382966041565, 0.2222222238779068, 0.2702702581882477, 0.19512194395065308, 0.2545454502105713, 0.12244897335767746, 0.23999999463558197, 0.1355932205915451, 0.1111111044883728, 0.20588235557079315, 0.260869562625885, 0.10526315122842789, 0.04999999701976776, 0.138888880610466, 0.20512820780277252, 0.30434781312942505, 0.12244897335767746, 0.15789473056793213, 0.14814814925193787, 0.11538460850715637, 0.07999999821186066 ]
BJlSQnR5t7
true
[ "A new neural architecture where top dense layers of standard convolutional architectures are replaced with an approximation of a kernel function by relying on the Nyström approximation." ]
[ "The main goal of this short paper is to inform the neural art community at large on the ethical ramifications of using models trained on the imagenet dataset, or using seed images from classes 445 -n02892767- [’bikini, two-piece’] and 459- n02837789- [’brassiere, bra, bandeau’] of the same.", "We discovered that many of the images belong to these classes were verifiably pornographic, shot in a non-consensual setting, voyeuristic and also entailed underage nudity.", "Akin to the \\textit{ivory carving-illegal poaching} and \\textit{diamond jewelry art-blood diamond} nexuses, we posit there is a similar moral conundrum at play here and would like to instigate a conversation amongst the neural artists in the community.", "The emergence of tools such as BigGAN [1] and GAN-breeder [2] has ushered in an exciting new flavor of generative digital art [3] , generated using deep neural networks (See [4] for a survey).", "A cursory search on twitter 1 reveals hundreds of interesting art-works created using BigGANs.", "There are many detailed blog-posts 2 on generating neural art by beginning with seed images and performing nifty experiments in the latent space of BigGANs.", "At the point of authoring this paper, (8 September 2019, 4 :54 PM PST),users on the GanBreeder app 3 had discovered 49652500 images.", "Further, Christie's, the British auction house behemoth, recently hailed the selling of the neural network generated Portrait of Edmond Belamy for an incredible $432, 500 as signalling the arrival of AI art on the world auction stage [5] .", "Given the rapid growth of this field, we believe this to be the right time to have a conversation about a particularly dark ethical consequence of using such frameworks that entail models trained on the ImageNet dataset which has many images that are pornographic, non-consensual, voyeuristic and also entail underage nudity.", "We argue that this lack of consent in the seed images used to train the models trickles down to the final artform in a way similar to the blood-diamond syndrome in jewelry art [6] .", "An example: Consider the neural art image in Fig 1 we generated using the GanBreeder app.", "On first appearance, it is not very evident as to what the constituent seed classes are that went into the creation of this neural artwork image.", "When we solicited volunteers online to critique the artwork (See the collection of responses (Table 2) in the supplementary material), none had an inkling regarding a rather sinister trickle down effect at play here.", "As it turns out, we craftily generated this image using hand-picked specific instances of children images emanating from what we will showcase are two problematic seed image classes: Bikini and Brassiere.", "More specifically, for this particular image, we set the Gene weights to be: [Bikini: 42.35, Brassiere: 31.66, Comic Book -84.84 ].", "We'd like to strongly emphasize at this juncture that the problem does not emanate from a visual patriarchal mindset [7] , whereby we associate female undergarment imagery to be somehow unethical, but the root cause lies in the fact that many of the images were curated into the dataset (at least with regards to the 2 above mentioned classes) were voyeuristic, pornographic, non-consensual and also entailed underage nudity.", "2 Root cause: Absence of referencing consent during the curation of the imagenet dataset", "The emergence of the ImageNet dataset is widely considered to be a pivotal moment 4 in the deep learning revolution that transformed the domain 
computer vision.", "Two highly cited papers (with more than 10000 citations each), [8] authored by Deng et al in 2009 and [9] authored by Russakovsky et al in 2015, provide deep insights into the procedure used to curate the dataset.", "In the 2009 paper, subsections 3.1-Collecting Candidate Images and 3.2-Cleaning Candidate Images are dedicated towards the algorithms used to collect and clean the dataset and also to elucidate the specific ways in which the Amazon Mechanical Turk (AMT) platform was harnessed to scale the dataset.", "Similarly the entirety of Section-3-Dataset construction at large scale in [9] is dedicated towards extending the procedures for the 2015 release.", "It is indeed disappointing that neither the 2009 nor the 2015 versions of the endeavors required the AMT workers to check if the images they were asked to categorize and draw bounding boxes over, were ethically viable for usage.", "More specifically, in imagery pertaining to anthropocentric content, such as undergarment clothing, there was no attempt made towards assessing if the images entailed explicit consent given by the people in the images.", "In fact, none of the following words in the set [ethics, permission, voyeurism, consent] are mentioned in either of the two papers.", "As such, we have a plethora of images specifically belonging to the two categories detailed in Table 1 , that have serious ethical shortcomings.", "In Fig 2, we showcase the gallery of images from the two classes categorized into four sub-categories: Non-consensual/Voyeuristic, Personal, Verifiably pornographic and Underage / Children.", "In Fig 2, we also include images that were also incorrectly categorized (Specifically there was no brassieres being sported by the subjects in the images) (Sub-figure (a) ) and those that involved male subjects indulging in lecherous tomfoolery (Sub-figure (e) ).", "In this paper, we expose a certain unethical facet of neural art that emerges from usage of nonconsensual images that are present in certain specific classes of the imagenet dataset.", "These images born out of an unethical (and in some cases, illegal) act of voyeuristic non-consensual photography predominantly targeting women (as well as children), might implicitly poison the sanctity of the artworks that eventually emerge.", "This work is complementary to works such as [10; 11] that have explored the unspoken ethical dimensions of harnessing cheap crowd-sourced platforms such as AMT for scientific research in the first place, which we firmly believe is also an important issue to be considered." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.12244897335767746, 0.29411762952804565, 0.1463414579629898, 0.0952380895614624, 0, 0.23529411852359772, 0.12903225421905518, 0.04999999701976776, 0.19230769574642181, 0.1666666567325592, 0.1666666567325592, 0.05882352590560913, 0.09756097197532654, 0.10526315122842789, 0.060606054961681366, 0.1764705926179886, 0.1904761791229248, 0.24242423474788666, 0.19512194395065308, 0.1860465109348297, 0.1428571343421936, 0.1428571343421936, 0.1621621549129486, 0.14814814925193787, 0.1875, 0.24242423474788666, 0.1860465109348297, 0.22857142984867096, 0.19512194395065308, 0.0833333283662796 ]
HJlrwcP9DB
true
[ "There's non-consensual and pornographic images in the ImageNet dataset" ]
[ "Inductive and unsupervised graph learning is a critical technique for predictive or information retrieval tasks where label information is difficult to obtain.", "It is also challenging to make graph learning inductive and unsupervised at the same time, as learning processes guided by reconstruction error based loss functions inevitably demand graph similarity evaluation that is usually computationally intractable.", "In this paper, we propose a general framework SEED (Sampling, Encoding, and Embedding Distributions) for inductive and unsupervised representation learning on graph structured objects.", "Instead of directly dealing with the computational challenges raised by graph similarity evaluation, given an input graph, the SEED framework samples a number of subgraphs whose reconstruction errors could be efficiently evaluated, encodes the subgraph samples into a collection of subgraph vectors, and employs the embedding of the subgraph vector distribution as the output vector representation for the input graph.", "By theoretical analysis, we demonstrate the close connection between SEED and graph isomorphism.", "Using public benchmark datasets, our empirical study suggests the proposed SEED framework is able to achieve up to 10% improvement, compared with competitive baseline methods.", "Representation learning has been the core problem of machine learning tasks on graphs.", "Given a graph structured object, the goal is to represent the input graph as a dense low-dimensional vector so that we are able to feed this vector into off-the-shelf machine learning or data management techniques for a wide spectrum of downstream tasks, such as classification (Niepert et al., 2016) , anomaly detection (Akoglu et al., 2015) , information retrieval (Li et al., 2019) , and many others (Santoro et al., 2017b; Nickel et al., 2015) .", "In this paper, our work focuses on learning graph representations in an inductive and unsupervised manner.", "As inductive methods provide high efficiency and generalization for making inference over unseen data, they are desired in critical applications.", "For example, we could train a model that encodes graphs generated from computer program execution traces into vectors so that we can perform malware detection in a vector space.", "During real-time inference, efficient encoding and the capability of processing unseen programs are expected for practical usage.", "Meanwhile, for real-life applications where labels are expensive or difficult to obtain, such as anomaly detection (Zong et al., 2018) and information retrieval (Yan et al., 2005) , unsupervised methods could provide effective feature representations shared among different tasks.", "Inductive and unsupervised graph learning is challenging, even compared with its transductive or supervised counterparts.", "First, when inductive capability is required, it is inevitable to deal with the problem of node alignment such that we can discover common patterns across graphs.", "Second, in the case of unsupervised learning, we have limited options to design objectives that guide learning processes.", "To evaluate the quality of learned latent representations, reconstruction errors are commonly adopted.", "When node alignment meets reconstruction error, we have to answer a basic question: Given two graphs G 1 and G 2 , are they identical or isomorphic (Chartrand, 1977) ?", "To this end, it could be computationally intractable to compute reconstruction errors (e.g., using graph edit distance (Zeng et al., 2009) as the 
metric) in order to capture detailed structural information.", "Given an input graph, its vector representation can be obtained by going through the components.", "Previous deep graph learning techniques mainly focus on transductive (Perozzi et al., 2014) or supervised settings (Li et al., 2019) .", "A few recent studies focus on autoencoding specific structures, such as directed acyclic graphs (Zhang et al., 2019) , trees or graphs that can be decomposed into trees (Jin et al., 2018) , and so on.", "From the perspective of graph generation, You et al. (2018) propose to generate graphs of similar graph statistics (e.g., degree distribution), and Bojchevski et al.", "(2018) provide a GAN based method to generate graphs of similar random walks.", "In this paper, we propose a general framework SEED (Sampling, Encoding, and Embedding Distributions) for inductive and unsupervised representation learning on graph structured objects.", "As shown in Figure 1 , SEED consists of three major components: subgraph sampling, subgraph encoding, and embedding subgraph distributions.", "SEED takes arbitrary graphs as input, where nodes and edges could have rich features, or have no features at all.", "By sequentially going through the three components, SEED outputs a vector representation for an input graph.", "One can further feed such vector representations to off-the-shelf machine learning or data management tools for downstream learning or retrieval tasks.", "Instead of directly addressing the computational challenge raised by evaluation of graph reconstruction errors, SEED decomposes the reconstruction problem into the following two sub-problems.", "Q1: How to efficiently autoencode and compare structural data in an unsupervised fashion?", "SEED focuses on a class of subgraphs whose encoding, decoding, and reconstruction errors can be evaluated in polynomial time.", "In particular, we propose random walks with earliest visiting time (WEAVE) serving as the subgraph class, and utilize deep architectures to efficiently autoencode WEAVEs.", "Note that reconstruction errors with respect to WEAVEs are evaluated in linear time.", "Q2: How to measure the difference of two graphs in a tractable way?", "As one subgraph only covers partial information of an input graph, SEED samples a number of subgraphs to enhance information coverage.", "With each subgraph encoded as a vector, an input graph is represented by a collection of vectors.", "If two graphs are similar, their subgraph distribution will also be similar.", "Based on this intuition, we evaluate graph similarity by computing distribution distance between two collections of vectors.", "By embedding distribution of subgraph representations, SEED outputs a vector representation for an input graph, where distance between two graphs' vector representations reflects the distance between their subgraph distributions.", "Unlike existing message-passing based graph learning techniques whose expressive power is upper bounded by Weisfeiler-Lehman graph kernels (Xu et al., 2019; Shervashidze et al., 2011) , we show the direct relationship between SEED and graph isomorphism in Section 3.5.", "We empirically evaluate the effectiveness of the SEED framework via classification and clustering tasks on public benchmark datasets.", "We observe that graph representations generated by SEED are able to effectively capture structural information, and maintain stable performance even when the node attributes are not available.", "Compared with competitive baseline methods, the 
proposed SEED framework could achieve up to 10% improvement in prediction accuracy.", "In addition, SEED achieves high-quality representations when a reasonable number of small subgraph are sampled.", "By adjusting sample size, we are able to make trade-off between effectiveness and efficiency.", "In this paper, we propose a novel framework SEED (Sampling, Encoding, and Embedding distribution) framework for unsupervised and inductive graph learning.", "Instead of directly dealing with the computational challenges raised by graph similarity evaluation, given an input graph, the SEED framework samples a number of subgraphs whose reconstruction errors could be efficiently evaluated, encodes the subgraph samples into a collection of subgraph vectors, and employs the embedding of the subgraph vector distribution as the output vector representation for the input graph.", "By theoretical analysis, we demonstrate the close connection between SEED and graph isomorphism.", "Our experimental results suggest the SEED framework is effective, and achieves state-of-the-art predictive performance on public benchmark datasets.", "Proof.", "We will use induction on |E(G)| to complete the proof.", "Basic case: Let |E(G)| = 1, the only possible graph is a line graph of length 1.", "For such a graph, the walk from one node to another can cover the only edge on the graph, which has length 1 ≥ 2 · 1 − 1.", "Induction: Suppose that for all the connected graphs on less than m edges (i.e., |E(G)| ≤ m − 1), there exist a walk of length k which can visit all the edges if k ≥ 2|E(G)| − 1.", "Then we will show for any connected graph with m edges, there also exists a walk which can cover all the edges on the graph with length k ≥ 2|E(G)| − 1.", "Let G = (V (G), E(G)) be a connected graph with |E(G)| = m.", "Firstly, we assume G is not a tree, which means there exist a cycle on G. By removing an edge e = (v i , v j ) from the cycle, we can get a graph G on m − 1 edges which is still connected.", "This is because any edge on a cycle is not bridge.", "Then according to the induction hypothesis, there exists a walk w = v 1 v 2 . . . v i . . . v j . . . v t of length k ≥ 2(m − 1) + 1 which can visit all the edges on G (The walk does not necessarily start from node 1, v 1 just represents the first node appears in this walk).", "Next, we will go back to our graph G, as G is a subgraph of G, w is also a walk on G. By replacing the first appeared node v i on walk w with a walk v i v j v i , we can obtain a new walk", "As w can cover all the edges on G and the edge e with length k = k + 2 ≥ 2(m − 1) − 1 + 2 = 2m − 1, which means it can cover all the edges on G with length k ≥ 2|E(G)| − 1.", "Next, consider graph G which is a tree.", "In this case, we can remove a leaf v j and its incident edge e = (v i , v j ) from G, then we can also obtain a connected graph G with |E(G )| = m − 1.", "Similarly, according to the induction hypothesis, we can find a walk w = v 1 v 2 . . . v i . . . 
v t on G which can visit all the m − 1 edges of G of length k , where k ≥ 2(m − 1) − 1.", "As G is a subgraph of G, any walk on G is also a walk on G including walk w .", "Then we can also extend walk w on G by replacing the first appeared v i with a walk v i v j v i , which produce a new walk", "w can visit all the edges of G as well as the edge e with length k = k + 2 ≥ 2(m − 1) − 1 + 2 = 2m − 1.", "In other words, w can visit all the edges on G with length k ≥ 2|E(G)| − 1.", "Now, we have verified our assumption works for all the connected graphs with m edges, hence we complete our proof.", "(To give an intuition for our proof of lemma 1, we provide an example of 5 edges in Figure 5 Figure 5 (b1) shows an example graph G which is a tree on 5 edges.", "By removing the leaf v 4 and its incident edge (v 4 , v 3 ), we can get a tree G with 4 edges (Figure 5 (b2) ).", "G has a walk w = v 1 v 2 v 3 v 5 which covers all the edges of G , as w is also a walk on G, by replacing v 3 with v 3 v 4 v 3 in w we can get a walk w = v 1 v 2 v 3 v 4 v 3 v 5 which can cover all the edges of G." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.34285715222358704, 0.25531914830207825, 0.42105263471603394, 0.20338982343673706, 0.1428571343421936, 0.10256409645080566, 0.07407406717538834, 0.138888880610466, 0.3870967626571655, 0.22857142984867096, 0.09756097197532654, 0.1249999925494194, 0.11538460850715637, 0.2666666507720947, 0.04999999701976776, 0.1818181723356247, 0, 0.09090908616781235, 0.08510638028383255, 0, 0.11764705181121826, 0.04444444179534912, 0.10526315122842789, 0.0714285671710968, 0.42105263471603394, 0.12121211737394333, 0.05882352590560913, 0.19354838132858276, 0.11764705181121826, 0.05714285373687744, 0.2142857164144516, 0.1764705777168274, 0.05128204822540283, 0.0714285671710968, 0.1428571343421936, 0.05882352590560913, 0.12903225421905518, 0, 0.1249999925494194, 0.09999999403953552, 0.15686273574829102, 0.1249999925494194, 0.09756097197532654, 0.1818181723356247, 0.06666666269302368, 0.06896550953388214, 0.5294117331504822, 0.20338982343673706, 0.1428571343421936, 0.12121211737394333, 0, 0.12903225421905518, 0.05128204822540283, 0.08163265138864517, 0.13636362552642822, 0.1428571343421936, 0.07547169178724289, 0.1599999964237213, 0.0624999962747097, 0.0833333283662796, 0.045454539358615875, 0.17391303181648254, 0.12244897335767746, 0.03999999538064003, 0.0714285671710968, 0.052631575614213943, 0, 0, 0.060606054961681366, 0.19512194395065308, 0.09756097197532654, 0.08510638028383255 ]
rkem91rtDB
true
[ "This paper proposed a novel framework for graph similarity learning in inductive and unsupervised scenario." ]
[ "Neural population responses to sensory stimuli can exhibit both nonlinear stimulus- dependence and richly structured shared variability.", "Here, we show how adversarial training can be used to optimize neural encoding models to capture both the deterministic and stochastic components of neural population data.", "To account for the discrete nature of neural spike trains, we use the REBAR method to estimate unbiased gradients for adversarial optimization of neural encoding models.", "We illustrate our approach on population recordings from primary visual cortex.", "We show that adding latent noise-sources to a convolutional neural network yields a model which captures both the stimulus-dependence and noise correlations of the population activity.", "Neural population activity contains both nonlinear stimulus-dependence and richly structured neural variability.", "An important challenge for neural encoding models is to generate spike trains that match the statistics of experimentally measured neural population spike trains.", "Such synthetic spike trains can be used to explore limitations of a model, or as realistic inputs for simulation or stimulation experiments.", "Most encoding models either focus on modelling the relationship between stimuli and mean-firing rates e.g. [1] [2] [3] , or on the statistics of correlated variability ('noise correlations'), e.g. [4] [5] [6] .", "They are typically fit with likelihood-based approaches (e.g. maximum likelihood estimation MLE, or variational methods for latent variable models).", "While this approach is very flexible and powerful, it has mostly been applied to simple models of variability (e.g. Gaussian inputs).", "Furthermore, MLE-based models are not guaranteed to yield synthetic data that matches the statistics of empirical data, particularly in the presence of latent variables.", "Generative adversarial networks (GANs) [7] are an alternative to fitting the parameters of probabilistic models.", "In adversarial training, the objective is to find parameters which match the statistics of empirical data, using a pair of competing neural networks -a generator and discriminator.", "The generator maps the distribution of some input random variable onto the empirical data distribution to try and fool the discriminator.", "The discriminator attempts to classify input data as samples from the true data distribution or from the generator.", "This approach has been used extensively to produce realistic images [8] and for text generation [9] .", "Recently, Molano-Mazon et al. [10] trained a generative model of spike trains, and Arakaki et al. 
[11] , rate models of neural populations, using GANs.", "However, to the best of our knowledge, adversarial training has not yet been used to train spiking models which produce discrete outputs and which aim to capture both the stimulusdependence of firing rates and shared variability.", "We propose to use conditional GANs [12] for training neural encoding models, as an alternative to likelihood-based approaches.", "A key difficulty in using GANs for neural population data is the discrete nature of neural spike trains: Adversarial training requires calculation of gradients through the generative model, which is not possible for models with a discrete sampling step, and hence, requires the application of gradient estimators.", "While many applications of discrete GANs use biased gradient estimators based on the concrete relaxation technique [13], we find that unbiased gradient estimators REINFORCE [14] and REBAR [15] lead to better fitting performance.", "We demonstrate our approach by fitting a convolutional neural network model with shared noise sources to multi-electrode recordings from V1 [16] .", "We here showed how adversarial training of conditional generative models that produce discrete outputs (i.e. neural spike trains) can be used to generate data that matches the distribution of spike trains recorded in-vivo, and in particular, its firing rates and correlations.", "We used unbiased gradient estimators to train conditional GANs on discrete spike trains and spectral normalisation to stabilise training.", "However, training of discrete GANs remains sensitive to the architecture of the discriminator, as well as hyper-parameter settings.", "We showed that we are able to successfully train adversarial models in cases where supervised and Dichotomised Gaussian models fail.", "In future, adversarial training could be used to capture higher-order structure in neural data, and could be combined with discriminators that target certain statistics of the data that might be of particular interest, in a spirit similar to maximum entropy models [4] .", "Similarly, this approach could also be extended to capture temporal features in neural population data [20] such as spike-history dependence or adaptation effects.", "Since we condition the discriminator on the input stimulus, adversarial training could be used for transfer learning across multiple datasets.", "Generative models trained this way to produce realistic spike trains to various input stimuli, may be used to probe the range of spiking behaviour in a neural population under different kinds of stimulus or noise perturbations." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.307692289352417, 0.6521739363670349, 0.27272728085517883, 0.12121211737394333, 0.3913043439388275, 0.29411762952804565, 0.3333333134651184, 0.1860465109348297, 0.23076923191547394, 0, 0.22727271914482117, 0.22727271914482117, 0.21621620655059814, 0.25531914830207825, 0.25, 0.1621621549129486, 0.10526315122842789, 0.3181818127632141, 0.3461538553237915, 0.25641024112701416, 0.29999998211860657, 0.18867923319339752, 0.1395348757505417, 0.36666667461395264, 0.19999998807907104, 0.21621620655059814, 0.19512194395065308, 0.31578946113586426, 0.2666666507720947, 0.09756097197532654, 0.3272727131843567 ]
S1xxRoLKLH
true
[ "We show how neural encoding models can be trained to capture both the signal and spiking variability of neural population data using GANs." ]
[ "A weakly supervised learning based clustering framework is proposed in this paper.", "As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag.", "In this task, no annotations on individual instances inside the bag are needed during training of the models.", "We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training.", "We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known.", "Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model.", "In machine learning, there are two main learning tasks on two ends of scale bar: unsupervised learning and supervised learning.", "Generally, performance of supervised models is better than that of unsupervised models since the mapping between data and associated labels is provided explicitly in supervised learning.", "This performance advantage of supervised learning requires a lot of labelled data, which is expensive.", "Any other learning tasks reside in between these two tasks, so are their performances.", "Weakly supervised learning is an example of such tasks.", "There are three types of supervision in weakly supervised learning: incomplete, inexact and inaccurate supervision.", "Multiple instance learning (MIL) is a special type of weakly supervised learning and a typical example of inexact supervision (Zhou, 2017) .", "In MIL, data consists of bags of instances and their corresponding bag level labels.", "Although the labels are somehow related to instances inside the bags, the instances are not explicitly labeled.", "In traditional MIL, given the bags and corresponding bag level labels, task is to learn the mapping between bags and labels while the goal is to predict labels of unseen bags (Dietterich et al., 1997; Foulds & Frank, 2010) .", "In this paper, we explore the feasibility of finding out labels of individual instances inside the bags only given the bag level labels, i.e. 
there is no individual instance level labels.", "One important application of this task is semantic segmentation of breast cancer metastases in histological lymph node sections, which is a crucial step in staging of breast cancer (Brierley et al., 2016) .", "In this task, each pathology image of a lymph node section is a bag and each pixel inside that image is an instance.", "Then, given the bag level label that whether the image contains metastases or not, the task is to label each pixel as either metastases or normal.", "This task can be achieved by asking experts to exhaustively annotate each metastases region in each image.", "However, this exhaustive annotation process is tedious, time consuming and more importantly not a part of clinical workflow.", "In many complex systems, such as in many types of cancers, measurements can only be obtained at coarse level (bag level), but information at fine level (individual instance level) is of paramount importance.", "To achieve this, we propose a weakly supervised learning based clustering framework.", "Given a dataset consisting of instances with unknown labels, our ultimate objective is to cluster the instances in this dataset.", "To achieve this objective, we introduce a novel MIL task based on a new kind of bag level label called unique class count (ucc) , which is the number of unique classes or the number of clusters among all the instances inside the bag.", "We organize the dataset into non-empty bags, where each bag is a subset of individual instances from this dataset.", "Each bag is associated with a bag level ucc label.", "Then, our MIL task is to learn mapping between the bags and their associated bag level ucc labels and then to predict the ucc labels of unseen bags.", "We mathematically show that a ucc classifier trained on this task can be used to perform unsupervised clustering on individual instances in the dataset.", "Intuitively, for a ucc classifier to count the number of unique classes in a bag, it has to first learn discriminant features for underlying classes.", "Then, it can group the features obtained from the bag and count the number of groups, so the number of unique classes.", "Our weakly supervised clustering framework is illustrated in Figure 1 .", "It consists of a neural network based ucc classifier, which is called as Unique Class Count (U CC) model, and an unsupervised clustering branch.", "The U CC model accepts any bag of instances as input and uses ucc labels for supervised training.", "Then, the trained U CC model is used as a feature extractor and unsupervised clustering is performed on the extracted features of individual instances inside the bags in the clustering branch.", "One application of our framework is the semantic segmentation of breast cancer metastases in lymph node sections (see Figure 4) .", "The problem can be formulated as follows.", "The input is a set of images.", "Each image (bag) has a label of ucc1 (image is fully normal or fully metastases) or ucc2 (image is a mixture of normal and metastases).", "Our aim is to segment the pixels (instances) in the image into normal and metastases.", "A U CC model can be trained to predict ucc labels of individual images in a fully supervised manner; and the trained model can be used to extract features of pixels (intances) inside the images (bags).", "Then, semantic segmentation masks can be obtained by unsupervised clustering of the pixels (each is represented by the extracted features) into two clusters (metastases or normal).", "Note that ucc 
does not directly provide an exact label for each individual instance.", "Therefore, our framework is a weakly supervised clustering framework.", "Finally, we have constructed ucc classifiers and experimentally shown that the clustering performance of our framework with our ucc classifiers is better than the performance of unsupervised models and comparable to the performance of fully supervised learning models.", "We have also tested the performance of our model on the real world task of semantic segmentation of breast cancer metastases in lymph node sections.", "We have compared the performance of our model with the performance of the popular medical image segmentation architecture Unet (Ronneberger et al., 2015) and shown that our weakly supervised model approximates the performance of the fully supervised Unet model.", "Hence, there are three main contributions of this paper:", "1. We have defined unique class count as a bag level label in the MIL setup and mathematically proved that a perfect ucc classifier, in principle, can be used to perfectly cluster the individual instances inside the bags.", "2. We have constructed a neural network based ucc classifier by incorporating kernel density estimation (KDE) (Parzen, 1962) as a layer into our model architecture, which provided us with end-to-end training capability.", "3. We have experimentally shown that the clustering performance of our framework is better than the performance of unsupervised models and comparable to the performance of fully supervised learning models.", "The rest of the paper is organized such that related work is in Section 2, details of our weakly supervised clustering framework are in Section 3, results of the experiments on MNIST, CIFAR10 and CIFAR100 datasets are in Section 4, the results of the experiments in semantic segmentation of breast cancer metastases are in Section 5, and Section 6 concludes the paper.", "In this paper, we proposed a weakly supervised learning based clustering framework and introduced a novel MIL task as the core of this framework.", "We defined ucc as a bag level label in the MIL setup and mathematically proved that a perfect ucc classifier can be used to perfectly cluster individual instances inside the bags.", "We designed a neural network based ucc classifier and experimentally showed that the clustering performance of our framework with our ucc classifiers is better than the performance of unsupervised models and comparable to the performance of fully supervised learning models.", "Finally, we showed that our weakly supervised unique class count model, UCC_segment, can be used for semantic segmentation of breast cancer metastases in histological lymph node sections.", "We compared the performance of our UCC_segment model with the performance of a Unet model and showed that our weakly supervised model approximates the performance of the fully supervised Unet model.", "In the future, we want to check the performance of our UCC_segment model on other medical image datasets and use it to discover new morphological patterns in cancer that have been overlooked in the traditional pathology workflow." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.4516128897666931, 0.23529411852359772, 0.1111111044883728, 0.1304347813129425, 0.4444444477558136, 0.29629629850387573, 0.1666666567325592, 0.24390242993831635, 0.1818181723356247, 0.060606054961681366, 0.2142857164144516, 0.1818181723356247, 0.21621620655059814, 0.0624999962747097, 0.0624999962747097, 0.07843136787414551, 0.04444443807005882, 0.04347825422883034, 0.10526315122842789, 0.09999999403953552, 0.11428570747375488, 0.05405404791235924, 0.0416666604578495, 0.3870967626571655, 0.10810810327529907, 0.18518517911434174, 0.05405404791235924, 0, 0.09756097197532654, 0.1428571343421936, 0.19999998807907104, 0.1666666567325592, 0.27586206793785095, 0.1395348757505417, 0.10810810327529907, 0.08888888359069824, 0.10526315122842789, 0, 0.07692307233810425, 0.1111111044883728, 0.060606054961681366, 0.21276594698429108, 0.1395348757505417, 0.060606054961681366, 0.29629629850387573, 0.43478259444236755, 0.04878048226237297, 0.2083333283662796, 0.0714285671710968, 0.18867924809455872, 0.07999999821186066, 0.4761904776096344, 0.20689654350280762, 0.3499999940395355, 0.08510638028383255, 0.44897958636283875, 0.2857142686843872, 0.25641024112701416, 0.1111111044883728 ]
B1xIj3VYvr
true
[ "A weakly supervised learning based clustering framework performs comparable to that of fully supervised learning models by exploiting unique class count." ]
[ "Model-free reinforcement learning (RL) methods are succeeding in a growing number of tasks, aided by recent advances in deep learning. ", "However, they tend to suffer from high sample complexity, which hinders their use in real-world domains. ", "Alternatively, model-based reinforcement learning promises to reduce sample complexity, but tends to require careful tuning and to date have succeeded mainly in restrictive domains where simple models are sufficient for learning.", "In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training.", "To overcome this issue, we propose to use an ensemble of models to maintain the model uncertainty and regularize the learning process.", "We further show that the use of likelihood ratio derivatives yields much more stable learning than backpropagation through time.", "Altogether, our approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO) significantly reduces the sample complexity compared to model-free deep RL methods on challenging continuous control benchmark tasks.", "Deep reinforcement learning has achieved many impressive results in recent years, including learning to play Atari games from raw-pixel inputs BID0 , mastering the game of Go BID1 , as well as learning advanced locomotion and manipulation skills from raw sensory inputs BID3 BID4 .", "Many of these results were achieved using model-free reinforcement learning algorithms, which do not attempt to build a model of the environment.", "These algorithms are generally applicable, require relatively little tuning, and can easily incorporate powerful function approximators such as deep neural networks.", "However, they tend to suffer from high sample complexity, especially when such powerful function approximators are used, and hence their applications have been mostly limited to simulated environments.", "In comparison, model-based reinforcement learning algorithms utilize a learned model of the environment to assist learning.", "These methods can potentially be much more sample efficient than model-free algorithms, and hence can be applied to real-world tasks where low sample complexity is crucial BID7 BID3 BID8 .", "However, so far such methods have required very restrictive forms of the learned models, as well as careful tuning for them to be applicable.", "Although it is a straightforward idea to extend model-based algorithms to deep neural network models, so far there has been comparatively fewer successful applications.The standard approach for model-based reinforcement learning alternates between model learning and policy optimization.", "In the model learning stage, samples are collected from interaction with the environment, and supervised learning is used to fit a dynamics model to the observations.", "In the policy optimization stage, the learned model is used to search for an improved policy.", "The underlying assumption in this approach, henceforth termed vanilla model-based RL, is that with enough data, the learned model will be accurate enough, such that a policy optimized on it will also perform well in the real environment.Although vanilla model-based RL can work well on low-dimensional tasks with relatively simple dynamics, we find that on more challenging continuous control tasks, performance was highly 
unstable.", "The reason is that the policy optimization tends to exploit regions where insufficient data is available to train the model, leading to catastrophic failures.", "Previous work has pointed out this issue as model bias, i.e. BID7 BID9 BID10 .", "While this issue can be regarded as a form of overfitting, we emphasize that standard countermeasures from the supervised learning literature, such as regularization or cross validation, are not sufficient here -supervised learning can guarantee generalization to states from the same distribution as the data, but the policy optimization stage steers the optimization exactly towards areas where data is scarce and the model is inaccurate.", "This problem is severely aggravated when expressive models such as deep neural networks are employed.To resolve this issue, we propose to use an ensemble of deep neural networks to maintain model uncertainty given the data collected from the environment.", "During model learning, we differentiate the neural networks by varying their weight initialization and training input sequences.", "Then, during policy learning, we regularize the policy updates by combining the gradients from the imagined stochastic roll-outs.", "Each imagined step is uniformly sampled from the ensemble predictions.", "Using this technique, the policy learns to become robust against various possible scenarios it may encounter in the real environment.", "To avoid overfitting to this regularized objective, we use the model ensemble for early stopping policy training.Standard model-based techniques require differentiating through the model over many time steps, a procedure known as backpropagation through time (BPTT).", "It is well-known in the literature that BPTT can lead to exploding and vanishing gradients BID11 .", "Even when gradient clipping is applied, BPTT can still get stuck in bad local optima.", "We propose to use likelihood ratio methods instead of BPTT to estimate the gradient, which only make use of the model as a simulator rather than for direct gradient computation.", "In particular, we use Trust Region Policy Optimization (TRPO) BID4 , which imposes a trust region constraint on the policy to further stabilize learning.In this work, we propose Model-Ensemble Trust-Region Policy Optimization (ME-TRPO), a modelbased algorithm that achieves the same level of performance as state-of-the-art model-free algorithms with 100× reduction in sample complexity.", "We show that the model ensemble technique is an effective approach to overcome the challenge of model bias in model-based reinforcement learning.", "We demonstrate that replacing BPTT by TRPO yields significantly more stable learning and much better final performance.", "Finally, we provide an empirical analysis of vanilla model-based RL using neural networks as function approximators, and identify its flaws when applied to challenging continuous control tasks.", "In this work, we present a simple and robust model-based reinforcement learning algorithm that is able to learn neural network policies across different challenging domains.", "We show that our approach significantly reduces the sample complexity compared to state-of-the-art methods while reaching the same level of performance.", "In comparison, our analyses suggests that vanilla model-based RL tends to suffer from model bias and numerical instability, and fails to learn a good policy.", "We further evaluate the effect of each key component of our algorithm, showing that both using TRPO and model ensemble are 
essential for successful applications of deep model-based RL.", "We also confirm the results of previous work BID7 BID32 BID22 that using model uncertainty is a principled way to reduce model bias. One question that merits future investigation is how to use the model ensemble to encourage the policy to explore the state space where the different models disagree, so that more data can be collected to resolve their disagreement.", "Another enticing direction for future work would be the application of ME-TRPO to real-world robotics systems." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0, 0, 0, 0.03999999910593033, 0, 0.07999999821186066, 0.06451612710952759, 0.09090908616781235, 0, 0, 0, 0, 0, 0.06896551698446274, 0, 0, 0, 0.10000000149011612, 0.07692307233810425, 0, 0.03389830142259598, 0, 0, 0, 0, 0, 0, 0.09090908616781235, 0, 0, 0.03703703358769417, 0.07692307233810425, 0.08695651590824127, 0.060606058686971664, 0.06451612710952759, 0.07692307233810425, 0.13793103396892548, 0.12121211737394333, 0.03703703358769417, 0 ]
SJJinbWRZ
true
[ "Deep Model-Based RL that works well." ]
[ "The high computational and parameter complexity of neural networks makes their training very slow and difficult to deploy on energy and storage-constrained comput- ing systems.", "Many network complexity reduction techniques have been proposed including fixed-point implementation.", "However, a systematic approach for design- ing full fixed-point training and inference of deep neural networks remains elusive.", "We describe a precision assignment methodology for neural network training in which all network parameters, i.e., activations and weights in the feedforward path, gradients and weight accumulators in the feedback path, are assigned close to minimal precision.", "The precision assignment is derived analytically and enables tracking the convergence behavior of the full precision training, known to converge a priori.", "Thus, our work leads to a systematic methodology of determining suit- able precision for fixed-point training.", "The near optimality (minimality) of the resulting precision assignment is validated empirically for four networks on the CIFAR-10, CIFAR-100, and SVHN datasets.", "The complexity reduction arising from our approach is compared with other fixed-point neural network designs.", "Though deep neural networks (DNNs) have established themselves as powerful predictive models achieving human-level accuracy on many machine learning tasks BID12 , their excellent performance has been achieved at the expense of a very high computational and parameter complexity.", "For instance, AlexNet BID17 requires over 800 × 10 6 multiply-accumulates (MACs) per image and has 60 million parameters, while Deepface (Taigman et al., 2014) requires over 500 × 10 6 MACs/image and involves more than 120 million parameters.", "DNNs' enormous computational and parameter complexity leads to high energy consumption BID4 , makes their training via the stochastic gradient descent (SGD) algorithm very slow often requiring hours and days BID9 , and inhibits their deployment on energy and resource-constrained platforms such as mobile devices and autonomous agents.A fundamental problem contributing to the high computational and parameter complexity of DNNs is their realization using 32-b floating-point (FL) arithmetic in GPUs and CPUs.", "Reduced-precision representations such as quantized FL (QFL) and fixed-point (FX) have been employed in various combinations to both training and inference.", "Many employ FX during inference but train in FL, e.g., fully binarized neural networks BID13 use 1-b FX in the forward inference path but the network is trained in 32-b FL.", "Similarly, BID10 employs 16-b FX for all tensors except for the internal accumulators which use 32-b FL, and 3-level QFL gradients were employed (Wen et al., 2017; BID0 to accelerate training in a distributed setting. Note that while QFL reduces storage and communication costs, it does not reduce the computational complexity as the arithmetic remains in 32-b FL.Thus, none of the previous works address the fundamental problem of realizing true fixed-point DNN training, i.e., an SGD algorithm in which all parameters/variables and all computations are implemented in FX with minimum precision required to guarantee the network's inference/prediction accuracy and training convergence. 
The reasons for this gap are numerous including: 1) quantization errors propagate to the network output, thereby directly affecting its accuracy (Lin et al., 2016) ; 2) precision requirements of different variables in a network are interdependent and involve hard-to-quantify trade-offs (Sakr et al., 2017) ; 3) proper quantization requires knowledge of the dynamic range, which may not be available (Pascanu et al., 2013) ; and 4) quantization errors may accumulate during training and can lead to stability issues BID10 . Our", "work makes a major advance in closing this gap by proposing a systematic methodology to obtain close-to-minimum per-layer precision requirements of an FX network that guarantees statistical similarity with full precision training. In", "particular, we jointly address the challenges of quantization noise, inter-layer and intra-layer precision trade-offs, dynamic range, and stability. As", "in (Sakr et al., 2017) , we do assume that a fully-trained baseline FL network exists and one can observe its learning behavior. While", ", in principle, such an assumption requires extra FL computation prior to FX training, it is to be noted that much of training is done in FL anyway. For", "instance, FL training is used in order to establish benchmarking baselines such as AlexNet BID17 , VGG-Net (Simonyan and Zisserman, 2014) , and ResNet BID12 , to name a few. Even", "if that is not the case, in practice, this assumption can be accounted for via a warm-up FL training on a small held-out portion of the dataset BID6 . Applying", "our methodology to three benchmarks reveals several lessons. First and", "foremost, our work shows that it is possible to FX quantize all variables including back-propagated gradients even though their dynamic range is unknown BID15 . Second, we", "find that the per-layer weight precision requirements decrease from the input to the output while those of the activation gradients and weight accumulators increase. Furthermore", ", the precision requirements for residual networks are found to be uniform across layers. Finally, hyper-precision", "reduction techniques such as weight and activation binarization BID13 or gradient ternarization (Wen et al., 2017) are not as efficient as our methodology since these do not address the fundamental problem of realizing true fixed-point DNN training. We demonstrate FX training on three deep learning benchmarks (CIFAR-10, CIFAR-100, SVHN) achieving high fidelity to our FL baseline in that we observe no loss of accuracy higher than 0.56% in all of our experiments. Our precision assignment", "is further shown to be within 1-b per-tensor of the minimum.
We show that our precision", "assignment methodology reduces representational, computational, and communication costs of training by up to 6×, 8×, and 4×, respectively, compared to the FL baseline and related works.", "In this paper, we have presented a study of precision requirements in a typical back-propagation based training procedure of neural networks.", "Using a set of quantization criteria, we have presented a precision assignment methodology for which FX training is made statistically similar to the FL baseline, known to converge a priori.", "We realized FX training of four networks on the CIFAR-10, CIFAR-100, and SVHN datasets and quantified the associated complexity reduction gains in terms costs of training.", "We also showed that our precision assignment is nearly minimal.The presented work relies on the statistics of all tensors being quantized during training.", "This necessitates an initial baseline run in floating-point which can be costly.", "An open problem is to predict a suitable precision configuration by only observing the data statistics and the network architecture.", "Future work can leverage the analysis presented in this paper to enhance the effectiveness of other network complexity reduction approaches.", "For instance, weight pruning can be viewed as a coarse quantization process (quantize to zero) and thus can potentially be done in a targeted manner by leveraging the information provided by noise gains.", "Furthermore, parameter sharing and clustering can be viewed as a form of vector quantization which presents yet another opportunity to leverage our method for complexity reduction." ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.21276594698429108, 0.11428570747375488, 0.2857142686843872, 0.3928571343421936, 0.1818181723356247, 0.25, 0.2222222238779068, 0.10256409645080566, 0.1269841194152832, 0.035087715834379196, 0.09876542538404465, 0.22727271914482117, 0.11764705181121826, 0.12269938737154007, 0.145454540848732, 0.1428571343421936, 0.0416666604578495, 0.0833333283662796, 0.11764705181121826, 0.11764705181121826, 0.11764705181121826, 0.16326530277729034, 0.2666666507720947, 0.3499999940395355, 0.21739129722118378, 0.19999998807907104, 0.1702127605676651, 0.23255813121795654, 0.19607841968536377, 0.21739129722118378, 0.25, 0, 0.1860465109348297, 0.09302324801683426, 0.15094339847564697, 0.11999999731779099 ]
rkxaNjA9Ym
true
[ "We analyze and determine the precision requirements for training neural networks when all tensors, including back-propagated signals and weight accumulators, are quantized to fixed-point format." ]
[ "Machine learning (ML) research has investigated prototypes: examples that are representative of the behavior to be learned.", "We systematically evaluate five methods for identifying prototypes, both ones previously introduced as well as new ones we propose, finding all of them to provide meaningful but different interpretations.", "Through a human study, we confirm that all five metrics are well matched to human intuition.", "Examining cases where the metrics disagree offers an informative perspective on the properties of data and algorithms used in learning, with implications for data-corpus construction, efficiency, adversarial robustness, interpretability, and other ML aspects.", "In particular, we confirm that the \"train on hard\" curriculum approach can improve accuracy on many datasets and tasks, but that it is strictly worse when there are many mislabeled or ambiguous examples.", "When reasoning about ML tasks, it is natural to look for a set of training or test examples that is somehow prototypical-i.e., that is representative of the desired learned behavior.", "Although such prototypical examples have been central to several research efforts, e.g., in interpretability BID5 and curriculum learning BID3 , no generally-agreed-upon definition seems to exist for prototypes, or their characteristics.", "For modern deep-learning models, whose behavior is often inscrutable, even the very existence and usefulness of prototypical examples has seemed uncertain until the recent work of Stock & Cisse (2017) .Inspired", "by that work we (1) identify a set of desirable properties for prototypicality definitions; (2) systematically explore different metrics used in prior work, as well as new metrics we develop, for identifying prototypical examples in both training and test data; (3) study the characteristics of those metrics' prototypes and their complement set-the outliers-using both quantitative measures and a qualitative human study; and, (4) evaluate the usefulness of prototypes for machine-learning purposes such as reducing sample complexity or improving adversarial robustness and interpretability.Our prototypicality metrics are based on adversarial robustness, retraining stability, ensemble agreement, and differentially-private learning. As an independent", "result, we show that predictive stability under retraining strongly correlates with adversarial distance, and may be used as an approximation.Unequivocally, we find that distinct sets of prototypical and outlier examples exist for the datasets we consider: MNIST (LeCun et al., 2010) , Fashion-MNIST (Xiao et al., 2017) , CIFAR-10 (Krizhevsky & Hinton, 2009), and ImageNet (Russakovsky et al., 2015) . Between all of our", "metrics, as well as human evaluators, there is overall agreement on the examples that are prototypes and those that are outliers. Furthermore, the differences", "between metrics constitute informative exceptions, e.g., identifying uncommon submodes in the data as well as spurious, ambiguous, or misleading examples.Usefully, there are advantages to training models using only prototypical examples: the models learn much faster, their accuracy loss is not great and occurs almost entirely on outlier test examples, and the models are both easier to interpret and more adversarially robust. 
Conversely, at the same sample", "complexity, significantly higher overall accuracy can be achieved by training models exclusively on outliers, once erroneous and misleading examples have been eliminated from the dataset.", "This paper explores prototypes: starting with the properties we would like them to satisfy, then evaluating metrics for computing them, and discussing how we can utilize them during training.", "The five metrics we study are all highly correlated, and capture human intuition behind what is meant by \"prototypical\".", "When the metrics disagree on the prototypicality of an example, we can often learn something interesting about that example (e.g., that it is from a rare submode of a class).", "Further, we explore the many reasons to utilize prototypes: we find that models trained on prototypes often have simpler decision boundaries and are thus more adversarially robust.", "However, training only on prototypes often yields inferior accuracy compared to training on outliers.", "We believe that further exploring metrics for identifying prototypes and developing methods for using them during training is an important area of future work, and hope that our analysis will be useful towards that end goal." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2857142686843872, 0.1538461446762085, 0.14999999105930328, 0.1428571343421936, 0.2545454502105713, 0.15094339847564697, 0.2142857164144516, 0.18518517911434174, 0.17142856121063232, 0.1538461446762085, 0.17777776718139648, 0.1666666567325592, 0.12244897335767746, 0.1538461446762085, 0.09090908616781235, 0.11320754140615463, 0.19607841968536377, 0.05405404791235924, 0.17543859779834747 ]
r1xyx3R9tQ
true
[ "We can identify prototypical and outlier examples in machine learning that are quantifiably very different, and make use of them to improve many aspects of neural networks." ]
[ "In this work, we propose the Sparse Deep Scattering Croisé Network (SDCSN) a novel architecture based on the Deep Scattering Network (DSN).", "The DSN is achieved by cascading wavelet transform convolutions with a complex modulus and a time-invariant operator.", "We extend this work by first,\n", "crossing multiple wavelet family transforms to increase the feature diversity while avoiding any learning.", "Thus providing a more informative latent representation and benefit from the development of highly specialized wavelet filters over the last decades.", "Beside, by combining all the different wavelet representations, we reduce the amount of prior information needed regarding the signals at hand.\n", "Secondly, we develop an optimal thresholding strategy for over-complete filter banks that regularizes the network and controls instabilities such as inherent non-stationary noise in the signal.", "Our systematic and principled solution sparsifies the latent representation of the network by acting as a local mask distinguishing between activity and noise.", "Thus, we propose to enhance the DSN by increasing the variance of the scattering coefficients representation as well as improve its robustness with respect to non-stationary noise.\n", "We show that our new approach is more robust and outperforms the DSN on a bird detection task.", "Modern Machine Learning focuses on developing algorithms to tackle natural machine perception tasks such as speech recognition, computer vision, recommendation among others.", "Historically, some of the proposed models were based on well-justified mathematical tools from signal processing such as Fourier analysis.", "Hand-crafted features were then computed based on those tools and a classifier was trained supervised for the task of interest.", "However, such theory-guided approaches have become almost obsolete with the growth of computational power and the advent of high-capacity models.", "As such, over the past decade the standard solution evolved around deep neural networks (DNNs).", "While providing state-of-the-art performance on many benchmarks, at least two pernicious problems still plague DNNs: First, the absence of stability in the DNN's input-output mapping.", "This has famously led to adversarial attacks where small perturbations of the input lead to dramatically different outputs.", "In addition, this lack of control manifests in the detection thresholds (i.e: ReLU bias) of DNNs, rendering them prone to instabilities when their inputs exhibit non-stationary noise and discontinuities.", "Second, when inputs have low SNR, or classes are unbalanced, the stability of DNNs is cantilevered.", "A common approach to tackle this difficulty is to increase both the size of the training set and the number of parameters of the network resulting in a longer training time and a costly labeling process.", "In order to alleviate these issues we propose the use of the DSN by creating a new non-linearity based on continuous wavelet thresholding.", "Thus our model, inherits the mathematical guarantees intrinsic to the DSN regarding the stability, and improves the control via wavelet thresholding method.", "Then, in order to produce time-frequency representation that are not biased toward a single wavelet family, we propose to combine diverse wavelet families throughout the network.", "Increasing the variability of the scattering coefficient, we improve the linearization capability of the DSN and reduce the need of an expert knowledge regarding the choice of 
specific filter bank with respect to each input signal. The paper is organized as follows: Sections 1.1 and 1.2 are devoted to the related work and the contributions of the paper; Section 2 presents the theoretical results, where 2.1 is dedicated to the network architecture and its properties and 2.2 provides the milestones of our thresholding method; Section 2.3 then shows the characterization, via latent representations, of our network on different events of the Freefield1010 audio scenes dataset.", "Finally, the evaluation of our architecture and its comparison to the DSN on a bird detection task are shown in 2.4.", "The appendix is divided into three parts: Appendix A provides both the pre-requisites and the details about building the wavelet dictionary used to create our architecture; Appendix B shows additional results on the sparsity of the SDCSN latent representations; Appendix C shows mathematical details and proofs for the over-complete thresholding non-linearity.", "We presented an extension of the scattering network so that one can leverage multiple wavelet families simultaneously.", "Via a specific topology, cross-family representations are computed that carry crucial information, as we demonstrated experimentally, allowing us to significantly outperform standard scattering networks.", "We then motivated and proposed an analytical derivation of an optimal over-complete basis thresholding that is input adaptive.", "It provides greater sparsity in the representation as well as a measure of filter-bank fitness.", "Again, we provided experimental validation of the use of our thresholding technique, proving the robustness implied by such a non-linearity.", "Finally, the ability to perform active denoising has been shown to be crucial, as we demonstrated that even in a large-scale setting, a standard machine learning approach coupled with the SN fails to discard non-stationary noise.", "This, coupled with the denoising ability of our approach, should provide real-world applications the stability needed for consistent results and prediction control. Among the possible extensions is one adapting the technique to convolutional neural networks such that it provides robustness with respect to adversarial attacks.", "Furthermore, a joint scattering and DNN model will inherit the benefits presented with our technique, as our layers are the ones closest to the input.", "Hence, denoising will benefit the inner layers, the unconstrained standard DNN layers.", "Finally, it is possible to perform more consistent best basis selection a la maxout networks.", "In fact, our thresholding technique can be linked to an optimised ReLU based thresholding.", "In this scheme, applying best basis selection based on the empirical risk would thus become equivalent to the pooling operator of a maxout network.", "A BUILDING A DEEP CROISÉ SCATTERING NETWORK A.1", "CONTINUOUS WAVELET TRANSFORM \"By oscillating it resembles a wave, but by being localized it is a wavelet\".", "Yves Meyer. Wavelets were first introduced for high resolution seismology BID21 and then developed theoretically by Meyer et al.
BID14 .", "Formally, wavelet is a function ψ ∈ L 2 such that: DISPLAYFORM0 it is normalized such that ψ L 2 = 1.", "There exist two categories of wavelets, the discrete wavelets and the continuous ones.", "The discrete wavelets transform are constructed based on a system of linear equation.", "These equations represent the atom's property.", "These wavelet when scaled in a dyadic fashion form an orthonormal atom dictionary.", "Withal, the continuous wavelets have an explicit formulation and build an over-complete dictionary when successively scaled.", "In this work, we will focus on the continuous wavelets as they provide a more complete tool for analysis of signals.", "In order to perform a time-frequency transform of a signal, we first build a filter bank based on the mother wavelet.", "This wavelet is names the mother wavelet since it will be dilated and translated in order to create the filters that will constitute the filter bank.", "Notice that wavelets have a constant-Q property, thereby the ratio bandwidth to center frequency of the children wavelets are identical to the one of the mother.", "Then, the more the wavelet atom is high frequency the more it will be localized in time.", "The usual dilation parameters follows a geometric progression and belongs to the following set: DISPLAYFORM1 .", "Where the integers J and Q denote respectively the number of octaves, and the number of wavelets per octave.", "In order to develop a systematic and general principle to develop a filter bank for any wavelet family, we will consider the weighted version of the geometric progression mentioned above, that is: DISPLAYFORM2 .", "In fact, the implementation of wavelet filter bank can be delicate since the mother wavelet has to be define at a proper center frequency such that no artifact or redundant information will appear in the final representation.", "Thus, in the section A.3 we propose a principled approach that allows the computation of the filter bank of any continuous wavelet.", "Beside, this re-normalized scaled is crucial to the comparison between different continuous wavelet.", "Having selected a geometric progression ensemble, the dilated version of the mother wavelet in the time are computed as follows: DISPLAYFORM3 , and can be calculated in the Fourier domain as follows: DISPLAYFORM4 Notice that in practice the wavelets are computed in the Fourier domain as the wavelet transform will be based on a convolution operation which can be achieved with more efficiency.", "By construction the children wavelets have the same properties than the mother one.", "As a result, in the Fourier domain:ψ λ = 0, ∀λ ∈ Λ .", "Thus, to create a filter bank that cover all the frequency support, one needs a function that captures the low frequencies contents.", "The function is called the scaling function and satisfies the following criteria: DISPLAYFORM5 Finally, we denote by W x, where W ∈ C N * (J * Q)×N is a block matrix such that each block corresponds to the filters at all scales for a given time.", "Also, we denote by S(W x)(λ, t) the reshape operator such that, DISPLAYFORM6 where ψ is the complex conjugate of ψ λ ." ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.2666666507720947, 0.1860465109348297, 0.12121211737394333, 0.24390242993831635, 0.21276594698429108, 0.1702127605676651, 0.1538461446762085, 0.2083333283662796, 0.2745097875595093, 0.17777776718139648, 0.08163265138864517, 0.08695651590824127, 0.1702127605676651, 0.13333332538604736, 0.04878048226237297, 0.15686273574829102, 0.13636362552642822, 0.2142857164144516, 0.1395348757505417, 0.2222222238779068, 0.40816324949264526, 0.260869562625885, 0.2745097875595093, 0.1428571343421936, 0.2083333283662796, 0.17910447716712952, 0.1818181723356247, 0.07999999821186066, 0.1395348757505417, 0.19512194395065308, 0.1818181723356247, 0.17543859779834747, 0.1764705777168274, 0.16326530277729034, 0.052631575614213943, 0.0952380895614624, 0.09999999403953552, 0.1599999964237213, 0, 0.0952380895614624, 0.08695651590824127, 0.09090908616781235, 0.20512820780277252, 0.09999999403953552, 0.060606058686971664, 0.14999999105930328, 0.1428571343421936, 0.1666666567325592, 0.260869562625885, 0.2448979616165161, 0.1702127605676651, 0.1463414579629898, 0.1904761791229248, 0.1463414579629898, 0.2857142686843872, 0.19999998807907104, 0.3404255211353302, 0.20000000298023224, 0.17391303181648254, 0.052631575614213943, 0.14999999105930328, 0.1304347813129425, 0.1818181723356247, 0.12765957415103912 ]
rkpqdGDeM
true
[ "We propose to enhance the Deep Scattering Network in order to improve control and stability of any given machine learning pipeline by proposing a continuous wavelet thresholding scheme" ]
[ "We propose a neural clustering model that jointly learns both latent features and how they cluster.", "Unlike similar methods our model does not require a predefined number of clusters.", "Using a supervised approach, we agglomerate latent features towards randomly sampled targets within the same space whilst progressively removing the targets until we are left with only targets which represent cluster centroids.", "To show the behavior of our model across different modalities we apply our model on both text and image data and very competitive results on MNIST.", "Finally, we also provide results against baseline models for fashion-MNIST, the 20 newsgroups dataset, and a Twitter dataset we ourselves create.", "Clustering is one of the fundamental problems of unsupervised learning.", "It involves the grouping of items into clusters such that items within the same cluster are more similar than items in different clusters.", "Crucially, the ability to do this often hinges upon learning latent features in the input data which can be used to differentiate items from each other in some feature space.", "Two key questions thus arise: How do we decide upon cluster membership?", "and How do we learn good representations of data in feature space?Spurred", "initially by studies into the division of animals into taxa BID31 , cluster analysis matured as a field in the subsequent decades with the advent of various models. These included", "distribution-based models, such as Gaussian mixture models BID9 ; densitybased models, such as DBSCAN BID11 ; centroid-based models, such as k-means.2 and hierarchical models, including agglomerative BID29 and divisive models BID13 .While the cluster", "analysis community has focused on the unsupervised learning of cluster membership, the deep learning community has a long history of unsupervised representation learning, yielding models such as variational autoencoders BID21 , generative adversarial networks BID12 , and vector space word models BID28 .In this paper, we", "propose using noise as targets for agglomerative clustering (or NATAC). As in BID1 we begin", "by sampling points in features space called noise targets which we match with latent features. During training we", "progressively remove targets and thus agglomerate latent features around fewer and fewer target centroids using a simple heuristic. To tackle the instability", "of such training we augment our objective with an auxiliary loss which prevents the model from collapsing and helps it learn better representations. We explore the performance", "of our model across different modalities in Section 3.Recently, there have been several attempts at jointly learning both cluster membership and good representations using end-to-end differentiable methods. Similarly to us, BID37 use", "a policy to agglomerate points at each training step but they require a given number of clusters to stop agglomerating at. BID23 propose a form of supervised", "neural clustering which can then be used to cluster new data containing different categories. BID25 propose jointly learning representations", "and clusters by using a k-means style objective. 
BID36 introduce deep embedding clustering (DEC", ") which learns a mapping from data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective (however, as opposed to the hard assignment we use, they optimize based on soft assignment).Additionally, there have been unsupervised clustering", "methods using nonnegative low-rank approximations BID40 which perform competitively to current neural methods on datasets such as MNIST.Unlike all of the above papers, our method does not require a predefined number of clusters.", "In this paper, we present a novel neural clustering method which does not depend on a predefined number of clusters.", "Our empirical evaluation shows that our model works well across modalities.", "We show that NATAC has competitive performance to other methods which require a pre-defined number of clusters.", "Further, it outperforms powerful baselines on Fashion-MNIST and text datasets (20 Newsgroups and a Twitter hashtag dataset).", "However, NATAC does require some hyperparameters to be tuned, namely the dimensionality of the latent space, the length of warm-up training and the values for the loss coefficient λ.", "However, our experiments indicate that NATAC models are fairly robust to hyperparameter changes.Future work Several avenues of investigation could flow from this work.", "Firstly, the effectiveness of this method in a semi-supervised setting could be explored using a joint reconstruction and classi-fication auxiliary objective.", "Another interesting avenue to explore would be different agglomerative policies other than delete-and-copy.", "Different geometries of the latent space could also be considered other than a unit normalized hypersphere.", "To remove the need of setting hyperparameters by hand, work into automatically controlling the coefficients (e.g. using proportional control) could be studied.", "Finally, it would be interesting to see whether clustering jointly across different feature spaces would help with learning better representations.B EXAMPLES FROM THE FASHION-MNIST DATASET.", "We experimented with using polar coordinates early on in our experiments.", "Rather than using euclidean coordinates as the latent representation, z is considered a list of angles θ 1 , θ 2 · · · θ n where θ 1 · · · θ n−1 ∈ [0, π] and θ n ∈ [0, 2π].", "However, we found that the models using polar geometry performed significantly worse than those with euclidean geometry.Additionally, we also experimented with not L2 normalizing the output of the encoder network.", "We hypothesized that the model would learn a better representation of the latent space by also \"learning\" the geometry of the noise targets.", "Unfortunately, the unnormalized representation caused the noise targets to quickly collapse to a single point." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.1666666567325592, 0.380952388048172, 0.0555555522441864, 0.06666666269302368, 0.0714285671710968, 0.11764705181121826, 0.14814814925193787, 0, 0, 0.09999999403953552, 0.11764705181121826, 0, 0.08695651590824127, 0.08695651590824127, 0, 0.0714285671710968, 0.05882352590560913, 0.04878048598766327, 0.2666666507720947, 0.07407406717538834, 0.27272728085517883, 0.09090908616781235, 0.20000000298023224, 0.37037035822868347, 0, 0.3199999928474426, 0.0833333283662796, 0.0624999962747097, 0.06451612710952759, 0.1428571343421936, 0, 0.1666666567325592, 0.06666666269302368, 0.060606058686971664, 0, 0.10810810327529907, 0.05882352590560913, 0.14814814925193787, 0.0952380895614624 ]
BJvVbCJCb
true
[ "Neural clustering without needing a number of clusters" ]
[ "Recent work on explanation generation for decision-making problems has viewed the explanation process as one of model reconciliation where an AI agent brings the human mental model (of its capabilities, beliefs, and goals) to the same page with regards to a task at hand.", "This formulation succinctly captures many possible types of explanations, as well as explicitly addresses the various properties -- e.g. the social aspects, contrastiveness, and selectiveness -- of explanations studied in social sciences among human-human interactions.", "However, it turns out that the same process can be hijacked into producing \"alternative explanations\" -- i.e. explanations that are not true but still satisfy all the properties of a proper explanation.", "In previous work, we have looked at how such explanations may be perceived by the human in the loop and alluded to one possible way of generating them.", "In this paper, we go into more details of this curious feature of the model reconciliation process and discuss similar implications to the overall notion of explainable decision-making.", "So far we have only considered explicit cases of deception.", "Interestingly, existing approaches in model reconciliation already tend to allow for misconceptions to be ignored if not actively induced by the agent." ]
[ 0, 0, 1, 0, 0, 0, 0 ]
[ 0.1071428507566452, 0.0416666604578495, 0.1666666567325592, 0.09090908616781235, 0.04878048226237297, 0, 0.15789473056793213 ]
HkxbEp2QqE
true
[ "Model Reconciliation is an established framework for plan explanations, but can be easily hijacked to produce lies." ]
[ "We consider new variants of optimization algorithms.", "Our algorithms are based on the observation that mini-batch of stochastic gradients in consecutive iterations do not change drastically and consequently may be predictable.", "Inspired by the similar setting in online learning literature called Optimistic Online learning, we propose two new optimistic algorithms for AMSGrad and Adam, respectively, by exploiting the predictability of gradients. ", "The new algorithms combine the idea of momentum method, adaptive gradient method, and algorithms in Optimistic Online learning, which leads to speed up in training deep neural nets in practice.", "Nowadays deep learning has been shown to be very effective in several tasks, from robotics (e.g. BID15 ), computer vision (e.g. BID12 ; BID9 ), reinforcement learning (e.g. BID18 , to natural language processing (e.g. ).", "Typically, the model parameters of a state-of-the-art deep neural net is very high-dimensional and the required training data is also in huge size.", "Therefore, fast algorithms are necessary for training a deep neural net.", "To achieve this, there are number of algorithms proposed in recent years, such as AMSGRAD (Reddi et al. (2018) ), ADAM BID13 ), RMSPROP (Tieleman & Hinton (2012) ), ADADELTA (Zeiler (2012) ), and NADAM BID6 ), etc.All the prevalent algorithms for training deep nets mentioned above combines two ideas: the idea of adaptivity in ADAGRAD BID7 BID17 ) and the idea of momentum as NESTEROV'S METHOD BID19 ) or the HEAVY BALL method BID20 ).", "ADAGRAD is an online learning algorithm that works well compared to the standard online gradient descent when the gradient is sparse.", "The update of ADAGRAD has a notable feature: the learning rate is different for different dimensions, depending on the magnitude of gradient in each dimension, which might help in exploiting the geometry of data and leading to a better update.", "On the other hand, NESTEROV'S METHOD or the Momentum Method BID20 ) is an accelerated optimization algorithm whose update not only depends on the current iterate and current gradient but also depends on the past gradients (i.e. momentum).", "State-of-the-art algorithms like AMSGRAD (Reddi et al. (2018) ) and ADAM BID13 ) leverages these two ideas to get fast training for neural nets.In this paper, we propose an algorithm that goes further than the hybrid of the adaptivity and momentum approach.", "Our algorithm is inspired by OPTIMISTIC ONLINE LEARNING BID4 ; Rakhlin & Sridharan (2013) ; Syrgkanis et al. (2015) ; BID0 ).", "OPTIMISTIC ONLINE LEARNING considers that a good guess of the loss function in the current round of online learning is available and plays an action by exploiting the good guess.", "By exploiting the guess, those algorithms in OPTIMISTIC ONLINE LEARNING have regret in the form of O( T t=1 g t − m t ), where g t is the gradient of loss function in round t and m t is the \"guess\" of g t before seeing the loss function in round t (i.e. 
before getting g t ).", "This kind of regret can be much smaller than O( √ T ) when one has a good guess m t of g t .", "We combine the OPTIMISTIC ONLINE LEARNING idea with the adaptivity and the momentum ideas to design new algorithms in training deep neural nets, which leads to NEW-OPTIMISTIC-AMSGRAD and NEW-OPTIMISTIC-ADAM.", "We also provide theoretical analysis of NEW-OPTIMISTIC-AMSGRAD.", "The proposed OPTIMISTIC-algorithms not only adapt to the informative dimensions and exhibit momentums but also take advantage of a good guess of the next gradient to facilitate acceleration.", "We evaluate our algorithms with BID13 ), (Reddi et al. (2018) ) and BID5 ).", "Experiments show that our OPTIMISTIC-algorithms are faster than the baselines.", "We should explain that BID5 proposed another version of optimistic algorithm for ADAM, which is referred to as ADAM-DISZ in this paper.", "We apply the idea of BID5 ) on AMSGRAD, which leads to AMSGRAD-DISZ.", "Both ADAM-DISZ and AMSGRAD-DISZ are used as baselines." ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.7777777910232544, 0.11428570747375488, 0.20000000298023224, 0.3243243098258972, 0.0476190447807312, 0.1875, 0.3636363446712494, 0.1690140813589096, 0, 0.09302325546741486, 0.045454543083906174, 0.19607843458652496, 0, 0.0555555522441864, 0.08510638028383255, 0.060606054961681366, 0.277777761220932, 0.2222222238779068, 0.0555555522441864, 0.1599999964237213, 0, 0.1818181723356247, 0.1666666567325592, 0 ]
HkghV209tm
true
[ "We consider new variants of optimization algorithms for training deep nets." ]
[ "Using Recurrent Neural Networks (RNNs) in sequence modeling tasks is promising in delivering high-quality results but challenging to meet stringent latency requirements because of the memory-bound execution pattern of RNNs.", "We propose a big-little dual-module inference to dynamically skip unnecessary memory access and computation to speedup RNN inference.", "Leveraging the error-resilient feature of nonlinear activation functions used in RNNs, we propose to use a lightweight little module that approximates the original RNN layer, which is referred to as the big module, to compute activations of the insensitive region that are more error-resilient.", "The expensive memory access and computation of the big module can be reduced as the results are only used in the sensitive region.", "Our method can reduce the overall memory access by 40% on average and achieve 1.54x to 1.75x speedup on CPU-based server platform with negligible impact on model quality.", "Recurrent Neural Networks (RNNs) play a critical role in many natural language processing (NLP) tasks, such as machine translation Wu et al., 2016) , speech recognition (Graves et al., 2013; He et al., 2019) , and speech synthesis , owing to the capability of modeling sequential data.", "These RNN-based services deployed in both data-center and edge devices often process inputs in a streaming fashion, which demands a real-time interaction.", "For instance, in cloud-based translation tasks, multiple requests need to be served with very stringent latency limit, where inference runs concurrently and individually (Park et al., 2018) .", "For on-device speech recognition as an automated assistant, latency is the primary concern to pursue a fast response (He et al., 2019) .", "However, serving RNN-based models in latency-sensitive scenarios is challenging due to the low data reuse, and thus low resource utilization as memory-bound General Matrix-Vector multiplication (GEMV) is the core compute pattern of RNNs.", "Accessing weight matrix from off-chip memory is the bottleneck of GEMV-based RNN execution as the weight data almost always cannot fit in on-chip memory.", "Moreover, accessing weights repeatedly at each time-step, especially in sequenceto-sequence models, makes the memory-bound problem severer.", "Subsequently, the on-chip computing resources would be under-utilized.", "Although batching is a walk-around for low-utilization, using a large batch size is not favored in latency-sensitive scenarios such as speech recognition and translation.", "In essence, the RNN inference is not a simple GEMV.", "With non-linearity followed the GEMV operation as the activation functions, the RNN inference operation is \"activated\" GEMV.", "These nonlinear activation functions as used in neural networks bring error resilience.", "As shown in Figure 1 , sigmoid and tanh functions in Gated RNNs such as Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Unit (GRU) have insensitive regionsgreen shaded regions -where the outputs are saturated and resilient to errors in pre-activation accumulated results.", "In other words, not all computations in RNNs need to be accurate.", "Can we leverage this error resilience in RNNs to reduce the memory access and eventually achieve speedup?", "To this end, we propose a big-little dual-module inference that regarding the original RNN layer as the big module, and use a parameterized little module to approximate the big module to help reduce redundant weight accesses.", "The philosophy of 
dual-module inference is using approximated results computed by the memory-efficient little module in the insensitive region, and using accurate results computed by the memory-intensive big module in the sensitive region.", "For this reason, the final outputs are the mixture of the big-little module.", "With the memory-efficient little module computes for the insensitive region, we can reduce the expensive data access and computation of the big module and thus reduce overall memory access and computation cost.", "The (in)sensitive region is dynamically determined using the little module results.", "Because of the error resilience, using approximated results in the insensitive region has a negligible impact on the overall model quality but creates a significant acceleration potential.", "Given the trade-off between accuracy and efficiency, the little module needs to be sufficiently accurate while being as much lightweight as possible.", "To achieve this, we first use a dimension reduction method -random projection -to reduce the parameter size of the little module and thus reducing data accesses.", "Then, we quantize the weights of the little module to lower the overhead further.", "Because we only need the little module outputs in the insensitive region that is error-resilient, we can afford aggressively low bit-width.", "Compared with common sparsification schemes, our hybrid approach avoids indexing overheads and therefore successfully achieves practical speedup.", "We evaluate our method on language modeling and neural machine translation using RNN-based models and measure the performance, i.e., wall-clock execution time, on CPU-based server platform.", "With overall memory access data reduced by 40% on average, our method can achieve 1.54x to 1.75x speedup with negligible impact on model quality.", "Dimension reduction is an integral part of our dual-module inference method to reduce the number of parameters and memory footprint.", "Here, we study the impact of different levels of dimension reduction on the model quality and performance.", "We conduct experiments on language modeling using single-layer LSTM of 1500 hidden units.", "We quantize the little module to INT8 and reduce the hidden dimension from 1500 to three different levels, which are calculated by Sparse Random Projection.", "We fix the insensitive ratio to be 50% across this set of experiments.", "As we can see in Table 5 , the higher dimension of the little module, the better approximation the little module can perform.", "For instance, when we reduce hidden size to 966 and quantize to INT8, the dual-module inference can achieve slightly better quality -PPL of 80.40 -and 1.37x speedup.", "More aggressive dimension reduction can further have more speedup at the cost of more quality degradation: hidden dimension reduced to 417 and 266 can have 1.67x and 1.71x speedup but increase PPL by 0.72 and 2.87, respectively.", "We further show the overhead of performing the computation of the little module.", "As listed in the last three columns in Table 5 , we measure the execution time of performing dimension reduction on inputs by Sparse Random Projection, computation of the little module, and computation of the big module; the execution time is normalized to the baseline case, i.e., the execution time of standard LSTM, to highlight the percentage of overheads.", "When the hidden dimension is reduced to 966, the overhead of the little module accounts 22% while the execution time of the big module is cut off by half 3 .", "In our 
experiments, we choose = 0.5 as the default parameter in sparse random projection as it demonstrated good quality and speedup trade-off by our study.", "When further reducing the hidden dimension to 266, there is only a slight improvement on speedup compared with the hidden size of 417 in the little module, where the overhead of the little module is already small enough, but the quality dropped significantly.", "Quantizing the weights of the little module is another integral part of keeping memory footprint small.", "We show different quantization levels the impact on model quality and parameter size.", "After training the little module, we can quantize its weights to lower precision to reduce the memory accessing on top of dimension reduction.", "As we can see in Table 6 , more aggressive quantization leads to smaller parameter size that can reduce the overhead of computing the little module; on the other hand, the approximation of the little module is compromised by quantization.", "We can quantize the little module up to INT4 without significant quality degradation.", "Using lower precision would degrade the quality while decreasing the parameter size.", "For performance evaluation, we choose INT8 as the quantization level since we leverage off-the-shelf INT8 GEMM kernel in MKL.", "We expect more speedup once the little module overhead can be further reduced by leveraging INT4 compute kernels.", "As we aim at the memory-bound problem of RNN-based inference applications, we limit the discussion on related work to RNN inference acceleration.", "Although we only evaluate our dual-module inference method on standard LSTMs/GRUs, we believe our method can be applied to many newly released sequence modeling networks (Shen et al., 2019; as we leverage the commonly observed error-resilience of non-linear activation functions.", "In this paper, we describe a big-little dual-module inference method to mitigate the memory-bound problem in serving RNN-based models under latency-sensitive scenarios.", "We leverage the error resilience of nonlinear activation functions by using the lightweight little module to compute for the insensitive region and using the big module with skipped memory access and computation to compute for the sensitive region.", "With overall memory access reduced by near half, our method can achieve 1.54x to 1.75x wall-clock time speedup without significant degradation on model quality." ]
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0.043478257954120636, 0.47058823704719543, 0.1111111044883728, 0.20512819290161133, 0.17777776718139648, 0.10344827175140381, 0.10526315122842789, 0.08695651590824127, 0.04878048226237297, 0.0833333283662796, 0.1538461446762085, 0, 0, 0.14999999105930328, 0.2142857164144516, 0.12903225421905518, 0, 0.03389830142259598, 0.06666666269302368, 0.17142856121063232, 0.25, 0.2857142686843872, 0.13793103396892548, 0.19512194395065308, 0.13793103396892548, 0.1428571343421936, 0.10526315122842789, 0.1860465109348297, 0.06666666269302368, 0, 0.05714285373687744, 0.13636362552642822, 0.1428571343421936, 0.21621620655059814, 0.12121211737394333, 0.19354838132858276, 0.1463414579629898, 0.12903225421905518, 0.0555555522441864, 0.1304347813129425, 0.11764705181121826, 0.1428571343421936, 0.09999999403953552, 0.09756097197532654, 0.09302324801683426, 0.11538460850715637, 0.1249999925494194, 0.12903225421905518, 0.10256409645080566, 0.07999999821186066, 0.06451612710952759, 0, 0, 0.1111111044883728, 0.1621621549129486, 0.072727270424366, 0.09999999403953552, 0.31111109256744385, 0.1395348757505417 ]
SJe3KCNKPr
true
[ "We accelerate RNN inference by dynamically reducing redundant memory access using a mixture of accurate and approximate modules." ]