| source (sequence) | source_labels (sequence) | rouge_scores (sequence) | paper_id (string, 9–11 chars) | ic (unknown) | target (sequence) |
|---|---|---|---|---|---|
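For reference, below is a minimal Python sketch of how a single row with this schema might be inspected. The example row is abbreviated from the first record shown below (only two of its 67 source sentences and scores are kept, and the ROUGE values are truncated for brevity); the field names follow the header above. Reading `source_labels` as an extractive-oracle flag and `rouge_scores` as per-sentence scores against the target summary is an assumption, not something stated in the preview.

```python
# Sketch: inspect one row of the dataset, assuming it is available as a plain Python dict.
# Values are abbreviated/truncated from the first row shown below this block.

row = {
    "source": [
        "We present an approach for expanding taxonomies with synonyms, or aliases.",
        "We target large shopping taxonomies, with thousands of nodes.",
    ],
    # Binary flag per source sentence; in the full row a single 1 appears to mark
    # the sentence closest to the target summary (assumed extractive oracle label).
    "source_labels": [0, 0],
    # Per-sentence ROUGE scores, presumably computed against the target summary.
    "rouge_scores": [0.2727, 0.3000],
    "paper_id": "rJx2g-qaTm",  # OpenReview-style identifier
    "ic": True,
    "target": [
        "We use machine learning to generate synonyms for large shopping taxonomies."
    ],
}

# Simple extractive baseline: pick the source sentence with the highest ROUGE score.
best_idx = max(range(len(row["rouge_scores"])), key=row["rouge_scores"].__getitem__)
print(row["source"][best_idx])
```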
[
"We present an approach for expanding taxonomies with synonyms, or aliases.",
"We target large shopping taxonomies, with thousands of nodes.",
"A comprehensive set of entity aliases is an important component of identifying entities in unstructured text such as product reviews or search queries.",
"Our method consists of two stages: we generate synonym candidates from WordNet and shopping search queries, then use a binary classifier to filter candidates.",
"We process taxonomies with thousands of synonyms in order to generate over 90,000 synonyms.",
"We show that using the taxonomy to derive contextual features improves classification performance over using features from the target node alone.We show that our approach has potential for transfer learning between different taxonomy domains, which reduces the need to collect training data for new taxonomies.",
"Semantic Networks (SN) represent entities, relationships between entities, and their properties.",
"Semantic Networks may represent a broad variety of information, from named entities, such as persons or places, to abstract concepts.",
"The term \"knowledge graph\" is also used to describe this form of structured data.",
"One of the properties commonly encoded in a SN are the primary name and aliases of an entity in multiple languages.",
"For example, Wikidata 1 entity Q2 has multilingual names, such as Earth or Blue Planet (in English), or Tierra (in Spanish).",
"Semantic networks may include sub-structures based on a subset of the relations defined, for example, taxonomies which define type-subtype relations; for example, ConceptNet includes the WordNet taxonomy BID28 as a subset of its nodes and relations.Synonyms, or aliases, are equivalent names for entities in a SN.",
"For example, \"washing machine\" and \"washer\" can refer to the same concept of an appliance type.",
"Synonyms enable improved performance in a variety of SN applications.",
"For entity extraction from text BID7 Hersh, 2005, Agrawal et al., 2008] , wikification BID18 BID5 , or natural language instruction grounding BID16 , a broader set of synonyms improves recall.",
"In applications which use SN to generate prompts for users, such as conversational agents BID11 BID31 or generating explanations of the system's state in natural language , a richer set of synonyms results in more varied utterances.In this paper, we focus on the problem of expanding taxonomies with synonyms for applications in which entities are complex concepts arranged into taxonomies designed to facilitate browsing the product catalog on amazon.com.",
"The ontologies contain product type taxonomies, which are the focus for this work, in addition to other information such as attributes for refining products in search results.",
"In addition to distinct product types, the taxonomies contain nodes which are complex concepts, for example combinations of types and attributes, or groupings of multiple types.",
"For example, the node \"Gloves & Protective Gear\" groups together gloves and other gear; the node \"Automatic Irrigation Equipment\" describes irrigation equipment that has automation features.The primary application of the synonyms generated using our method is to identify direct references to the taxonomy nodes in text such as search queries.",
"Having a broader set of synonyms for taxonomy nodes enables a broader query coverage for experiences that are specific to products in the taxonomy, for example, showing the best selling products under a given category.",
"It is thus important to the users' experience that node synonyms are as accurate as possible, within the broader context of the taxonomy.",
"For example, given the node \"household bathroom surface cleaners\" we output synonyms such as \"family bathroom surface cleaner\" and \"house bath surface cleansing.\"",
"Our method is robust to errors of word sense compatibility, for example we reject \"mack game restrainer\" as a synonym for \"mac game controllers,\" or \"store circuit board\" is a rejected candidate for \"memory cards.\"The taxonomies are authored by experts familiar with the respective shopping domains to facilitate navigation and browsing (Section 4.1).",
"They contain over 4,300 nodes and have depths of over 30 nodes; in addition to taxonomical relationships, they represent type properties, possible values, node equivalence, and other information.",
"In this paper, we identify each taxonomy by its root node name.",
"For the example shown in Figure 1 , the taxonomy \"Baby Products\" includes, among 15 other nodes, a category node named \"Car Seats and Accessories.\"",
"This has the children \"Car Seats,\" \"Car Seat Bases,\" \"Car Beds,\" and \"Accessories.\"",
"The \"Accesories\" node has 17 children (e.g. \"Cup Holders\" and \"Seat Liners\"), while the \"Car Seats\" node has five children grouped by age group and chair type.",
"We note the fine granularity of nodes, which includes distinctions based on product types, features, indented use, and other criteria dependent on the domain; concepts range from general to specific in fine increments, with children refining and specifying the parent node.",
"The taxonomy nodes we target have complex names, for example \"Convertible Child Safety Car Seats\" and are thus unlikely to be frequently found in large natural language text corpora with sufficient frequency in order to extract synonyms from unstructured text.We present a method that leverages similarity within the taxonomy to evaluate synonym candidates obtained using low-precision, high-recall methods.",
"Our goal is to enable collecting possible synonyms from a broad range of sources, and output a final set of synonyms consistent to a single standard.",
"This method enables expansion with synonyms for complex SN that are not common in typical text corpora, such as shopping taxonomies for browsing.",
"The main advantages of our approach are that:",
"1) it does not depend on frequent mentions in corpora of entities in the taxonomy;",
"2) it identifies synonyms that fit within the broader structure of a taxonomy contained within the graph, and outputs synonyms of similar specificity to the original name;",
"3) the classifier uses domain-independent features, enabling cross-domain predictions.Our method consists of the following stages ( Figure 2 ):1.",
"Generate synonym candidates for each node of the taxonomy.",
"We experimented with two methods of candidate generation.",
"First, we primarily used a method based on Figure 1 : Sample section of a taxonomy used in this work, which is designed for exploring and filtering online shopping catalogs.",
"We highlight the path from the root node, \"Baby Products,\" to a leaf node, \"Child Safety Booster Car Seats.\"",
"Each node prefixed by a + sign indicates the node has children; leaf nodes are marked by a -.",
"For compactness, we enumerate instead of indenting some of the 15 children of the root node.Figure 2: Overview of our method.",
"We start with product taxonomies designed for browsing a large online shopping catalog, described in Section 4.1, and generate synonym candidates for each node using a thesaurus such as WordNet (Section 3.1).",
"We then classify the set of candidates using a binary classifier (Section 3.2) to output the final set of synonyms.WordNet BID23 , to generate the cartesian product of concept-level synonyms that are present in the node's name (Section 3.1).",
"Secondly, we show additional results on classifying shopping search queries (Section 4.4).2.",
"Filter synonym candidates using a binary classifier (Section 3.2).",
"The classifier uses features derived from",
"a) similarity between the candidate the target node, and",
"b) similarity features between the candidate and other nodes in the taxonomy.",
"Our goal is to avoid producing synonyms more general or more specific than the original node name, such that the synonyms are consistent with the taxonomy as a whole.",
"The classifier uses features independent of the taxonomy vocabulary, making our method suitable for transfer learning by predicting on new taxonomies that do not have training data available.",
"Transfer learning is one method of interest to reduce the need to collect training labels for new taxonomies.The rest of the paper is structured as follows.",
"We first review relevant literature.",
"We then describe the taxonomies we use in this work (Section 4.1), and the methods of obtaining synonym candidates and classifying them.",
"We then evaluate the binary synonym classifier using a corpus of annotations collected using crowdsourcing for synonyms generated using the thesaurus.",
"We also include cross-domain learning experiments to evaluate the potential for training the classifier on one taxonomy and predicting on synonyms for different taxonomy (Section 4.3).",
"Furthermore, we conducted a separate evaluation using an alternative method of selecting synonym candidates, which we will briefly summarize: we associated search queries with taxonomy names using customer purchases, and used these search terms as synonym candidates (Section 4.4).",
"We evaluate the impact of using domain-specific knowledge, specifically lists of known brand names, which may be closely associated but not synonymous with product categories, to improve synonym filtering.",
"We conclude the paper with observations about the role of taxonomy-wide similarity in predicting synonymy and describe future directions.",
"Entity aliases are an important component of ontology construction, enabling entity recognition in text and generating natural language references to entities.",
"We demonstrate a method for identifying synonyms for large taxonomies used for online shopping.",
"Our method consists of two complementary approaches of selecting synonym candidates, and a candidate filtering stage which uses a classifier that includes structural similarity features.",
"We show that using structural similarity features, such as comparing synonym candidates with the parent, children, or root nodes in the taxonomy, improves classification accuracy, and that the method is applicable to transfer learning between taxonomies.",
"We include an additional evaluation on using search queries associated statistically with the taxonomy nodes via user behavior.",
"This method extracts a broader vocabulary for the candidates, including tokens that are not common words, such as proper names, model numbers, or years.",
"We show that using domain knowledge such as brand name definitions improves classification performance for candidates extracted from search queries, which conflate in the same context types, brands and other terms.In future work we will experiment with taxonomies in languages other than English.",
"We will explore the potential for predicting synonyms in other languages than the training language, similar to the experiments we showed for cross-domain prediction."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.27272728085517883,
0.29999998211860657,
0,
0.23529411852359772,
0.4166666567325592,
0.21276594698429108,
0,
0.06451612710952759,
0.07999999821186066,
0,
0,
0.08163265138864517,
0.07407406717538834,
0,
0.04999999701976776,
0.1764705926179886,
0.1111111044883728,
0.17142856121063232,
0.07017543911933899,
0.1538461446762085,
0.12903225421905518,
0.0624999962747097,
0.13333332538604736,
0.054054051637649536,
0,
0,
0,
0,
0.08510638028383255,
0.1538461446762085,
0.1249999925494194,
0.24242423474788666,
0,
0,
0.12121211737394333,
0,
0.09999999403953552,
0.10526315122842789,
0.10256409645080566,
0.13793103396892548,
0,
0,
0.2790697515010834,
0.1860465109348297,
0.07999999821186066,
0,
0,
0,
0,
0.1111111044883728,
0.1538461446762085,
0.23529411852359772,
0.1249999925494194,
0.1875,
0.20689654350280762,
0.29411762952804565,
0,
0.10256409645080566,
0.06896550953388214,
0.0624999962747097,
0.52173912525177,
0,
0.1818181723356247,
0.06896550953388214,
0.05714285373687744,
0.11320754140615463,
0.25
] | rJx2g-qaTm | true | [
"We use machine learning to generate synonyms for large shopping taxonomies."
] |
[
"Relational reasoning, the ability to model interactions and relations between objects, is valuable for robust multi-object tracking and pivotal for trajectory prediction.",
"In this paper, we propose MOHART, a class-agnostic, end-to-end multi-object tracking and trajectory prediction algorithm, which explicitly accounts for permutation invariance in its relational reasoning.",
"We explore a number of permutation invariant architectures and show that multi-headed self-attention outperforms the provided baselines and better accounts for complex physical interactions in a challenging toy experiment.",
"We show on three real-world tracking datasets that adding relational reasoning capabilities in this way increases the tracking and trajectory prediction performance, particularly in the presence of ego-motion, occlusions, crowded scenes, and faulty sensor inputs.",
"To the best of our knowledge, MOHART is the first fully end-to-end multi-object tracking from vision approach applied to real-world data reported in the literature.",
"Real-world environments can be rich and contain countless types of interacting objects.",
"Intelligent autonomous agents need to understand both the objects and interactions between them if they are to operate in those environments.",
"This motivates the need for class-agnostic algorithms for tracking multiple objects-a capability that is not supported by the popular tracking-by-detection paradigm.",
"In tracking-by-detection, objects are detected in each frame independently, e.",
"g.",
", by a pre-trained deep convolutional neural network (CNN) such as YOLO (Redmon et al. (2016) ), and then linked across frames.",
"Algorithms from this family can achieve high accuracy, provided sufficient labelled data to train the object detector, and given that all encountered objects can be associated with known classes, but fail when faced with objects from previously unseen categories.",
"Hierarchical attentive recurrent tracking (HART) is a recently-proposed, alternative method for single-object tracking (SOT), which can track arbitrary objects indicated by the user (Kosiorek et al. (2017) ).",
"This is done by providing an initial bounding-box, which may be placed over any part of the image, regardless of whether it contains an object or what class the object is.",
"HART efficiently processes just the relevant part of an image using spatial attention; it also integrates object detection, feature extraction, and motion modelling into one network, which is trained fully end-to-end.",
"Contrary to tracking-by-detection, where only one video frame is typically processed at any given time to generate bounding box proposals, end-to-end learning in HART allows for discovering complex visual and spatio-temporal patterns in videos, which is conducive to inferring what an object is and how it moves.",
"In the original formulation, HART is limited to the single-object modality-as are other existing end-to-end trackers (Kahou et al. (2017); Rasouli Danesh et al. (2019) ; Gordon et al. (2018) ).",
"In this work, we present MOHART, a class-agnostic tracker with complex relational reasoning capabilities provided by a multi-headed self-attention module (Vaswani et al. (2017) ; Lee et al. (2019) ).",
"MOHART infers the latent state of every tracked object in parallel, and uses self-attention to inform per-object states about other tracked objects.",
"This helps to avoid performance loss under self-occlusions of tracked objects or strong camera motion.",
"Moreover, since the model is trained end-to-end, it is able to learn how to manage faulty or missing sensor inputs.",
"See fig. 1 for a high-level illustration of MOHART.",
"In order to track objects, MOHART estimates their states, which can be naturally used to predict future trajectories over short temporal horizons, which is especially useful for planning in the context of autonomous agents.",
"MOHART can be trained simultaneously for object tracking and trajectory prediction at the same time, thereby increasing statistical efficiency of learning.",
"In contrast to prior art, where trajectory prediction and object tracking are usually addressed as separate problems with unrelated solutions, our work show trajectory prediction and object tracking are best addressed jointly.",
"Section 2 describes prior art in tracking-by-detection, end-to-end tracking and predestrian trajectory prediction.",
"In Section 3, we describe our approach, which uses a permutation-invariant self-attention module to enable tracking multiple objects end-to-end with relational reasoning.",
"Section 4 contrasts our approach with multi-object trackers which do not explicitly enforce permutation invariance but have the capacity to learn it, simpler permutation-invariant architectures, as well as multiple single-object trackers running in parallel.",
"We show that multi-headed self-attention significantly outperforms other approaches.",
"Finally, in Section 5, we apply MOHART to real world datasets and show that permutation-invariant relational reasoning leads to consistent performance improvement compared to HART both in tracking and trajectory prediction.",
"With MOHART, we introduce an end-to-end multi-object tracker that is capable of capturing complex interactions and leveraging these for precise predictions as experiments both on toy and real world data show.",
"However, the experiments also show that the benefit of relational reasoning strongly depends on the nature of the data.",
"The toy experiments showed that in an entirely deterministic world relational reasoning was much less important than in a stochastic environment.",
"Amongst the real-world dataset, the highest performance gains from relational reasoning were achieved on the MOTChallenge dataset, which features crowded scenes, ego-motion and occlusions."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1875,
0.3243243098258972,
0.1538461446762085,
0.1860465109348297,
0.2857142686843872,
0,
0.1249999925494194,
0.06451612710952759,
0.09090908616781235,
0.05882352590560913,
0.04255318641662598,
0.10256409645080566,
0,
0,
0.07547169178724289,
0.052631575614213943,
0.20512820780277252,
0.3030303120613098,
0.07407406717538834,
0.06666666269302368,
0.1904761791229248,
0.13636362552642822,
0.12121211737394333,
0.10810810327529907,
0.1599999964237213,
0.4117647111415863,
0.13636362552642822,
0.0952380895614624,
0.307692289352417,
0.0476190447807312,
0.14814814925193787,
0.25,
0.12121211737394333
] | Byl-264tvr | true | [
"MOHART uses a self-attention mechanism to perform relational reasoning in multi-object tracking."
] |
[
"We investigate the combination of actor-critic reinforcement learning algorithms with uniform large-scale experience replay and propose solutions for two challenges:",
"(a) efficient actor-critic learning with experience replay",
"(b) stability of very off-policy learning.",
"We employ those insights to accelerate hyper-parameter sweeps in which all participating agents run concurrently and share their experience via a common replay module.\n\n",
"To this end we analyze the bias-variance tradeoffs in V-trace, a form of importance sampling for actor-critic methods.",
"Based on our analysis, we then argue for mixing experience sampled from replay with on-policy experience, and propose a new trust region scheme that scales effectively to data distributions where V-trace becomes unstable.\n\n",
"We provide extensive empirical validation of the proposed solution.",
"We further show the benefits of this setup by demonstrating state-of-the-art data efficiency on Atari among agents trained up until 200M environment frames.",
"Value-based and actor-critic policy gradient methods are the two leading techniques of constructing general and scalable reinforcement learning agents (Sutton et al., 2018) .",
"Both have been combined with non-linear function approximation (Tesauro, 1995; Williams, 1992) , and have achieved remarkable successes on multiple challenging domains; yet, these algorithms still require large amounts of data to determine good policies for any new environment.",
"To improve data efficiency, experience replay agents store experience in a memory buffer (replay) (Lin, 1992) , and reuse it multiple times to perform reinforcement learning updates (Riedmiller, 2005) .",
"Experience replay allows to generalize prioritized sweeping (Moore & Atkeson, 1993) to the non-tabular setting (Schaul et al., 2015) , and can also be used to simplify exploration by including expert (e.g., human) trajectories (Hester et al., 2017) .",
"Overall, experience replay can be very effective at reducing the number of interactions with the environment otherwise required by deep reinforcement learning algorithms (Schaul et al., 2015) .",
"Replay is often combined with the value-based Q-learning (Mnih et al., 2015) , as it is an off-policy algorithm by construction, and can perform well even if the sampling distribution from replay is not aligned with the latest agent's policy.",
"Combining experience replay with actor-critic algorithms can be harder due to their on-policy nature.",
"Hence, most established actor-critic algorithms with replay such as (Wang et al., 2017; Gruslys et al., 2018; Haarnoja et al., 2018) employ and maintain Q-functions to learn from the replayed off-policy experience.",
"In this paper, we demonstrate that off-policy actor-critic learning with experience replay can be achieved without surrogate Q-function approximators using V-trace by employing the following approaches:",
"a) off-policy replay experience needs to be mixed with a proportion of on-policy experience.",
"We show experimentally ( Figure 2 ) and theoretically that the V-trace policy gradient is otherwise not guaranteed to converge to a locally optimal solution.",
"b) a trust region scheme (Conn et al., 2000; Schulman et al., 2015; can mitigate bias and enable efficient learning in a strongly off-policy regime, where distinct agents share experience through a commonly shared replay module.",
"Sharing experience permits the agents to benefit from parallel exploration (Kretchmar, 2002) (Figures 1 and 3 ).",
"Our paper is structured as follows: In Section 2 we revisit pure importance sampling for actor-critic agents (Degris et al., 2012 ) and V-trace, which is notable for allowing to trade off bias and variance in its estimates.",
"We recall that variance reduction is necessary (Figure 4 left) but is biased in V-trace.",
"We derive proposition 2 stating that off-policy V-trace is not guaranteed to converge to a locally optimal solution -not even in an idealized scenario when provided with the optimal value function.",
"Through theoretical analysis (Section 3) and experimental validation (Figure 2 ) we determine that mixing on-policy experience into experience replay alleviates the problem.",
"Furthermore we propose a trust region scheme (Conn et al., 2000; Schulman et al., 2015; in Section 4 that enables efficient learning even in a strongly off-policy regime, where distinct agents share the experience replay module and learn from each others experience.",
"We define the trust region in policy space and prove that the resulting estimator is correct (i.e. estimates an improved return).",
"As a result, we present state-of-the-art data efficiency in Section 5 in terms of median human normalized performance across 57 Atari games (Bellemare et al., 2013) , as well as improved learning efficiency on DMLab30 (Beattie et al., 2016) (Table 1 ).",
"Figure 1: Sharing experience between agents leads to more efficient hyper-parameter sweeps on 57 Atari games.",
"Prior art results are presented as horizontal lines (with scores cited from Gruslys et al. (2018) , Hessel et al. (2017) and Mnih et al. (2013) ).",
"Note that the only previous agent \"R2D2\" that achieved a score beyond 400% required more than 3,000 million environment steps (see Kapturowski et al. (2019) , page 14, Figure 9 ).",
"We present the pointwise best agent from hyper-parameter sweeps with and without experience replay (shared and not shared).",
"Each sweep contains 9 agents with different learning rate and entropy cost combinations.",
"Replay experiment were repeated twice and ran for 50M steps.",
"To report scores at 200M we ran the baseline and one shared experience replay agent for 200M steps.",
"Table 1 : Comparison of state-of-the-art agents on 57 Atari games trained up until 200M environment steps (per game) and DMLab-30 trained until 10B steps (multi-task; all games combined).",
"The first two rows are quoted from Xu et al. (2018) and Hessel et al. (2019) , the third is our implementation of a pixel control agent from Hessel et al. (2019) and the last two rows are our proposed LASER (LArge Scale Experience Replay) agent.",
"All agents use hyper-parameter sweeps expect for the marked.",
"V-trace importance sampling is a popular off-policy correction for actor-critic agents (Espeholt et al., 2018) .",
"In this section we revisit how V-trace controls the (potentially infinite) variance that arises from naive importance sampling.",
"We note that this comes at the cost of a biased estimate (see Proposition",
"1) and creates a failure mode (see Proposition",
"2) which makes the policy gradient biased.",
"We discuss our solutions for said issues in Section 4.",
"Figure 2: Left: Learning entirely off-policy from experience replay fails, while combining on-policy data with experience replay leads to improved data efficiency: We present sweeps on DMLab-30 with experience replays of 10M capacity.",
"A ratio of 87.5% implies that there are 7 replayed transitions in the batch for each online transition.",
"Furthermore we consider an agent identical to \"LASER 87.5% replay\" which however draws all samples from replay.",
"Its batch thus does not contain any online data and we observe a significant performance decrease (see Proposition 2 and 3).",
"The shading represents the point-wise best and worst replica among 3 repetitions.",
"The solid line is the mean.",
"Right: The effect of capacity in experience replay with 87.5% replay data per batch on sweeps on DMLab-30.",
"Data-efficiency improves with larger capacity.",
"Figure 3: Left: Naively sharing experience between distinct agents in a hyper-parameter sweep fails (green) and is worse than the no-replay baseline (blue).",
"The proposed trust region estimator mitigates the issue (red).",
"Right: Combining population based training with trust region estimation improves performance further.",
"All replay experiments use a capacity of 10 million observations and 87.5% replay data per batch.",
"We have presented LASER -an off-policy actor-critic agent which employs a large and shared experience replay to achieve data-efficiency.",
"By sharing experience between concurrently running experiments in a hyper-parameter sweep it is able to take advantage of parallel exploration.",
"As a result it achieves state-of-the-art data efficiency on 57 Atari games given 200M environment steps.",
"Furthermore it achieves competitive results on both DMLab-30 and Atari under regular, not shared experience replay conditions.",
"To facilitate this algorithm we have proposed two approaches:",
"a) mixing replayed experience and on-policy data and",
"b) a trust region scheme.",
"We have shown theoretically and demonstrated through a series of experiments that they enable learning in strongly off-policy settings, which present a challenge for conventional importance sampling schemes."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.6511628031730652,
0.46666666865348816,
0.41379308700561523,
0.2083333283662796,
0.19512194395065308,
0.21052631735801697,
0.1249999925494194,
0.08695651590824127,
0.260869562625885,
0.13114753365516663,
0.23529411852359772,
0.06896550953388214,
0.2800000011920929,
0.13793103396892548,
0.21621620655059814,
0.23076923191547394,
0.2448979616165161,
0.277777761220932,
0.08510638028383255,
0.25,
0.09999999403953552,
0.1355932205915451,
0.10810810327529907,
0.1538461446762085,
0.13333332538604736,
0.26229506731033325,
0.13636362552642822,
0.09999999403953552,
0.10256409645080566,
0.04347825422883034,
0,
0.25,
0.1666666567325592,
0.12121211737394333,
0.19999998807907104,
0.0833333283662796,
0.1090909019112587,
0.0624999962747097,
0.1538461446762085,
0,
0.10810810327529907,
0.06451612710952759,
0,
0.24242423474788666,
0.23529411852359772,
0.1428571343421936,
0.04878048226237297,
0.04651162400841713,
0.05714285373687744,
0,
0.25,
0.0714285671710968,
0.1304347813129425,
0,
0.05714285373687744,
0.1538461446762085,
0.2857142686843872,
0.1395348757505417,
0,
0.14999999105930328,
0.0624999962747097,
0.13333332538604736,
0,
0.2800000011920929
] | HygaikBKvS | true | [
"We investigate and propose solutions for two challenges in reinforcement learning: (a) efficient actor-critic learning with experience replay (b) stability of very off-policy learning."
] |
[
"Stochastic gradient descent (SGD) is the workhorse of modern machine learning.",
"Sometimes, there are many different potential gradient estimators that can be used.",
"When so, choosing the one with the best tradeoff between cost and variance is important.",
"This paper analyzes the convergence rates of SGD as a function of time, rather than iterations.",
"This results in a simple rule to select the estimator that leads to the best optimization convergence guarantee.",
"This choice is the same for different variants of SGD, and with different assumptions about the objective (e.g. convexity or smoothness).",
"Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given.",
"Then, we extend to infinite pools of estimators, where each one is indexed by control variate weights.",
"This is enabled by a reduction to a mixed-integer quadratic program.",
"Empirically, automatically choosing an estimator performs comparably to the best estimator chosen with hindsight.",
"In stochastic gradient variational inference (SGVI) there are multiple gradient estimators with varying costs and variances.",
"Estimators may be obtained using the reparameterization trick (Kingma and Welling (2013) ; Rezende et al. (2014) ; Titsias and Lázaro-Gredilla (2014)), the score function method (Williams (1992) ), or other techniques (Titsias and Lázaro-Gredilla (2015) ; Ruiz et al. (2016) ; Agakov and Barber (2004) ).",
"Also, many control variates can be added to an estimator to reduce variance (Miller et al. (2017) (2018)).",
"The cost and variance of an estimator significantly affects optimization convergence speed (Bottou et al. (2018) ).",
"The use of different estimators leads to different optimization performances, and the estimator with optimal cost-variance tradeoff is often situationdependent (for an example see Fig. 1 ).",
"In settings where multiple estimators with varying costs and variances are available, selecting the optimal one is important.",
"Rather than rely on the user to manually select one, we propose that estimator selection could be done adaptively.",
"This paper investigates how, given a pool of gradient estimators, automatically choose one to get the best convergence guarantee for stochastic optimization.",
"We study cost-variance tradeoffs by analyzing the convergence rates of several variants of SGD.",
"We express convergence rates in terms of time rather than iterations.",
"This leads to what we call the \"G 2 T principle\": A simple rule that predicts, given a pool of gradient estimators, which one results in the best convergence guarantees for optimization.",
"We use the principle to propose two gradient estimator selection algorithms: One for the case in which a finite pool of estimators is available, and other when the pool contains an infinite number of estimators, each indexed by control variate weights (i.e. control variate selection).",
"Notation: We use g(w, ξ), where ξ is a random variable, to denote an unbiased estimator of target's gradient, G 2 (g) to denote a bound on g's expected squared norm, and T (g) to denote the computational cost of computing estimator g(w, ξ), measured in seconds."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.1599999964237213,
0.07692307233810425,
0.1428571343421936,
0.13793103396892548,
0.2666666507720947,
0.11764705181121826,
0.1764705777168274,
0,
0.0833333283662796,
0.2222222238779068,
0.13793103396892548,
0.039215683937072754,
0.06451612710952759,
0.12903225421905518,
0.19999998807907104,
0.1249999925494194,
0.3030303120613098,
0.2222222238779068,
0.14814814925193787,
0.07999999821186066,
0.17777776718139648,
0.25925925374031067,
0.19607843458652496
] | SygQKy3EFB | true | [
"We propose a gradient estimator selection algorithm with the aim on improving optimization efficiency."
] |
[
"We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design.",
"The discrepancy between the minimax and maximin objective values could serve as a proxy for the difficulties that the alternating gradient descent encounters in the optimization of GANs.",
"In this work, we give new results on the benefits of multi-generator architecture of GANs.",
"We show that the minimax gap shrinks to \\epsilon as the number of generators increases with rate O(1/\\epsilon).",
"This improves over the best-known result of O(1/\\epsilon^2).",
"At the core of our techniques is a novel application of Shapley-Folkman lemma to the generic minimax problem, where in the literature the technique was only known to work when the objective function is restricted to the Lagrangian function of a constraint optimization problem.",
"Our proposed Stackelberg GAN performs well experimentally in both synthetic and real-world datasets, improving Frechet Inception Distance by 14.61% over the previous multi-generator GANs on the benchmark datasets.",
"Generative Adversarial Nets (GANs) are emerging objects of study in machine learning, computer vision, natural language processing, and many other domains.",
"In machine learning, study of such a framework has led to significant advances in adversarial defenses BID25 BID22 and machine security BID3 BID22 .",
"In computer vision and natural language processing, GANs have resulted in improved performance over standard generative models for images and texts BID11 , such as variational autoencoder BID14 and deep Boltzmann machine BID20 .",
"A main technique to achieve this goal is to play a minimax two-player game between generator and discriminator under the design that the generator tries to confuse the discriminator with its generated contents and the discriminator tries to distinguish real images/texts from what the generator creates.",
"Despite a large amount of variants of GANs, many fundamental questions remain unresolved.",
"One of the long-standing challenges is designing universal, easy-to-implement architectures that alleviate the instability issue of GANs training.",
"Ideally, GANs are supposed to solve the minimax optimization problem BID11 , but in practice alternating gradient descent methods do not clearly privilege minimax over maximin or vice versa (page 35, Goodfellow (2016) ), which may lead to instability in training if there exists a large discrepancy between the minimax and maximin objective values.",
"The focus of this work is on improving the stability of such minimax game in the training process of GANs.",
"To alleviate the issues caused by the large minimax gap, our study is motivated by the so-called Stackelberg competition in the domain of game theory.",
"In the Stackelberg leadership model, the players of this game are one leader and multiple followers, where the leader firm moves first and then the follower firms move sequentially.",
"It is known that the Stackelberg model can be solved to find a subgame perfect Nash equilibrium.",
"We apply this idea of Stackelberg leadership model to the architecture design of GANs.",
"That is, we design an improved GAN architecture with multiple generators (followers) which team up to play against the discriminator (leader).",
"We therefore name our model Stackelberg GAN.",
"Our theoretical and experimental results establish that: GANs with multi-generator architecture have smaller minimax gap, and enjoy more stable training performances.",
"Our Contributions.",
"This paper tackles the problem of instability during the GAN training procedure with both theoretical and experimental results.",
"We study this problem by new architecture design.",
"Figure 1: Stackelberg GAN stabilizes the training procedure on a toy 2D mixture of 8 Gaussians.",
"Top Row: Standard GAN training.",
"It shows that several modes are dropped.",
"Bottom Row: Stackelberg GAN training with 8 generator ensembles, each of which is denoted by one color.",
"We can see that each generator exactly learns one mode of the distribution without any mode being dropped.Under review as a conference paper at ICLR 2019",
"(a) Step 0 Standard GAN training.",
"It shows that several modes are dropped.",
"Bottom Row: Stackelberg GAN training with 8 generator ensembles, each of which is denoted by one color.",
"We can see that each generator exactly learns one mode of the distribution without any mode being dropped.•",
"We propose Stackelberg GAN framework of having multiple generators in the GAN architecture. Our",
"framework is general that can be applied to all variants of GANs, e.g., vanilla GAN, Wasserstein GAN, etc. It",
"is built upon the idea of jointly optimizing an ensemble of GAN losses w.r.t. all pairs of discriminator and generator. Differences",
"with prior work. Although the",
"idea of having multiple generators in the GAN architecture is not totally new, e.g., MIX+GAN BID1 and MGAN BID13 , there are key differences between Stackelberg GAN and prior work. a) In MGAN BID13",
", various generators are combined as a mixture of probabilistic models with assumption that the generators and discriminator have enough capacity. In contrast, in",
"the Stackelberg GAN model we uniformly ensemble the losses of various standard GAN without any assumption on the model capacity. b) In MIX+GAN BID1",
", the losses are ensembled with learned weights and an extra regularization term, which discourages the weights being too far away from uniform. We find it slightly",
"unnecessary because the expressive power of each generator already allows implicit scaling of each generator. To the contrary, in",
"the Stackelberg GAN we apply equal weights for all generators.• We prove that the",
"minimax duality gap shrinks as the number of generators increases (see Theorem 1 and Corollary 2). Unlike the previous",
"work, our result has no assumption on the expressive power of generators and discriminator, but instead depends on their non-convexity. With extra condition",
"on the expressive power of generators, we show that Stackelberg GAN is able to achieve ✏-approximate equilibrium with e O(1/✏) generators (see Theorem 3). This Stackelberg GAN",
"training with 10 generator ensembles on real images without cherry pick, where each row corresponds to one generator. We can see that each",
"generator exactly learns one mode of the distribution without any mode being dropped.[Pengtao: It is kind",
"of abrupt that you say \"Stackelberg GAN stabilizes the training procedure\" in the beginning sentence, then the rest talks about losing mode. In the introduction, a convincing tie between instability and mode collapse is still missing.]• We propose Stackelberg GAN framework of having multiple generators in the GAN architecture. Our framework is general",
"that can be applied to all variants of GANs, e.g., vanilla GAN, Wasserstein GAN, etc. It is built upon the idea",
"of jointly optimizing an ensemble of GAN losses w.r.t. all pairs of discriminator and generator. Differences with prior work",
". Although the idea of having",
"multiple generators in the GAN architecture is not totally new, e.g., MIX+GAN BID1 and MGAN BID13 , there are key differences between Stackelberg GAN and prior work. a) In MGAN BID13 , various",
"generators are combined as a mixture of probabilistic models with assumption that the generators and discriminator have enough capacity. In contrast, in the Stackelberg",
"GAN model we uniformly ensemble the losses of various standard GAN without any assumption on the model capacity. b) In MIX+GAN BID1 , the losses",
"are ensembled with learned weights and an extra regularization term, which discourages the weights being too far away from uniform. We find it slightly unnecessary",
"because the expressive power of each generator already allows implicit scaling of each generator. To the contrary, in the Stackelberg",
"GAN we apply equal weights for all generators.• We prove that the minimax duality",
"gap shrinks as the number of generators increases (see Theorem 1 and Corollary 2). Unlike the previous work, our result",
"has no assumption on the • We propose the Stackelberg GAN framework of multiple generators in the GAN architecture. Our framework is general since it can",
"be applied to all variants of GANs, e.g., vanilla GAN, Wasserstein GAN, etc. It is built upon the idea of jointly",
"optimizing an ensemble of GAN losses w.r.t. all pairs of discriminator and generator. Differences from prior work. Although",
"the idea of having multiple",
"generators in the GAN architecture is not totally new, e.g., MIX+GAN BID1 , MGAN BID13 , MAD-GAN BID9 and GMAN BID8 , there are key differences between Stackelberg GAN and prior work. a) In MGAN BID13 and MAD-GAN BID9 ,",
"various generators are combined as a mixture of probabilistic models with assumption that the generators and discriminator have infinite capacity. Also, they require that the generators",
"share common network parameters. In contrast, in the Stackelberg GAN model",
"we allow various sampling schemes beyond the mixture model, e.g., each generator samples a fixed but unequal number of data points independently. Furthermore, each generator has free parameters",
". We also make no assumption on the model capacity",
"in our analysis. This is an important research question as raised",
"by BID2 . b) In MIX+GAN BID1 , the losses are ensembled with",
"learned weights and an extra regularization term, which discourages the weights being too far away from uniform. We find it slightly unnecessary because the expressive",
"power of each generator already allows implicit scaling of each generator. In the Stackelberg GAN, we apply equal weights for all",
"generators and obtain improved guarantees. c) In GMAN BID8 , there are multiple discriminators while",
"it is unclear in theory why multi-discriminator architecture works well. In this paper, we provide formal guarantees for our model",
". • We prove that the minimax duality gap shrinks as the number",
"of generators increases (see Theorem 1 and Corollary 2). Unlike the previous work, our result has no assumption on the",
"expressive power of generators and discriminator, but instead depends on their non-convexity. With extra condition on the expressive power of generators, we",
"show that Stackelberg GAN is able to achieve -approximate equilibrium with O(1/ ) generators (see Theorem 3). This improves over the best-known result in BID1 which requires",
"generators as many as O(1/ 2 ). At the core of our techniques is a novel application of the ShapleyFolkman",
"lemma to the generic minimax problem, where in the literature the technique was only known to work when the objective function is restricted to the Lagrangian function of a constrained optimization problem . This results in tighter bounds than that of the covering number argument as",
"in BID1 . We also note that MIX+GAN is a heuristic model which does not exactly match",
"the theoretical analysis in BID1 , while this paper provides formal guarantees for the exact model of Stackelberg GAN.• We empirically study the performance of Stackelberg GAN for various synthetic",
"and real datasets. We observe that without any human assignment, surprisingly, each generator automatically",
"learns balanced number of modes without any mode being dropped (see FIG2 ). Compared with other multi-generator GANs with the same network capacity, our experiments",
"show that Stackelberg GAN enjoys 26.76 Fréchet Inception Distance on CIFAR-10 dataset while prior results achieve 31.34 (smaller is better), achieving an improvement of 14.61%.",
"In this work, we tackle the problem of instability during GAN training procedure, which is caused by the huge gap between minimax and maximin objective values.",
"The core of our techniques is a multi-generator architecture.",
"We show that the minimax gap shrinks to as the number of generators increases with rate O(1/ ), when the maximization problem w.r.t. the discriminator is concave.",
"This improves over the best-known results of O(1/ 2 ).",
"Experiments verify the effectiveness of our proposed methods.",
"TAB5 is by the weak duality.",
"Thus it suffices to prove the other side of the inequality.",
"All notations in this section are defined in Section 3.1.",
"We first show that DISPLAYFORM0 Denote by DISPLAYFORM1 We have the following lemma.Lemma",
"4. We have DISPLAYFORM2 Proof.",
"By the definition of p(0), we have p(0) = inf γ1,...,γ I ∈R g sup θ∈R t Φ(γ 1 , ..., γ I ; θ).",
"Since (clp)(·) is the convex closure of function p(·) (a.k.a. weak duality theorem), we have (clp)(0) ≤ p(0).",
"We now show that sup DISPLAYFORM3 Note that p(u) = inf γ1,...,γ I ∈R g p γ1,...,γ I (u), where p γ1,...,γ I (u) = sup θ∈R t { Φ(γ 1 , ..., γ I ; θ) − u T θ} = (− Φ(γ 1 , ..., γ I ; ·)) * (−u), and that .",
"We have the following lemma.",
"DISPLAYFORM4 Lemma",
"5. Under the assumption in Theorem 1, DISPLAYFORM5 Proof.",
"We note that DISPLAYFORM6 where u 1 , ..., u I , u ∈ R t .",
"Therefore, DISPLAYFORM7 Consider the subset of R t+1 : DISPLAYFORM8 Define the vector summation DISPLAYFORM9 is continuous and domh i is compact, the set DISPLAYFORM10 DISPLAYFORM11 We apply Lemma 6 to prove Lemma 5 with m = t + 1.",
"Let (r, w) ∈ conv(Y) be such that r = 0, and w =clp(0).",
"DISPLAYFORM12 i ∈I DISPLAYFORM13 Representing elements of the convex hull of DISPLAYFORM14 by Carathéodory theorem, we have that for each i ∈ I, there are vectors {u DISPLAYFORM15 Recall that we definȇ DISPLAYFORM16 and DISPLAYFORM17 We have for i ∈ I, DISPLAYFORM18 Thus, by Eqns.",
"FORMULA27 and FORMULA30 , we have DISPLAYFORM19 Therefore, we have DISPLAYFORM20 (by Eqns. FORMULA28 and FORMULA33 ) DISPLAYFORM21 , (by Lemma 6) as desired.By Lemmas 4 and 5, we have proved that DISPLAYFORM22 To prove Theorem 1, we note that DISPLAYFORM23 When φ(γ i ; θ) is concave and closed w.r.t. discriminator parameter θ, we have clφ = φ.",
"Thus, ∆ minimax θ = ∆ maximin θ = 0 and 0 ≤ w * − q * ≤ ."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.8571428656578064,
0.13636362552642822,
0.24242423474788666,
0.2222222238779068,
0.14814814925193787,
0.15686273574829102,
0.12765957415103912,
0.14999999105930328,
0.14999999105930328,
0.03999999538064003,
0.07692307233810425,
0.06451612710952759,
0.2857142686843872,
0.1492537260055542,
0.2222222238779068,
0.19999998807907104,
0.09302324801683426,
0.0555555522441864,
0.25,
0.19999998807907104,
0.1538461446762085,
0.20512819290161133,
0.5,
0.37037035822868347,
0.2857142686843872,
0.1666666567325592,
0,
0.2222222238779068,
0.13333332538604736,
0.1599999964237213,
0,
0.2222222238779068,
0.1621621549129486,
0.375,
0.051282044500112534,
0.14999999105930328,
0.1666666567325592,
0.19999998807907104,
0.1904761791229248,
0.1538461446762085,
0.13636362552642822,
0.1764705777168274,
0.1818181723356247,
0.10810810327529907,
0.09756097197532654,
0.1818181723356247,
0.14999999105930328,
0.11428570747375488,
0.25806450843811035,
0.09756097197532654,
0.15789473056793213,
0.1666666567325592,
0.1666666567325592,
0.19512194395065308,
0.1538461446762085,
0.1395348757505417,
0.1764705777168274,
0.1764705777168274,
0.10526315122842789,
0.2926829159259796,
0.09999999403953552,
0.10526315122842789,
0.1666666567325592,
0.1599999964237213,
0.1463414579629898,
0.19999998807907104,
0.08510638028383255,
0.1428571343421936,
0.06666666269302368,
0.12903225421905518,
0.0952380895614624,
0.10810810327529907,
0.05882352590560913,
0.1538461446762085,
0.13333332538604736,
0.10526315122842789,
0.10810810327529907,
0.17391303181648254,
0.1111111044883728,
0.14814814925193787,
0.11428570747375488,
0.3636363446712494,
0.060606054961681366,
0.1395348757505417,
0.08510638028383255,
0.27272728085517883,
0.1428571343421936,
0.2222222238779068,
0.13793103396892548,
0.14814814925193787,
0.07999999821186066,
0.13793103396892548,
0.06896550953388214,
0.1249999925494194,
0.0833333283662796,
0.09302324801683426,
0.10256409645080566,
0.0357142798602581,
0.1666666567325592,
0.1428571343421936,
0.06451612710952759,
0.145454540848732,
0,
0.1111111044883728,
0,
0
] | SJxCsj0qYX | true | [
"We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design, with theoretical guarantees."
] |
[
"\nNesterov SGD is widely used for training modern neural networks and other machine learning models.",
"Yet, its advantages over SGD have not been theoretically clarified.",
"Indeed, as we show in this paper, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleration over ordinary SGD.",
"Furthermore, Nesterov SGD may diverge for step sizes that ensure convergence of ordinary SGD.",
"This is in contrast to the classical results in the deterministic setting, where the same step size ensures accelerated convergence of the Nesterov's method over optimal gradient descent.\n\n",
"To address the non-acceleration issue, we introduce a compensation term to Nesterov SGD.",
"The resulting algorithm, which we call MaSS, converges for same step sizes as SGD.",
"We prove that MaSS obtains an accelerated convergence rates over SGD for any mini-batch size in the linear setting. ",
"For full batch, the convergence rate of MaSS matches the well-known accelerated rate of the Nesterov's method. \n\n",
"We also analyze the practically important question of the dependence of the convergence rate and optimal hyper-parameters on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns and saturation.\n\n",
"Experimental evaluation of MaSS for several standard architectures of deep networks, including ResNet and convolutional networks, shows improved performance over SGD, Nesterov SGD and Adam.",
"Many modern neural networks and other machine learning models are over-parametrized (5) .",
"These models are typically trained to have near zero training loss, known as interpolation and often have strong generalization performance, as indicated by a range of empirical evidence including (23; 3).",
"Due to a key property of interpolation -automatic variance reduction (discussed in Section 2.1), stochastic gradient descent (SGD) with constant step size is shown to converge to the optimum of a convex loss function for a wide range of step sizes (12) .",
"Moreover, the optimal choice of step size η * for SGD in that setting can be derived analytically.",
"The goal of this paper is to take a step toward understanding momentum-based SGD in the interpolating setting.",
"Among them, stochastic version of Nesterov's acceleration method (SGD+Nesterov) is arguably the most widely used to train modern machine learning models in practice.",
"The popularity of SGD+Nesterov is tied to the well-known acceleration of the deterministic Nesterov's method over gradient descent (15) .",
"Yet, has not not theoretically clear whether Nesterov SGD accelerates over SGD.",
"As we show in this work, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleration over ordinary SGD.",
"Furthermore, Nesterov SGD may diverge, even in the linear setting, for step sizes that guarantee convergence of ordinary SGD.",
"Intuitively, the lack of acceleration stems from the fact that, to ensure convergence, the step size of SGD+Nesterov has to be much smaller than the optimal step size for SGD.",
"This is in contrast to the deterministic Nesterov method, which accelerates using the same step size as optimal gradient descent.",
"As we prove rigorously in this paper, the slow-down of convergence caused by the small step size negates the benefit brought by the momentum term.",
"We note that a similar lack of acceleration for the stochastic Heavy Ball method was analyzed in (9) .",
"To address the non-acceleration of SGD+Nesterov, we introduce an additional compensation term to allow convergence for the same range of step sizes as SGD.",
"The resulting algorithm, MaSS (Momentum-added Stochastic Solver) 1 updates the weights w and u using the following rules (with the compensation term underlined): Figure 1 : Non-acceleration of Nesterov SGD and fast convergence of MaSS.",
"w t+1 ← u t − η 1∇ f (u t ), u t+1 ← (1 + γ)w t+1 − γw t + η 2∇ f (u t ).",
"(",
"Here,∇ represents the stochastic gradient.",
"The step size η 1 , the momentum parameter γ ∈ (0, 1) and the compensation parameter η 2 are independent of t.",
"We proceed to analyze theoretical convergence properties of MaSS in the interpolated regime.",
"Specifically, we show that in the linear setting MaSS converges exponentially for the same range of step sizes as plain SGD, and the optimal choice of step size for MaSS is exactly η * which is optimal for SGD.",
"Our key theoretical result shows that MaSS has accelerated convergence rate over SGD.",
"Furthermore, in the full batch (deterministic) scenario, our analysis selects η 2 = 0, thus reducing MaSS to the classical Nesterov's method (15) .",
"In this case our convergence rate also matches the well-known convergence rate for the Nesterov's method (15; 4) .",
"This acceleration is illustrated in Figure 1 .",
"Note that SGD+Nesterov (as well as Stochastic Heavy Ball) does not converge faster than SGD, in line with our theoretical analysis.",
"We also prove exponential convergence of MaSS in more general convex setting under additional conditions.",
"We further analyze the dependence of the convergence rate e −s(m)t and optimal hyper-parameters on the mini-batch size m.",
"We identify three distinct regimes of dependence defined by two critical values m * 1 and m * 2 : linear scaling, diminishing returns and saturation, as illustrated in Figure 2 .",
"The convergence speed per iteration s(m), as well as the optimal hyper-parameters, increase linearly as m in the linear scaling regime, sub-linearly in the diminishing returns regime, and can only increase by a small constant factor in the saturation regime.",
"The critical values m * 1 and m * 2 are derived analytically.",
"We note that the intermediate \"diminishing terurns\" regime is new and is not found in SGD (12) .",
"To the best of our knowledge, this is the first analysis of mini-batch dependence for accelerated stochastic gradient methods.",
"We also experimentally evaluate MaSS on deep neural networks, which are non-convex.",
"We show that MaSS outperforms SGD, SGD+Nesterov and Adam (10) both in optimization and generalization, on different architectures of deep neural networks including convolutional networks and ResNet (7) .",
"The paper is organized as follows: In section 2, we introduce notations and preliminary results.",
"In section 3, we discuss the non-acceleration of SGD+Nesterov.",
"In section 4 we introduce MaSS and analyze its convergence and optimal hyper-parameter selection.",
"In section 5, we analyze the mini-batch MaSS.",
"In Section 6, we show experimental results."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1666666567325592,
0.06451612710952759,
0.2666666507720947,
0.1764705777168274,
0.17391303181648254,
0.23529411852359772,
0.11428570747375488,
0.24390242993831635,
0.11428570747375488,
0.1249999925494194,
0.1860465109348297,
0.060606054961681366,
0.07999999821186066,
0.14035087823867798,
0.25641024112701416,
0.25641024112701416,
0.13636362552642822,
0.10526315122842789,
0.19354838132858276,
0.27272728085517883,
0.25641024112701416,
0.13636362552642822,
0.29999998211860657,
0.1428571343421936,
0.1538461446762085,
0.1860465109348297,
0.19999998807907104,
0,
0.07692307233810425,
0.1463414579629898,
0.1764705777168274,
0.2745097875595093,
0.05882352590560913,
0.09302324801683426,
0.0555555522441864,
0.1428571343421936,
0.0952380895614624,
0.1666666567325592,
0.15789473056793213,
0.1249999925494194,
0.1538461446762085,
0.0624999962747097,
0.2702702581882477,
0.10526315122842789,
0.060606054961681366,
0.1304347813129425,
0.0555555522441864,
0.20000000298023224,
0.05882352590560913,
0.06896550953388214,
0
] | r1gixp4FPH | true | [
"This work proves the non-acceleration of Nesterov SGD with any hyper-parameters, and proposes new algorithm which provably accelerates SGD in the over-parameterized setting."
] |
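A minimal sketch of the MaSS update rule quoted in the record above (paper r1gixp4FPH). It assumes a caller-supplied stochastic-gradient oracle `stoch_grad` and treats η_1, η_2 and γ as fixed constants; the function name, the toy objective and the concrete hyper-parameter values are illustrative assumptions, not part of the record.

```python
import numpy as np

def mass_step(w, u, stoch_grad, eta1, eta2, gamma):
    """One MaSS iteration, following the quoted update rule:
        w_{t+1} = u_t - eta1 * g(u_t)
        u_{t+1} = (1 + gamma) * w_{t+1} - gamma * w_t + eta2 * g(u_t)
    Setting eta2 = 0 removes the compensation term and recovers the
    SGD+Nesterov form discussed in the record."""
    g = stoch_grad(u)                                        # stochastic gradient at u_t
    w_next = u - eta1 * g                                    # SGD-like descent step
    u_next = (1.0 + gamma) * w_next - gamma * w + eta2 * g   # momentum plus compensation
    return w_next, u_next

# Illustrative use on a toy quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
w = u = np.ones(3)
for _ in range(100):
    w, u = mass_step(w, u, stoch_grad=lambda x: x, eta1=0.1, eta2=0.01, gamma=0.9)
```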
[
"We propose the Fixed Grouping Layer (FGL); a novel feedforward layer designed to incorporate the inductive bias of structured smoothness into a deep learning model.",
"FGL achieves this goal by connecting nodes across layers based on spatial similarity.",
"The use of structured smoothness, as implemented by FGL, is motivated by applications to structured spatial data, which is, in turn, motivated by domain knowledge.",
"The proposed model architecture outperforms conventional neural network architectures across a variety of simulated and real datasets with structured smoothness.",
"The effectiveness of predictive models often depends on the choice of inductive bias, and the extent to which this inductive bias captures real-world structure.",
"For instance, one example of such bias encoding leading to improved performance is convolution.",
"In principle, convolutional weights could be learned directly from data.",
"However, in practice, imposing this structure leads to improved performance when compared to fully connected models, and as a result, Convolutional neural networks (CNNs) have enjoyed wide use for computer vision tasks (Krizhevsky et al., 2012) .",
"Similarly, recurrent neural networks such as LSTMs are effective for text (Sundermeyer et al., 2012) , and certain graphical models are ideal for sentence segmentation and labeling (Lafferty et al., 2001) .",
"Our work follows this philosophy.",
"Specifically, we propose a feedforward layer for deep neural networks that is suitable for neuroimaging and potentially useful for other data where variables can be grouped due to underlying structure.",
"Data with multiple input variables often exhibit some structure.",
"For example, the El Nino dataset (Bay et al., 2000) consists of measurements by weather buoys in the ocean, and one expects that nearby buoys can be grouped together.",
"Similarly, socio-economic data can often be grouped together by geographic proximity.",
"Financial market data of individual stocks can be grouped together based on the industrial sector to which a company belongs.",
"Along similar lines, brain parcellations are a well studied paradigm for capturing the structure of brain activity , often via statistical parcellation based on ward clustering (Ward Jr, 1963) .",
"The result of ward clustering is a tree where leaf nodes represent voxels of the brain and interior nodes represent grouping of voxels into spatial clusters.",
"Figure 1 visualizes the output of ward clustering at various granularities when applied to the human connectome project resting state brain data (Van Essen et al., 2012) .",
"Contribitions: Our primary technical contribution is the Fixed Grouping Layer (FGL).",
"FGL is designed to extract features within each group, and additionally guarantees that each output vector is only affected by the input vectors related to it by the specified grouping.",
"We demonstrate the benefit of using FGL on simulated experiments and real neuroimaging data.",
"We compare FGL against fully connected networks, convolutional neural networks, CoordConv (Liu et al., 2018) , and a closely related method proposed by Aydore et al. (2018) .",
"We extensively evaluate the performance of FGL on simulated and real brain imaging data showing improved performance.",
"In this work we propose a new layer architecture, the Fixed Grouping Layer (FGL), parameterized by a grouping of input variables.",
"FGL explicitly extracts features within each input group.",
"This is in contrast to convolution which extracts local features across the input, and fully connected networks which extract both global and local features.",
"We demonstrate the benefit of using FGL on 5 real fMRI datasets of different sizes.",
"Future work will involve the application of FGL to other tasks and application domains."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.6285714507102966,
0,
0.12121211737394333,
0.25,
0.060606054961681366,
0.07692307233810425,
0,
0.0833333283662796,
0,
0,
0.25,
0,
0,
0,
0.1249999925494194,
0.04999999701976776,
0.12121211737394333,
0.05128204822540283,
0,
0.05405404791235924,
0,
0.0555555522441864,
0,
0.1249999925494194,
0,
0.0624999962747097,
0,
0.07999999821186066
] | Hklr204Fvr | true | [
"A feedforward layer to incorporate structured smoothness into a deep learning model"
] |
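As an illustration of the grouping constraint described in the record above (paper Hklr204Fvr), the sketch below shows a layer in which each output depends only on the inputs assigned to its group. It is a deliberately simplified stand-in (one scalar output per group, no bias or nonlinearity) rather than the authors' FGL implementation; `groups` and `weights` are hypothetical inputs.

```python
import numpy as np

def grouped_forward(x, groups, weights):
    """x: input vector of shape (n_inputs,).
    groups: list of integer index arrays, one per output group.
    weights: list of weight vectors, weights[g] matching len(groups[g]).
    Output g is computed only from x[groups[g]], so the spatial grouping
    (e.g. a brain parcellation) is enforced by construction."""
    return np.array([weights[g] @ x[groups[g]] for g in range(len(groups))])

# Toy example: 6 inputs split into 3 spatial groups of 2 adjacent variables.
x = np.arange(6.0)
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
weights = [np.ones(2), np.ones(2), np.ones(2)]
print(grouped_forward(x, groups, weights))  # [1. 5. 9.]
```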
[
"Convolutional networks are not aware of an object's geometric variations, which leads to inefficient utilization of model and data capacity.",
"To overcome this issue, recent works on deformation modeling seek to spatially reconfigure the data towards a common arrangement such that semantic recognition suffers less from deformation.",
"This is typically done by augmenting static operators with learned free-form sampling grids in the image space, dynamically tuned to the data and task for adapting the receptive field.",
"Yet adapting the receptive field does not quite reach the actual goal -- what really matters to the network is the *effective* receptive field (ERF), which reflects how much each pixel contributes.",
"It is thus natural to design other approaches to adapt the ERF directly during runtime.",
"In this work, we instantiate one possible solution as Deformable Kernels (DKs), a family of novel and generic convolutional operators for handling object deformations by directly adapting the ERF while leaving the receptive field untouched.",
"At the heart of our method is the ability to resample the original kernel space towards recovering the deformation of objects.",
"This approach is justified with theoretical insights that the ERF is strictly determined by data sampling locations and kernel values.",
"We implement DKs as generic drop-in replacements of rigid kernels and conduct a series of empirical studies whose results conform with our theories.",
"Over several tasks and standard base models, our approach compares favorably against prior works that adapt during runtime.",
"In addition, further experiments suggest a working mechanism orthogonal and complementary to previous works.",
"The rich diversity of object appearance in images arises from variations in object semantics and deformation.",
"Semantics describe the high-level abstraction of what we perceive, and deformation defines the geometric transformation tied to specific data (Gibson, 1950) .",
"Humans are remarkably adept at making abstractions of the world (Hudson & Manning, 2019) ; we see in raw visual signals, abstract semantics away from deformation, and form concepts.",
"Interestingly, modern convolutional networks follow an analogous process by making abstractions through local connectivity and weight sharing (Zhang, 2019) .",
"However, such a mechanism is an inefficient one, as the emergent representations encode semantics and deformation together, instead of as disjoint notions.",
"Though a convolution responds accordingly to each input, how it responds is primarily programmed by its rigid kernels, as in Figure 1(a, b) .",
"In effect, this consumes large model capacity and data modes .",
"We argue that the awareness of deformations emerges from adaptivity -the ability to adapt at runtime (Kanazawa et al., 2016; Jia et al., 2016; Li et al., 2019) .",
"Modeling of geometric transformations has been a constant pursuit for vision researchers over decades (Lowe et al., 1999; Lazebnik et al., 2006; Jaderberg et al., 2015; Dai et al., 2017) .",
"A basic idea is to spatially recompose data towards a common mode such that semantic recognition suffers less from deformation.",
"A recent work that is representative of this direction is Deformable Convolution (Dai et al., 2017; Zhu et al., 2019) .",
"As shown in Figure 1",
"(c), it augments the convolutions with free-form sampling grids in the data space.",
"It is previously justified as adapting receptive field, or what we phrase as the \"theoretical receptive field\", that defines which input pixels can contribute to the final output.",
"However, theoretical receptive field does not measure how much impact an input pixel actually has.",
"On the other hand, (Dai et al., 2017) reconfigure data towards common arrangement to counter the effects of geometric deformation.",
"(d) Our Deformable Kernels (DKs) instead resample kernels and, in effect, adapt kernel spaces while leaving the data untouched.",
"Note that",
"(b) and",
"(c) share kernel values but sample different data locations, while",
"(b) and",
"(d) share data locations but sample different kernel values.",
"Luo et al. (2016) propose to measure the effective receptive field (ERF), i.e. the partial derivative of the output with respect to the input data, to quantify the exact contribution of each raw pixel to the convolution.",
"Since adapting the theoretical receptive field is not the goal but a means to adapt the ERF, why not directly tune the ERF to specific data and tasks at runtime?",
"Toward this end, we introduce Deformable Kernels (DKs), a family of novel and generic convolutional operators for deformation modeling.",
"We aim to augment rigid kernels with the expressiveness to directly interact with the ERF of the computation during inference.",
"Illustrated in Figure 1 (d), DKs learn free-form offsets on kernel coordinates to deform the original kernel space towards specific data modality, rather than recomposing data.",
"This can directly adapt ERF while leaving receptive field untouched.",
"The design of DKs that is agnostic to data coordinates naturally leads to two variants -the global DK and the local DK, which behave differently in practice as we later investigate.",
"We justify our approach with theoretical results which show that ERF is strictly determined by data sampling locations and kernel values.",
"Used as a generic drop-in replacement of rigid kernels, DKs achieve empirical results coherent with our developed theory.",
"Concretely, we evaluate our operator with standard base models on image classification and object detection.",
"DKs perform favorably against prior works that adapt during runtime.",
"With both quantitative and qualitative analysis, we further show that DKs can work orthogonally and complementarily with previous techniques.",
"In this paper, we introduced Deformable Kernels (DKs) to adapt effective receptive fields (ERFs) of convolutional networks for object deformation.",
"We proposed to sample kernel values from the original kernel space.",
"This in effect samples the ERF in linear networks and also roughly generalizes to non-linear cases.",
"We instantiated two variants of DKs and validate our designs, showing connections to previous works.",
"Consistent improvements over them and compatibility with them were found, as illustrated in visualizations.",
"image patch kernel patch Figure 5 : Illustration of feed-forwarding through a 3×3 local Deformable Kernel from a 4×4 scope.",
"For each input patch, local DK first generates a group of kernel offsets {∆k} from input feature patch using the light-weight generator G (a 3×3 convolution of rigid kernel).",
"Given the original kernel weights W and the offset group {∆k}, DK samples a new set of kernel W using a bilinear sampler B. Finally, DK convolves the input feature map and the sampled kernels to complete the whole computation."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0,
0,
0.060606058686971664,
0,
0,
0,
0,
0.0714285671710968,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0.1111111044883728,
0,
0,
0,
0.07999999821186066,
0,
0,
0,
0,
0,
0.09090908616781235,
0.06666666269302368,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0.054054051637649536
] | SkxSv6VFvS | true | [
"Don't deform your convolutions -- deform your kernels."
] |
[
"Inferring the structural properties of a protein from its amino acid sequence is a challenging yet important problem in biology.",
"Structures are not known for the vast majority of protein sequences, but structure is critical for understanding function.",
"Existing approaches for detecting structural similarity between proteins from sequence are unable to recognize and exploit structural patterns when sequences have diverged too far, limiting our ability to transfer knowledge between structurally related proteins.",
"We newly approach this problem through the lens of representation learning.",
"We introduce a framework that maps any protein sequence to a sequence of vector embeddings --- one per amino acid position --- that encode structural information.",
"We train bidirectional long short-term memory (LSTM) models on protein sequences with a two-part feedback mechanism that incorporates information from",
"(i) global structural similarity between proteins and",
"(ii) pairwise residue contact maps for individual proteins.",
"To enable learning from structural similarity information, we define a novel similarity measure between arbitrary-length sequences of vector embeddings based on a soft symmetric alignment (SSA) between them.",
"Our method is able to learn useful position-specific embeddings despite lacking direct observations of position-level correspondence between sequences.",
"We show empirically that our multi-task framework outperforms other sequence-based methods and even a top-performing structure-based alignment method when predicting structural similarity, our goal.",
"Finally, we demonstrate that our learned embeddings can be transferred to other protein sequence problems, improving the state-of-the-art in transmembrane domain prediction.",
"Proteins are linear chains of amino acid residues that fold into specific 3D conformations as a result of the physical properties of the amino acid sequence.",
"These structures, in turn, determine the wide array of protein functions, from binding specificity to catalytic activity to localization within the cell.",
"Information about structure is vital for studying the mechanisms of these molecular machines in health and disease, and for development of new therapeutics.",
"However, experimental structure determination is costly and atomic structures have only been determined for a tiny fraction of known proteins.",
"Methods for finding proteins with related structure directly from sequence are of considerable interest, but the problem is challenging, because sequence similarity and structural similarity are only loosely related BID0 BID1 BID2 BID3 , e.g. similar structural folds can be formed by diverse sequences.",
"As a result, our ability to transfer knowledge between proteins with similar structures is limited.In this work, we address this problem by learning protein sequence embeddings using weak supervision from global structural similarity for the first time.",
"Specifically, we aim to learn a bidirectional LSTM (biLSTM) embedding model, mapping sequences of amino acids to sequences of vector representations, such that residues occurring in similar structural contexts will be close in embedding space.",
"This is difficult, because we have not observed position-level correspondences between sequences, only global sequence similarity.",
"We solve this by defining a whole sequence similarity measure from sequences of vector embeddings.",
"The measure decomposes into an alignment of the sequences and pairwise comparison of the aligned positions in embedding space.",
"For the alignment, we propose a soft symmetric alignment (SSA) mechanism -a symmetrization of the directional alignment commonly used in attention mechanisms.",
"Furthermore, in order to take advantage of information about local structural context within proteins, we extend this framework to include position-level supervision from contacts between residues in the individual protein structures.",
"This multitask framework FIG0 ) allows us to newly leverage both global structural similarity between proteins and residue-residue contacts within proteins for training embedding models.",
"The similarity prediction module takes pairs of proteins represented by their sequences of vector embeddings and predicts their shared SCOP level.",
"Sequences are first aligned based on L1 distance between their vector embeddings using SSA.",
"From the alignment, a similarity score is calculated and related to shared SCOP levels by ordinal regression.",
"(3) The contact prediction module uses the sequence of vector embeddings to predict contacts between amino acid positions within each protein.",
"The contact loss is calculated by comparing these predictions with contacts observed in the 3D structure of the protein.",
"Error signal from both tasks is used to fit the parameters of the encoder.We first benchmark our model's ability to correctly predict structural similarity between pairs of sequences using the SCOPe ASTRAL dataset BID4 .",
"This dataset contains protein domains manually classified into a hierarchy of structural categories (Appendix Figure 3) .",
"We show that our model dramatically outperforms other sequence-based protein comparison methods when predicting comembership in the SCOP hierarchy.",
"Remarkably, our model even outperforms TMalign BID5 , which requires structures as input and therefore structures must be known a priori.",
"In contrast, our model uses only sequence as input.",
"Next, we perform an ablation study to evaluate the importance of our modeling components for structural similarity prediction.",
"We also consider an additional task, secondary structure prediction, to assess the model's ability to capture local structure features.",
"We demonstrate that SSA outperforms alternative alignment methods for both of these tasks and that inclusion of the contact prediction training task further improves performance.Finally, we demonstrate that the embeddings learned by our model are generally applicable to other protein machine learning problems by leveraging our embeddings to improve the state-of-the-art in transmembrane prediction.",
"This work presents the first attempt in learning protein sequence embeddings from structure and takes a step towards bridging the sequence-structure divide with representation learning.",
"In this work, we proposed a novel alignment approach to learning contextual sequence embeddings with weak supervision from a global similarity measure.",
"Our SSA model is fully differentiable, fast to compute, and can be augmented with position-level structural information.",
"It outperforms competition in predicting protein structural similarity including, remarkably, structure alignment with TMalign.",
"One consideration of training using SCOP, however, is that we focus exclusively on single-domain protein sequences.",
"This means that the highly contextual embeddings given by the biLSTM encoder to single domains may differ from embeddings for the same domain in a multi-domain sequence.",
"One interesting extension would thus be to modify the encoder architecture or training procedure to better model domains in multi-domain contexts.",
"Nonetheless, the resulting embeddings are widely useful, allowing us to improve over the state-of-the-art in transmembrane region prediction, and can easily be applied to other protein prediction tasks such as predicting functional properties, active site locations, protein-protein interactions, etc.",
"Most methods that use HMM sequence profiles or position-specific scoring matrices could be augmented with our embeddings.",
"The broader framework extends to other related (non-biological) tasks.A APPENDIX Figure The bidirectional LSTM language model was trained on the full set of protein domain sequences in the Pfam database, 21,827,419 total sequences.",
"The language model was trained to predict the amino acid at position i given observations of all amino acids before i and all amino acids after i by minimizing the cross entropy loss with log predicted log probabilities given by the sum of the forward and reverse LM direction predictions DISPLAYFORM0 where p F (x i ) is the probability given by the forward direction LSTM and p R (x i ) is the probability given by the reverse direction LSTM.The language model architecture consisted of a 2-layer LSTM with 1024 units in each layer followed by a linear transformation into the 20-d amino acid prediction.",
"All parameters were shared between the forward and reverse direction components.",
"The model was trained for a single epoch using ADAM with a learning rate of 0.001 and minibatch size of 32."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3181818127632141,
0.1904761791229248,
0.2545454502105713,
0.2222222238779068,
0.2978723347187042,
0.2222222238779068,
0.375,
0.12121211737394333,
0.23999999463558197,
0.1395348757505417,
0.2083333283662796,
0.1702127605676651,
0.17391303181648254,
0.2222222238779068,
0.2222222238779068,
0.2222222238779068,
0.2461538463830948,
0.3870967626571655,
0.1818181723356247,
0.19512194395065308,
0.25,
0.2380952388048172,
0.17777776718139648,
0.3333333134651184,
0.4897959232330322,
0.1818181723356247,
0.10256409645080566,
0.1904761791229248,
0.30434781312942505,
0.23255813121795654,
0.25,
0.19512194395065308,
0.1818181723356247,
0.08888888359069824,
0.05882352590560913,
0.23255813121795654,
0.0952380895614624,
0.23188404738903046,
0.2916666567325592,
0.21739129722118378,
0.1428571343421936,
0.20512820780277252,
0.1463414579629898,
0.20408162474632263,
0.08888888359069824,
0.12903225421905518,
0.0476190410554409,
0.1428571343421936,
0.11627906560897827,
0.1666666567325592,
0.2666666507720947
] | SygLehCqtm | true | [
"We present a method for learning protein sequence embedding models using structural information in the form of global structural similarity between proteins and within protein residue-residue contacts."
] |
[
"Inspired by the adaptation phenomenon of biological neuronal firing, we propose regularity normalization: a reparameterization of the activation in the neural network that take into account the statistical regularity in the implicit space.",
"By considering the neural network optimization process as a model selection problem, the implicit space is constrained by the normalizing factor, the minimum description length of the optimal universal code.",
"We introduce an incremental version of computing this universal code as normalized maximum likelihood and demonstrated its flexibility to include data prior such as top-down attention and other oracle information and its compatibility to be incorporated into batch normalization and layer normalization.",
"The preliminary results showed that the proposed method outperforms existing normalization methods in tackling the limited and imbalanced data from a non-stationary distribution benchmarked on computer vision task.",
"As an unsupervised attention mechanism given input data, this biologically plausible normalization has the potential to deal with other complicated real-world scenarios as well as reinforcement learning setting where the rewards are sparse and non-uniform.",
"Further research is proposed to discover these scenarios and explore the behaviors among different variants.",
"The Minimum Description Length (MDL) principle asserts that the best model given some data minimizes the combined cost of describing the model and describing the misfit between the model and data BID9 with a goal to maximize regularity extraction for optimal data compression, prediction and communication BID4 .",
"Most unsupervised learning algorithms can be understood using the MDL principle BID10 , treating the neural network as a system communicating the input to a receiver.",
"If we consider the neural network training as the optimization process of a communication system, each input at each layers of the system can be described as a point in a low-dimensional continuous constraint space BID13 .",
"If we consider the neural networks as population codes, the constraint space can be subdivided into the input-vector space, the hidden-vector space, and the implicit space, which represents the underlying dimensions of variability in the other two spaces, i.e., a reduced representation of the constraint space.",
"For instance, given a image of an object, the rotated or scaled version still refers to the same object, thus each image instance of the same object can be represented by a position on a 2D implicit space with one dimension as orientation and the other as size BID13 .",
"The relevant information about the implicit space can be constrained to ensure a minimized description length of the system.",
"This type of constraint can also be found in biological brains of primates: high-level brain areas are known to send top-down feedback connections to lower-level areas to select of the most relevant information in the current input given the current task BID2 , a process similar to the communication system.",
"This type of modulation is performed by collecting statistical regularity in a hierarchical encoding process among brain areas.",
"One feature of the neural coding during the hierarchical processing is the adaptation: in vision neuroscience, vertical orientation reduce their firing rates to that orientaiton after the adaptation BID1 , while the cell responses to other orientations may increase BID3 .",
"These behaviors well match the information theoretical point-of-view that the most relevant information (saliency), which depends on the statistical regularity, have higher \"information\", just as the firing of the neurons.",
"The more regular the input features are, the lower it should yield the activation.",
"We introduce the minimum description length (MDL), such that the activation of neurons can be analogous to the code length of the model (a specific neuron or neuronal population) -a shorter code length would be assigned to a more regular input (such as after adaptation), and a longer code length to a more rare input or event.In this paper, we adopt the similar definition of implicit space as in BID13 , but extend it beyond unsupervised learning, into a generic neural network optimization problem in both supervised and unsupervised setting.",
"Given the neuroscience inspiration described above, we consider the formulation and computation of description length differently.",
"Instead of considering neural networks as population codes, we formulate each layer of neural networks during training a state of module selection.",
"In our setup, the description length is computed not in the scale of the entire neural networks, but by the unit of each layer of the network.",
"In addition, the optimization objective is not to minimize the description length, but instead, to take into account the minimum description length as part of the normalization procedure to reparameterize the activation of each neurons in each layer.",
"The computation of the description length (or model cost as in BID13 ) aims to minimize it, while we directly compute the minimum description length in each layer not to minimize anything, but to reassign the weights based on statistical regularities.",
"Finally, we compute the description length by an optimal universal code obtained by the batch input distribution in an online incremental fashion.We begin our presentation in section 2, formulating the problem setting in neural network training as a layer-specific model selection process under MDL principle.",
"We then introduce the proposed regularity normalization (RN) method, its formulation and the incremental implementation.",
"We also present several variants of the regularity normalization by incorporating batch and layer normalizations, termed regularity batch normalization (RBN) and regularity layer normalization (RLN), as well as including the data prior as a top-down attention mechanism during the training process, termed saliency normalization (SN).",
"In appendix A, we present the preliminary results on the imbalanced MNIST dataset and demonstrated that our approach is advantageous over existing normalization methods in different imbalanced scenarios.",
"In the last section, we conclude our methods and point out several future work directions as the next step of this research.",
"DISPLAYFORM0 Figure 1: Normalized maximal likelihood.",
"Data sample xi are drawn from the data distribution X and model θi is the optimal model that describes data xi with the shortest code length.",
"θj is an arbitrary model that is notθ3, so P (x3|θj) is not considered when computing optimal universal code according to NML formulation.Given a model class Θ consisting of a finite number of models parameterized by the parameter set θ.",
"Given a data sample x, each model in the model class describes a probability P (x|θ) with the code length computed as − log P (x|θ).",
"The minimum code length given any arbitrary θ would be given by L(x|θ(x)) = − log P (x|θ(x)) with model θ(x) which compresses data x most efficiently and offers the maximum likelihood P (x|θ(x)) BID4 .",
"However, the compressibility of the model will be unattainable for multiple inputs, as the probability distributions are different.",
"The solution relies on a universal code,P (x) defined for a model class Θ such that for any data sample x, the shortest code for x is always L(x|θ(x)) BID11 ).",
"Inspired by the neural code adaptation of biological brains, we propose a biologically plausible normalization method taking into account the regularity (or saliency) of the activation distribution in the implicit space, and normalize it to upweight activation for rarely seen scenario and downweight activation for commonly seen ones.",
"We introduce the concept from MDL principle and proposed to consider neural network training process as a model selection problem.",
"We compute the optimal universal code length by normalized maximum likelihood in an incremental fashion, and showed this implementation can be easily incorporated with established methods like batch normalization and layer normalization.",
"In addition, we proposed saliency normalization, which can introduce topdown attention and data prior to facilitate representation learning.",
"Fundamentally, we implemented with an incremental update of normalized maximum likelihood, constraining the implicit space to have a low model complexity and short universal code length.Preliminary results offered a proof of concept to the proposed method.",
"Given the limited experiments at the current state, our approach empirically outperforms existing normalization methods its advantage in the imbalanced or limited data scenario as hypothesized.",
"Next steps of this research include experiments with variants of the regularity normalization (SN, RLN, RBN etc. ), as well as the inclusion of top-down attention given by data prior (such as feature extracted from signal processing, or task-dependent information).",
"In concept, regularity-based normalization can also be considered as an unsupervised attention mechanism imposed on the input data.",
"As the next step, we are currently exploring this method to convolutional and recurrent neural networks, and applying to popular state-of-the-art neural network architectures in multiple modalities of datasets, as well as the reinforcement learning setting where the rewards can be very sparse and non-uniform.",
"Table 1 : Test errors of the imbalanced permutation-invariant MNIST 784-1000-1000-10 task \"Balanced\" \"Rare minority\" \"Highly imbalanced\" \"Dominant oligarchy\" n = 0 n = 1 n = 2 n = 3 n = 4 n = 5 n = 6 n = 7 n = 8 n = 9 baseline 4.80 ± 0.34 14."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2857142686843872,
0.3214285671710968,
0.1846153736114502,
0.28070175647735596,
0.158730149269104,
0.08888888359069824,
0.24242423474788666,
0.2641509473323822,
0.23728813230991364,
0.1515151411294937,
0.11594202369451523,
0.0833333283662796,
0.11594202369451523,
0.1666666567325592,
0.0923076868057251,
0.1090909019112587,
0,
0.22680412232875824,
0.08888888359069824,
0.2083333283662796,
0.07843136787414551,
0.1355932205915451,
0.158730149269104,
0.3142856955528259,
0.1818181723356247,
0.19354838132858276,
0.178571417927742,
0.11764705181121826,
0,
0.15686273574829102,
0.12121211737394333,
0.15686273574829102,
0.09677419066429138,
0.08695651590824127,
0.13793103396892548,
0.28985506296157837,
0.47999998927116394,
0.06666666269302368,
0.2083333283662796,
0.1904761791229248,
0.18867923319339752,
0.1230769157409668,
0.1249999925494194,
0.20588234066963196,
0.0307692252099514
] | HklJQ1JEDE | true | [
"Considering neural network optimization process as a model selection problem, we introduce a biological plausible normalization method that extracts statistical regularity under MDL principle to tackle imbalanced and limited data issue."
] |
[
"In this paper we study generative modeling via autoencoders while using the elegant geometric properties of the optimal transport (OT) problem and the Wasserstein distances.",
"We introduce Sliced-Wasserstein Autoencoders (SWAE), which are generative models that enable one to shape the distribution of the latent space into any samplable probability distribution without the need for training an adversarial network or defining a closed-form for the distribution.",
"In short, we regularize the autoencoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a predefined samplable distribution.",
"We show that the proposed formulation has an efficient numerical solution that provides similar capabilities to Wasserstein Autoencoders (WAE) and Variational Autoencoders (VAE), while benefiting from an embarrassingly simple implementation. \n",
"Scalable generative models that capture the rich and often nonlinear distribution of highdimensional data, (i.e., image, video, and audio), play a central role in various applications of machine learning, including transfer learning BID13 BID24 , super-resolution BID15 BID20 , image inpainting and completion BID34 , and image retrieval BID6 , among many others.",
"The recent generative models, including Generative Adversarial Networks (GANs) BID0 BID1 BID10 BID29 and Variational Autoencoders (VAE) BID4 BID14 BID23 enable an unsupervised and end-to-end modeling of the high-dimensional distribution of the training data.Learning such generative models boils down to minimizing a dissimilarity measure between the data distribution and the output distribution of the generative model.",
"To this end, and following the work of Arjovsky et al. BID0 and Bousquet et al. BID4 we approach the problem of generative modeling from the optimal transport point of view.",
"The optimal transport problem BID17 BID33 provides a way to measure the distances between probability distributions by transporting (i.e., morphing) one distribution into another.",
"Moreover, and as opposed to the common information theoretic dissimilarity measures (e.g., f -divergences), the p-Wasserstein dissimilarity measures that arise from the optimal transport problem:",
"1) are true distances, and",
"2) metrize a weak convergence of probability measures (at least on compact spaces).",
"Wasserstein distances have recently attracted a lot of interest in the learning community BID0 BID4 BID8 BID11 BID17 due to their exquisite geometric characteristics BID30 .",
"See the supplementary material for an intuitive example showing the benefit of the Wasserstein distance over commonly used f -divergences.In this paper, we introduce a new type of autoencoders for generative modeling (Algorithm 1), which we call Sliced-Wasserstein Autoencoders (SWAE), that minimize the sliced-Wasserstein distance between the distribution of the encoded samples and a predefined samplable distribution.",
"Our work is most closely related to the recent work by Bousquet et al. BID4 and the followup work by Tolstikhin et al. BID32 .",
"However, our approach avoids the need to perform costly adversarial training in the encoding space and is not restricted to closed-form distributions, while still benefiting from a Wasserstein-like distance measure in the encoding space that permits a simple numerical solution to the problem.In what follows we first provide an extensive review of the preliminary concepts that are needed for our formulation.",
"In Section 3 we formulate our proposed method.",
"The proposed numerical scheme to solve the problem is presented in Section",
"4. Our experiments are summarized in Section",
"5. Finally, our work is Concluded in Section 6.",
"We introduced Sliced Wasserstein Autoencoders (SWAE), which enable one to shape the distribution of the encoded samples to any samplable distribution.",
"We theoretically showed that utilizing the sliced Wasserstein distance as a dissimilarity measure between the distribution of the encoded samples and a predefined distribution ameliorates the need for training an adversarial network in the embedding space.",
"In addition, we provided a simple and efficient numerical scheme for this problem, which only relies on few inner products and sorting operations in each SGD iteration.",
"We further demonstrated the capability of our method on two mid-size image datasets, namely the MNIST dataset and the CelebA face dataset and showed results comparable to the techniques that rely on additional adversarial trainings.",
"Our implementation is publicly available BID0 ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.06451612710952759,
0.1428571343421936,
0.0714285671710968,
0,
0,
0.038461536169052124,
0.0624999962747097,
0,
0,
0,
0,
0,
0.07407407462596893,
0,
0.10344827175140381,
0,
0,
0,
0,
0,
0.15789473056793213,
0.05882352590560913,
0.054054051637649536,
0
] | B1VPIA7iM | true | [
"\"Generative modeling with no need for adversarial training\""
] |
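The "few inner products and sorting operations" mentioned in the record above (paper B1VPIA7iM) refer to the usual Monte Carlo approximation of the sliced Wasserstein distance. The sketch below is a generic version of that approximation, not the authors' code; it assumes the two sample sets have equal size, and the number of random projections is an arbitrary choice.

```python
import numpy as np

def sliced_wasserstein2(x, y, n_projections=50, seed=0):
    """Approximate squared sliced 2-Wasserstein distance between two
    equally sized sample sets x, y of shape (n, d): project onto random
    unit directions (inner products), sort each 1-D projection, and
    average the squared differences of the sorted values."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(x.shape[1], n_projections))
    theta /= np.linalg.norm(theta, axis=0, keepdims=True)   # unit projection directions
    x_proj = np.sort(x @ theta, axis=0)                     # inner products, then sort
    y_proj = np.sort(y @ theta, axis=0)
    return np.mean((x_proj - y_proj) ** 2)

# Identical point clouds give distance zero.
pts = np.random.default_rng(1).normal(size=(128, 2))
print(sliced_wasserstein2(pts, pts))  # 0.0
```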
[
"The goal of imitation learning (IL) is to learn a good policy from high-quality demonstrations.",
"However, the quality of demonstrations in reality can be diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs.",
"IL in such situations can be challenging, especially when the level of demonstrators' expertise is unknown.",
"We propose a new IL paradigm called Variational Imitation Learning with Diverse-quality demonstrations (VILD), where we explicitly model the level of demonstrators' expertise with a probabilistic graphical model and estimate it along with a reward function.",
"We show that a naive estimation approach is not suitable to large state and action spaces, and fix this issue by using a variational approach that can be easily implemented using existing reinforcement learning methods.",
"Experiments on continuous-control benchmarks demonstrate that VILD outperforms state-of-the-art methods.",
"Our work enables scalable and data-efficient IL under more realistic settings than before.",
"The goal of sequential decision making is to learn a policy that makes good decisions (Puterman, 1994) .",
"As an important branch of sequential decision making, imitation learning (IL) (Schaal, 1999) aims to learn such a policy from demonstrations (i.e., sequences of decisions) collected from experts.",
"However, high-quality demonstrations can be difficult to obtain in reality, since such experts may not always be available and sometimes are too costly (Osa et al., 2018) .",
"This is especially true when the quality of decisions depends on specific domain-knowledge not typically available to amateurs; e.g., in applications such as robot control (Osa et al., 2018) and autonomous driving (Silver et al., 2012) .",
"In practice, demonstrations are often diverse in quality, since it is cheaper to collect demonstrations from mixed demonstrators, containing both experts and amateurs (Audiffren et al., 2015) .",
"Unfortunately, IL in such settings tends to perform poorly, since low-quality demonstrations often negatively affect the performance of IL methods (Shiarlis et al., 2016) .",
"For example, amateurs' demonstrations for robotics can be cheaply collected via a robot simulation (Mandlekar et al., 2018 ), but such demonstrations may cause damages to the robot which is catastrophic in the real-world (Shiarlis et al., 2016) .",
"Similarly, demonstrations for autonomous driving can be collected from drivers in public roads (Fridman et al., 2017) , which may contain traffic-accident demonstrations.",
"Learning a self-driving car from these low-quality demonstrations may cause traffic accidents.",
"When the level of demonstrators' expertise is known, multi-modal IL (MM-IL) can be used to learn a good policy with diverse-quality demonstrations Hausman et al., 2017; Wang et al., 2017) .",
"Specifically, MM-IL aims to learn a multi-modal policy, where each mode of the policy represents the decision making of each demonstrator.",
"When knowing the level of demonstrators' expertise, good policies can be obtained by selecting modes that correspond to the decision making of high-expertise demonstrators.",
"However, in practice, it is difficult to truly determine the level of demonstrators' expertise beforehand.",
"Without knowing the level of expertise, it is difficult to distinguish the decision making of experts and amateurs, and learning a good policy is challenging.",
"To overcome the issue of MM-IL, pioneer works have proposed to estimate the quality of each demonstration using auxiliary information from experts (Audiffren et al., 2015; Wu et al., 2019; Brown et al., 2019) .",
"Specifically, Audiffren et al. (2015) inferred the demonstration quality using similarities between diverse-quality demonstrations and high-quality demonstrations, where the latter are collected in a small number from experts.",
"In contrast, Wu et al. (2019) proposed to estimate the demonstration quality using a small number of demonstrations with confidence scores.",
"Namely, the score value given by an expert is proportion to the demonstration quality.",
"Similarly, the demonstration quality can be estimated by ranked demonstrations, where ranking from an expert is evaluated due to the relative quality (Brown et al., 2019) .",
"To sum up, these methods rely on auxiliary information from experts, namely high-quality demonstrations, confidence scores, and ranking.",
"In practice, these pieces of information can be scarce or noisy, which leads to a poor performance of these methods.",
"In this paper, we consider a novel but realistic setting of IL where only diverse-quality demonstrations are available.",
"Meanwhile, the level of demonstrators' expertise and auxiliary information from experts are fully absent.",
"To tackle this challenging setting, we propose a new learning paradigm called variational imitation learning with diverse-quality demonstrations (VILD).",
"The central idea of VILD is to model the level of demonstrators' expertise via a probabilistic graphical model, and learn it along with a reward function that represents an intention of expert's decision making.",
"To scale up our model for large state and action spaces, we leverage the variational approach (Jordan et al., 1999) , which can be implemented using reinforcement learning (RL) for flexibility (Sutton & Barto, 1998) .",
"To further improve data-efficiency of VILD when learning the reward function, we utilize importance sampling (IS) to re-weight a sampling distribution according to the estimated level of demonstrators' expertise.",
"Experiments on continuous-control benchmarks and real-world crowdsourced demonstrations (Mandlekar et al., 2018) denote that:",
"1) VILD is robust against diverse-quality demonstrations and outperforms existing methods significantly.",
"2) VILD with IS is data-efficient, since it learns the policy using a less number of transition samples.",
"In this paper, we explored a practical setting in IL where demonstrations have diverse-quality.",
"We showed the deficiency of existing methods, and proposed a robust method called VILD, which learns both the reward function and the level of demonstrators' expertise by using the variational approach.",
"Empirical results demonstrated that our work enables scalable and data-efficient IL under this practical setting.",
"In future, we will explore other approaches to efficiently estimate parameters of the proposed model except the variational approach.",
"We will also explore approaches to handle model misspecification, i.e., scenarios where the noisy policy differs from the model p ω .",
"Specifically, we will explore more flexible models of p ω such as neural networks, as well as using the tempered posterior approach (Grünwald & van Ommen, 2017) to improve robustness of our model."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4117647111415863,
0.1904761791229248,
0.17142856121063232,
0.2800000011920929,
0.16326530277729034,
0,
0,
0.1666666567325592,
0.3829787075519562,
0.08695651590824127,
0.072727270424366,
0.1304347813129425,
0.1395348757505417,
0.11320754140615463,
0.1463414579629898,
0.12903225421905518,
0.3333333432674408,
0.1621621549129486,
0.24390242993831635,
0.23529411852359772,
0.19999998807907104,
0.1249999925494194,
0.17391303181648254,
0.19999998807907104,
0.1875,
0.1818181723356247,
0.05405404791235924,
0.10810810327529907,
0.1621621549129486,
0.24242423474788666,
0.3243243098258972,
0.2800000011920929,
0.037735845893621445,
0.22727271914482117,
0.05882352590560913,
0.12903225421905518,
0.10810810327529907,
0.12121211737394333,
0.2666666507720947,
0,
0.10810810327529907,
0.14999999105930328,
0.08163265138864517
] | SJgNkpVFPr | true | [
"We propose an imitation learning method to learn from diverse-quality demonstrations collected by demonstrators with different level of expertise."
] |
[
"With a growing number of available services, each having slightly different parameters, preconditions and effects, automated planning on general semantic services become highly relevant.",
"However, most exiting planners only consider PDDL, or if they claim to use OWL-S, they usually translate it to PDDL, losing much of the semantics on the way.\n",
"In this paper, we propose a new domain-independent heuristic based on a semantic distance that can be used by generic planning algorithms such as A* for automated planning of semantic services described with OWL-S.",
"For the heuristic to include more relevant information we calculate the heuristic at runtime.",
"Using this heuristic, we are able to produce better results (fewer expanded states) in less time than with established techniques.",
"We motivate our work by the need of a heuristic for AI planning.",
"Since the search space of domain-independent planners for large problems becomes computationally intractable BID9 we need heuristics to guide our search through the state space.For domain-specific planners that have a special purpose (e.g., finding a route from one place to another for a GPS traffic guidance systems), a heuristic can easily be provided e.g. the Manhattan-Distance or the Euclidean distance.",
"But for an agent which has the capability of creating general plans, these heuristics are not sufficient.",
"This means it is impossible for our general purpose planner to create a problem specific heuristic at design time.Even reusing old ones like it is done for meta-heuristics or learning parameters of hyper-heuristics have only been successfully applied to simple problems BID20 .",
"Meta-heuristics or hyper-heuristics have an additional drawback: they need a learning phase to gather information about the problem to be solved.The calculation of the heuristic during runtime is motivated by the additional information available like the grounding information which could consist of concrete in-dividuals to abstract classes describing e.g. the parameters of a service.The creation of heuristics during runtime can lead to the encounter of new concepts used in an interface definition like a service description, which then lead us back to a fundamental question in AI research: How can AI make sense of new concepts?",
"For heuristics this means interpreting the new concepts and adding information to classical heuristic approaches.",
"A function H : state → R + is called heuristic (Russel and Norvig 2002, p. 92) and estimates the distance from a state to a given goal.",
"We extend this definition of heuristic to H : service × state × goal → R + making the heuristic more dynamic since now it is able to adapt with changing goals and services.",
"With that, the heuristic determines the usefulness of the given service in the current state regarding a current goal.",
"This is done because if an alone state would be the information source for the heuristic, information like the service description would be lost.The interested reader is referred to (Pearl 1985) for a formal description of heuristics and their properties.",
"During our analysis of this topic, we have found that understanding the described functionality of a service is an AI-hard task BID26 .",
"This is because interpretation of what a description creator might have understood the service to be, might not be entirely reflected in the description.",
"Furthermore, the service can have multiple interpretations in different contexts.",
"Here the context we defined is the additional information relevant to our problem.",
"As an example strategy for problem-solving using a heuristic, we have selected planning.",
"This means our context consists of the start and goal state which include a domain description.Starting from this setup, we need to evaluate if a capability is useful in the endeavour of finding a plan solving our problem.The approach presented in in Figure 1 is a goal-oriented heuristic at runtime utilizing the semantics of the goal and capability description.",
"The heuristic is thought for a oneshop-planning problem, where it is expensive for the agent to try out services since we are looking at possible worldaltering capabilities, which means a learning phase should be kept as short as possible.",
"This is done at runtime so that we know the goal we want to fulfill and can create a heuris- Figure 1: Abstract approach to a greedy heuristic tic reflecting the given problem.",
"We do so by looking at the goal and the capability description we encounter to calculate the usefulness of a capability.",
"Here the idea of the heuristic is to find useful capabilities to try in our search to the goal state, and reprobate others.",
"The heuristic additionally estimates how much of the precondition is fulfilled to see which capabilities are more likely to be executed.",
"These two evaluations of the service are then combined to create our heuristic.",
"In this section we will first look at the state of the art of heuristics in Section 2.",
"Then we create our goal oriented heuristic in Section 3, select a data set to test this heuristic on in Section 4 and discuss the results in Section 5.",
"We conclude this experiment in Section 6."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.19607841968536377,
0.1538461446762085,
0.24137930572032928,
0.10256409645080566,
0.04255318641662598,
0.20000000298023224,
0.17721518874168396,
0.27272728085517883,
0.12121211737394333,
0.1702127605676651,
0.1904761791229248,
0.26923075318336487,
0.24137930572032928,
0.3333333432674408,
0.3333333432674408,
0.2083333283662796,
0.25,
0.10810810327529907,
0.10256409645080566,
0.14999999105930328,
0.25,
0.19672130048274994,
0.2142857164144516,
0.35555556416511536,
0.21739129722118378,
0.1702127605676651,
0.20000000298023224,
0.1395348757505417,
0.19607841968536377,
0
] | rJMK4bD6vE | true | [
"Describing a semantic heuristics which builds upon an OWL-S service description and uses word and sentence distance measures to evaluate the usefulness of services for a given goal. "
] |
[
"In colored graphs, node classes are often associated with either their neighbors class or with information not incorporated in the graph associated with each node.",
"We here propose that node classes are also associated with topological features of the nodes.",
"We use this association to improve Graph machine learning in general and specifically, Graph Convolutional Networks (GCN). \n\n",
"First, we show that even in the absence of any external information on nodes, a good accuracy can be obtained on the prediction of the node class using either topological features, or using the neighbors class as an input to a GCN.",
"This accuracy is slightly less than the one that can be obtained using content based GCN.\n\n",
"Secondly, we show that explicitly adding the topology as an input to the GCN does not improve the accuracy when combined with external information on nodes.",
"However, adding an additional adjacency matrix with edges between distant nodes with similar topology to the GCN does significantly improve its accuracy, leading to results better than all state of the art methods in multiple datasets.",
"One of the central assumptions in node classification tasks is that neighboring nodes have similar classes (Ji & Han, 2012; Berberidis & Giannakis, 2018; Zhu et al., 2003b; Sindhwani et al., 2005) .",
"This has been extensively used in node classification tasks (Belkin & Niyogi, 2004; Zhu et al., 2003a) .",
"Such approaches are now often denoted as graph neural networks (i.e. machine learning where the input is a graph/network) (Scarselli et al., 2008; Gori et al., 2005; Li et al., 2015) .",
"Four main approaches have been proposed to take advantage of a graph in machine learning:",
"• Regularize the output requiring that neighboring nodes either in the graph or in its projection have similar classes.",
"• Use the graph to propagate labels and learn the best propagation.",
"• Use the graph to project the nodes to real valued vectors and use those for supervised or unsupervised learning.",
"• Use Graph Convolutional Networks (GCN) for convolutions on the input of a node and its neighbors.",
"Regularization or graph partitioning methods include among others partitioning the graphs based on the eigenvalues of the Laplacian (assuming that nodes with the same partition have similar classes).",
"The Laplacian of a graph is : L = D −A , where D is a diagonal matrix, with the sum of each row in the diagonal and A is the adjacency matrix.",
"This Laplacian is often weighted by multiplying it on the left and the right by D to normalize for the degree (Dhillon et al., 2007; Karypis & Kumar, 1995) .",
"Other works have used variants of this idea, each using smoothness and graph distance differently (Belkin & Niyogi, 2004; Sindhwani et al., 2005) .",
"An alternative approach is to use quadratic penalty with fixed labels for seed nodes (Zhou et al., 2004; Zhu et al., 2003a) .",
"Multiple diffusion and information propagation models have also been proposed either through explicit diffusion, or through the projection of nodes into real valued vectors (Rosenfeld & Globerson, 2017) .",
"For example, DeepWalk (Perozzi et al., 2014) , where a truncated random walk is performed on nodes.",
"It then uses these sentences as an input to skipgram to compute a projection of each word into R N , maximizing the sentence probability.",
"Planetoid Yang et al. (2016) also uses random walks combined with negative sampling.",
"Duvenaud et al. (2015) uses a translation of subgraphs to hash functions for a similar task in the context of molecule classifications.",
"A very similar approach was presented by Grover & Leskovec (2016) (Node2Vec) by projecting nodes minimizing the distance of neighbored nodes in a truncated random walk.",
"The DNGR model (Cao et al., 2016) uses random walk to compute the mutual information between points (the PPMI-positive pointwise mutual information), and then a SVD decomposition to project into space.",
"PPMI was used for word representations in Levy et al. (2015) and is a sparse high dimensional representation.",
"Another possible approach is the projection of the graph (often using the Laplacian eigenvectors), and the usage of the projection for classification (and not only for a smoothness based regularization), where either the graph itself is used (in such a case, the eigenvectors themselves are used) or an input to the graph is used.",
"In such a case, a convolution with these eigenvectors was used (Masci et al., 2015; Monti et al., 2017) .",
"A Multi-Dimensional-Scaling (MDS) projection of the points in the graphs was also used for a similar goal (Belkin & Niyogi, 2002; Levy et al., 2015) .",
"Alternative approaches were inspired again by word embedding methods (Mikolov et al., 2013) such as word2vec.",
"These methods use the graph to define a context in relation to which the node embedding is constructed.",
"When the data includes only the graph, the embeddings are used as features and fed into existing predictors (Perozzi et al., 2014) .",
"These methods can be thought of as propagating features rather than labels.",
"Henderson et al. (2011) defines local features to translate each node to a features vector and use those to predict classes.",
"Recently, Kipfs and collaborators, in a seminal work, proposed a simplification of spectral based convolutions (Kipf & Welling, 2016; Schlichtkrull et al., 2018) , and instead use a two-layer approach, which can be summarized as:",
"whereà is a normalized adjacency matrix:",
"They test their work on multiple graphs with labeled nodes including CiteSeer, Cora, Pubmed, and Nell.",
"Convolution approaches can also be used with the graph as a filter on the input.",
"Most such convolutions are spectral (use the Laplacian eigenvectors).",
"However, recent methods are based on random filters.",
"Those include among others: Atwood & Towsley (2016) which defines predetermined convolutions with powers of the adjacency matrix and then combines these powers using learned weights to maximize the classification precision of either the full graph or the classification of nodes.",
"Bruna et al. (2013) provide a multi-level graph convolution with pooling, where at each stage nodes are merged into clusters using agglomerative clustering methods, and combine it with a pooling method to represent the different resolution of images.",
"This has been extended (Henaff et al., 2015; to different convolutional kernels (mainly spectral, but also diffusion-based kernels) and the classification of images, using ImageNet (see for a detailed review of all convolution methods).",
"Vandergheynst and collaborators mainly use polynomial convolution in the spectral domain.",
"Similar formalisms were used to study not only single snapshots, but also with recurrent networks time series of graphs, mainly again in image analysis (Seo et al., 2018) .",
"Over the last 3 years, over 1,500 extensions and applications of GCN have been published in combination with many other learning methods, including among many others combinations of GCN with recurrent neural networks (Ling et al., 2019) , with GANs (Lei et al., 2019) and with active learning (Abel & Louzoun, 2019) .",
"GCNs capture dependencies of nodes' features.",
"However, current techniques consider only local neighborhoods.",
"Thus, long-range dependencies can only be captured when these operations are applied repeatedly, propagating signals progressively through the data.",
"To catch long-range dependencies, Kipf & Welling (2016) proposed stacking multiple layers of GCN.",
"While this is possible in theory, it has never been successfully applied.",
"In practice, GCN models work the best with 2-3 layers (Kipf & Welling, 2016; Monti et al., 2017; Veličković et al., 2017; Levie et al., 2018; Fey et al., 2018) .",
"Abu-El-Haija et al. (2018) proposed using NGCN train multiple instances of GCNs over different distances regions.",
"While this led to good performance, it is highly inefficient and does not scale to long distances (as the number of models scales linearly with the desired length).",
"However, long range correlations can be obtained from a different direction.",
"Recently, a correlation has been shown between the topological attributes (e.g. degree, centrality, clustering coefficient...) of nodes and their class (Shi & Malik, 2000; Yang et al., 2013; Cannistraci et al., 2013; Rosen & Louzoun, 2015; Naaman et al., 2018) .",
"Inspired by the improvement of non-local operations in a variety of tasks in the field of computer vision Wang et al. (2018) , we propose a novel non-local operation for GCN, based on the topology of the graph.",
"Our operation is generic and can be implemented with every GCN to capture long-range dependencies, allowing information propagation to distant nodes.",
"There are several advantages of using non-local operations:",
"(a) In contrast to the standard local convolution layer, non-local operations capture long-range dependencies directly by computing interactions between any two nodes, regardless of their positional distance;",
"(b) As we show in experiments, non-local operations are efficient and achieve their best results even with only a few layers;",
"(c) Finally, our non-local convolution can be easily combined with other graph convolution techniques (e.g. GCN, GAT).",
"Convolution methods to aggregate information from multiple distances are among the leading image classification methods.",
"In images, most of these convolutions are symmetric and sometimes isotropic around each point.",
"However, in contrast with images that are typically overlaid on a 2D lattice, graphs have a complex topology.",
"This topology is highly informative of the properties of nodes and edges (Rosen & Louzoun, 2015; Naaman et al., 2018) , and can thus be used to classify their classes.",
"This complex topology can be combined with convolutional networks to improve their accuracy.",
"In undirected graphs, the topology can often be captured by a distance maintaining projection into R N , using unsupervised methods, such as the classical MDS (Kruskal, 1964), or supervised methods to minimize the distance between nodes with similar classes in the training set (Cao et al., 2016) .",
"In directed graphs, a more complex topology emerges from the asymmetry between incoming and outgoing edges (i.e., the distance between node i and node j differs from the distance between node j and node i), creating a distribution of subgraphs around each node often denoted sub-graph motifs (Milo et al., 2002) .",
"Such motifs have been reported to be associated with both single node/edge attributes as well as whole-graph attributes (Milo et al., 2002) .",
"We have here shown that in a manuscript assignment task, the topology around each node is indeed associated with the manuscript class.",
"In order to combine topological information with information propagation, we proposed a novel GCN where the fraction of second neighbors belonging to each class is used as an input, and the class of the node is compared to the softmax output of the node.",
"This method can indeed produce a high classification accuracy, but less than the one obtained using a BOW input.",
"Moreover, explicitly combining the topology as an input with the BOW reduces the accuracy.",
"However, using the topology to add new edges between nodes with similar topological features actually significantly improves performance in most studied datasets.",
"This suggests that the topology is better used to correlate between the class of distant nodes than to be actually used as an input.",
"The results presented here are a combination of information propagation and topology-based classification.",
"While each of these two elements was previously reported, their combination into a single coherent GCN based classifier provides a novel content independent method to classify nodes.",
"With the current ever-increasing concerns about privacy, new content independent methods for node classification become essential.",
"The citation networks contain scientific papers divided into classes by their research field.",
"Edges describe citations in the data set.",
"BOW is also available to describe each publication in the dataset.",
"BOW can be either a 1/0 vector or a TF/IDF weighted word vector for PubMed.",
"Coauthor CS and Coauthor Physics are co-authorship graphs based on the Microsoft Academic Graph from the KDD Cup 2016 challenge 3.",
"Here, nodes are authors, that are connected by an edge if they co-authored a paper, node features represent paper keywords for each authors papers, and class labels indicate the most active fields of study for each author.",
"Here are the parameters used for each of the models.",
"For T-GCN and T-GAT the parameters were optimized for PubMed ( as observed by Monti et al. (2017) and Veličković et al. (2017) ) except for Cora data for set which we used slightly different parameters (denotes as T-GCN Cora, and T-GAT Cora).",
"The parameters are summarized in Table 3 .",
"In all models, the activation function of the last layer is Softmax.",
"The activation function of the first layer is presented In Table 3 .",
"Hidden size X+Y means size of X for the original GCN operator and Y for the GCN on the dual graph.",
"The two outputs are concatenated to a total of X+Y size.",
"GAT heads X,Y,Z means X heads for the original GAT operator, and Y heads for the GAT on the dual graph.",
"Z is the number of heads in the last layer.",
"See Models And Data for more details."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0,
0.27272728085517883,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0.27272728085517883,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0.0833333283662796,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | ryey1TNKvH | true | [
"Topology-Based Graph Convolutional Network (GCN)"
] |
[
"Flies and mice are species separated by 600 million years of evolution, yet have evolved olfactory systems that share many similarities in their anatomic and functional organization.",
"What functions do these shared anatomical and functional features serve, and are they optimal for odor sensing?",
"In this study, we address the optimality of evolutionary design in olfactory circuits by studying artificial neural networks trained to sense odors.",
"We found that artificial neural networks quantitatively recapitulate structures inherent in the olfactory system, including the formation of glomeruli onto a compression layer and sparse and random connectivity onto an expansion layer.",
"Finally, we offer theoretical justifications for each result.",
"Our work offers a framework to explain the evolutionary convergence of olfactory circuits, and gives insight and logic into the anatomic and functional structure of the olfactory system.",
"Over the last two decades, both the anatomic and functional organization of the fly and mouse olfactory systems have been mapped to excruciating detail, affording knowledge of how odors are processed along the entirety of olfactory pathway.",
"In both model organisms, the layout of the olfactory system is two layers deep and comprises of a compression layer and an expansion layer.",
"Olfactory perception is initiated by the recognition of odorants by a large repertoire of receptors in the sensory epithelium (Buck & Axel, 1991) .",
"In fruit flies, individual olfactory receptor neurons (ORNs) express only one of 50 different olfactory receptors (ORs), and all neurons (10 on average) that express the same receptor converge with precision onto a unique set of 2-4 projection neurons (PNs) through a specialized structure known as an olfactory glomerulus (Vosshall, Wong, & Axel, 2000) .",
"This layout establishes a one-to-one mapping between ORs and PNs.",
"Information is then conveyed to an expansion layer of 2,500 Kenyon Cells (KCs) through sparse and random connectivity to support a high dimensional representation of odor information before it is classified by 20 read-out neurons, the mushroom body output neurons (MBONs).",
"Experiments reveal that synaptic plasticity at the KC-MBON synapse is necessary and causal in odor learning.",
"The only major differences between the circuits of mice and flies appear to be numerical.",
"Whereas the fly olfactory system consists of 50 ORs, 50 glomeruli, and 2500 KCs, the mouse olfactory system consists of 1500 ORs, 1500 glomeruli, and 1 million piriform neurons.",
"The fact that evolution has evolved to hardwire the same architecture in flies, mice, and multiple other organisms suggests that such an architecture is optimal for the general task of odor sensing.",
"Although we have a detailed anatomy of the olfactory system in both flies and mice, it is unclear why certain features are optimal for odor sensing.",
"In particular,",
"1) why does every ORN express a single OR,",
"2) why is information preserved through a one-to-one mapping between ORs and PNs, and",
"3) why is connectivity onto the expansion layer sparse and random (Litwin-Kumar, Harris, Axel, Sompolinsky, & Abbott, 2017)?",
"To study optimal circuit design, we use a goal-driven approach to train an artificial neural network to classify odors and then analyze the anatomical and functional structures that emerge after training.",
"This approach has recently been used to study the functional profiles of the ventral stream in visual object processing (Yamins & DiCarlo, 2016) .",
"The simplicity of the fly olfactory circuit and the exhaustive knowledge that we have of its anatomy provides constraints that can be used to gain insight into evolutionary design.",
"We trained artificial neural networks using stochastic gradient descent to classify odors.",
"We found that glomeruli emerged in PN layer, and sparse random connectivity emerges in the PN to KC connections.",
"We then explored the sufficient conditions that enabled these features to emerge.",
"We found that the formation of glomeruli did not depend on input noise but rather on the existence of an expansion layer downstream.",
"In addition, we found that an expansion layer with a synaptic degree of 7 endows the olfactory system with robustness by allowing for large tolerances in synaptic efficacies without affecting task performance.",
"Our work offers a framework to explain the Optimal K predicted by maximal weight robustness (red) and direct training (plus signs).",
"The line is a power-law fit of the red dots."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2978723347187042,
0.05405404791235924,
0.41860464215278625,
0.3265306055545807,
0,
0.23255813121795654,
0.2745097875595093,
0.19512194395065308,
0.1463414579629898,
0.1492537260055542,
0.06451612710952759,
0.1355932205915451,
0.1621621549129486,
0.3333333432674408,
0.19999998807907104,
0.2800000011920929,
0.25531914830207825,
0,
0.05882352590560913,
0.10256409645080566,
0.3199999928474426,
0.1860465109348297,
0.21276594698429108,
0.3636363446712494,
0.21052631735801697,
0.12121211737394333,
0.09756097197532654,
0.15686273574829102,
0.1428571343421936,
0.12903225421905518
] | BylUXXFI8S | true | [
"Artificial neural networks evolved the same structures present in the olfactory systems of flies and mice after being trained to classify odors"
] |
[
"The recent direction of unpaired image-to-image translation is on one hand very exciting as it alleviates the big burden in obtaining label-intensive pixel-to-pixel supervision, but it is on the other hand not fully satisfactory due to the presence of artifacts and degenerated transformations.",
"In this paper, we take a manifold view of the problem by introducing a smoothness term over the sample graph to attain harmonic functions to enforce consistent mappings during the translation.",
"We develop HarmonicGAN to learn bi-directional translations between the source and the target domains.",
"With the help of similarity-consistency, the inherent self-consistency property of samples can be maintained.",
"Distance metrics defined on two types of features including histogram and CNN are exploited.",
"Under an identical problem setting as CycleGAN, without additional manual inputs and only at a small training-time cost, HarmonicGAN demonstrates a significant qualitative and quantitative improvement over the state of the art, as well as improved interpretability.",
"We show experimental results in a number of applications including medical imaging, object transfiguration, and semantic labeling.",
"We outperform the competing methods in all tasks, and for a medical imaging task in particular our method turns CycleGAN from a failure to a success, halving the mean-squared error, and generating images that radiologists prefer over competing methods in 95% of cases.",
"Image-to-image translation BID15 aims to learn a mapping from a source domain to a target domain.",
"As a significant and challenging task in computer vision, image-to-image translation benefits many vision and graphics tasks, such as realistic image synthesis BID15 BID41 , medical image generation BID39 BID9 , and domain adaptation BID13 .",
"Given a pair of training images with detailed pixel-to-pixel correspondences between the source and the target, image-to-image translation can be cast as a regression problem using e.g. Fully Convolutional Neural Networks (FCNs) BID23 by minimizing e.g. the per-pixel prediction loss.",
"Recently, approaches using rich generative models based on Generative Adaptive Networks (GANs) BID11 BID27 BID0 have achieved astonishing success.",
"The main benefit of introducing GANs BID11 to image-to-image translation BID15 is to attain additional image-level (often through patches) feedback about the overall quality of the translation, and information which is not directly accessible through the per-pixel regression objective.The method by BID15 is able to generate high-quality images, but it requires paired training data which is difficult to collect and often does not exist.",
"To perform translation without paired data, circularity-based approaches BID41 BID17 BID37 have been proposed to learn translations from a set to another set, using a circularity constraint to establish relationships between the source and target domains and forcing the result generated from a sample in the source domain to map back and generate the original sample.",
"The original image-to-image translation problem BID15 ) is supervised in pixel-level, whereas the unpaired image-to-image translation task BID41 ) is considered unsupervised, with pixel-level supervision absent but with adversarial supervision at the image-level (in the target domain) present.",
"By using a cycled regression for the pixel-level prediction (source→target→source) plus a term for the adversarial difference between the transferred images and the target images, CycleGAN is able to successfully, in many cases, train a translation model without paired source→target supervision.",
"However, lacking a mechanism to enforce regularity in the translation creates problems like in Fig To combat the above issue, in this paper we look at the problem of unpaired image-to-image translation from a manifold learning perspective BID33 BID28 .",
"Intuitively, the problem can be alleviated by introducing a regularization term in the translation, encouraging similar contents (based on textures or semantics) in the same image to undergo similar translations/transformations.",
"A common principle in manifold learning is to preserve local distances after the unfolding: forcing neighboring (similar) samples in the original space to be neighbors in the new space.",
"The same principle has been applied to graph-based semisupervised learning BID44 where harmonic functions with graph Laplacians BID45 BID2 are used to obtain regularized labels of unlabeled data points.During the translation/transformation, some domain-specific attributes are changed, such as the colors, texture, and semantics of certain image regions.",
"Although there is no supervised information for these changes, certain consistency during the transformation is desirable, meaning that for image contents similar in the source space should also be similar in the target space.",
"Inspired by graphbased semi-supervised learning BID45 BID44 , we introduce smoothness terms to unpaired image-to-image translation BID41 by providing a stronger regularization for the translation/transformation between the source and target domains, aiming to exploit the \"manifold structure\" of the source and target domains.",
"For a pair of similar samples (two different locations in an image; one can think of them as two patches although the receptive fields of CNN are quite large), we add the smoothness term to minimize a weighted distance of the corresponding locations in the target image.",
"Note that two spatially distant samples might be neighbors in the feature space.",
"We name our algorithm HarmonicGAN as it behaves harmonically along with the circularity and adversarial constraints to learn a pair of dual translations between the source and target domains, as shown in FIG0 .",
"Distance metrics defined on two alternative features are adopted: (1) a low-level soft RGB histograms; and (2) CNN (VGG) features with pre-trained semantics.We conduct experiments in a number of applications, showing that in each of them our method outperforms existing methods quantitatively, qualitatively, and with user studies.",
"For a medical imaging task BID6 that was recently calling attention to a major CycleGAN failure case (learning to accidentally add/remove tumors in an MRI image translation task), our proposed method provides a large improvement over CycleGAN, halving the mean-squared error, and generating images that radiologists prefer over competing methods in 95% of cases.",
"CONTRIBUTIONS 1.",
"We introduce smooth regularization over the graph for unpaired image-to-image translation to attain harmonic translations.2.",
"When building an end-to-end learning pipeline, we adopt two alternative types of feature measures to compute the weight matrix for the graph Laplacian, one based on a soft histogram BID35 and another based on semantic CNN (VGG) features BID31 .3.",
"We show that this method results in significantly improved consistency for transformations.",
"With experiments on multiple translation tasks, we demonstrate that HarmonicGAN outperforms the state-of-the-art.",
"We introduce a smoothness term over the sample graph to enforce smoothness-consistency between the source and target domains.",
"We have shown that by introducing additional regularization to enforce consistent mappings during the image-to-image translation, the inherent self-consistency property of samples can be maintained.",
"Through a set of quantitative, qualitative and user studies, we have demonstrated that this results in a significant improvement over the current state-of-the-art methods in a number of applications including medical imaging, object transfiguration, and semantic labeling.",
"In a medical imaging task in particular our method provides a very significant improvement over CycleGAN.",
"(1) They show different motivations and formulations.",
"The distance constraint aims to preserve the distance between samples in the mapping in a direct way, so it minimizes the expectation of differences between distances in two domains.",
"The distance constraint in DistanceGAN is not doing a graph-based Laplacian to explicitly enforce smoothness.",
"In contrast, the smoothness constraint is designed from a graph Laplacian to build the similarity-consistency between image patches.",
"Thus, the smoothness constraint uses the affinity between two patches as weight to measure the similarityconsistency between two domains.",
"The whole idea is based on manifold learning.",
"The smoothness term defines a Laplacian ∆ = D − W , where W is our weight matrix and D is a diagonal matrix with D i = j w ij , thus, the smoothness term defines a graph Laplacian with the minimal value achieved as a harmonic function.(2",
") They are different in implementation. The",
"smoothness constraint in HarmonicGAN is computed on image patches while the distance constraint in DistanceGAN is computed on whole image samples. Therefore",
", the smoothness constraint is fine-grained compared to the distance constraint. Moreover",
", the distances in DistanceGAN is directly computed from the samples in each domain. They scale",
"the distances with the precomputed means and stds of two domains to reduce the effect of the gap between two domains. Differently",
", the smoothness constraint in HarmonicGAN is measured on the features (Histogram or CNN features) of each patch, which maps samples in two domains into the same feature space and removes the gap between two domains.(3) They show",
"different results. FIG5 shows the",
"qualitative results of CycleGAN, DistanceGAN and the proposed HarmonicGAN on the BRATS dataset. As shown in FIG5",
", the problem of randomly adding/removing tumors in the translation of CycleGAN is still present in the results of Distance-GAN, while HarmonicGAN can correct the location of tumors. Table 1 shows the",
"quantitative results on the whole test set, which also yields the same conclusion. The results of DistanceGAN",
"on four metrics are even worse than CycleGAN, while HarmonicGAN yields a large improvement over CycleGAN."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1599999964237213,
0.19512194395065308,
0,
0,
0,
0.08695651590824127,
0.12903225421905518,
0.12244897335767746,
0.07692307233810425,
0.13333332538604736,
0.07843136787414551,
0,
0.0624999962747097,
0.10526315122842789,
0.1818181723356247,
0.12244897335767746,
0.1702127605676651,
0.09999999403953552,
0.05405404791235924,
0.03448275476694107,
0.1463414579629898,
0.20408162474632263,
0.038461536169052124,
0.07407406717538834,
0.045454539358615875,
0.0357142835855484,
0.09677419066429138,
0.46666666865348816,
0.07843136787414551,
0.4615384638309479,
0.07407406717538834,
0.19354838132858276,
0.10526315122842789,
0.1304347813129425,
0.13793103396892548,
0,
0.05405404791235924,
0.06896550953388214,
0.06451612710952759,
0,
0,
0.0416666641831398,
0.0952380895614624,
0.06666666269302368,
0,
0.0714285671710968,
0,
0.04255318641662598,
0.10526315122842789,
0.13333332538604736,
0.1621621549129486,
0.06896550953388214,
0.06666666269302368
] | S1M6Z2Cctm | true | [
"Smooth regularization over sample graph for unpaired image-to-image translation results in significantly improved consistency"
] |
[
"Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion.",
"Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions.",
"In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches.",
"Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the depth map from the cost volume.",
"The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network.",
"Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets.",
"Various image understanding tasks, such as semantic segmentation BID3 and human pose/action recognition BID29 BID33 , have been shown to benefit from 3D scene information.",
"A common approach to reconstructing 3D geometry is by multiview stereo, which infers depth based on point correspondences among a set of unstructured images BID10 ; .",
"To solve for these correspondences, conventional techniques employ photometric consistency constraints on local image patches.",
"Such photo-consistency constraints, though effective in many instances, can be unreliable in scenes containing textureless and reflective regions.Recently, convolutional neural networks (CNNs) have demonstrated some capacity to address this issue by leveraging semantic information inferred from the scene.",
"The most promising of these methods employ a traditional stereo matching pipeline, which involves computation of matching cost volumes, cost aggregation, and disparity estimation BID5 ; BID19 ; BID14 ; BID0 .",
"Some are designed for binocular stereo BID31 ; BID19 ; BID0 and cannot readily be extended to multiple views.",
"The CNN-based techniques for multiview processing BID5 ; BID14 both follow the plane-sweep approach, but require plane-sweep volumes as input to their networks.",
"As a result, they are not end-to-end systems that can be trained from input images to disparity maps.In this paper, we present Deep Plane Sweep Network (DPSNet), an end-to-end CNN framework for robust multiview stereo.",
"In contrast to previous methods that employ the plane-sweep approach BID14 ; BID5 , DPSNet fully models the plane-sweep process, including construction of plane-sweep cost volumes, within the network.",
"This is made possible through the use of a differentiable warping module inspired by spatial transformer networks BID17 to build the cost volumes.",
"With the proposed network, plane-sweep stereo can be learned in an end-to-end fashion.",
"Additionally, we introduce a cost aggregation module based on local cost-volume filtering BID26 for context-aware refinement of each cost slice.",
"Through this cost-volume regularization, the effects of unreliable matches scattered within the cost volume are reduced considerably.With this end-to-end network for plane-sweep stereo and the proposed cost aggregation, we obtain state-of-the-art results over several standard datasets.",
"Ablation studies indicate that each of these technical contributions leads to appreciable improvements in reconstruction accuracy.",
"We developed a multiview stereo network whose design is inspired by best practices of traditional non-learning-based techniques.",
"The plane sweep algorithm is formulated as an end-to-end network via a differentiable construction of plane sweep cost volumes and by solving for depth as a multilabel classification problem.",
"Moreover, we propose a context-aware cost aggregation method that leads to improved depth regression without any post-processing.",
"With this incorporation of traditional multiview stereo schemes into a deep learning framework, state-of-the-art reconstruction results are achieved on a variety of datasets.Directions exist for improving DPSNet.",
"One is to integrate semantic instance segmentation into the cost aggregation, similar to the segment-based cost aggregation method of BID25 .",
"Another direction is to improve depth prediction by employing viewpoint selection in constructing cost volumes BID6 , rather than by simply averaging the estimated cost volumes as currently done in DPSNet.",
"Lastly, the proposed network requires pre-calibrated intrinsic and extrinsic parameters for reconstruction.",
"Lifting this restriction by additionally estimating camera poses in an end-to-end learning framework is an important future challenge."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.11428570747375488,
0,
0.5777777433395386,
0,
0.21621620655059814,
0.0952380895614624,
0,
0.17777776718139648,
0.05882352590560913,
0.07017543166875839,
0.17777776718139648,
0.10810810327529907,
0.04878048226237297,
0.07407406717538834,
0.09090908616781235,
0.19512194395065308,
0.0624999962747097,
0.10526315122842789,
0.1538461446762085,
0.05714285373687744,
0.6111111044883728,
0.22727271914482117,
0,
0.17777776718139648,
0.1111111044883728,
0.08695651590824127,
0.12903225421905518,
0.1111111044883728
] | ryeYHi0ctQ | true | [
"A convolution neural network for multi-view stereo matching whose design is inspired by best practices of traditional geometry-based approaches"
] |
[
"The ability to design biological structures such as DNA or proteins would have considerable medical and industrial impact.",
"Doing so presents a challenging black-box optimization problem characterized by the large-batch, low round setting due to the need for labor-intensive wet lab evaluations.",
"In response, we propose using reinforcement learning (RL) based on proximal-policy optimization (PPO) for biological sequence design.",
"RL provides a flexible framework for optimization generative sequence models to achieve specific criteria, such as diversity among the high-quality sequences discovered.",
"We propose a model-based variant of PPO, DyNA-PPO, to improve sample efficiency, where the policy for a new round is trained offline using a simulator fit on functional measurements from prior rounds.",
"To accommodate the growing number of observations across rounds, the simulator model is automatically selected at each round from a pool of diverse models of varying capacity. ",
"On the tasks of designing DNA transcription factor binding sites, designing antimicrobial proteins, and optimizing the energy of Ising models based on protein structure, we find that DyNA-PPO performs significantly better than existing methods in settings in which modeling is feasible, while still not performing worse in situations in which a reliable model cannot be learned.",
"Driven by real-world obstacles in health and disease requiring new drugs, treatments, and assays, the goal of biological sequence design is to identify new discrete sequences x which optimize some oracle, typically an experimentally-measured functional property f (x).",
"This is a difficult black-box optimization problem over a combinatorially large search space in which function evaluation relies on slow and expensive wet-lab experiments.",
"The setting induces unusual constraints in black-box optimization and reinforcement learning: large synchronous batches with few rounds total.",
"The current gold standard for biomolecular design is directed evolution, which was recently recognized with a Nobel prize (Arnold, 1998) and is a form of randomized local search.",
"Despite its impact, directed evolution is sample inefficient and relies on greedy hillclimbing to the optimal sequences.",
"Recent work has demonstrated that machine-learning-guided optimization (Section 3) can find better sequences faster.",
"Reinforcement learning (RL) provides a flexible framework for black-box optimization that can harness modern deep generative sequence models.",
"This paper proposes a simple method for improving the sample efficiency of policy gradient methods such as PPO (Schulman et al., 2017) for black-box optimization by using surrogate models that are trained online to approximate f (x).",
"Our method updates the policy's parameters using sequences x generated by the current policy π θ (x), but evaluated using a learned surrogate f w (x), instead of the true, but unknown, oracle reward function f (x).",
"We learn the parameters of the reward model, w, simultaneously with the parameters of the policy.",
"This is similar to other model-based RL methods, but simpler, since in the context of sequence optimization, the state-transition model is deterministic and known.",
"Initially the learned reward model, f w (x), is unreliable, so we rely entirely on f (x) to assess sequences and update the policy.",
"This allows a graceful fallback to PPO when the model is not effective.",
"Over time, the reward model becomes more reliable and can be used as a cheap surrogate, similar to Bayesian optimization methods (Shahriari et al., 2015) .",
"We show empirically that cross-validation is an effective heuristic for assessing the model quality, which is simpler than the inference required by Bayesian optimization.",
"We rigorously evaluate our method on three in-silico sequence design tasks that draw on experimental data to construct functions f (x) characteristic of real-world design problems: optimizing binding affinity of DNA sequences of length 8 (search space size 4 8 ); optimizing anti-microbial peptide sequences (search space size 20 50 ), and optimizing binary sequences where f (x) is defined by the energy of an Ising model for protein structure (search space size 20 50 ).",
"These do not rely on wet lab experiments, and thus allow for large-scale benchmarking across a range of methods.",
"We show that our DyNA-PPO method achieves higher cumulative reward for a given budget (measured in terms of number of calls to f (x)) than existing methods, such as standard PPO, various forms of the cross-entropy method, Bayesian optimization, and evolutionary search.",
"In summary, our contributions are as follows:",
"• We provide a model-based RL algorthm, DyNA-PPO, and demonstrate its effectiveness in performing sample efficient batched black-box function optimization.",
"• We address model bias by quantifying the reliability and automatically selecting models of appropriate complexity via cross validation.",
"• We propose a visitation-based exploration bonus and show that it is more effective than entropy-regularization in identifying multiple local optima.",
"• We present a new optimization task for benchmarking methods for biological sequence design based on protein energy Ising models.",
"We have shown that RL is an attractive alternative to existing methods for designing DNA and protein sequences.",
"We have proposed DyNA-PPO, a model-based extension of PPO (Schulman et al., 2017) with automatic model selection that improves sample efficiency, and incorporates a reward function that promotes exploration by penalizing identical sequences.",
"By approximating an expensive wet-lab experiment with a surrogate model, we can perform many rounds of optimization in simulation.",
"While this work has been focused on showing the benefit of DyNA-PPO for biological sequence design, we believe that the large-batch, low-round optimization setting described here may well be of general interest, and that model-based RL may be applicable in other domains such as agriculture, education, and economics.",
"A IMPLEMENTATION DETAILS"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.08695651590824127,
0.07843136787414551,
0.04444443807005882,
0.11999999731779099,
0.13793103396892548,
0.07547169178724289,
0.1818181723356247,
0.1249999925494194,
0.15686273574829102,
0.17391303181648254,
0.1111111044883728,
0.13333332538604736,
0.0476190447807312,
0.08695651590824127,
0.1230769157409668,
0.20338982343673706,
0.25641024112701416,
0.11999999731779099,
0.19999998807907104,
0.09756097197532654,
0.14814814925193787,
0.07999999821186066,
0.1666666567325592,
0.08510638028383255,
0.1764705777168274,
0,
0.25,
0.12765957415103912,
0.20408162474632263,
0.12765957415103912,
0.260869562625885,
0.19999998807907104,
0.1702127605676651,
0.11428570747375488,
0
] | HklxbgBKvr | true | [
"We augment model-free policy learning with a sequence-level surrogate reward functions and count-based visitation bonus and demonstrate effectiveness in the large batch, low-round regime seen in designing DNA and protein sequences."
] |
[
"Achieving machine intelligence requires a smooth integration of perception and reasoning, yet models developed to date tend to specialize in one or the other; sophisticated manipulation of symbols acquired from rich perceptual spaces has so far proved elusive.",
"Consider a visual arithmetic task, where the goal is to carry out simple arithmetical algorithms on digits presented under natural conditions (e.g. hand-written, placed randomly).",
"We propose a two-tiered architecture for tackling this kind of problem.",
"The lower tier consists of a heterogeneous collection of information processing modules, which can include pre-trained deep neural networks for locating and extracting characters from the image, as well as modules performing symbolic transformations on the representations extracted by perception.",
"The higher tier consists of a controller, trained using reinforcement learning, which coordinates the modules in order to solve the high-level task.",
"For instance, the controller may learn in what contexts to execute the perceptual networks and what symbolic transformations to apply to their outputs.",
"The resulting model is able to solve a variety of tasks in the visual arithmetic domain,and has several advantages over standard, architecturally homogeneous feedforward networks including improved sample efficiency.",
"Recent successes in machine learning have shown that difficult perceptual tasks can be tackled efficiently using deep neural networks BID18 .",
"However, many challenging tasks may be most naturally solved by combining perception with symbol manipulation.",
"The act of grading a question on a mathematics exam, for instance, requires both sophisticated perception (identifying discrete symbols rendered in many different writing styles) and complex symbol manipulation (confirming that the rendered symbols correspond to a correct answer to the given question).",
"In this work, we address the question of creating machine learning systems that can be trained to solve such perceptuo-symbolic problems from a small number of examples.",
"In particular, we consider, as a first step toward fullblown exam question grading, the visual arithmetic task, where the goal is to carry out basic arithmetic algorithms on hand-written digits embedded in an image, with the wrinkle that an additional symbol in the image specifies which of a handful of algorithms (e.g. max, min, +, *) should be performed on the provided digits.One straightforward approach to solving the visual arithmetic task with machine learning would be to formulate it as a simple classification problem, with the image as input, and an integer giving the correct answer to the arithmetic problem posed by the image as the label.",
"A convolutional neural network (CNN; BID17 could then be trained via stochastic gradient descent to map from input images to correct answers.",
"However, it is clear that there is a great deal of structure in the problem which is not being harnessed by this simple approach, and which would likely improve the sample efficiency of any learning algorithm that was able to exploit it.",
"While the universal approximation theorem BID9 suggests that an architecturally homogeneous network such as a CNN should be able to solve any task when it is made large enough and given sufficient data, imposing model structure becomes important when one is aiming to capture human-like abilities of strong generalization and learning from small datasets BID16 .In",
"particular, in this instance we would like to provide the learner with access to modules implementing information processing functions that are relevant for the task at hand -for example, modules that classify individual symbols in the image, or modules that perform symbolic computations on stored representations. However",
", it is not immediately clear how to include such modules in standard deep networks; the classifiers need to somehow be applied to the correct portion of the image, while the symbolic transformations need to be applied to the correct representations at the appropriate time and, moreover, will typically be non-differentiable, precluding the possibility of training via backpropogation.In this work we propose an approach that solves this type of task in two steps. First,",
"the machine learning practitioner identifies a collection of modules, each performing an elementary information processing function that is predicted to be useful in the domain of interest, and assembles them into a designed information processing machine called an interface BID30 that is coupled to the external environment. Second",
", reinforcement learning (RL) is used to train a controller to make use of the interface; use of RL alleviates any need for the interface to be differentiable. For example",
", in this paper we make use of an interface for the visual arithmetic domain that contains: a discrete attention mechanism; three pre-trained perceptual neural networks that classify digits/classify arithmetic symbols/detect salient locations (respectively); several modules performing basic arithmetic operations on stored internal representations. Through the",
"use of RL, a controller learns to sequentially combine these components to solve visual arithmetic tasks.",
"There are number of possible future directions related to the current work, including potential benefits of our approach that were not explored here.",
"These include the ability to take advantage of conditional computation; in principle, only the subset of the interface needed to carry out the chosen action needs to be executed every time step.",
"If the interface contains many large networks or other computationally intensive modules, large speedups can likely be realized along these lines.",
"A related idea is that of adaptive computation time; in the current work, all episodes ran for a fixed number of time steps, but it should be possible to have the controller decide when it has the correct answer and stop computation at that point, saving valuable computational resources.",
"Furthermore, it may be beneficial to train the perceptual modules and controller simultaneously, allowing the modules to adapt to better perform the uses that the controller finds for them.",
"Finally, the ability of reinforcement learning to make use of discrete and non-differentiable modules opens up a wide array of possible interface components; for instance, a discrete knowledge base may serve as a long term memory.",
"Any generally intelligent system will need many individual competencies at its disposal, both perceptual and algorithmic; in this work we have proposed one path by which a system may learn to coordinate such competencies.We have proposed a novel approach for solving tasks that require both sophisticated perception and symbolic computation.",
"This approach consists in first designing an interface that contains information processing modules such as pre-trained deep neural networks for processing perceptual data and modules for manipulating stored symbolic representations.",
"Reinforcement learning is then used to train a controller to use the interface to solve tasks.",
"Using the Visual Arithmetic task domain as an example, we demonstrated empirically that the interface acts as a source of inductive bias that allows tasks to be solved using a much smaller number of training examples than required by traditional approaches."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.158730149269104,
0.15094339847564697,
0.15789473056793213,
0.25,
0.2916666567325592,
0.17391303181648254,
0.25,
0.1702127605676651,
0.0952380895614624,
0.1249999925494194,
0.18867923319339752,
0.19607843458652496,
0.0416666604578495,
0.19354838132858276,
0.17721518874168396,
0.09090908616781235,
0.14457830786705017,
0.1875,
0.26923075318336487,
0.31884056329727173,
0.380952388048172,
0.08163265138864517,
0.07547169178724289,
0,
0.11428570747375488,
0.20408162474632263,
0.27586206793785095,
0.19718308746814728,
0.2222222238779068,
0.3414634168148041,
0.2222222238779068
] | H1kMMmb0- | true | [
"We use reinforcement learning to train an agent to solve a set of visual arithmetic tasks using provided pre-trained perceptual modules and transformations of internal representations created by those modules."
] |
[
"Animals develop novel skills not only through the interaction with the environment but also from the influence of the others.",
"In this work we model the social influence into the scheme of reinforcement learning, enabling the agents to learn both from the environment and from their peers.",
"Specifically, we first define a metric to measure the distance between policies then quantitatively derive the definition of uniqueness.",
"Unlike previous precarious joint optimization approaches, the social uniqueness motivation in our work is imposed as a constraint to encourage the agent to learn a policy different from the existing agents while still solve the primal task.",
"The resulting algorithm, namely Interior Policy Differentiation (IPD), brings about performance improvement as well as a collection of policies that solve a given task with distinct behaviors",
"The paradigm of Reinforcement Learning (RL), inspired by cognition and animal studies (Thorndike, 2017; Schultz et al., 1997) , can be described as learning by interacting with the environment to maximize a cumulative reward (Sutton et al., 1998) .",
"From the perspective of ecology, biodiversity as well as the development of various skills are crucial to the continuation and evolution of species (Darwin, 1859; Pianka, 1970) .",
"Thus the behavioral diversity becomes a rising topic in RL.",
"Previous works have tried to encourage the emergence of behavioral diversity in RL with two approaches: The first approach is to design interactive environments which contain sufficient richness and diversity.",
"For example, Heess et al. (2017) show that rich environments enable agents to learn different locomotion skills even using the standard RL algorithms.",
"Yet designing a complex environment requires manual efforts, and the diversity is limited by the obstacle classes.",
"The second approach to increase behavioral diversity is to motivate agents to explore beyond just maximizing the reward for the given task.",
"Zhang et al. (2019) proposed to maximize a heuristically defined novelty metric between policies through task-novelty joint optimization, but the final performance of agents is not guaranteed.",
"In this work, we address the topic of policy differentiation in RL, i.e., to improve the diversity of RL agents while keeping their ability to solve the primal task.",
"We draw the inspiration from the Social Influence in animal society (Rogoff, 1990; Ryan & Deci, 2000; van Schaik & Burkart, 2011; Henrich, 2017; Harari, 2014) and formulate the concept of social influence in the reinforcement learning paradigm.",
"Our learning scheme is illustrated in Fig 1.",
"The target agent not only learns to interact with the environment to maximize the reward but also differentiate the actions it takes in order to be different from other existing agents.",
"Since the social influence often acts on people passively as a sort of peer pressure, we implement the social influence in terms of social uniqueness motivation (Chan et al., 2012) and consider it as a constrained optimization problem.",
"In the following of our work, we first define a rigorous policy distance metric in the policy space to compare the similarity of the agents.",
"Then we develop an optimization constraint using the proposed metric, which brings immediate rather than episodic feedback in the learning process.",
"A novel method, namely Interior Policy Differentiation (IPD), is further I should learn to run as fast as I can I should try to be different Figure 1 : The illustration of learning with social influence.",
"Instead of focusing only on the primal task, an additional constraint is introduced to the target agent, motivating it to not only perform well in the primal task but also take actions differently to other existing agents.",
"proposed as a better solution for the constrained policy optimization problem.",
"We benchmark our method on several locomotion tasks and show it can learn various diverse and well-behaved policies for the given tasks based on the standard Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017) .",
"In this work, we develop an efficient approach to motivate RL to learn diverse strategies inspired by social influence.",
"After defining the distance between policies, we introduce the definition of policy uniqueness.",
"Regarding the problem as constrained optimization problem, our proposed method, Interior Policy Differentiation (IPD), draws the key insight of the Interior Point Methods.",
"And our experimental results demonstrate IPD can learn various well-behaved policies, and our approach can help agents to avoid local minimum and can be interpreted as a kind of implicit curriculum learning in certain cases."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.052631575614213943,
0.13636362552642822,
0.20512819290161133,
0.22641508281230927,
0.3913043439388275,
0.1071428507566452,
0.09302324801683426,
0.12903225421905518,
0.16326530277729034,
0.13636362552642822,
0.10810810327529907,
0.25,
0.25,
0.2083333283662796,
0.03703703358769417,
0.06896550953388214,
0.0416666604578495,
0.07547169178724289,
0.1463414579629898,
0.04878048226237297,
0.307692289352417,
0.19230768084526062,
0.1875,
0.2641509473323822,
0.20512819290161133,
0.060606054961681366,
0.24390242993831635,
0.1538461446762085
] | SJeQi1HKDH | true | [
"A new RL algorithm called Interior Policy Differentiation is proposed to learn a collection of diverse policies for a given primal task."
] |
[
"Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional, input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. \n",
"However, we currently lack quantitative methods for model assessment.",
"Because of this, while many GAN variants being proposed, we have relatively little understanding of their relative abilities.",
"In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training.",
"We observe consistency across the various proposed metrics and, interestingly, the test-time metrics do not favour networks that use the same training-time criterion.",
"We also compare the proposed metrics to human perceptual scores.",
"Generative adversarial networks (GANs) aim to approximate a data distribution P , using a parameterized model distribution Q. They achieve this by jointly optimizing generative and discriminative networks BID9 .",
"GANs are end-to-end differentiable, and samples from the generative network are propagated forward to a discriminative network, and error signals are then propagated backwards from the discriminative network to the generative network.",
"The discriminative network is often viewed as learned, adaptive loss function for the generative network.GANs have achieved state-of-the-art results for a number of applications BID8 , producing more realistic, sharper samples than other popular generative models, such as variational autoencoders BID22 .",
"Because of their success, many GAN frameworks have been proposed.",
"However, it has been difficult to compare these algorithms and understand their strengths and weaknesses because we are currently lacking in quantitative methods for assessing the learned generators.In this work, we propose new metrics for measuring how realistic samples generated from GANs are.",
"These criteria are based on a formulation of divergence between the distributions P and Q BID33 BID38 : DISPLAYFORM0 Here, different choices of µ, υ, and F can correspond to different f -divergences BID33 or different integral probability metrics (IPMs) BID38 .",
"Importantly, J(Q) can be estimated using samples from P and Q, and does not require us to be able to estimate P (x) or Q(x) for samples x.",
"Instead, evaluating J(Q) involves finding the function f ∈ F that is maximally different with respect to P and Q.This measure of divergence between the distributions P and Q is related to the GAN criterion if we restrict the function class F to be neural network functions parameterized by the vector φ and the class of approximating distributions to correspond to neural network generators G θ parameterized by the vector θ, allowing formulation as a min-max problem: H is a Reproducing Kernel Hilbert Space (RKHS) and · L is the Lipschitz constant.",
"For the LS-DCGAN, we used b = 1 and a = 0 BID28 .",
"DISPLAYFORM1 Metric µ υ Function Class GAN (GC) log f − log(1 − f ) X → R + , ∃M ∈ R : |f (x)| ≤ M Least-Squares GAN (LS) −(f − b) DISPLAYFORM2 In this formulation, Q θ corresponds to the generator network's distribution and D φ corresponds to the discriminator network (see BID33 for details).We",
"propose using J(θ) to evaluate the performance of the generator network G θ for various choices of µ and υ, corresponding to different f -divergences or IPMs between distributions P and Q θ , that have been successfully used for GAN training. Our",
"proposed metrics differ from most existing metrics in that they are adaptive, and involve finding the maximum over discriminative networks. We",
"compare four metrics, those corresponding to the original GAN (GC) BID8 , the Least-Squares GAN (LS) BID28 ,the Wasserstein GAN (IW) , and the Maximum Mean Discrepency GAN (MMD) criteria. Choices",
"for µ, υ, and F for these metrics are shown in TAB0 . Our method",
"can easily be extended to other f -divergences or IPMs.To compare these and previous metrics for evaluating GANs, we performed many experiments, training and comparing multiple types of GANs with multiple architectures on multiple data sets. We qualitatively",
"and quantitatively compared these metrics to human perception, and found that our proposed metrics better reflected human perception. We also show that",
"rankings produced using our proposed metrics are consistent across metrics, thus are robust to the exact choices of the functions µ and υ in Equation 2.We used the proposed metrics to quantitatively analyze three different families of GANs: Deep Convolutional Generative Adversarial Networks (DCGAN) BID34 , Least-Squares GANs (LS-DCGAN), and Wasserstein GANs (W-DCGAN), each of which corresponded to a different proposed metric. Interestingly, we",
"found that the different proposed metrics still agreed on the best GAN framework for each dataset. Thus, even though",
", e.g. for MNIST the W-DCGAN was trained with the IW criterion, LS-DCGAN still outperformed it for the IW criterion.Our analysis also included carrying out a sensitivity analysis with respect to various factors, such as the architecture size, noise dimension, update ratio between discriminator and generator, and number of data points. Our empirical results",
"show that: i) the larger the GAN",
"architecture, the better the results; ii) having a generator",
"network larger than the discriminator network does not yield good results; iii) the best ratio between",
"discriminator and generator updates depend on the data set; and iv) the W-DCGAN and LS-DCGAN",
"performance increases much faster than DCGAN as the number of training examples grows. These metrics thus allow us",
"to tune the hyper-parameters and architectures of GANs based on our proposed method.",
"In this paper, we proposed to use four well-known distance functions as an evaluation metrics, and empirically investigated the DCGAN, W-DCGAN, and LS-DCGAN families under these metrics.",
"Previously, these models were compared based on visual assessment of sample quality and difficulty of training.",
"In our experiments, we showed that there are performance differences in terms of average experiments, but that some are not statistically significant.",
"Moreover, we thoroughly analyzed the performance of GANs under different hyper-parameter settings.There are still several types of GANs that need to be evaluated, such as GRAN BID18 , IW-DCGAN BID12 , BEGAN BID4 , MMDGAN , and CramerGAN (Bellemare et al., 2017) .",
"We hope to evaluate all of these models under this framework and thoroughly analyze them in the future.",
"Moreover, there has been an investigation into taking ensemble approaches to GANs, such as Generative Adversarial Parallelization BID19 .",
"Ensemble approaches have been empirically shown to work well in many domains of research, so it would be interesting to find out whether ensembles can also help in min-max problems.",
"Alternatively, we can also try to evaluate other log-likelihood-based models like NVIL BID30 , VAE BID22 , DVAE BID17 , DRAW BID10 , RBMs BID15 BID35 , NICE Dinh et al. (2014) , etc.Model evaluation is an important and complex topic.",
"Model selection, model design, and even research direction can change depending on the evaluation metric.",
"Thus, we need to continuously explore different metrics and rigorously evaluate new models."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.10526315122842789,
0,
0,
0,
0.07407406717538834,
0,
0.1818181723356247,
0.07407406717538834,
0.04444444179534912,
0,
0,
0.0476190447807312,
0,
0,
0,
0,
0,
0.07407406717538834,
0,
0,
0.04651162400841713,
0,
0,
0.07999999821186066,
0.03703703358769417,
0,
0,
0,
0.10526315122842789,
0,
0.09999999403953552,
0.060606058686971664,
0.09090908616781235,
0,
0,
0,
0,
0,
0.045454543083906174,
0.1818181723356247,
0
] | SJQHjzZ0- | true | [
"An empirical evaluation on generative adversarial networks"
] |
[
"Knowledge bases (KB) are often represented as a collection of facts in the form (HEAD, PREDICATE, TAIL), where HEAD and TAIL are entities while PREDICATE is a binary relationship that links the two.",
"It is a well-known fact that knowledge bases are far from complete, and hence the plethora of research on KB completion methods, specifically on link prediction.",
"However, though frequently ignored, these repositories also contain numerical facts.",
"Numerical facts link entities to numerical values via numerical predicates; e.g., (PARIS, LATITUDE, 48.8).",
"Likewise, numerical facts also suffer from the incompleteness problem.",
"To address this issue, we introduce the numerical attribute prediction problem.",
"This problem involves a new type of query where the relationship is a numerical predicate.",
"Consequently, and contrary to link prediction, the answer to this query is a numerical value.",
"We argue that the numerical values associated with entities explain, to some extent, the relational structure of the knowledge base.",
"Therefore, we leverage knowledge base embedding methods to learn representations that are useful predictors for the numerical attributes.",
"An extensive set of experiments on benchmark versions of FREEBASE and YAGO show that our approaches largely outperform sensible baselines.",
"We make the datasets available under a permissive BSD-3 license.",
"Knowledge Bases (KBs) are playing an increasingly important role in a number of AI applications.",
"KBs can be seen as a collection of facts or triples of the form (head, predicate, tail) , denoted as (h, p, t) , where head and tail correspond to entities and predicate corresponds to a relationship that holds between these two entities.",
"This structured information is easily accessible by AI systems to enhance their performance.",
"A variety of AI applications such as recommender systems, natural language chatbots or question answering models, have benefited from the rich structural information archived in these repositories.",
"This is because much of human knowledge can be expressed with one or more conjunctions of knowledge facts.However, KBs' capabilities are limited due to their incompleteness 1 .",
"Consequently there has been a flurry of research on knowledge base completion methods in recent years.",
"Relationship extraction BID27 (i.e., classification of semantic relationship mentions), knowledge graph matching BID32 BID12 (i.e., alignment and integration of entities and predicates across KBs), or search-based question-answering BID36 (i.e., queries issued to a web search engine) are a few different ways to address the incompleteness problem.",
"However, the literature on the so-called link prediction methods BID22 has received more attention in the last few years in comparison to the aforementioned approaches.",
"Contrary to other solutions, link prediction methods aim to find missing links between entities exclusively based on the existing information contained in the KB.",
"This is achieved by ranking entities that are answer candidates for the query.",
"The queries these methods typically address are of the form (USA, /location/contains, ?), or (Madrid, /location/capitalOf, ?), whereas the missing element -represented by a question mark-is an entity contained in the KB.Many link prediction methods only harness feature types learned from the rich relational information contained in the KB to infer new links, and only very recently Niepert, 2018, Pezeshkpour et al., 2018] numerical attributes have been integrated along with other feature types to improve link prediction performance.",
"Similarly, numerical information is also represented as facts such as (Berlin, /location/latitude, 52.31) or (Albert Einstein, /person/birth year, 1879).",
"However, as shown in BID5 ] the application of numerical attributes is limited because of the same incompleteness problem: Many entities are missing numerical attribute values they are expected to possess.",
"For example, entities that represent locations should have numerical information regarding latitude, longitude or area, among others; whereas for entities representing people, numerical predicates such as the birth year, weight or height would be more appropriate.",
"In this work we focus on the problem of completing queries where the relationship is a numerical predicate.",
"Consequently, the answer to this new type of query is a numerical value.",
"This is contrary to the link prediction problem, wherein the answer to a query is always an element of a closed vocabulary.",
"Examples of queries addressed in this paper are (Apple Inc., revenue, ?) or (California, average salary, ?).",
"While one can interpret link prediction as a classification/ranking problem, this is rather a regression problem.The main contributions of this paper are:• We introduce the problem of predicting the value of entities' numerical attributes in KBs.",
"For the sake of simplicity we term this as 'numerical attribute prediction problem'.",
"To our knowledge, this is the first time this problem is addressed in the literature.•",
"We create benchmark datasets for this problem. We",
"use well-known subsets of Freebase and Yago as the blueprints for creating these benchmarks. We",
"also create versions of these datasets for different percentages of sparsity by artificially removing facts that involve numerical predicates. All",
"these benchmark datasets will be made publicly available.• We",
"propose two meaningful baselines for this problem. These",
"baselines are inspired by previous work done in the node classification and the imputation literature.• We propose",
"supervised and semi-supervised approaches to this problem. The semisupervised",
"approaches significantly outperform the baselines in all datasets and conditions.The paper is organized as follows: We discuss the related work in Section 2. Afterwards we formalize",
"the problem of predicting numerical values for entities' numerical attributes in KBs in Section 3. We describe our approaches",
"to this problem, as well as the two baselines. Section 4 reports the experimental",
"setting followed by an extensive set of experiments on different datasets with different degrees of sparsity in Section 5. Finally, we summarize the conclusions",
"of our study in Section 6.",
"We introduce a novel problem, namely numerical attribute prediction in knowledge bases.",
"Contrary to link prediction, the answer to this new query type is a numerical value, and not an element from a (small) closed vocabulary.",
"Our premise to this problem is that the relational structure of a KB can be partially explained by the numerical attribute values associated with entities.",
"This allows for leveraging KB embedding methods to learn representations that are useful predictors of numerical attributes.",
"An extensive set of experiments validates our premise.",
"Furthermore, we also show that learning KB representations enriched with numerical attribute information are helpful for addressing this task.",
"Finally, we believe that this new problem introduced in the paper will spur interest and deeper investigation from the research community.5.",
"Note that it is non-negative and is row normalized.",
"6. For all practical purposes he is deemed a philosopher in FB15K-237.",
"7. Julius Caesar belongs to profession Politician in FB15K-237"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19512194395065308,
0.1666666567325592,
0.0952380895614624,
0.2222222238779068,
0.09999999403953552,
0.1818181723356247,
0.1599999964237213,
0.07999999821186066,
0.48275861144065857,
0.13793103396892548,
0.06666666269302368,
0,
0.1538461446762085,
0.08510638028383255,
0,
0.10526315122842789,
0.1621621549129486,
0.2222222238779068,
0.11320754140615463,
0.0624999962747097,
0.12121211737394333,
0.0833333283662796,
0.10526315867900848,
0.06666666269302368,
0.31578946113586426,
0.09090908616781235,
0.1428571343421936,
0.1666666567325592,
0.06896550953388214,
0.1428571343421936,
0.1428571343421936,
0.1666666567325592,
0.0833333283662796,
0,
0.07692307233810425,
0.13333332538604736,
0,
0,
0.0714285671710968,
0,
0.0555555522441864,
0.2857142686843872,
0,
0.1818181723356247,
0.23529411852359772,
0.43478259444236755,
0.060606054961681366,
0.4000000059604645,
0.1428571343421936,
0.10526315122842789,
0.19999998807907104,
0.0624999962747097,
0,
0.08695651590824127,
0.09999999403953552
] | BJlh0x9ppQ | true | [
"Prediction of numerical attribute values associated with entities in knowledge bases."
] |
[
"We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT.",
"In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching pseudo-fully-supervised translate-train models for Bulgarian and Greek.",
"Further, to measure the degree of alignment, we introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer.",
"Using this word retrieval task, we also analyze BERT and find that it exhibits systematic deficiencies, e.g. worse alignment for open-class parts-of-speech and word pairs written in different scripts, that are corrected by the alignment procedure.",
"These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models.",
"Figure 1: t-SNE (Maaten & Hinton, 2008) visualization of the embedding space of multilingual BERT for English-German word pairs (left: pre-alignment, right: post-alignment).",
"Each point is a different instance of the word in the Europarl corpus.",
"This figure suggests that BERT begins already somewhat aligned out-of-the-box but becomes much more aligned after our proposed procedure.",
"Embedding alignment was originally studied for word vectors with the goal of enabling cross-lingual transfer, where the embeddings for two languages are in alignment if word translations, e.g. cat and Katze, have similar representations (Mikolov et al., 2013a; Smith et al., 2017) .",
"Recently, large pretrained models have largely subsumed word vectors based on their accuracy on downstream tasks, partly due to the fact that their word representations are context-dependent, allowing them to more richly capture the meaning of a word (Peters et al., 2018; Howard & Ruder, 2018; Radford et al., 2018; Devlin et al., 2018) .",
"Therefore, with the same goal of cross-lingual transfer but for these more complex models, we might consider contextual embedding alignment, where we observe whether word pairs within parallel sentences, e.g. cat in \"The cat sits\" and Katze in \"Die Katze sitzt,\" have similar representations.",
"One model relevant to these questions is multilingual BERT, a version of BERT pre-trained on 104 languages that achieves remarkable transfer on downstream tasks.",
"For example, after the model is fine-tuned on the English MultiNLI training set, it achieves 74.3% accuracy on the test set in Spanish, which is only 7.1% lower than the English accuracy (Devlin et al., 2018; Conneau et al., 2018b) .",
"Furthermore, while the model transfers better to languages similar to English, it still achieves reasonable accuracies even on languages with different scripts.",
"However, given the way that multilingual BERT was pre-trained, it is unclear why we should expect such high zero-shot performance.",
"Compared to monolingual BERT which exhibits no zero-shot transfer, multilingual BERT differs only in that (1) during pre-training (i.e. masked word prediction), each batch contains sentences from all of the languages, and (2) it uses a single shared vocabulary, formed by WordPiece on the concatenated monolingual corpora (Devlin et al., 2019) .",
"Therefore, we might wonder: (1) How can we better understand BERT's multilingualism?",
"(2) Can we further improve BERT's cross-lingual transfer?",
"In this paper, we show that contextual embedding alignment is a useful concept for addressing these questions.",
"First, we propose a contextual version of word retrieval to evaluate the degree of alignment, where a model is presented with two parallel corpora, and given a word within a sentence in one corpus, it must find the correct word and sentence in the other.",
"Using this metric of alignment, we show that multilingual BERT achieves zero-shot transfer because its embeddings are partially aligned, as depicted in Figure 1 , with the degree of alignment predicting the degree of downstream transfer.",
"Next, using between 10K and 250K sentences per language from the Europarl corpus as parallel data (Koehn, 2005) , we propose a fine-tuning-based alignment procedure and show that it significantly improves BERT as a multilingual model.",
"Specifically, on zero-shot XNLI, where the model is trained on English MultiNLI and tested on other languages (Conneau et al., 2018b) , the aligned model improves accuracies by 2.78% on average over the base model, and it remarkably matches translate-train models for Bulgarian and Greek, which approximate the fully-supervised setting.",
"To put our results in the context of past work, we also use word retrieval to compare our finetuning procedure to two alternatives: (1) fastText augmented with sentence and aligned using rotations (Bojanowski et al., 2017; Rücklé et al., 2018; Artetxe et al., 2018) , and (2) BERT aligned using rotations (Aldarmaki & Diab, 2019; Schuster et al., 2019; Wang et al., 2019) .",
"We find that when there are multiple occurences per word, fine-tuned BERT outperforms fastText, which outperforms rotation-aligned BERT.",
"This result supports the intuition that contextual alignment is more difficult than its non-contextual counterpart, given that a rotation, at least when applied naively, is no longer sufficient to produce strong alignments.",
"In addition, when there is only one occurrence per word, fine-tuned BERT matches the performance of fastText.",
"Given that context disambiguation is no longer necessary, this result suggests that our fine-tuning procedure is able to align BERT at the type level to a degree that matches non-contextual approaches.",
"Finally, we use the contextual word retrieval task to conduct finer-grained analysis of multilingual BERT, with the goal of better understanding its strengths and shortcomings.",
"Specifically, we find that base BERT has trouble aligning open-class compared to closed-class parts-of-speech, as well as word pairs that have large differences in usage frequency, suggesting insight into the pre-training procedure that we explore in Section 5.",
"Together, these experiments support contextual alignment as an important task that provides useful insight into large multilingual pre-trained models.",
"Given that the degree of alignment is causally predictive of downstream cross-lingual transfer, contextual alignment proves to be a useful concept for understanding and improving multilingual pretrained models.",
"Given small amounts of parallel data, our alignment procedure improves multilingual BERT and corrects many of its systematic deficiencies.",
"Contextual word retrieval also provides useful new insights into the pre-training procedure, opening up new avenues for analysis.",
"Table 5 : Zero-shot accuracy on the XNLI test set with more languages, where we use 20K parallel sentences for each language paired with English.",
"This result confirms that the alignment method works for distant languages and a variety of parallel corpora, including Europarl, MultiUN, and Tanzil, which contains sentences from the Quran (Koehn, 2005; Eisele & Chen, 2010; Tiedemann, 2012) .",
"A APPENDIX"
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.6521739363670349,
0.2181818187236786,
0.2800000011920929,
0.16949151456356049,
0.24390242993831635,
0.1666666567325592,
0.052631575614213943,
0.045454539358615875,
0.1249999925494194,
0.057971008121967316,
0.17910447716712952,
0.16326530277729034,
0.06666666269302368,
0.08695651590824127,
0.17391303181648254,
0.1315789371728897,
0.054054051637649536,
0.11764705181121826,
0.3255814015865326,
0.1666666567325592,
0.24561403691768646,
0.2711864411830902,
0.14705881476402283,
0.05405404791235924,
0.0952380895614624,
0.1428571343421936,
0.04651162400841713,
0.07547169178724289,
0.16326530277729034,
0.10169491171836853,
0.2666666507720947,
0.307692289352417,
0.13636362552642822,
0.23255813121795654,
0.11999999731779099,
0.1666666567325592
] | r1xCMyBtPS | true | [
"We propose procedures for evaluating and strengthening contextual embedding alignment and show that they both improve multilingual BERT's zero-shot XNLI transfer and provide useful insights into the model."
] |
[
"Neural networks have reached outstanding performance for solving various ill-posed inverse problems in imaging.",
"However, drawbacks of end-to-end learning approaches in comparison to classical variational methods are the requirement of expensive retraining for even slightly different problem statements and the lack of provable error bounds during inference.",
"Recent works tackled the first problem by using networks trained for Gaussian image denoising as generic plug-and-play regularizers in energy minimization algorithms.",
"Even though this obtains state-of-the-art results on many tasks, heavy restrictions on the network architecture have to be made if provable convergence of the underlying fixed point iteration is a requirement.",
"More recent work has proposed to train networks to output descent directions with respect to a given energy function with a provable guarantee of convergence to a minimizer of that energy.",
"However, each problem and energy requires the training of a separate network.\n",
"In this paper we consider the combination of both approaches by projecting the outputs of a plug-and-play denoising network onto the cone of descent directions to a given energy.",
"This way, a single pre-trained network can be used for a wide variety of reconstruction tasks.",
"Our results show improvements compared to classical energy minimization methods while still having provable convergence guarantees.",
"In many image processing tasks an observed image f is modeled as the result of the transformation of a clean imageû under a known (linear) operator A and unknown noise ξ, f = Aû + ξ.",
"(",
"In most cases, the problem of reconstructingû from f and A is ill-posed and can thus not be solved by a simple inversion of A, giving rise to the field of regularization theory with iterative or variational methods, see e.g. [2] for an overview.",
"In recent years neural networks were very successful in learning a direct mapping G(f ) ≈û for a variety of problems such as deblurring [32, 28] , denoising [34] , super-resolution [8] , demosaicing [9] and MRI-or CT-reconstruction [33, 14] .",
"Even though this works well in practice, there are rarely any guarantees on the behaviour of neural networks on unseen data, making them difficult to use in safety-critical applications.",
"Moreover, for each problem and type of noise a separate network has to be trained.",
"In contrast, classical variational methods try to find the solution by the minimization of a suitable energy function of the formû",
"where H f is a data fidelity term, for example commonly chosen as H f (u) = 1 2 ||Au − f || 2 , and R is a regularization function that models prior knowledge about the solution, e.g. the popular total variation (TV) regularization, R(u) = ∇u 1 , [24] .",
"While minimizers of (2) come with many desirable theoretical guarantees, regularizations like the TV often cannot perfectly capture the complex structure of the space of natural images.",
"To combine the advantages of powerful feed-forward networks and model-based approaches like (2), authors have considered various hybrid models like learning regularizers (e.g. [23, 1, 11, 5] ), designing networks architectures that resemble the structure of minimization algorithms or differential equations, e.g. [25, 36, 15, 6] , interleaving networks with classical optimization steps [16, 17] , or using the parametrization of networks as a regularization for (2), see e.g. [29, 12] .",
"A particularly flexible approach arises from [7, 37, 30, 13] , where proximal operators with respect to the regularizer are replaced by arbitrary denoising operators, with recent works focusing on the use of denoising networks [18, 4, 35] .",
"While such approaches allow to tackle different inverse problems with the same neural network, the derivation of theoretical guarantees -even in terms of the convergence of the resulting algorithmic scheme -remains difficult, see [3, 27] or some discussion in [20] , unless the denoiser satisfies particular properties [22] .",
"We combine deep learning and energy minimization methods for solving inverse problems in image reconstruction into a provably convergent algorithmic scheme.",
"Still, our approach is able to generalize to different problems with a single denoising network and without the need to retrain if that problem changes.",
"We were able to reach better results than the energy minimization baseline in our experiments, and are happy to elaborate on the above aspects in the NeurIPS workshop."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.23529411852359772,
0.11999999731779099,
0.5238094925880432,
0.08163265138864517,
0.23255813121795654,
0.060606054961681366,
0.13636362552642822,
0.11428570747375488,
0.2222222238779068,
0.07843136787414551,
0.06557376682758331,
0.24561403691768646,
0.1702127605676651,
0.11428570747375488,
0.10526315122842789,
0.06451612710952759,
0.04651162400841713,
0.14999999105930328,
0.145454540848732,
0.16393442451953888,
0.39024388790130615,
0.1395348757505417,
0.1818181723356247
] | SJxRjQncLH | true | [
"We use neural networks trained for image denoising as plug-and-play priors in energy minimization algorithms for image reconstruction problems with provable convergence."
] |
[
"Knowledge graph embedding research has overlooked the problem of probability calibration.",
"We show popular embedding models are indeed uncalibrated.",
"That means probability estimates associated to predicted triples are unreliable.",
"We present a novel method to calibrate a model when ground truth negatives are not available, which is the usual case in knowledge graphs.",
"We propose to use Platt scaling and isotonic regression alongside our method.",
"Experiments on three datasets with ground truth negatives show our contribution leads to well calibrated models when compared to the gold standard of using negatives.",
"We get significantly better results than the uncalibrated models from all calibration methods.",
"We show isotonic regression offers the best the performance overall, not without trade-offs.",
"We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.",
"Knowledge graph embedding models are neural architectures that learn vector representations (i.e. embeddings) of nodes and edges of a knowledge graph.",
"Such knowledge graph embeddings have applications in knowledge graph completion, knowledge discovery, entity resolution, and link-based clustering, just to cite a few (Nickel et al., 2016a) .",
"Despite burgeoning research, the problem of calibrating such models has been overlooked, and existing knowledge graph embedding models do not offer any guarantee on the probability estimates they assign to predicted facts.",
"Probability calibration is important whenever you need the predictions to make probabilistic sense, i.e., if the model predicts a fact is true with 80% confidence, it should to be correct 80% of the times.",
"Prior art suggests to use a sigmoid layer to turn logits returned by models into probabilities (Nickel et al., 2016a ) (also called the expit transform), but we show that this provides poor calibration.",
"Figure 1 shows reliability diagrams for off-the-shelf TransE and ComplEx.",
"The identity function represents perfect calibration.",
"Both models are miscalibrated: all TransE combinations in Figure 1a under-forecast the probabilities (i.e. probabilities are too small), whereas ComplEx under-forecasts or over-forecasts according to which loss is used (Figure1b).",
"Calibration is crucial in high-stakes scenarios such as drug-target discovery from biological networks, where end-users need trustworthy and interpretable decisions.",
"Moreover, since probabilities are not calibrated, when classifying triples (i.e. facts) as true or false, users must define relationspecific thresholds, which can be awkward for graphs with a great number of relation types.",
"To the best of our knowledge, this is the first work to focus on calibration for knowledge embeddings.",
"Our contribution is two-fold: First, we use Platt Scaling and isotonic regression to calibrate knowledge graph embedding models on datasets that include ground truth negatives.",
"One peculiar feature of knowledge graphs is that they usually rely on the open world assumption (facts not present are not necessarily false, they are simply unknown).",
"This makes calibration troublesome because of the lack of ground truth negatives.",
"For this reason, our second and main contribution is a calibration heuristics that combines Platt-scaling or isotonic regression with synthetically generated negatives.",
"Experimental results show that we obtain better-calibrated models and that it is possible to calibrate knowledge graph embedding models even when ground truth negatives are not present.",
"We also experiment with triple classification, and we show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.",
"We propose a method to calibrate knowledge graph embedding models.",
"We target datasets with and without ground truth negatives.",
"We experiment on triple classification datasets and apply Platt scaling and isotonic regression with and without synthetic negatives controlled by our heuristics.",
"All calibration methods perform significantly better than uncalibrated scores.",
"We show that isotonic regression brings better calibration performance, but it is computationally more expensive.",
"Additional experiments on triple classification shows that calibration allows to use a single decision threshold, reaching state-of-the-art results without the need to learn per-relation thresholds.",
"Future work will evaluate additional calibration algorithms, such as beta calibration (Kull et al., 2017) or Bayesian binning (Naeini et al., 2015) .",
"We will also experiment on ensembling of knowledge graph embedding models, inspired by (Krompaß & Tresp, 2015) .",
"The rationale is that different models operate on different scales, but calibrating brings them all to the same probability scale, so their output can be easily combined."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2857142686843872,
0.23999999463558197,
0.07407406717538834,
0.4000000059604645,
0.27586206793785095,
0.19999998807907104,
0.19999998807907104,
0.20689654350280762,
0.3529411852359772,
0.3243243098258972,
0.19512194395065308,
0.2978723347187042,
0.2083333283662796,
0.15686273574829102,
0,
0,
0.1304347813129425,
0.05405404791235924,
0.07843136787414551,
0.23529411852359772,
0.2857142686843872,
0.1463414579629898,
0.1428571343421936,
0.05128204822540283,
0.2857142686843872,
0.29999998211860657,
0.7407407164573669,
0.1538461446762085,
0.10810810327529907,
0,
0.0624999962747097,
0.24390242993831635,
0,
0.29411762952804565,
0.1395348757505417
] | S1g8K1BFwS | true | [
"We propose a novel method to calibrate knowledge graph embedding models without the need of negative examples."
] |
[
"As an emerging topic in face recognition, designing margin-based loss functions can increase the feature margin between different classes for enhanced discriminability.",
"More recently, absorbing the idea of mining-based strategies is adopted to emphasize the misclassified samples and achieve promising results.",
"However, during the entire training process, the prior methods either do not explicitly emphasize the sample based on its importance that renders the hard samples not fully exploited or explicitly emphasize the effects of semi-hard/hard samples even at the early training stage that may lead to convergence issues.",
"In this work, we propose a novel Adaptive Curriculum Learning loss (CurricularFace) that embeds the idea of curriculum learning into the loss function to achieve a novel training strategy for deep face recognition, which mainly addresses easy samples in the early training stage and hard ones in the later stage.",
"Specifically, our CurricularFace adaptively adjusts the relative importance of easy and hard samples during different training stages.",
"In each stage, different samples are assigned with different importance according to their corresponding difficultness.",
"Extensive experimental results on popular benchmarks demonstrate the superiority of our CurricularFace over the state-of-the-art competitors.",
"Code will be available upon publication.",
"The success of Convolutional Neural Networks (CNNs) on face recognition can be mainly credited to : enormous training data, network architectures, and loss functions.",
"Recently, designing appropriate loss functions that enhance discriminative power is pivotal for training deep face CNNs.",
"Current state-of-the-art face recognition methods mainly adopt softmax-based classification loss.",
"Since the learned features with the original softmax is not discriminative enough for the open-set face recognition problem, several margin-based variants have been proposed to enhance features' discriminative power.",
"For example, explicit margin, i.e., CosFace (Wang et al., 2018a) , Sphereface (Li et al., 2017) , ArcFace (Deng et al., 2019) , and implicit margin, i.e., Adacos (Zhang et al., 2019a) , supplement the original softmax function to enforce greater intra-class compactness and inter-class discrepancy, which are shown to result in more discriminate features.",
"However, these margin-based loss functions do not explicitly emphasize each sample according to its importance.",
"As demonstrated in Chen et al. (2019) , hard sample mining is also a critical step to further improve the final accuracy.",
"Recently, Triplet loss (Schroff et al., 2015) and SV-Arc-Softmax (Wang et al., 2018b) integrate the motivations of both margin and mining into one framework for deep face recognition.",
"Triplet loss adopts a semi-hard mining strategy to obtain semi-hard triplets and enlarge the margin between triplet samples.",
"SV-Arc-Softmax (Wang et al., 2018b) clearly defines hard samples as misclassified samples and emphasizes them by increasing the weights of their negative cosine similarities with a preset constant.",
"In a nutshell, mining-based loss functions explicitly emphasize the effects of semi-hard or hard samples.",
"However, there are drawbacks in training strategies of both margin-and mining-based loss functions.",
"For margin-based methods, mining strategy is ignored and thus the difficultness of each sample is not fully exploited, which may lead to convergence issues when using a large margin on small backbones, e.g., MobileFaceNet (Chen et al., 2018) .",
"As shown in Fig. 1 , the modulation coefficient for the negative cosine similarities I(·) is fixed as a constant 1 in ArcFace for all samples during the entire training process.",
"For mining-based methods, over-emphasizing hard samples in early training Figure 1: Different training strategies for modulating negative cosine similarities of hard samples (i.e., the mis-classified sample) in ArcFace, SV-Arc-Softmax and our CurricularFace.",
"Left: The modulation coefficients I(t, cos θj) for negative cosine similarities of hard samples in different methods, where t is an adaptively estimated parameter and θj denotes the angle between the hard sample and the non-ground truth j-class center.",
"Right: The corresponding hard samples' negative cosine similarities N (t, cos θj) = I(t, cos θj) cos θj + c after modulation, where c indicates a constant.",
"On one hand, during early training stage (e.g., t is close to 0), hard sample's negative cosine similarities is usually reduced and thus leads to smaller hard sample loss than the original one.",
"Therefore, easier samples are relatively emphasized; during later training stage (e.g., t is close to 1), the hard sample's negative cosine similarities are enhanced and thus leads to larger hard sample loss.",
"On the other hand, in the same training stage, we modulate the hard samples' negative cosine similarities with cos θj.",
"Specifically, the smaller the angle θj is, the larger the modulation coefficient should be.",
"stage may hinder the model to converge.",
"As SV-Arc-Softmax claimed, the manually defined constant t plays a key role in the model convergence property and a slight larger value (e.g., >1.4) may cause the model difficult to converge.",
"Thus t needs to be carefully tuned.",
"In this work, we propose a novel adaptive curriculum learning loss, termed CurricularFace, to achieve a novel training strategy for deep face recognition.",
"Motivated by the nature of human learning that easy cases are learned first and then come the hard ones (Bengio et al., 2009) , our CurricularFace incorporates the idea of Curriculum Learning (CL) into face recognition in an adaptive manner, which differs from the traditional CL in two aspects.",
"First, the curriculum construction is adaptive.",
"In traditional CL, the samples are ordered by the corresponding difficultness, which are often defined by a prior and then fixed to establish the curriculum.",
"In CurricularFace, the samples are randomly selected in each mini-batch, while the curriculum is established adaptively via mining the hard samples online, which shows the diversity in samples with different importance.",
"Second, the importance of hard samples are adaptive.",
"On one hand, the relative importance between easy and hard samples is dynamic and could be adjusted in different training stages.",
"On the other hand, the importance of each hard sample in current mini-batch depends on its own difficultness.",
"Specifically, the mis-classified samples in mini-batch are chosen as hard samples and weighted by adjusting the modulation coefficients I(t, cosθ j ) of cosine similarities between the sample and the non-ground truth class center vectors, i.e., negative cosine similarity N (t, cosθ j ).",
"To achieve the goal of adaptive curricular learning in the entire training, we design a novel coefficient function I(·) that is determined by two factors:",
"1) the adaptively estimated parameter t that utilizes moving average of positive cosine similarities between samples and the corresponding ground-truth class center to unleash the burden of manually tuning; and",
"2) the angle θ j that defines the difficultness of hard samples to achieve adaptive assignment.",
"To sum up, the contributions of this work are:",
"• We propose an adaptive curriculum learning loss for face recognition, which automatically emphasizes easy samples first and hard samples later.",
"To the best of our knowledge, it is the first work to introduce the idea of adaptive curriculum learning for face recognition.",
"• We design a novel modulation coefficient function I(·) to achieve adaptive curriculum learning during training, which connects positive and negative cosine similarity simultaneously without the need of manually tuning any additional hyper-parameter.",
"• We conduct extensive experiments on popular facial benchmarks, which demonstrate the superiority of our CurricularFace over the state-of-the-art competitors.",
"Comparison with ArcFace and SV-Arc-Softmax We first discuss the difference between our CurricularFace and the two competitors, ArcFace and SV-Arc-Softmax, from the perspective of the decision boundary in Tab.",
"1.",
"ArcFace introduces a margin function T (cos θ yi ) = cos(θ yi + m) from the perspective of positive cosine similarity.",
"As shown in Fig. 4 , its decision condition changes from cos θ yi = cos θ j (i.e., blue line) to cos(θ yi + m) = cos θ j (i.e., red line) for each sample.",
"SV-Arc-Softmax introduces additional margin from the perspective of negative cosine similarity for hard samples, and the decision boundary becomes cos(θ yi + m) = t cos θ j + t − 1 (i.e., green line).",
"Conversely, we adaptively adjust the weights of hard samples in different training stages.",
"The decision condition becomes cos(θ yi +m) = (t+cos θ j ) cos θ j (i.e., purple line).",
"During the training stage, the decision boundary for hard samples changes from one purple line (early stage) to another (later stage), which emphasizes easy samples first and hard samples later.",
"Comparison with Focal loss Focal loss is a soft mining-based loss, which is formulated as:",
"β , where α and β are modulating factors that need to be tuned manually.",
"The definition of hard samples in Focal loss is ambiguous, since it always focuses on relatively hard samples by reducing the weight of easier samples during the entire training process.",
"In contrast, the definition of hard samples in our CurricularFace is more clear, i.e., mis-classified samples.",
"Meanwhile, the weights of hard samples are adaptively determined in different training stages.",
"In this paper, we propose a novel Adaptive Curriculum Learning Loss that embeds the idea of adaptive curriculum learning into deep face recognition.",
"Our key idea is to address easy samples in the early training stage and hard ones in the later stage.",
"Our method is easy to implement and robust to converge.",
"Extensive experiments on popular facial benchmarks demonstrate the effectiveness of our method compared to the state-of-the-art competitors.",
"Following the main idea of this work, future research can be expanded in various aspects, including designing a better function N (·) for negative cosine similarity that shares similar adaptive characteristic during training, and investigating the effects of noise samples that could be optimized as hard samples."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.1875,
0,
0,
0.3137255012989044,
0,
0,
0,
0,
0.1764705777168274,
0.307692289352417,
0.29999998211860657,
0.1666666567325592,
0,
0.07999999821186066,
0,
0.277777761220932,
0.07407406717538834,
0,
0.07999999821186066,
0.08695651590824127,
0,
0.0555555522441864,
0.04999999701976776,
0.04444444179534912,
0,
0.04878048226237297,
0.04878048226237297,
0,
0,
0,
0,
0,
0.32258063554763794,
0.14814814925193787,
0,
0,
0,
0,
0,
0,
0,
0.05882352590560913,
0,
0,
0,
0.20000000298023224,
0.20689654350280762,
0.04651162400841713,
0,
0,
0,
0.052631575614213943,
0.045454543083906174,
0,
0,
0.0555555522441864,
0.09090908616781235,
0,
0.05714285373687744,
0,
0,
0.42424240708351135,
0,
0,
0,
0.038461536169052124
] | B1eksh4KvH | true | [
"A novel Adaptive Curriculum Learning loss for deep face recognition"
] |
[
"Adversarial examples are perturbed inputs designed to fool machine learning models.",
"Adversarial training injects such examples into training data to increase robustness.",
"To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss.\n",
"We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss.",
"The model thus learns to generate weak perturbations, rather than defend against strong ones.",
"As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.\n",
"We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models.",
"On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks.",
"In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks.",
"Machine learning (ML) models are often vulnerable to adversarial examples, maliciously perturbed inputs designed to mislead a model at test time BID4 BID36 BID15 BID29 .",
"Furthermore, BID36 showed that these inputs transfer across models: the same adversarial example is often misclassified by different models, thus enabling simple black-box attacks on deployed models BID23 .Adversarial",
"training BID36 increases robustness by augmenting training data with adversarial examples. showed that",
"adversarially trained models can be made robust to white-box attacks (i.e., with knowledge of the model parameters) if the perturbations computed during training closely maximize the model's loss. However, prior",
"attempts at scaling this approach to ImageNet-scale tasks BID12 ) have proven unsuccessful BID20 .It is thus natural",
"to ask whether it is possible, at scale, to achieve robustness against the class of black-box adversaries Towards this goal, BID20 adversarially trained an Inception v3 model BID38 on ImageNet using a \"single-step\" attack based on a linearization of the model's loss BID15 . Their trained model",
"is robust to single-step perturbations but remains vulnerable to more costly \"multi-step\" attacks. Yet, BID20 found that",
"these attacks fail to reliably transfer between models, and thus concluded that the robustness of their model should extend to black-box adversaries. Surprisingly, we show",
"that this is not the case.We demonstrate, formally and empirically, that adversarial training with single-step methods admits a degenerate global minimum, wherein the model's loss can not be reliably approximated by a linear function. Specifically, we find",
"that the model's decision surface exhibits sharp curvature near the data points, thus degrading attacks based on a single gradient computation. In addition to the model",
"of BID20 , we reveal similar overfitting in an adversarially trained Inception ResNet v2 model BID37 , and a variety of models trained on MNIST BID22 .We harness this result in",
"two ways. First, we show that adversarially",
"trained models using single-step methods remain vulnerable to simple attacks. For black-box adversaries, we find",
"that perturbations crafted on an undefended model often transfer to an adversarially trained one. We also introduce a simple yet powerful",
"single-step attack that applies a small random perturbation-to escape the nonsmooth vicinity of the data point-before linearizing the model's loss. While seemingly weaker than the Fast Gradient",
"Sign Method of BID15 , our attack significantly outperforms it for a same perturbation norm, for models trained with or without adversarial training.Second, we propose Ensemble Adversarial Training, a training methodology that incorporates perturbed inputs transferred from other pre-trained models. Our approach decouples adversarial example generation",
"from the parameters of the trained model, and increases the diversity of perturbations seen during training. We train Inception v3 and Inception ResNet v2 models",
"on ImageNet that exhibit increased robustness to adversarial examples transferred from other holdout models, using various single-step and multi-step attacks BID15 BID7 BID19 . We also show that our methods globally reduce the dimensionality",
"of the space of adversarial examples BID40 . Our Inception ResNet v2 model won the first round of the NIPS 2017",
"competition on Defenses Against Adversarial Attacks BID21 , where it was evaluated on other competitors' attacks in a black-box setting. BID16 BID24 BID31 BID28 BID10 and many remain vulnerable to adaptive",
"attackers BID7 b; BID3 . Adversarial training BID36 BID15 BID20 appears to hold the greatest",
"promise for learning robust models. show that adversarial training on MNIST yields models that are robust",
"to whitebox attacks, if the adversarial examples used in training closely maximize the model's loss. Moreover, recent works by BID34 , BID33 and BID18 even succeed in providing",
"certifiable robustness for small perturbations on MNIST. As we argue in Appendix C, the MNIST dataset is peculiar in that there exists",
"a simple \"closed-form\" denoising procedure (namely feature binarization) which leads to similarly robust models without adversarial training. This may explain why robustness to white-box attacks is hard to scale to tasks",
"such as ImageNet BID20 . We believe that the existence of a simple robust baseline for MNIST can be useful",
"for understanding some limitations of adversarial training techniques. BID36 found that adversarial examples transfer between models, thus enabling blackbox",
"attacks on deployed models. showed that black-box attacks could succeed with no access to training data, by exploiting",
"the target model's predictions to extract BID39 a surrogate model. Some prior works have hinted that adversarially trained models may remain vulnerable to black-box",
"attacks: BID15 found that an adversarial maxout network on MNIST has slightly higher error on transferred examples than on white-box examples. further showed that a model trained on small perturbations can be evaded by transferring perturbations",
"of larger magnitude. Our finding that adversarial training degrades the accuracy of linear approximations of the model's loss",
"is as an instance of a gradient-masking phenomenon BID30 , which affects other defensive techniques BID31 BID7 BID28 BID5 BID2 ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1621621549129486,
0.2222222238779068,
0.1599999964237213,
0.23076923191547394,
0.14999999105930328,
0.25,
0.27272728085517883,
0.25641024112701416,
0.08888888359069824,
0.11999999731779099,
0.2181818187236786,
0.2631579041481018,
0.178571417927742,
0.045454539358615875,
0.0923076868057251,
0.2857142686843872,
0.23999999463558197,
0.2666666507720947,
0.11999999731779099,
0.07547169178724289,
0.12121211737394333,
0.3414634168148041,
0.17391303181648254,
0.07999999821186066,
0.1764705777168274,
0.17391303181648254,
0.37931033968925476,
0.0952380895614624,
0.2142857164144516,
0.14999999105930328,
0.20512820780277252,
0.19230768084526062,
0.04347825422883034,
0.2222222238779068,
0.13333332538604736,
0.1818181723356247,
0.2790697515010834,
0.16326530277729034,
0.1428571343421936,
0.1463414579629898,
0
] | rkZvSe-RZ | true | [
"Adversarial training with single-step methods overfits, and remains vulnerable to simple black-box and white-box attacks. We show that including adversarial examples from multiple sources helps defend against black-box attacks."
] |
[
"An adversarial feature learning (AFL) is a powerful framework to learn representations invariant to a nuisance attribute, which uses an adversarial game between a feature extractor and a categorical attribute classifier.",
"It theoretically sounds in term of it maximize conditional entropy between attribute and representation.",
"However, as shown in this paper, the AFL often causes unstable behavior that slows down the convergence.",
"We propose an {\\em attribute perception matching} as an alternative approach, based on the reformulation of conditional entropy maximization as {\\em pair-wise distribution matching}. Although the naive approach for realizing the pair-wise distribution matching requires the significantly large number of parameters, the proposed method requires the same number of parameters with AFL but has a better convergence property.",
"Experiments on both toy and real-world dataset prove that our proposed method converges to better invariant representation significantly faster than AFL. ",
"How to learn representations invariant to nuisance attribute a is technical challenges raised in domain generalizaton BID1 BID9 BID4 BID8 , fair classification, privacy-protection BID2 BID5 , and many other area.",
"Assume that we are given a training dataset made of pairs S = (x i , y i , a i ) DISPLAYFORM0 , where x is an observation, y is a target of x, and a is a corresponding intrinsic attribute of K-way categorical variable A. The goal of invariant representation learning is to obtain an encoder E that reduces information about attribute a while maintaining information about y.An adversarial game between a feature extractor and an attribute classifier, called adversarial feature learning BID11 , is a powerful framework for this purpose.",
"The key of AFL is to measure the invariance by leveraging the discriminative power of neural network beyond the pre-defined metric such as l 2 distance or maximum mean discrepancy.",
"That is, if the external network (also referred to as a discriminator) can predict a from z = E(x), AFL regards z to have considerable information about a.",
"Formally, the AFL solves the following optimization problem: DISPLAYFORM1 where q M and q D is the conditional probability that M and D gives a correct estimation of y and a respectively.",
"As BID11 explained, this alternating procedure can be regarded as a way to maximize the conditional entropy H(A|Z) = a∈A,z∈Z −p(a, z) log p(a|z), where A and Z is a support set of a and z.",
"BID11 also showed that the min-max game has an equilibrium, in which E maximize the conditional entropy H(A|Z).",
"It has been show superior performance in fairclassification, privacy-protection, and domain generalization tasks BID3 BID2 BID11 BID5 , compared to the predifined metric approaches BID12 BID7 BID8 .Despite",
"the theoretical justifications, the above min-max formulation is suspicious for several practical issues. Namely,",
"the gradient from the discriminator vanishes if the discriminator sufficiently trained since E[log q D (a|z=E(x))] is small then. Besides",
", in mathematical level, it only keeps away representations from the non-desired point where we can predict a label correctly, but not ensure that it approaches the desired invariant point. Please",
"also refer FIG1 for visualization of the instability.Note that, Generative Adversarial Networks community, which utilize similar formulation to generate realistic images, evade similar issues by the incorporating alternative objectives, such as the Wasserstein distance BID0 . However",
", the Wasserstein distance is defined over two distributions and applying to our setting (consisting of multiple distributions) is not trivial.This paper holds the following contributions to the invariant feature learning problem. First,",
"we empirically show that AFL is suffered from practical issues that significantly slow down the convergence. We then",
"reformulate the optimization problem of AFL as pair-wise distribution matching and derive parameter practical realization of pairwise distribution matching while inheriting the merit of AFL that leveraging the discriminative power to measure the invariance. It is worth",
"mentioning that the reformulation enable us to use Wasserstein metric in theory, however, it is still computationally infeasible in practice because a naive way to calculate the Wasserstein distance between all the pair of the distributions requires O(K 2 ) discriminators, where K = |A|, which raise computational issues both in terms of parameter size and forward/backward time. Finally, we",
"empirically validate the superior performance of our proposed method on both artificial dataset and real-world datasets.2 CONVERGENCE",
"ISSUES OF AFL FIG1 -(a-e) visualize a behavior of AFL optimization on synthesized data. Each figure",
"corresponds to the different timestep of the alternating optimization. The dataset",
"consists of samples from three Gaussian distributions with different means ([sin( DISPLAYFORM2 , for i ∈ 1, 2, 3, respectively) and the same variance, assuming that each distribution corresponds to different attributes. In each figure",
", dots represent the data point, color represents the attribute (domain id), and the contour plot represents the discriminator's decision boundary. A float value",
"on the top of figures is the negative log-likelihood (NLL) of the dataset measured by the discriminator D (the multi-layer perceptron with 100 hidden units followed by a ReLU activation). Similarly, a",
"float value in parenthesis on the top of figures is an NLL of a post-hoc classifier D eval that have the same architecture as D. To be more specific, we first train the discriminator 100 times with 128 batch size and train D and E iteratively with stochastic gradient descent with learning rate=0.1. FIG1 -(f,g)",
"shows the gradient vector fields of different time steps for a = blue, where the arrow represents the direction of the gradient, and the norm represents its magnitude. For simplicity",
", we only show the vector fields of a = blue, but the observations are quite similar for the other a.The figure reveals two practical issues in AFL optimization. (1) The distribution",
"alignment is initially quite slow (compare with the behavior of the proposed method shown in Figure 2 ). This is because the",
"gradient is small when the discriminator correctly distinguishes a a. (2) AFL behavior is",
"unstable. The distributions somewhat",
"align after 40 steps (given 0.683 NLL with the post-hoc classifier), but it is catastrophically failed five steps later because the discriminator did not capture the true conditional entropy (implied by the mostly similar counterplot of D) and therefore gave a false gradient as shown in (f) and (g). The intuitive",
"reason",
"for",
"this phenomenon is that AFLs loss essentially tries to pull a distribution apart from the non-desired point, i.e., the point where we can correctly predict the label. The problem of AFL is that",
"it only keeps away a distribution from the non-desired point, but not ensure it approaches the desired invariant point. After several steps, D starts",
"to follow the change of the distribution (as shown in FIG1 . The instability of the AFL also",
"appears in the significant gap between the NLL of the D and D eval . Note that the second issue may",
"be alleviated if D has a sufficiently large capacity and is trained many times at each iteration. However, this is not a realistic",
"assumption since it is fair to say that real datasets are more complicated than this toy situations, making it more challenging to find the supremum.",
"This paper proposes a new approach to incorporating desired invariance to representations learning, based on the observations that the current state-of-the-art AFL has practical issues.",
"Empirical results on both toy and real-world datasets support the stable performance of the proposed method to learn invariant features and superior performance on domain generalization tasks.A PROOF OF THE THEOREM 1Proof.",
"Using the Lagrange multiplier method, the derivative of DISPLAYFORM0 is equal to zero for the maximum entropy H(A|Z).",
"Solving the simultaneous equations, we can say p(a=1|z) = p(a=2|z) = · · · = p(a=K|z) = 1 K for all z ∈ Z when the conditional entropy is maximized, and based on the definition, the conditional entropy become − log DISPLAYFORM1 holds ∀i = j ∈ A and z ∈ Z."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.1249999925494194,
0,
0.1538461446762085,
0.21212120354175568,
0.17777776718139648,
0.11538460850715637,
0.07058823108673096,
0.1599999964237213,
0.1702127605676651,
0.1702127605676651,
0.1071428507566452,
0.14999999105930328,
0.11764705181121826,
0.1666666567325592,
0.04999999329447746,
0.19607841968536377,
0.14035087823867798,
0.15094339847564697,
0.25,
0.23076923191547394,
0.13333332538604736,
0.09756097197532654,
0.15789473056793213,
0.12121211737394333,
0.1071428507566452,
0.04651162400841713,
0.12244897335767746,
0.11428570747375488,
0.08510638028383255,
0.23076923191547394,
0.0952380895614624,
0.17142856121063232,
0,
0.05714285373687744,
0.18518517911434174,
0.13333332538604736,
0.1621621549129486,
0.10256409645080566,
0.09090908616781235,
0.13333332538604736,
1,
0.11538460850715637,
0.10256409645080566,
0.10169491171836853
] | r1ew74mluN | true | [
"This paper proposes a new approach to incorporating desired invariance to representations learning, based on the observations that the current state-of-the-art AFL has practical issues."
] |
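The record above describes AFL's alternating min-max procedure and the toy convergence experiment of its Section 2 (three 2-D Gaussian clusters, an MLP discriminator with 100 hidden units and a ReLU, SGD with learning rate 0.1, batch size 128). The snippet below is only a minimal, hedged sketch of that alternating optimization for illustration; the cluster spread, encoder shape, and step count are assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: three 2-D Gaussian clusters, one per attribute value a in {0, 1, 2}.
K, n_per = 3, 200
means = torch.tensor([[math.sin(2 * math.pi * i / K),
                       math.cos(2 * math.pi * i / K)] for i in range(K)])
x = torch.cat([means[i] + 0.1 * torch.randn(n_per, 2) for i in range(K)])
a = torch.cat([torch.full((n_per,), i, dtype=torch.long) for i in range(K)])

E = nn.Linear(2, 2)                                                  # encoder z = E(x)
D = nn.Sequential(nn.Linear(2, 100), nn.ReLU(), nn.Linear(100, K))   # attribute discriminator
opt_E = torch.optim.SGD(E.parameters(), lr=0.1)
opt_D = torch.optim.SGD(D.parameters(), lr=0.1)

for step in range(50):
    # (1) Train the discriminator D to predict the attribute a from z = E(x).
    loss_D = F.cross_entropy(D(E(x).detach()), a)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # (2) Train the encoder E adversarially: make a hard to predict from z,
    #     i.e. ascend D's cross-entropy (a proxy for maximizing H(A|Z)).
    loss_E = -F.cross_entropy(D(E(x)), a)
    opt_E.zero_grad(); loss_E.backward(); opt_E.step()
```

In the behavior reported above, step (2) produces only a weak gradient once D is confident, which is the slow and unstable convergence the authors visualize.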
[
"We study the properties of common loss surfaces through their Hessian matrix.",
"In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: (1) the bulk centered near zero, (2) and outliers away from the bulk.",
"We present numerical evidence and mathematical justifications to the following conjectures laid out by Sagun et.",
"al. (2016): Fixing data, increasing the number of parameters merely scales the bulk of the spectrum; fixing the dimension and changing the data (for instance adding more clusters or making the data less separable) only affects the outliers.",
"We believe that our observations have striking implications for non-convex optimization in high dimensions.",
"First, the *flatness* of such landscapes (which can be measured by the singularity of the Hessian) implies that classical notions of basins of attraction may be quite misleading.",
"And that the discussion of wide/narrow basins may be in need of a new perspective around over-parametrization and redundancy that are able to create *large* connected components at the bottom of the landscape.",
"Second, the dependence of a small number of large eigenvalues to the data distribution can be linked to the spectrum of the covariance matrix of gradients of model outputs.",
"With this in mind, we may reevaluate the connections within the data-architecture-algorithm framework of a model, hoping that it would shed light on the geometry of high-dimensional and non-convex spaces in modern applications.",
"In particular, we present a case that links the two observations: small and large batch gradient descent appear to converge to different basins of attraction but we show that they are in fact connected through their flat region and so belong to the same basin.",
"In this paper, we study the geometry of the loss surface of supervised learning problems through the lens of their second order properties.",
"To introduce the framework, suppose we are given data in the form of input-label pairs, D = {(x i , y i )} N i=1 where x ∈ R d and y ∈ R that are sampled i.i.d. from a possibly unknown distribution ν, a model that is parametrized by w ∈ R M ; so that the number of examples is N and the number of parameters of the system is M .",
"Suppose also that there is a predictor f (w,",
"x).",
"The supervised learning process aims to solve for w so that f (w,",
"x) ≈ y.",
"To make the '≈' precise, we use a non-negative loss function that measures how close the predictor is to the true label, (f (w,",
"x),",
"y).",
"We wish to find a parameter w * such that w * = arg min L(w) where, DISPLAYFORM0 (f (w, x i ), y i ).In",
"particular, one is curious about the relationship between L(w) andL(w) := d(ν). By",
"the law of large numbers, at a given point w, L w →L w almost surely as N → ∞ for fixed M . However",
"in modern applications, especially in deep learning, the number of parameters M is comparable to the number of examples N (if not much larger). And the",
"behaviour of the two quantities may be drastically different (for a recent analysis on provable estimates see BID18 ).A classical",
"algorithm to find w * is gradient descent (GD), in which the optimization process is carried out using the gradient of L. A new parameter is found iteratively by taking a step in the direction of the negative gradient whose size is scaled with a constant step size η that is chosen from line-search minimization. Two problems",
"emerge: (1) Gradient computation can be expensive, (2) Line-search can be expensive. More involved",
"algorithms, such as Newton-type methods, make use of second-order information BID19 . Under sufficient",
"regularity conditions we may observe: L(w + ∆w) ≈ L(w) + ∆w∇L(w) + ∆w T ∇ 2 L(w)∆w. A third problem",
"emerges beyond an even more expansive computational cost of the Hessian: (3) Most methods require the Hessian to be non-degenerate to a certain extent.When the gradients are computationally expensive, one can alternatively use its stochastic version (SGD) that replaces the above gradient with the gradient of averages of losses over subsets (such a subset will be called the mini-batch) of D (see BID5 for a classical reference). The benefit of",
"SGD on real-life time limits is obvious, and GD may be impractical for practical purposes in many problems. In any case, the",
"stochastic gradient can be seen as an approximation to the true gradient, and hence it is important to understand how the two directions are related to one another. Therefore, the discussion",
"around the geometry of the loss surface can be enlightening in the comparison of the two algorithms: Does SGD locate solutions of a different nature than GD? Do they follow different",
"paths? If so, which one is better",
"in terms of generalization performance?For the second problem of expensive",
"line-search, there are two classical solutions: using a small, constant step size, or scheduling the step size according to a certain rule. In practice, in the context of deep",
"learning, the values for both approaches are determined heuristically, by trial and error. More involved optimal step size choices",
"involve some kind of second-order information that can be obtained from the Hessian of the loss function BID24 . From a computational point of view, obtaining",
"the Hessian is extremely expensive, however obtaining some of its largest and smallest eigenvalues and eigenvectors are not that expensive. Is it enough to know only those eigenvalues and",
"eigenvectors that are large in magnitude? How do they change through training? Would such",
"a method work in SGD as well as it would",
"on GD?For the third problem, let's look at the Hessian a little",
"closer. A critical point is defined by w such that ||∇L(w)|| = 0",
"and the nature of it can be determined by looking at the signs of its Hessian matrix. If all eigenvalues are positive the point is called a local",
"minimum, if r of them are negative and the rest are positive, then it is called a saddle point with index r. At the critical point, the eigenvectors indicate the directions",
"in which the value of the function locally changes. Moreover, the changes are proportional to the corresponding -signed-eigenvalue",
". Under sufficient regularity conditions, it is rather straightforward to show",
"that gradient-based methods converge to points where the gradient is zero. Recently BID15 showed that they indeed converge to minimizers. However, a significant",
"and untested assumption to establish these convergence results",
"is that the Hessian of the loss is non-degenerate. A relaxation of the above convergence to the case of non-isolated critical points can",
"be found in BID20 . What about the critical points of machine learning loss functions? Do they satisfy the",
"non-degeneracy assumptions? If they don't, can we still apply the",
"results of provable theorems to gain intuition?",
"Finally, we revisit the issue through the lens of the following question: What does overparametrization imply on the discussion around GD vs. SGD (or large batch vs small batch) especially for their generalization properties?",
"In this final section, we will argue that, contrary to what is believed in BID13 and BID7 the two algorithms do not have to be falling into different basins.As noted in the introduction, for a while the common sense explanation on why SGD works well (in fact better) than GD (or large batch methods) was that the non-convex landscape had local minima at high energies which would trap large-batch or full-batch methods.",
"Something that SGD with small batch shouldn't suffer due to the inherent noise in the algorithm.",
"However, there are various experiments that have been carried out in the past that show that, for reasonable large systems, this is not the case.",
"For instance, BID22 demonstrate that a two hidden layer fully connected network on MNIST can be trained by GD to reach at the same level of loss values as SGD 1 .",
"In fact, when the step size is fixed to the same value for both of the algorithms, they reach the same loss value at the same number of iterations.",
"The training accuracy for both algorithms are the same, and the gap between test accuracies diminish as the size of the network increase with GD falling ever so slightly behind.",
"It is also shown in BID13 that training accuracies for both large and small batch methods are comparably good.",
"Furthermore, BID26 demonstrates that training landscape is easy to optimize even when there is no clear notion of generalization.",
"Such observations are consistent with our observations: over-parametrization (due to the architecture of the model) leads to flatness at the bottom of the landscape which is easy to optimize.When we turn our attention to generalization, BID13 note that LB methods find a basin that is different than the one found by SB methods, and they are characterized by how wide the basin is.",
"As noted in FIG0 , indeed the large eigenvalues are larger in LB than in SB, but is it enough to justify that they are in different basins, especially given the fact that the number of flat directions are enormous.",
"One of the most striking implications of flatness may be the connected structure of the solution space.",
"We may wonder whether two given solutions can be connected by a continuous path of solutions.",
"This question has been explored in a recent work: in BID9 it is shown that for one hidden layer rectified neural networks the solution space is connected which is consistent with the flatness of the landscape.",
"The classical notion of basins of attractions may not be the suitable objects to study for neural networks.",
"Rather, we may look at the exploration of interiors of level sets of the landscape.",
"We may be tempted to speculate that such an exploration may indeed result in point that generalizes better.",
"However, the flat space itself is very high dimensional which comes with its own computational issues.The training curve can be seen as composed of two parts: (1) high gain part where the norm of the gradients are large, (2) noise of the gradients is larger relative to the size of the stochastic gradients (see BID25 for a recent reference).",
"We speculate that the first part is relatively easy and even a large batch method can locate a large level set that contains points that generalize better than what's initially found.",
"From a practical point of view, using larger batches with larger step sizes can, in fact, accelerate training.",
"An example of this can be found in BID10 , where training Imagenet with a minibatch size of 8192 can match small batch performance.",
"On a final note for further consideration, we remark that we used standard pre-processing and initialization methods that are commonly used in practice.",
"Fixing these two aspects, we modified the data, model, and algorithm in order to study their relative effects.",
"However, the effects of pre-processing and initialization on the Hessian is highly non-trivial and deserves a separate attention.",
"We have shown that the level of the singularity of the Hessian cannot be ignored from theoretical considerations.",
"Furthermore, we use the generalized Gauss-Newton decomposition of the Hessian to argue the cluster of zero eigenvalues are to be expected in practical applications.",
"This allows us to reconsider the division between initial fast decay and final slow progress of training.",
"We see that even large batch methods are able to get to the same basin where small batch methods go.",
"As opposed to the common intuition, the observed generalization gap between the two is not due to small batch finding a different, better, wider basin.",
"Instead, the two solutions appear to be in the same basin.",
"This lack of a barrier between solutions is demonstrated by finding paths between the two points that lie in the same level set.",
"To conclude, we propose a major shift in perspective on considerations of the energy landscape in deep learning problems."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0714285671710968,
0.09090908616781235,
0.0624999962747097,
0.043478257954120636,
0,
0,
0.09090908616781235,
0.10810810327529907,
0.04444443807005882,
0.1818181723356247,
0.11428570747375488,
0.0923076868057251,
0.1599999964237213,
0.06896550953388214,
0,
0.10526315122842789,
0,
0.13793103396892548,
0.05128204822540283,
0.05405404791235924,
0,
0.03389830142259598,
0,
0,
0,
0.054794516414403915,
0.10526315122842789,
0.1395348757505417,
0.1463414579629898,
0.08695651590824127,
0,
0.0952380895614624,
0.11428570747375488,
0.052631575614213943,
0.1428571343421936,
0.13333332538604736,
0,
0,
0.06896550953388214,
0.1463414579629898,
0.1428571343421936,
0.06666666269302368,
0.07692307233810425,
0.0555555522441864,
0.0833333283662796,
0.12121211737394333,
0.060606054961681366,
0,
0,
0.1304347813129425,
0.0952380895614624,
0.12903225421905518,
0.20512819290161133,
0.04255318641662598,
0.10810810327529907,
0.1860465109348297,
0.34285715222358704,
0.1764705777168274,
0.0937499925494194,
0.1249999925494194,
0,
0.06451612710952759,
0.04255318641662598,
0.060606054961681366,
0,
0,
0.0937499925494194,
0.1860465109348297,
0,
0.10526315122842789,
0.1111111044883728,
0.05882352590560913,
0.1249999925494194,
0,
0.0555555522441864,
0.12121211737394333,
0.24242423474788666,
0.21052631735801697,
0.07692307233810425,
0.1621621549129486,
0
] | rJrTwxbCb | true | [
"The loss surface is *very* degenerate, and there are no barriers between large batch and small batch solutions."
] |
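The record above notes that, while forming the full Hessian is prohibitively expensive, some of its largest eigenvalues and eigenvectors are comparatively cheap to obtain. The sketch below is a hedged illustration of one standard way to do this, power iteration on Hessian-vector products computed with automatic differentiation; the function name and iteration count are assumptions, not part of the paper.

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=20):
    """Estimate the dominant (largest-magnitude) Hessian eigenvalue of `loss`
    w.r.t. `params` via power iteration, using Hessian-vector products only."""
    params = [p for p in params if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = torch.tensor(0.0)
    for _ in range(iters):
        norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
        v = [vi / norm for vi in v]
        # Hessian-vector product Hv = d(g . v)/dp, reusing the gradient graph.
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        eig = sum((hvi * vi).sum() for hvi, vi in zip(hv, v))   # Rayleigh quotient
        v = [hvi.detach() for hvi in hv]
    return eig.item()
```

For a mini-batch loss on some network `model`, one might call `top_hessian_eigenvalue(criterion(model(xb), yb), model.parameters())`; these names are illustrative only.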
[
"The allocation of computation resources in the backbone is a crucial issue in object detection.",
"However, classification allocation pattern is usually adopted directly to object detector, which is proved to be sub-optimal.",
"In order to reallocate the engaged computation resources in a more efficient way, we present CR-NAS (Computation Reallocation Neural Architecture Search) that can learn computation reallocation strategies across different feature resolution and spatial position diectly on the target detection dataset.",
"A two-level reallocation space is proposed for both stage and spatial reallocation.",
"A novel hierarchical search procedure is adopted to cope with the complex search space.",
"We apply CR-NAS to multiple backbones and achieve consistent improvements.",
"Our CR-ResNet50 and CR-MobileNetV2 outperforms the baseline by 1.9% and 1.7% COCO AP respectively without any additional computation budget.",
"The models discovered by CR-NAS can be equiped to other powerful detection neck/head and be easily transferred to other dataset, e.g. PASCAL VOC, and other vision tasks, e.g. instance segmentation.",
"Our CR-NAS can be used as a plugin to improve the performance of various networks, which is demanding.",
"Object detection is one of the fundamental tasks in computer vision.",
"The backbone feature extractor is usually taken directly from classification literature (Girshick, 2015; Ren et al., 2015; Lin et al., 2017a; Lu et al., 2019) .",
"However, comparing with classification, object detection aims to know not only what but also where the object is.",
"Directly taking the backbone of classification network for object detectors is sub-optimal, which has been observed in .",
"To address this issue, there are many approaches either manually or automatically modify the backbone network.",
"proposes a neural architecture search (NAS) framework for detection backbone to avoid expert efforts and design trails.",
"However, previous works rely on the prior knowledge for classification task, either inheriting the backbone for classification, or designing search space similar to NAS on classification.",
"This raises a natural question: How to design an effective backbone dedicated to detection tasks?",
"To answer this question, we first draw a link between the Effective Receptive Field (ERF) and the computation allocation of backbone.",
"The ERF is only small Gaussian-like factor of theoretical receptive field (TRF), but it dominates the output (Luo et al., 2016) .",
"The ERF of image classification task can be easily fulfilled, e.g. the input size is 224×224 for the ImageNet data, while the ERF of object detection task need more capacities to handle scale variance across the instances, e.g. the input size is 800×1333 and the sizes of objects vary from 32 to 800 for the COCO dataset.",
"Lin et al. (2017a) allocates objects of different scales into different feature resolutions to capture the appropriate ERF in each stage.",
"Here we conduct an experiment to study the differences between the ERF of several FPN features.",
"As shown in Figure 1 , we notice the allocation of computation across different resolutions has a great impact on the ERF.",
"Furthermore, appropriate computation allocation across spacial position (Dai et al., 2017) boost the performance of detector by affecting the ERF.",
"Figure 1: Following the instructions in Luo et al. (2016) , we draw the ERF of FPN in different resolution features.",
"The size of base plate is 512×512, with respective anchor boxes ({64, 128, 256} for {p 3 , p 4 , p 5 }) drawn in.",
"The classification CNNs ResNet50 tends to have redundant ERF for high resolution features p 3 and limited ERF for low resolution features p 5 .",
"After stage reallocation, our SCR-ResNet50 has more balanced ERF across all resolutions which leads to a high performance.",
"Based on the above observation, in this paper, we aim to automatically design the computation allocation of backbone for object detectors.",
"Different from existing detection NAS works (Ghiasi et al., 2019; Ning Wang & Shen, 2019 ) which achieve accuracy improvement by introducing higher computation complexity, we reallocate the engaged computation cost in a more efficient way.",
"We propose computation reallocation NAS (CR-NAS) to search the allocation strategy directly on the detection task.",
"A two-level reallocation space is conducted to reallocate the computation across different resolution and spatial position.",
"In stage level, we search for the best strategy to distribute the computation among different resolution.",
"In operation level, we reallocate the computation by introducing a powerful search space designed specially for object detection.",
"The details about search space can be found in Sec. 3.2.",
"We propose a hierarchical search algorithm to cope with the complex search space.",
"Typically in stage reallocation, we exploit a reusable search space to reduce stage-level searching cost and adapt different computational requirements.",
"Extensive experiments show the effectiveness of our approach.",
"Our CR-NAS offers improvements for both fast mobile model and accurate model, such as ResNet (He et al., 2016) , MobileNetV2 (Sandler et al., 2018) , ResNeXt (Xie et al., 2017) .",
"On the COCO dataset, our CR-ResNet50 and CR-MobileNetV2 can achieve 38.3% and 33.9% AP, outperforming the baseline by 1.9% and 1.7% respectively without any additional computation budget.",
"Furthermore, we transfer our CR-ResNet and CR-MobileNetV2 into the another ERF-sensitive task, instance segmentation, by using the Mask RCNN framework.",
"Our CR-ResNet50 and CR-MobileNetV2 yields 1.3% and 1.2% COCO segmentation AP improvement over baseline.",
"To summarize, the contributions of our paper are three-fold:",
"• We propose computation reallocation NAS(CR-NAS) to reallocate engaged computation resources.",
"To our knowledge, we are the first to dig inside the computation allocation across different resolution.",
"• We develop a two-level reallocation space and hierarchical search paradigm to cope with the complex search space.",
"Typically in stage reallocation, we exploit a reusable model to reduce stage-level searching cost and adapt different computational requirements.",
"• Our CR-NAS offers significant improvements for various types of networks.",
"The discovered models show great transferablity over other detection neck/head, e.g. NAS-FPN (Cai & Vasconcelos, 2018) , other dataset, e.g. PASCAL VOC (Everingham et al., 2015) and other vision tasks, e.g. instance segmentation .",
"In this paper, we present CR-NAS (Computation Reallocation Neural Architecture Search) that can learn computation reallocation strategies across different resolution and spatial position.",
"We design a two-level reallocation space and a novel hierarchical search procedure to cope with the complex search space.",
"Extensive experiments show the effectiveness of our approach.",
"The discovered model has great transfer-ability to other detection neck/head, other dataset and other vision tasks.",
"Our CR-NAS can be used as a plugin to other detection backbones to further booster the performance under certain computation resources."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2142857164144516,
0.06896550953388214,
0.4615384638309479,
0.1599999964237213,
0.07407406717538834,
0.3333333432674408,
0.12121211737394333,
0.1538461446762085,
0.1249999925494194,
0.07999999821186066,
0,
0.06451612710952759,
0.06451612710952759,
0,
0.12903225421905518,
0.0555555522441864,
0.0714285671710968,
0.11764705181121826,
0,
0.0714285671710968,
0.1764705777168274,
0.06896550953388214,
0.17142856121063232,
0.11764705181121826,
0.1818181723356247,
0.052631575614213943,
0.1818181723356247,
0.0624999962747097,
0.1764705777168274,
0.1599999964237213,
0.27586206793785095,
0.5333333015441895,
0.27586206793785095,
0.1249999925494194,
0.07692307233810425,
0.23076923191547394,
0.23529411852359772,
0,
0.09999999403953552,
0.09999999403953552,
0.060606054961681366,
0.0714285671710968,
0,
0.5833333134651184,
0.27586206793785095,
0.19999998807907104,
0.24242423474788666,
0.07999999821186066,
0.045454539358615875,
0.37837836146354675,
0.19999998807907104,
0,
0.1428571343421936,
0.23529411852359772
] | SkxLFaNKwB | true | [
"We propose CR-NAS to reallocate engaged computation resources in different resolution and spatial position."
] |
[
"Uncertainty is a very important feature of the intelligence and helps the brain become a flexible, creative and powerful intelligent system.",
"The crossbar-based neuromorphic computing chips, in which the computing is mainly performed by analog circuits, have the uncertainty and can be used to imitate the brain.",
"However, most of the current deep neural networks have not taken the uncertainty of the neuromorphic computing chip into consideration.",
"Therefore, their performances on the neuromorphic computing chips are not as good as on the original platforms (CPUs/GPUs).",
"In this work, we proposed the uncertainty adaptation training scheme (UATS) that tells the uncertainty to the neural network in the training process.",
"The experimental results show that the neural networks can achieve comparable inference performances on the uncertain neuromorphic computing chip compared to the results on the original platforms, and much better than the performances without this training scheme.",
"Uncertainty reasoning is the essence of human thinking activities and a key aspect of the intelligence.",
"There are two kind of uncertainties in intelligent systems.",
"One is the fuzziness, the other is the stochasticity.",
"The fuzziness helps the brain deal with the real world efficiently by ignoring the enormous redundant information.",
"When we try to distinguish a cat or a dog, we do not need to know the expressions and the number of the legs.",
"Although such information can be easily captured by our visual system with a glance, it will be ignored for efficiency.",
"The stochasticity endows the brain the creativity and enables us not always failed in an unfamiliar field.",
"Our decisions may change when we do not sure.",
"These characteristics are not available in most existing artificial intelligence (AI) systems, such as a classifier based on a deep neural network (DNN).",
"The 32-bit or 64-bit floating numbers are used to describe the weights and activations.",
"While some researchers found that the 8-bit integer is enough for many applications Banner et al. (2018) ; .",
"Moreover, after the training procedure, the results will be the same no matter how many times it performs, although the margin is very small and the answer is wrong.",
"There are some methods to address these issues, such as the network quantization and the Bayesian network.",
"In addition, the neuromorphic computing chip has provide a hardware approach to supplement the missing uncertainty in DNN.",
"In recent years, the emerging nanotechnology device and crossbar structure based neuromorphic computing chips have developed a lot Fuller et al. (2019) ; ; Yao et al. (2017) .",
"The Ohms law and Kirchhoffs law make the crossbar structure very efficient when doing the vectormatrix multiplication (VMM), and the emerging nanoscale nonvolatile memory (NVM) device at each cross point provides additional storage capability (Figure 1 ).",
"The crossbar holds the devices conductances as memory in peacetime, and performs the computing function when applied voltages.",
"The so-called computing in memory (CIM) architecture can relieve the memory bottleneck, which is the most serious problem in the von Neumann architecture, and make the neuromorphic computing chips more energy and area efficiency.",
"Therefore, the neuromorphic computing has become a promising approach to realize the AI applications, which is full of VMMs and great memory requirement.",
"Besides the energy and area efficiency, the uncertainty is also an important and intrinsic feature of the neuromorphic computing chips and is not well utilized.",
"Figure 1 : The crossbar structure.",
"V is the applied voltage that correspond to the input x, G is the conductance of devices that correspond to the weight W, I is the output current, which can indicates the output y according to the Ohms law and Kirchhoffs law.",
"The uncertainty in the neuromorphic computing chips comes from two aspects.",
"The fuzziness is mainly caused by the analog to digital converters (ADCs) and the stochasticity is mainly induced by the NVM devices.",
"According to the Kirchhoffs law, the VMM result is indicated as the summarization of the currents, which is an analog output.",
"It is necessary to use the ADC to convert the analog currents to digital voltages for data transferring.",
"The function of ADC is similar as the activation quantization in the network.",
"The stochasticity of the NVM device is due to the intrinsic physical mechanism Zhao et al. (2017) ; Lin et al. (2018) .",
"The random movement of the particles in the device makes the conductance varied.",
"The output current will be different even applying the same voltage.",
"The stochasticity of the device is usually simulated as a non-ideal factor that makes the network perform worse Prezioso et al. (2015) ; Ambrogio et al. (2018); Tang et al. (2017) .",
"In this work, we proposed a training scheme that utilizes the stochasticity to improve the performance of the neuromorphic computing chips.",
"The uncertainty is very important in the intelligent system.",
"The Bayesian network is a very useful method to build an uncertain neural network.",
"However, it usually requires that the distribution of each weight is controllable.",
"This is hard to be realized by the neuromorphic computing chip due to the distribution is determined by the devices.",
"Although there may be some methods to manipulate the conductance distribution of the device, it is not as convenient as UATS, which has no additional circuit required.",
"We have tried a series of distributions to model the device stochasticity besides the Gaussian distribution, such as the Laplacian distribution, the uniform distribution, and the asymmetrical distributions, such as the lognormal distribution, the asymmetric Laplacian distribution, and the Bernoulli distribution for devices that have two stable states or the random telegraph noise (RTN).",
"Although the modeled behavior of the device from different distributions is significantly different, the performance of network using each type of distribution with the same mean and variance is similar.",
"It is because the VMM transform the individual distribution of each device to a summarization of a large number of random parameters.p",
"The computation intension of UATS may be a little strong due to the requirement of a large number of random numbers.",
"There are some methods to reduce the requirement of random numbers.",
"Such as samples the weight for every input or every batch instead of the every VMM and using the uncertainty model of VMM results instead of the weights.",
"The simulation speed can be accelerated and achieve similar results."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.19999998807907104,
0.23529411852359772,
0.25,
0.17142856121063232,
0.2978723347187042,
0,
0,
0,
0.0624999962747097,
0,
0.1111111044883728,
0,
0,
0.10256409645080566,
0,
0.05714285373687744,
0.04878048226237297,
0,
0.1764705777168274,
0.1428571343421936,
0.03999999538064003,
0.05882352590560913,
0.22727271914482117,
0.10256409645080566,
0.21621620655059814,
0,
0.08888888359069824,
0.2857142686843872,
0,
0,
0,
0,
0,
0,
0,
0.04651162400841713,
0.277777761220932,
0.07692307233810425,
0.06666666269302368,
0.06896550953388214,
0.1249999925494194,
0,
0.03703703358769417,
0.04878048226237297,
0,
0,
0,
0.0555555522441864,
0.07407406717538834
] | Byekm0VtwS | true | [
"A training method that can make deep learning algorithms work better on neuromorphic computing chips with uncertainty"
] |
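The record above argues for exposing the chip's device-level stochasticity to the network during training. As a hedged illustration of that general idea only, and not the authors' UATS implementation, one can inject multiplicative Gaussian noise into a layer's weights in the training-time forward pass; the class name and the relative noise level are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    """Linear layer that perturbs its weights with multiplicative Gaussian noise
    during training, as a stand-in for crossbar device conductance variation."""
    def __init__(self, in_features, out_features, rel_std=0.05):
        super().__init__(in_features, out_features)
        self.rel_std = rel_std          # assumed relative noise level

    def forward(self, x):
        w = self.weight
        if self.training:
            w = w * (1.0 + self.rel_std * torch.randn_like(w))
        return F.linear(x, w, self.bias)
```

At evaluation time the layer falls back to the clean weights, so the network is trained to tolerate the variation it will see on the analog hardware.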
[
"An important property of image classification systems in the real world is that they both accurately classify objects from target classes (``knowns'') and safely reject unknown objects (``unknowns'') that belong to classes not present in the training data.",
"Unfortunately, although the strong generalization ability of existing CNNs ensures their accuracy when classifying known objects, it also causes them to often assign an unknown to a target class with high confidence.",
"As a result, simply using low-confidence detections as a way to detect unknowns does not work well.",
"In this work, we propose an Unknown-aware Deep Neural Network (UDN for short) to solve this challenging problem.",
"The key idea of UDN is to enhance existing CNNs to support a product operation that models the product relationship among the features produced by convolutional layers.",
"This way, missing a single key feature of a target class will greatly reduce the probability of assigning an object to this class.",
"UDN uses a learned ensemble of these product operations, which allows it to balance the contradictory requirements of accurately classifying known objects and correctly rejecting unknowns.",
"To further improve the performance of UDN at detecting unknowns, we propose an information-theoretic regularization strategy that incorporates the objective of rejecting unknowns into the learning process of UDN.",
"We experiment on benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN, adding unknowns by injecting one dataset into another.",
"Our results demonstrate that UDN significantly outperforms state-of-the-art methods at rejecting unknowns by 25 percentage points improvement in accuracy, while still preserving the classification accuracy.",
"Motivation.",
"In recent years, Convolutional Neural Networks (CNN) have been used with great success for a rich variety of classification problems, particularly when dealing with high dimensional, complex data such as images or time series (Goodfellow et al., 2016) .",
"A CNN classifier (Krizhevsky et al., 2012) typically classifies test objects as one of the target classes supplied in the training set.",
"In this, state-of-the-art classifiers make the implicit assumption that all testing objects belong to one of the target classes.",
"However, this assumption is rarely true in real-world deployments of CNN classifiers.",
"Consider for example, an autonomous car or healthcare system: it is extremely likely that the system will be exposed to objects that were not in its training set.",
"We call such objects \"unknowns\".",
"Clearly, blindly assigning these unknowns into one of the target classes degrades the prediction accuracy.",
"Worst yet, it can lead to serious safety concerns.",
"For example, in a collaboration with a top hospital in the US (name removed due to anonymity), we have been developing a seizure detector that classifies patients into different types of seizures based on EEG signals collected during the clinical observation of 4,000 patients.",
"The detector was trained based on 6 types of seizures observed in the training data.",
"However, when deployed, the CNN classifier may encounter patients who have types of seizures that do not exist in the training data because they are rare or even unknown by the medical community.",
"Misclassifying these patients into the existing types of seizures brings serious risks and potential harm due to the potential for mistreatment of these patients.",
"Ideally, in this case, the unknowns would be recognized and rejected by the classifier.",
"In this work, we focus on this important problem, describing a deep neural network that not only accurately classifies test objects into known target classes, but also correctly rejects unknowns.",
"State-of-the-Art.",
"In a typical CNN, the output of the last fully connected layer is fed into a softmax layer to generate a class probability in [0, 1] for each target class.",
"An object will then be assigned to the class with the maximal probability.",
"Intuitively, unknowns would be detected by leveraging this confidence, as was done in Bendale & Boult (2016) ; Hendrycks & Gimpel (2017) ; Liang et al. (2018) .",
"Since unknowns should not exhibit as many features of a target class versus known objects, the CNN should report a lower confidence.",
"In prior work (Bendale & Boult, 2016; Hendrycks & Gimpel, 2017; Liang et al., 2018) , the maximal probability or the largest value in the input vector to the softmax layer (maximal weighted sum) is used as a confidence to detect unknowns.",
"In particular, an object will be rejected as an unknown if its confidence is smaller than a predetermined cutoff threshold ct.",
"However, as shown in our experiments (Sec. 5), these state-of-the-art methods are not particularly effective at rejecting unknowns.",
"This is because CNNs achieve high classification accuracy by providing a strong ability to generalize, allowing it to overcome the gap between the training and testing data (Goodfellow et al., 2016) .",
"Unfortunately, this strength here is also a weakness, because it increases the chance of erroneously assigning an unknown to some target class even if it is quite different from the training objects in any target class.",
"More specifically, the maximal probability (or maximal weighted sum) in a CNN is computed by the weighted sum operation on the multiple features produced by the convolutional layers.",
"Because of this sum operation, an unknown can be classified to a target class with high confidence even if it matches some key features of a target class only by chance.",
"Therefore, the requirements of accurately classifying the knowns and correctly rejecting the unknowns conflict with each other.",
"Proposed Approach and Contributions.",
"In this work we propose an Unknown-aware Deep Neural Network (UDN for short) to overcome this problem.",
"The key intuition of UDN is to modify the CNN to use a product operation which models the product relationship among the features produced by the convolutional layers.",
"This way, similar to the product rule in probability theory (Stroock, 2010) , if just one feature indicative of a target class is not matched, the probability of assigning an object to this class is greatly reduced.",
"Since an unknown is unlikely to match most of the features of a target class, the chance of assigning an unknown to a target class with high confidence is reduced.",
"Therefore, the confidence produced by UDN should more effectively detect unknowns than the typical maximal probability/maximal weighted sum produced by classical CNNs.",
"In UDN, the product operations are learned as a set of product relationship (PR) subnets leveraging the hierarchical nature of the binary tree structure.",
"The strong bias of the classification decisions made via the product operations and the generalization ability introduced by the ensemble nature of multiple PR subsets together balance the contradictory requirements of accurately classifying known objects and correctly rejecting unknowns.",
"In addition, we propose an information-theoretic regularization strategy that actively incorporates the objective of unknown rejection into the learning process of UDN.",
"This further improves the accuracy of UDN at rejecting unknowns by enlarging the confidence gap between unknown and known objects.",
"We then show that the final loss function of UDN is fully differentiable.",
"Therefore, UDN can be learned by following the common practice of back-propagation in deep neural networks.",
"We demonstrate the effectiveness of UDN using a rich variety of benchmark datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN.",
"UDN outperforms the state-of-the-art up to 20 points in the accuracy of unknown rejection -while preserving the accuracy of the underlying CNN at classifying objects from the target classes.",
"In this work, we proposed an augmentation to CNNs, UDN, which effectively rejects unknown objects that do not belong to any class seen in the training data.",
"UDN achieves this by replacing softmax layer in traditional CNNs with a novel tree ensemble that takes the product of feature values, balancing the contradictory requirements of accurately classifying knowns and correctly rejecting unknowns in one network structure.",
"A regularization strategy is proposed for UDN to further enhance its unknown rejection capacity."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17777776718139648,
0.04651162400841713,
0.0714285671710968,
0,
0.1111111044883728,
0.0624999962747097,
0.1621621549129486,
0.1666666567325592,
0.0624999962747097,
0.21621620655059814,
0,
0.3529411852359772,
0.19999998807907104,
0.1666666567325592,
0.20512820780277252,
0.11764705181121826,
0.1538461446762085,
0.0952380895614624,
0.11999999731779099,
0.14814814925193787,
0.1860465109348297,
0.06451612710952759,
0.23999999463558197,
0.24390242993831635,
0.10810810327529907,
0.0833333283662796,
0.10810810327529907,
0.1875,
0.12244897335767746,
0,
0.19999998807907104,
0.0476190447807312,
0.1395348757505417,
0.1764705777168274,
0.05128204822540283,
0.14814814925193787,
0,
0,
0.11428570747375488,
0.09302325546741486,
0.060606054961681366,
0.12903225421905518,
0.0624999962747097,
0.13636362552642822,
0.1249999925494194,
0.19354838132858276,
0.1599999964237213,
0.2142857164144516,
0.06666666269302368,
0.22857142984867096,
0.2631579041481018,
0.1702127605676651,
0.07692307233810425
] | rkguLC4tPB | true | [
"A CNN architecture that can effective rejects the unknowns in test objects"
] |
[
"Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.",
"This raises the question whether deep learning methodologies can outperform classical data imputation methods in this domain.",
"However, naive applications of deep learning fall short in giving reliable confidence estimates and lack interpretability.",
"We propose a new deep sequential latent variable model for dimensionality reduction and data imputation.",
"Our modeling assumption is simple and interpretable: the high dimensional time series has a lower-dimensional representation which evolves smoothly in time according to a Gaussian process.",
"The non-linear dimensionality reduction in the presence of missing data is achieved using a VAE approach with a novel structured variational approximation.",
"We demonstrate that our approach outperforms several classical and deep learning-based data imputation methods on high-dimensional data from the domains of computer vision and healthcare, while additionally improving the smoothness of the imputations and providing interpretable uncertainty estimates.",
"Multivariate medical time series, consisting of multiple correlated univariate time series or channels, give rise to two distinct ways of imputing missing information: (1) by exploiting temporal correlations within each channel, and (2) by exploiting correlations across channels, for example by using lower-dimensional representations of the data.",
"An ideal imputation model for medical time series should take both of these sources of information into account.",
"Another desirable property of such models is to offer a probabilistic interpretation, allowing for uncertainty estimation.",
"Unfortunately, current imputation approaches fall short with respect to at least one of these desiderata.",
"While there are many time-tested statistical methods for multivariate time series analysis (e.g., Gaussian processes (Roberts et al., 2013) ), these methods are generally not applicable when features are missing.",
"On the other hand, classical methods for time series imputation often do not take the potentially complex interactions between the different channels into account (Little and Rubin, 2002; Pedersen et al., 2017) .",
"Finally, recent work has explored the use of non-linear dimensionality reduction using variational autoencoders for i.i.d. data points with missing values (Ainsworth et al., 2018; Ma et al., 2018; Nazabal et al., 2018) , but this work has not considered temporal data and strategies for sharing statistical strength across time.",
"A more comprehensive analysis of existing approaches and their shortcomings is deferred to the appendix (Sec. A).",
"In this paper, we propose an architecture that combines deep variational autoencoders (VAEs) with Gaussian process (GP) to efficiently model the latent dynamics at multiple time scales.",
"Moreover, our inference approach makes use of efficient structured variational approximations, where we fit another multivariate Gaussian process in order to approximate the intractable true posterior.",
"We make the following contributions:",
"• A new model.",
"We propose a VAE architecture for multivariate time series imputation with a GP prior in the latent space to capture temporal dynamics.",
"• Efficient inference.",
"We use a structured variational approximation that models posterior correlations in the time domain.",
"• Benchmarking on real-world data.",
"We carry out extensive comparisons to classical imputation methods as well as state-of-the-art deep learning approaches, and perform experiments on data from different domains."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.1818181723356247,
0.10256409645080566,
0,
0.3243243098258972,
0.260869562625885,
0.23255813121795654,
0.14814814925193787,
0.16393442451953888,
0.20512819290161133,
0.10526315122842789,
0.1621621549129486,
0.19607841968536377,
0.11320754140615463,
0.15625,
0.05128204822540283,
0.3265306055545807,
0.25,
0.07407407462596893,
0.07692307233810425,
0.41860464215278625,
0.07999999821186066,
0.2222222238779068,
0.14814814925193787,
0.2666666507720947
] | H1xXYy3VKr | true | [
"We perform amortized variational inference on a latent Gaussian process model to achieve superior imputation performance on multivariate time series with missing data."
] |
[
"In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models.",
"In this work, we investigate this tension between complexity and generalization through an extensive empirical exploration of two natural metrics of complexity related to sensitivity to input perturbations.",
"Our experiments survey thousands of models with different architectures, optimizers, and other hyper-parameters, as well as four different image classification datasets.\n\n",
"We find that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the input-output Jacobian of the network, and that this correlates well with generalization.",
"We further establish that factors associated with poor generalization -- such as full-batch training or using random labels -- correspond to higher sensitivity, while factors associated with good generalization -- such as data augmentation and ReLU non-linearities -- give rise to more robust functions.",
"Finally, we demonstrate how the input-output Jacobian norm can be predictive of generalization at the level of individual test points.",
"The empirical success of deep learning has thus far eluded interpretation through existing lenses of computational complexity BID2 , numerical optimization BID4 BID8 BID5 and classical statistical learning theory (Zhang et al., 2016) : neural networks are highly non-convex models with extreme capacity that train fast and generalize well.",
"In fact, not only do large networks demonstrate good test performance, but larger networks often generalize better, counter to what would be expected from classical measures, such as VC dimension.",
"This phenomenon has been observed in targeted experiments BID29 , historical trends of Deep Learning competitions BID3 , and in the course of this work ( Figure 1 ).This",
"observation is at odds with Occam's razor, the principle of parsimony, as applied to the intuitive notion of function complexity (see §A.2 for extended discussion). One",
"resolution of the apparent contradiction is to examine complexity of functions in conjunction with the input domain. f (",
"x) = x 3 sin(x) may seem decisively more complex than g(x) = x. But",
"restrained to a narrow input domain of [−0.01, 0 .01] they appear differently: g remains a linear function of the input, while f (x) = O x 4 resembles a constant 0. In",
"this work we find that such intuition applies to neural networks, that behave very differently close to the data manifold than away from it ( §4.1).We",
"therefore analyze the complexity of models through their capacity to distinguish different inputs in the neighborhood of datapoints, or, in other words, their sensitivity. We",
"study two simple metrics presented in §3 and find that one of them, the norm of the input-output Jacobian, correlates with generalization in a very wide variety of scenarios. Train",
"loss Figure 1 : 2160 networks trained to 100% training accuracy on CIFAR10 (see §A.5.5 for experimental details). Left:",
"while increasing capacity of the model allows for overfitting (top), very few models do, and a model with the maximum parameter count yields the best generalization (bottom right). Right",
": train loss does not correlate well with generalization, and the best model (minimum along the y-axis) has training loss many orders of magnitude higher than models that generalize worse (left). This",
"observation rules out underfitting as the reason for poor generalization in low-capacity models. See",
"BID29 for similar findings in the case of achievable 0 training loss.This work considers sensitivity only in the context of image classification tasks. We",
"interpret the observed correlation with generalization as an expression of a universal prior on (natural) image classification functions that favor robustness (see §A.2 for details). While",
"we expect a similar prior to exist in many other perceptual settings, care should be taken when extrapolating our findings to tasks where such a prior may not be justified (e.g. weather forecasting).",
"We have investigated sensitivity of trained neural networks through the input-output Jacobian norm and linear regions counting in the context of image classification tasks.",
"We have presented extensive experimental evidence indicating that the local geometry of the trained function as captured by the input-output Jacobian can be predictive of generalization in many different contexts, and that it varies drastically depending on how close to the training data manifold the function is evaluated.",
"We further established a connection between the cross-entropy loss and the Jacobian norm, indicating that it can remain informative of generalization even at the level of individual test points.",
"Interesting directions for future work include extending our investigation to more complex architectures and other machine learning tasks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0,
0.1538461446762085,
0.05714285373687744,
0.21739129722118378,
0.12244897335767746,
0.1818181723356247,
0.03278687968850136,
0,
0.09756097197532654,
0.04999999701976776,
0.0624999962747097,
0,
0.08888888359069824,
0.09756097197532654,
0.1111111044883728,
0.1463414579629898,
0.05714285373687744,
0.1463414579629898,
0.08888888359069824,
0.13793103396892548,
0.10810810327529907,
0.0952380895614624,
0,
0.2702702581882477,
0.2142857164144516,
0.2926829159259796,
0.060606054961681366
] | HJC2SzZCW | true | [
"We perform massive experimental studies characterizing the relationships between Jacobian norms, linear regions, and generalization."
] |
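The record above uses the norm of the input-output Jacobian as its main sensitivity metric. The snippet below is a minimal, hedged sketch of how such a per-example Frobenius norm can be computed with automatic differentiation; the assumption that the model maps a single input to a flat logit vector is ours, not the paper's.

```python
import torch

def jacobian_frobenius_norm(model, x):
    """Frobenius norm of the input-output Jacobian of `model` at one input `x`."""
    x = x.detach().requires_grad_(True)
    y = model(x.unsqueeze(0)).squeeze(0)      # assumed: one input -> flat logit vector
    sq_norm = torch.zeros(())
    for k in range(y.numel()):
        # Gradient of the k-th output w.r.t. the input gives one Jacobian row.
        grad_k, = torch.autograd.grad(y[k], x, retain_graph=True)
        sq_norm = sq_norm + (grad_k ** 2).sum()
    return torch.sqrt(sq_norm)
```

Averaging this quantity over points on or near the training data manifold is the kind of measurement the record correlates with generalization.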
[
"Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space.",
"An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors.",
"Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples.",
"Combining parameter noise with traditional RL methods allows to combine the best of both worlds.",
"We demonstrate that both off- and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks.",
"Exploration remains a key challenge in contemporary deep reinforcement learning (RL).",
"Its main purpose is to ensure that the agent's behavior does not converge prematurely to a local optimum.",
"Enabling efficient and effective exploration is, however, not trivial since it is not directed by the reward function of the underlying Markov decision process (MDP).",
"Although a plethora of methods have been proposed to tackle this challenge in high-dimensional and/or continuous-action MDPs, they often rely on complex additional structures such as counting tables BID6 , density modeling of the state space BID22 , learned dynamics models BID0 BID29 , or self-supervised curiosity BID23 ).An",
"orthogonal way of increasing the exploratory nature of these algorithms is through the addition of temporally-correlated noise, for example as done in bootstrapped DQN BID20 . Along",
"the same lines, it was shown that the addition of parameter noise leads to better exploration by obtaining a policy that exhibits a larger variety of behaviors BID32 Salimans et al., 2017) . We discuss",
"these related approaches in greater detail in Section 5. Their main",
"limitation, however, is that they are either only proposed and evaluated for the on-policy setting with relatively small and shallow function approximators (Rückstieß et al., 2008) or disregard all temporal structure and gradient information (Salimans et al., 2017; BID17 BID28 . This paper",
"investigates how parameter space noise can be effectively combined with off-the-shelf deep RL algorithms such as DQN BID19 , DDPG BID18 , and TRPO (Schulman et al., 2015b) to improve their exploratory behavior. Experiments",
"show that this form of exploration is applicable to both high-dimensional discrete environments and continuous control tasks, using on-and off-policy methods. Our results",
"indicate that parameter noise outperforms traditional action space noise-based baselines, especially in tasks where the reward signal is extremely sparse.",
"In this work, we propose parameter space noise as a conceptually simple yet effective replacement for traditional action space noise like -greedy and additive Gaussian noise.",
"This work shows that parameter perturbations can successfully be combined with contemporary on-and off-policy deep RL algorithms such as DQN, DDPG, and TRPO and often results in improved performance compared to action noise.",
"Experimental results further demonstrate that using parameter noise allows solving environments with very sparse rewards, in which action noise is unlikely to succeed.",
"Our results indicate that parameter space noise is a viable and interesting alternative to action space noise, which is still the de facto standard in most reinforcement learning applications.",
"A EXPERIMENTAL SETUP"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.21621620655059814,
0.1860465109348297,
0.0476190410554409,
0.2222222238779068,
0.03999999538064003,
0.1249999925494194,
0.052631575614213943,
0.13636362552642822,
0.11764705181121826,
0.09090908616781235,
0.19230768084526062,
0,
0,
0.14814814925193787,
0.13636362552642822,
0.09756097197532654,
0.09090908616781235,
0.22641508281230927,
0.1395348757505417,
0.2083333283662796,
0
] | ByBAl2eAZ | true | [
"Parameter space noise allows reinforcement learning algorithms to explore by perturbing parameters instead of actions, often leading to significantly improved exploration performance."
] |
[
"Pre-trained deep neural network language models such as ELMo, GPT, BERT and XLNet have recently achieved state-of-the-art performance on a variety of language understanding tasks.",
"However, their size makes them impractical for a number of scenarios, especially on mobile and edge devices.",
"In particular, the input word embedding matrix accounts for a significant proportion of the model's memory footprint, due to the large input vocabulary and embedding dimensions.",
"Knowledge distillation techniques have had success at compressing large neural network models, but they are ineffective at yielding student models with vocabularies different from the original teacher models.",
"We introduce a novel knowledge distillation technique for training a student model with a significantly smaller vocabulary as well as lower embedding and hidden state dimensions.",
"Specifically, we employ a dual-training mechanism that trains the teacher and student models simultaneously to obtain optimal word embeddings for the student vocabulary.",
"We combine this approach with learning shared projection matrices that transfer layer-wise knowledge from the teacher model to the student model.",
"Our method is able to compress the BERT-BASE model by more than 60x, with only a minor drop in downstream task metrics, resulting in a language model with a footprint of under 7MB.",
"Experimental results also demonstrate higher compression efficiency and accuracy when compared with other state-of-the-art compression techniques.",
"Recently, contextual-aware language models such as ELMo (Peters et al., 2018) , GPT (Radford et al., 2019) , BERT (Devlin et al., 2018) and XLNet have shown to greatly outperform traditional word embedding models including Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) in a variety of NLP tasks.",
"These pre-trained language models, when finetuned on downstream language understanding tasks such as sentiment classification (Socher et al., 2013) , natural language inference (Williams et al., 2018) and reading comprehension (Rajpurkar et al., 2016; Lai et al., 2017) , have achieved state-of-the-art performance.",
"However, the large number of parameters in these models, often above hundreds of millions, makes it impossible to host them on resource-constrained tasks such as doing real-time inference on mobile and edge devices.",
"Besides utilizing model quantization techniques (Gong et al., 2014; Lin et al., 2016) which aim to reduce the floating-point accuracy of the parameters, significant recent research has focused on knowledge distillation (Ba & Caruana, 2014; Hinton et al., 2015) techniques.",
"Here, the goal is to train a small-footprint student model by borrowing knowledge, such as through a soft predicted label distribution, from a larger pre-trained teacher model.",
"However, a significant bottleneck that has been overlooked by previous efforts is the input vocabulary size and its corresponding word embedding matrix, often accounting for a significant proportion of all model parameters.",
"For instance, the embedding table of the BERT BASE model, comprising over 30K WordPiece tokens (Wu et al., 2016b) , accounts for over 21% of the model size.",
"While there has been existing work on reducing NLP model vocabulary sizes (Sennrich et al., 2016) , distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space.",
"This profoundly limits their potential to further reduce model sizes.",
"We present two novel ideas to improve the effectiveness of knowledge distillation, in particular for BERT, with the focus on bringing down model sizes to as much as a few mega-bytes.",
"Our model is among the first to propose to use a significantly smaller vocabulary for the student model learned during distillation.",
"In addition, instead of distilling solely on the teacher model's final-layer outputs, our model leverages layer-wise teacher model parameters to directly optimize the parameters of the corresponding layers in the student model.",
"Specifically, our contributions are:",
"• Dual Training: Our teacher and student models have different vocabularies and incompatible tokenizations for the same sequence.",
"To address this during distillation, we feed the teacher model a mix of teacher vocabulary-tokenized and student vocabulary-tokenized words within a single sequence.",
"Coupled with the masked language modeling task, this encourages an implicit alignment of the teacher and student WordPiece embeddings, since the student vocabulary embedding may be used as context to predict a word tokenized by the teacher vocabulary and vice versa.",
"• Shared Variable Projections: To minimize the loss of information from reducing the hidden state dimension, we introduce a separate loss to align the teacher and student models' trainable variables.",
"This allows for more direct layer-wise transfer of knowledge to the student model.",
"Using the combination of dual training and shared variable projections, we train a 12-layer highlycompressed student BERT model, achieving a maximum compression ratio of ∼61.94x (with 48 dimension size) compared to the teacher BERT BASE model.",
"We conduct experiments for measuring both generalized language modeling perspective and for downstream tasks, demonstrating competitive performance with high compression ratios for both families of tasks.",
"Shared projections and model performance: We see that for downstream task performance, dual training still consistently improves upon the direct fine-tuning approach for virtually all experiments.",
"The effect of shared variable projection, however, is less pronounced, with consistent improvements visible only for MRPC and for the 48-dimensional models i.e. the smallest dataset and models respectively in our experiments.",
"This aligns with our intuition for variable projection as a more direct way to provide a training signal from the teacher model internals, which can assume more importance for a low-data or small-model scenario.",
"However, for larger models and more data, the linear projection of parameters may be reducing the degrees of freedom available to the model, since linear projection is a fairly simple function to align the teacher and student parameter spaces.",
"A related comparison of interest is between up-projection and down-projection of the model variables: we note up-projection does visibly better on the language modeling task and slightly better on the downstream tasks.",
"The parameters of a well-trained teacher model represent a high-quality local minimum in the teacher space, which may be easier to search for during up-projection.",
"Vocabulary size tradeoffs: Issues with input vocabulary size are peculiar to problems in natural language processing: they do not always apply to other areas such as computer vision, where a small fixed number of symbols can encode most inputs.",
"There has been some work on reducing input vocabulary sizes for NLP, but typically not targeting model compression.",
"One concern with reducing the vocabularies of NLP models is it pushes the average tokenized sequence lengths up, making model training harder.",
"In this work, however, we consider classification tasks on shorter texts, which are not as affected by input sequence lengths as, say, tasks such as machine translation are.",
"Furthermore, many real-world applications revolve around short text inputs, which is why a better trade-off between vocabulary size and sequence lengths may be worthwhile for such applications.",
"Order of distillation and fine-tuning: Most of the existing work on distilling language models such as BERT and reporting results on downstream tasks, including some of the baselines in this work, first fine-tune a teacher model on the downstream tasks, and then distill this model.",
"Our goal in this work, however, is to explore the limits to which BERT's language modeling capacity itself, and how much of it is driven by its large WordPiece vocabulary.",
"We leave experiments on distilling fine-tuned teacher models, potentially yielding better results on downstream tasks, to future work.",
"We proposed two novel ideas to improve the effectiveness of knowledge distillation for BERT, focusing on using a significantly smaller vocabulary, as well as smaller embedding and hidden dimensions for the student BERT language models.",
"Our dual training mechanism encourages implicit alignment of the teacher and student WordPiece embeddings, and shared variable projection allows for the faster and direct layer-wise transfer of knowledge to the student BERT model.",
"Combining the two techniques, we trained a series of highly-compressed 12-layer student BERT models.",
"Experiments on these models, to evaluate both generalized language perspective and four standardized downstream tasks, demonstrate the effectiveness of our proposed methods on both model accuracy and compression efficiency.",
"One future direction of interest is to combine our approach with existing work to reduce the number of layers in the student models and explore other approaches such as low-rank matrix factorization to transfer model parameters from the teacher space to the student space.",
"In addition, taking into account the frequency distribution of the WordPiece tokens while training embeddings may help optimize the model size further."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17777776718139648,
0.052631575614213943,
0.04651162400841713,
0.2978723347187042,
0.3181818127632141,
0.1904761791229248,
0.19999998807907104,
0.20408162474632263,
0.1666666567325592,
0.09836065024137497,
0.072727270424366,
0.038461532443761826,
0.072727270424366,
0.08888888359069824,
0.11764705181121826,
0.04444443807005882,
0.178571417927742,
0,
0.16326530277729034,
0.10256409645080566,
0.04444443807005882,
0,
0.2631579041481018,
0.09756097197532654,
0.145454540848732,
0.0833333283662796,
0.05882352590560913,
0.14814814925193787,
0.1818181723356247,
0.17391303181648254,
0.11999999731779099,
0.07843136787414551,
0.11538460850715637,
0.04347825422883034,
0,
0.03448275476694107,
0,
0.1904761791229248,
0.04347825422883034,
0.04255318641662598,
0.14814814925193787,
0.08163265138864517,
0.052631575614213943,
0.26923075318336487,
0.1666666567325592,
0.17142856121063232,
0.04255318641662598,
0.1428571343421936,
0.04878048226237297
] | S1x6ueSKPr | true | [
"We present novel distillation techniques that enable training student models with different vocabularies and compress BERT by 60x with minor performance drop."
] |
[
"Human reasoning involves recognising common underlying principles across many examples by utilising variables.",
"The by-products of such reasoning are invariants that capture patterns across examples such as \"if someone went somewhere then they are there\" without mentioning specific people or places.",
"Humans learn what variables are and how to use them at a young age, and the question this paper addresses is whether machines can also learn and use variables solely from examples without requiring human pre-engineering.",
"We propose Unification Networks that incorporate soft unification into neural networks to learn variables and by doing so lift examples into invariants that can then be used to solve a given task.",
"We evaluate our approach on four datasets to demonstrate that learning invariants captures patterns in the data and can improve performance over baselines.",
"Humans have the ability to process symbolic knowledge and maintain symbolic thought (Unger & Deacon, 1998) .",
"When reasoning, humans do not require combinatorial enumeration of examples but instead utilise invariant patterns with placeholders replacing specific entities.",
"Symbolic cognitive models (Lewis, 1999) embrace this perspective with the human mind seen as an information processing system operating on formal symbols such as reading a stream of tokens in natural language.",
"The language of thought hypothesis (Morton & Fodor, 1978) frames human thought as a structural construct with varying sub-components such as \"X went to Y\".",
"By recognising what varies across examples, humans are capable of lifting examples into invariant principles that account for other instances.",
"This symbolic thought with variables is learned at a young age through symbolic play (Piaget, 2001 ).",
"For instance a child learns that a sword can be substituted with a stick (Frost et al., 2004) and engage in pretend play.",
"Although variables are inherent in models of computation and symbolic formalisms, as in first-order logic (Russell & Norvig, 2016) , they are pre-engineered and used to solve specific tasks by means of unification or assignments that bound variables to given values.",
"However, when learning from data only, being able to recognise when and which symbols should take on different values, i.e. symbols that can act as variables, is crucial for lifting examples into general principles that are invariant across multiple instances.",
"Figure 1 shows the invariant learned by our approach: if someone is the same thing as someone else then they have the same colour.",
"With this invariant, our approach can solve all of the training and test examples in task 16 of the bAbI dataset (Weston et al., 2016) .",
"In this paper we address the question of whether a machine can learn and use the notion of a variable, i.e. a symbol that can take on different values.",
"For instance, given an example of the form \"bernhard is a frog\" the machine would learn that the token \"bernhard\" could be someone else and the token \"frog\" could be something else.",
"If we consider unification a selection of the most appropriate value for a variable given a choice of values, we can reframe it as a form of attention.",
"Attention models (Bahdanau et al., 2015; Luong et al., 2015; Chaudhari et al., 2019) allow neural networks to focus, attend to certain parts of the input often for the purpose of selecting a relevant portion.",
"Since attention mechanisms are also differentiable they are often jointly learned within a task.",
"This perspective motivates our idea of a unification mechanism that utilises attention and is therefore fully differentiable which we refer to as soft unification.",
"Hence, we propose an end-to-end differentiable neural network approach for learning and utilising the notion of a variable that in return can lift examples into invariants used by the network to perform reasoning tasks.",
"Specifically, we",
"(i) propose a novel architecture capable of learning and using variables by lifting a given example through soft unification,",
"(ii) present the empirical results of our approach on four datasets and",
"(iii) analyse the learned invariants that capture the underlying patterns present in the tasks.",
"Our implementation using Chainer (Tokui et al., 2015) is publicly available at [link removed](anonymous link provided with submission).",
"We presented a new approach for learning variables and lifting examples into invariants through the usage of soft unification.",
"Evaluating on four datasets, we analysed how Unification Networks perform comparatively to existing similar architectures while having the benefit of lifting examples into invariants that capture underlying patterns present in the tasks.",
"Since our approach is end-toend differentiable, we plan to apply this technique to multi-modal tasks in order to yield multi-modal invariants for example in visual question answering.",
"A MODEL DETAILS"
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1875,
0.4888888895511627,
0.11999999731779099,
0.1249999925494194,
0.0476190410554409,
0,
0.20512819290161133,
0.1599999964237213,
0.2380952388048172,
0.25641024112701416,
0.11428570747375488,
0.04878048226237297,
0.18518517911434174,
0.21052631735801697,
0.307692289352417,
0.09302324801683426,
0.045454539358615875,
0.09090908616781235,
0.09756097197532654,
0.04255318641662598,
0.1249999925494194,
0.0952380895614624,
0.11764705181121826,
0.1621621549129486,
0.06451612710952759,
0,
0.052631575614213943,
0.21052631735801697,
0.07999999821186066,
0,
0
] | r1xwA34KDB | true | [
"End-to-end learning of invariant representations with variables across examples such as if someone went somewhere then they are there."
] |
[
"We propose a new learning-based approach to solve ill-posed inverse problems in imaging.",
"We address the case where ground truth training samples are rare and the problem is severely ill-posed---both because of the underlying physics and because we can only get few measurements.",
"This setting is common in geophysical imaging and remote sensing.",
"We show that in this case the common approach to directly learn the mapping from the measured data to the reconstruction becomes unstable.",
"Instead, we propose to first learn an ensemble of simpler mappings from the data to projections of the unknown image into random piecewise-constant subspaces.",
"We then combine the projections to form a final reconstruction by solving a deconvolution-like problem.",
"We show experimentally that the proposed method is more robust to measurement noise and corruptions not seen during training than a directly learned inverse.",
"A variety of imaging inverse problems can be discretized to a linear system y = Ax + η where y ∈ R M is the measured data, A ∈ R M ×N is the imaging or forward operator, x ∈ X ⊂ R N is the object being probed by applying A (often called the model), and η is the noise.",
"Depending on the application, the set of plausible reconstructions X could model natural, seismic, or biomedical images.",
"In many cases the resulting inverse problem is ill-posed, either because of the poor conditioning of A (a consequence of the underlying physics) or because M N .A",
"classical approach to solve ill-posed inverse problems is to minimize an objective functional regularized via a certain norm (e.g. 1 , 2 , total variation (TV) seminorm) of the model. These",
"methods promote general properties such as sparsity or smoothness of reconstructions, sometimes in combination with learned synthesis or analysis operators, or dictionaries BID44 ).In this",
"paper, we address situations with very sparse measurement data (M N ) so that even a coarse reconstruction of the unknown model is hard to get with traditional regularization schemes. Unlike",
"artifact-removal scenarios where applying a regularized pseudoinverse of the imaging operator already brings out considerable structure, we look at applications where standard techniques cannot produce a reasonable image (Figure 1 ). This highly",
"unresolved regime is common in geophysics and requires alternative, more involved strategies BID12 ).An appealing",
"alternative to classical regularizers is to use deep neural networks. For example,",
"generative models (GANs) based on neural networks have recently achieved impressive results in regularization of inverse problems BID7 , BID29 ). However, a difficulty",
"in geophysical applications is that there are very few examples of ground truth models available for training (sometimes none at all). Since GANs require many",
", they cannot be applied to such problems. This suggests to look",
"for methods that are not very sensitive to the training dataset. Conversely, it means",
"that the sought reconstructions are less detailed than what is expected in data-rich settings; for Figure 1 : We reconstruct an image x from its tomographic measurements. In moderately ill-posed",
"problems, conventional methods based on the pseudoinverse and regularized non-negative least squares (x ∈ [0, 1] N , N is image dimension) give correct structural information. In fact, total variation",
"(TV) approaches give very good results. A neural network BID23 )",
"can be trained to directly invert and remove the artifacts (NN). In a severely ill-posed",
"problem on the other hand (explained in FIG2 ) with insufficient ground truth training data, neither the classical techniques nor a neural network recover salient geometric features.an example, see the reconstructions of the Tibetan plateau BID51 ).In this paper, we propose",
"a two-stage method to solve ill-posed inverse problems using random low-dimensional projections and convolutional neural networks. We first decompose the inverse",
"problem into a collection of simpler learning problems of estimating projections into random (but structured) low-dimensional subspaces of piecewise-constant images. Each projection is easier to learn",
"in terms of generalization error BID10 ) thanks to its lower Lipschitz constant.In the second stage, we solve a new linear inverse problem that combines the estimates from the different subspaces. We show that this converts the original",
"problem with possibly non-local (often tomographic) measurements into an inverse problem with localized measurements, and that in fact, in expectation over random subspaces the problem becomes a deconvolution. Intuitively, projecting into piecewise-constant",
"subspaces is equivalent to estimating local averages-a simpler problem than estimating individual pixel values. Combining the local estimates lets us recover the",
"underlying structure. We believe that this technique is of independent",
"interest in addressing inverse problems.We test our method on linearized seismic traveltime tomography BID8 BID20 ) with sparse measurements and show that it outperforms learned direct inversion in quality of achieved reconstructions, robustness to measurement errors, and (in)sensitivity to the training data. The latter is essential in domains with insufficient",
"ground truth images.",
"We proposed a new approach to regularize ill-posed inverse problems in imaging, the key idea being to decompose an unstable inverse mapping into a collection of stable mappings which only estimate Figure 7 : Reconstructions on checkerboards and x-rays with 10dB measurement SNR tested on 10dB trained networks.",
"Red annotations highlight where the direct net fails to reconstruct correct geometry.",
"low-dimensional projections of the model.",
"By using piecewise-constant Delaunay subspaces, we showed that the projections can indeed be accurately estimated.",
"Combining the projections leads to a deconvolution-like problem.",
"Compared to directly learning the inverse map, our method is more robust against noise and corruptions.",
"We also showed that regularizing via projections allows our method to generalize across training datasets.",
"Our reconstructions are better both quantitatively in terms of SNR and qualitatively in the sense that they estimate correct geometric features even when measurements are corrupted in ways not seen at training time.",
"Future work involves getting precise estimates of Lipschitz constants for various inverse problems, regularizing the reformulated problem using modern regularizers BID46 ), studying extensions to non-linear problems and developing concentration bounds for the equivalent convolution kernel."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.29411762952804565,
0.21276594698429108,
0,
0.09999999403953552,
0.2857142686843872,
0.22857142984867096,
0.13333332538604736,
0.1538461446762085,
0.1621621549129486,
0.1395348757505417,
0.31372547149658203,
0.08888888359069824,
0.15686273574829102,
0.07692307233810425,
0,
0,
0.13333332538604736,
0.17391303181648254,
0.0624999962747097,
0.05714285373687744,
0.15686273574829102,
0.03999999538064003,
0,
0.1111111044883728,
0.19999998807907104,
0.39024388790130615,
0.22727271914482117,
0.178571417927742,
0.2083333283662796,
0.09999999403953552,
0.12903225421905518,
0.17910447716712952,
0.1666666716337204,
0.25,
0.060606054961681366,
0.3076923191547394,
0.1111111044883728,
0.13793103396892548,
0.10810810327529907,
0.1111111044883728,
0.07843136787414551,
0.145454540848732
] | HyGcghRct7 | true | [
"We solve ill-posed inverse problems with scarce ground truth examples by estimating an ensemble of random projections of the model instead of the model itself."
] |
[
"Designing architectures for deep neural networks requires expert knowledge and substantial computation time.",
"We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture.",
"By comparing the relative validation performance of networks with HyperNet-generated weights, we can effectively search over a wide range of architectures at the cost of a single training run.",
"To facilitate this search, we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as special cases.",
"We validate our method (SMASH) on CIFAR-10 and CIFAR-100, STL-10, ModelNet10, and Imagenet32x32, achieving competitive performance with similarly-sized hand-designed networks.",
"The high performance of deep neural nets is tempered by the cost of extensive engineering and validation to find the best architecture for a given problem.",
"High-level design decisions such as depth, units per layer, and layer connectivity are not always obvious, and the success of models such as Inception (Szegedy et al., 2016) , ResNets BID12 , FractalNets BID18 and DenseNets BID14 demonstrates the benefits of intricate design patterns.",
"Even with expert knowledge, determining which design elements to weave together requires ample experimentation.In this work, we propose to bypass the expensive procedure of fully training candidate models by instead training an auxiliary model, a HyperNet BID11 , to dynamically generate the weights of a main model with variable architecture.",
"Though these generated weights are worse than freely learned weights for a fixed architecture, we leverage the observation BID19 ) that the relative performance of different networks early in training (i.e. some distance from the eventual optimum) often provides a meaningful indication of performance at optimality.",
"By comparing validation performance for a set of architectures using generated weights, we can approximately rank numerous architectures at the cost of a single training run.To facilitate this search, we develop a flexible scheme based on memory read-writes that allows us to define a diverse range of architectures, with ResNets, DenseNets, and FractalNets as special cases.",
"We validate our one-Shot Model Architecture Search through HyperNetworks (SMASH) for Convolutional Neural Networks (CNN) on CIFAR-10 and CIFAR-100 (Krizhevsky and Hinton, 2009), Imagenet32x32 BID6 , ModelNet10 (Wu et al., 2015) , and STL-10 BID7 , achieving competitive performance with similarly-sized hand-designed networks.",
"In this work, we explore a technique for accelerating architecture selection by learning a model over network parameters, conditioned on the network's parametric form.",
"We introduce a flexible scheme for defining network connectivity patterns and generating network weights for highly variable architectures.",
"Our results demonstrate a correlation between performance using suboptimal weights generated by the auxiliary model and performance using fully-trained weights, indicating that we can efficiently explore the architectural design space through this proxy model.",
"Our method achieves competitive, though not state-of-the-art performance on several datasets."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12903225421905518,
0.3414634168148041,
0.1395348757505417,
0.039215680211782455,
0,
0.2857142686843872,
0.07407406717538834,
0.25806450843811035,
0.16949151456356049,
0.11764705181121826,
0.035087715834379196,
0.3414634168148041,
0.11764705181121826,
0.1249999925494194,
0
] | rydeCEhs- | true | [
"A technique for accelerating neural architecture selection by approximating the weights of each candidate architecture instead of training them individually."
] |
[
"Textual entailment (or NLI) data has proven useful as pretraining data for tasks requiring language understanding, even when building on an already-pretrained model like RoBERTa.",
"The standard protocol for collecting NLI was not designed for the creation of pretraining data, and it is likely far from ideal for this purpose.",
"With this application in mind we propose four alternative protocols, each aimed at improving either the ease with which annotators can produce sound training examples or the quality and diversity of those examples.",
"Using these alternatives and a simple MNLIbased baseline, we collect and compare five new 9k-example training sets.",
"Our primary results are largely negative, with none of these new methods showing major improvements in transfer learning.",
"However, we make several observations that should inform future work on NLI data, such as that the use of automatically provided seed sentences for inspiration improves the quality of the resulting data on most measures, and all of the interventions we investigated dramatically reduce previously observed issues with annotation artifacts.",
"The task of natural language inference (NLI; also known as textual entailment) has been widely used as an evaluation task when developing new methods for language understanding tasks, but it has recently become clear that high-quality NLI data can be useful in transfer learning as well.",
"Several recent papers have shown that training large neural network models on natural language inference data, then fine-tuning them for other language understanding tasks often yields substantially better results on those target tasks (Conneau et al., 2017; Subramanian et al., 2018) .",
"This result holds even when starting from large models like BERT (Devlin et al., 2019) that have already been pretrained extensively on unlabeled data (Phang et al., 2018; Clark et al., 2019; Liu et al., 2019b) .",
"The largest general-purpose corpus for NLI, and the one that has proven most successful in this setting, is the Multi-Genre NLI Corpus (MNLI Williams et al., 2018) .",
"MNLI was designed for use in a benchmark task, rather than as a resource for use in transfer learning and as far as we know, it was not developed on the basis of any kind of deliberate experimentation.",
"Further, data collected under MNLI's data collection protocol has known issues with annotation artifacts which make it possible to perform much better than chance using only one of the sentences in each pair (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018) .",
"This work begins to ask what would be involved in collecting a similar dataset that is explicitly designed with transfer learning in mind.",
"In particular, we consider four potential changes to the original MNLI data collection protocol that are designed to improve either the ease with which annotators can produce sound examples, or the quality and diversity of those examples, and evaluate their effects on transfer.",
"We collect a baseline dataset of about 10k examples that follows the MNLI protocol with our annotator pool, followed by four additional datasets of the same size which isolate each of our candidate changes.",
"We then compare all five in a set of transfer learning experiments that look at our ability to use each of these datasets to improve performance on the eight downstream language understanding tasks in the SuperGLUE (Wang et al., 2019b) benchmark.",
"All five of our datasets are consistent with the task definition that was used in MNLI, which is in turn based on the definition introduced by .",
"In this task, each example consists of a pair of short texts, called the premise and the hypothesis.",
"The model is asked to read both texts and make a three-way classification decision: Given the premise, would a reasonable person infer that hypothesis must be true (entailment), infer that that it must be false (contradiction), or decide that there is not enough information to make either inference (neutral).",
"While it is certainly not clear that this framing is optimal for pretraining, we leave a more broad-based exploration of task definitions for future work.",
"Our BASE data collection protocol ( Figure 1 ) follows MNLI closely in asking annotators to read a premise sentence and then write three corresponding hypothesis sentences in empty text boxes corresponding to the three different labels (entailment, contradiction, and neutral).",
"When an annotator follows this protocol, they produce three sentence pairs at once, all sharing a single premise.",
"Our PARAGRAPH protocol tests the effect of supplying annotators with complete paragraphs, rather than sentences, as premises.",
"Longer texts offer the potential for discourse-level inferences, the addition of which should yield a dataset which is more difficult, more diverse, and less likely to contain trivial artifacts.",
"However, reading full paragraphs adds a potential cost in added annotator time and effort, which could potentially be better spent constructing more sentence-level examples.",
"Our EDITPREMISE and EDITOTHER protocols test the effect of pre-filling a single seed text in each of the three text boxes that annotators are asked to fill out.",
"By reducing the raw amount of typing required, this could allow annotators to produce good examples more quickly.",
"By encouraging them to keep the three sentences similar, it could also encourage minimal-pair-like examples that minimize artifacts.",
"We test two variants of this idea: One uses a copy of the premise sentence as a seed text and the second retrieves a new sentence from an existing corpus that is similar to the premise sentence, and uses that.",
"Our CONTRAST protocol tests the effect of adding artificial constraints on the kinds of hypothesis sentences annotators can write.",
"Giving annotators difficult and varying constraints could encourage creativity and prevent annotators from falling into repeating ruts or patterns in their writing that could lead to easier, more repetitive data.",
"However, as with the use of longer contexts in BASE, this protocol risks substantially slowing the annotation process.",
"We experiment with a procedure inspired by that used to create the language-andvision dataset NLVR2 (Suhr et al., 2019) , in which in which annotators must write sentences that are valid entailments (or contradictions) for a given premise, but not valid entailments for a second, similar, distractor premise.",
"In evaluations on transfer learning with the SuperGLUE benchmark, all of these four methods offer substantial improvements in transfer ability over a plain RoBERTa model, but that only EDITOTHER and CONTRAST offering consistent improvements over BASE, and only by very small margins.",
"While this is largely a negative result for our primary focus on transfer, we also observe that all four of these methods are able to produce data of comparable subjective quality while significantly reducing the incidence of previously reported annotation artifacts, and that PARAGRAPH, EDITPREMISE, and EDITOTHER all accomplish this without significantly increasing the time cost of annotation.",
"Our chief results on transfer learning are negative: None of our four interventions consistently improve upon the base MNLI data collection protocol by more than a marginal degree, though we see suggestive evidence that methods that supply annotators with retrieved non-premise seed sentences for inspiration offer small improvements.",
"However, we also observe that all four of our interventions, and especially the use of longer contexts or pre-filled seed sentences, help reduce the prevalence of artifacts in the generated hypotheses that reveal the label, and the use of longer premises or seed sentences in particular do this without increasing the time cost of annotation.",
"This suggests that these methods may be valuable in the collection of high-quality evaluation data, if combined with additional validation methods to ensure high human agreement with the collected labels.",
"The need and opportunity that motivated this work remains compelling: Human-annotated data like MNLI has already proven itself as a valuable tool in teaching machines general-purpose skils for language understanding, and discovering ways to more effectively build and use such data could further accelerate the field's already fast progress toward robust, general-purpose language understanding technologies.",
"Further work along this line of research could productively follow a number of directions: General work on incentive structures and task design for crowdsourcing could help to address more general questions about how to collect data that is simultaneously creative and consistently labeled.",
"Machine learning methods work on transfer learning could help to better understand and exploit the effects that drive the successes we have seen with NLI data so far.",
"Finally, there remains room for further empirical work investigating the kinds of task definitions and data collection protocols most likely to yield positive transfer."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1395348757505417,
0.2380952388048172,
0.11999999731779099,
0.05714285373687744,
0.10810810327529907,
0.2950819730758667,
0.1666666567325592,
0.0357142798602581,
0.03999999538064003,
0.04347825422883034,
0.08163265138864517,
0.1355932205915451,
0.04878048226237297,
0.10526315122842789,
0.12244897335767746,
0.10526315122842789,
0.0476190410554409,
0.05714285373687744,
0,
0.0476190410554409,
0.0363636314868927,
0.05405404791235924,
0.1111111044883728,
0.08888888359069824,
0,
0.045454539358615875,
0.05405404791235924,
0.05405404791235924,
0.16326530277729034,
0.0555555522441864,
0.04347825422883034,
0.1666666567325592,
0.03389830142259598,
0.1071428507566452,
0.1492537260055542,
0.09090908616781235,
0.24137930572032928,
0.08695651590824127,
0.0882352888584137,
0.10526315122842789,
0.13333332538604736,
0.09302324801683426
] | rc8gt9r_TTB | true | [
"We propose four new ways of collecting NLI data. Some help slightly as pretraining data, all help reduce annotation artifacts."
] |
[
"Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging.",
"Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images requires the predictive model to build an intricate understanding of the natural world.",
"Many existing methods tackle this problem by making simplifying assumptions about the environment.",
"One common assumption is that the outcome is deterministic and there is only one plausible future.",
"This can lead to low-quality predictions in real-world settings with stochastic dynamics.",
"In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables.",
"To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world video.",
"We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned.",
"We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods.",
"Our SV2P implementation will be open sourced upon publication.",
"Understanding the interaction dynamics of objects and predicting what happens next is one of the key capabilities of humans which we heavily rely on to make decisions in everyday life BID3 .",
"A model that can accurately predict future observations of complex sensory modalities such as vision must internally represent the complex dynamics of real-world objects and people, and therefore is more likely to acquire a representation that can be used for a variety of visual perception tasks, such as object tracking and action recognition BID31 BID25 BID7 .",
"Furthermore, such models can be inherently useful themselves, for example, to allow an autonomous agent or robot to decide how to interact with the world to bring about a desired outcome BID27 .However",
", modeling future distributions over images is a challenging task, given the high dimensionality of the data and the complex dynamics of the environment. Hence,",
"it is common to make various simplifying assumptions. One particularly",
"common assumption is that the environment is deterministic and that there is only one possible future BID5 BID31 BID1 BID25 . Models conditioned",
"on the actions of an agent frequently make this assumption, since the world is more deterministic in these settings BID27 BID10 . However, most real-world",
"prediction tasks, including the action-conditioned settings, are in fact not deterministic, and a deterministic model can lose many of the nuances that are present in real physical interactions. Given the stochastic nature",
"of video prediction, any deterministic model is obliged to predict a statistic of all the possible outcomes. For example, deterministic",
"models trained with a mean squared error loss function generate the expected value of all the possibilities for each pixel independently, which is inherently blurry BID26 . Figure 1: Importance of stochasticity",
"in video prediction. In each video, a random shape follows",
"a random direction (first row). Given only the first frame, the deterministic",
"model from BID10 predicts the average of all the possibilities. The third row is the output of SV2P with latent",
"sampled from approximated posterior which predicts the correct motion. Last two rows are stochastic outcomes using random",
"latent values sampled from assumed prior. As observed, these outcomes are random but within",
"the range of possible futures. Second sample of Figure 1c shows a case where the",
"model predicts the average of more than one outcome.Our main contribution in this paper is a stochastic variational method for video prediction, named SV2P, that predicts a different plausible future for each sample of its latent random variables. We also provide a stable training procedure for training",
"a neural network based implementation of this method. To the extent of our knowledge, SV2P is the first latent",
"variable model to successfully predict multiple frames in real-world settings. Our model also supports action-conditioned predictions,",
"while still being able to predict stochastic outcomes of ambiguous actions, as exemplified in our experiments. We evaluate SV2P on multiple real-world video datasets,",
"as well as a carefully designed toy dataset that highlights the importance of stochasticity in video prediction (see Figure 1 ). In both our qualitative and quantitative comparisons, SV2P",
"produces substantially improved video predictions when compared to the same model without stochasticity, with respect to standard metrics such as PSNR and SSIM. The stochastic nature of SV2P is most apparent when viewing",
"the predicted videos. Therefore, we highly encourage the reader to check the project",
"website https://goo.gl/iywUHc to view the actual videos of the experiments. The TensorFlow BID0 implementation of this project will be open",
"sourced upon publication.",
"We proposed stochastic variational video prediction (SV2P), an approach for multi-step video prediction based on variational inference.",
"Our primary contributions include an effective stochastic prediction method with latent variables, a network architecture that succeeds on natural videos, and a training procedure that provides for stable optimization.",
"The source code for our method will be released upon acceptance.",
"We evaluated our proposed method on three real-world datasets in actionconditioned and action-free settings, as well as one toy dataset which has been carefully designed to highlight the importance of the stochasticity in video prediction.",
"Both qualitative and quantitative results indicate higher quality predictions compared to other deterministic and stochastic baselines.SV2P can be expanded in numerous ways.",
"First, the current inference network design is fully convolutional, which exposes multiple limitations, such as unmodeled spatial correlations between the latent variables.",
"The model could be improved by incorporating the spatial correlation induced by the convolutions into the prior, using a learned structured prior in place of the standard spherical Gaussian.",
"Time-variant posterior approximation to reflect the new information that is revealed as the video progresses, is another possible SV2P improvement.",
"However, as discussed in Section 3, this requires incentivizing the inference network to incorporate the latent information at training time.",
"This would allow time-variant latent distributions which is more aligned with generative neural models for time-series BID19 BID12 BID22 .Another",
"exciting direction for future research would be to study how stochastic predictions can be used to act in the real world, producing model-based reinforcement learning methods that can execute risk-sensitive behaviors from raw image observations. Accounting",
"for risk in this way could be especially important in safety-critical settings, such as robotics. In lack of",
"actions and therefore high stochasticity, BID10 only blurs the robotic arm out while the proposed method predicts sharper frames on each sampling. SV2P also",
"predicts the interaction dynamics between random movements of the arm and the objects. BID10 . This",
"is mostly",
"evident in zoomed in objects which have been pushed by the arm. Figure 10: Prediction",
"results on the action-free Human3.6M dataset. SV2P predicts a different",
"outcome on each sampling given the latent. In the left example, the",
"model predicts walking as well as stopping which result in different outputs in predicted future frames. Similarly, the right example",
"demonstrates various outcomes including spinning. BID29 with SV2P on the robotic",
"pushing dataset. We use the same best PSNR out",
"of 100 random samples for both methods. Besides stochastic movements",
"of the pushed objects, another source of stochasticity is the starting lag in movements of the robotic arm. SV2P generates sharper images",
"compared to BID10 (notice the pushed objects in zoomed images) with less noise compared to BID29 (look at the accumulated noise in later frames).A TRAINING DETAILS FIG2 contains",
"details of the network architectures used as generative and inference models. In all of the experiments we used",
"the same set of hyper-parameters which can be found in TAB1 . In the first step of training, we",
"disable the inference network and instead sample latent values from N (0, I). In step 2, the latent values will",
"be sampled from the approximated posterior q φ (z|x 0:T ) = N µ(x 0:T ), σ(x 0:T ) . Please note that the inference network",
"approximates log(σ) instead of σ for numerical stability. To gradually switch from Step 2 of training",
"procedure to Step 3, we increase β linearly from its starting value to its end value over the length of training."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1666666567325592,
0,
0,
0,
0.31578946113586426,
0.19354838132858276,
0.23999999463558197,
0.1428571343421936,
0.12903225421905518,
0,
0.05714285373687744,
0.037735845893621445,
0,
0,
0,
0,
0.20000000298023224,
0.11428570747375488,
0.07999999821186066,
0,
0.3529411852359772,
0,
0,
0,
0,
0,
0.125,
0,
0.27272728085517883,
0.19354838132858276,
0.17142856121063232,
0.052631575614213943,
0,
0,
0,
0.2857142686843872,
0.05882352590560913,
0,
0.20512820780277252,
0.06896551698446274,
0,
0.0624999962747097,
0.07999999821186066,
0.07692307233810425,
0,
0.04999999701976776,
0.0833333283662796,
0,
0,
0.0952380895614624,
0,
0,
0.07999999821186066,
0,
0,
0,
0.07999999821186066,
0.06451612710952759,
0,
0.08695651590824127,
0,
0,
0,
0
] | rk49Mg-CW | true | [
"Stochastic variational video prediction in real-world settings."
] |
[
"In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents.",
"However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures. \n",
"In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents’ policies, which can recover agents' policies that can regenerate similar interactions.",
"Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution.",
"Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods.",
"Our code is available at \\url{https://github.com/apexrl/CoDAIL}.",
"Modeling complex interactions among intelligent agents from the real world is essential for understanding and creating intelligent multi-agent behaviors, which is typically formulated as a multiagent learning (MAL) problem in multi-agent systems.",
"When the system dynamics are agnostic and non-stationary due to the adaptive agents with implicit goals, multi-agent reinforcement learning (MARL) is the most commonly used technique for MAL.",
"MARL has recently drawn much attention and achieved impressive progress on various non-trivial tasks, such as multi-player strategy games (OpenAI, 2018; Jaderberg et al., 2018) , traffic light control (Chu et al., 2019) , taxi-order dispatching etc.",
"A central challenge in MARL is to specify a good learning goal, as the agents' rewards are correlated and thus cannot be maximized independently (Bu et al., 2008) .",
"Without explicit access to the reward signals, imitation learning could be the most intuitive solution for learning good policies directly from demonstrations.",
"Conventional solutions such as behavior cloning (BC) (Pomerleau, 1991) learn the policy in a supervised manner by requiring numerous data while suffering from compounding error (Ross & Bagnell, 2010; Ross et al., 2011) .",
"Inverse reinforcement learning (IRL) (Ng et al., 2000; Russell, 1998) alleviates these shortcomings by recovering a reward function but is always expensive to obtain the optimal policy due to the forward reinforcement learning procedure in an inner loop.",
"Generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016 ) leaves a better candidate for its model-free structure without compounding error, which is highly effective and scalable.",
"However, real-world multi-agent interactions could be much challenging to imitate because of the strong correlations among adaptive agents' policies and rewards.",
"Consider if a football coach wants to win the league, he must make targeted tactics against various opponents, in addition to the situation of his team.",
"Moreover, the multi-agent environment tends to give rise to more severe compounding errors with more expensive running costs.",
"Motivated by these challenges, we investigate the problem of modeling complicated multi-agent interactions from a pile of off-line demonstrations and recover their on-line policies, which can regenerate analogous multi-agent behaviors.",
"Prior studies for multi-agent imitation learning typically limit the complexity in demonstrated interactions by assuming isolated reward structures (Barrett et al., 2017; Le et al., 2017; Lin et al., 2014; Waugh et al., 2013) and independence in per-agent policies that overlook the high correlations among agents (Song et al., 2018; Yu et al., 2019) .",
"In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with correlated policies by approximating opponents' policies, in order to reach inaccessible opponents' actions due to concurrently execution of actions among agents when making decisions.",
"Consequently, with approximated opponents model, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL) suitable for learning correlated policies under our proposed framework, which allows for decentralized training and execution.",
"We prove that our framework treats the demonstrator interactions as one of -Nash Equilibrium ( -NE) solutions under the recovered reward.",
"In experiments, we conduct multi-dimensional comparisons for both the reward gap between learned agents and demonstrators, along with the distribution divergence between demonstrations and regenerated interacted trajectories from learned policies.",
"Furthermore, the results reveal that CoDAIL can better recover correlated multi-agent policy interactions than other state-of-the-art multi-agent imitation learning methods in several multi-agent scenarios.",
"We further illustrate the distributions of regenerated interactions, which indicates that CoDAIL yields the closest interaction behaviors to the demonstrators.",
"In this paper, we focus on modeling complex multi-agent interactions via imitation learning on demonstration data.",
"We develop a decentralized adversarial imitation learning algorithm with correlated policies (CoDAIL) with approximated opponents modeling.",
"CoDAIL allows for decentralized training and execution and is more capable of modeling correlated interactions from demonstrations shown by multi-dimensional comparisons against other state-of-the-art multi-agent imitation learning methods on several experiment scenarios.",
"In the future, we will consider covering more imitation learning tasks and modeling the latent variables of policies for diverse multi-agent imitation learning."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1249999925494194,
0.2926829159259796,
0.5714285373687744,
0.10526315122842789,
0.25641024112701416,
0,
0.21276594698429108,
0.13636362552642822,
0,
0.08510638028383255,
0.21052631735801697,
0.038461532443761826,
0.07547169178724289,
0.08888888359069824,
0.20512819290161133,
0.0476190410554409,
0.11764705181121826,
0.21739129722118378,
0.20000000298023224,
0.4363636374473572,
0.2083333283662796,
0.21052631735801697,
0.09090908616781235,
0.25,
0.0555555522441864,
0.3636363446712494,
0.3636363446712494,
0.3265306055545807,
0.31578946113586426
] | B1gZV1HYvS | true | [
"Modeling complex multi-agent interactions under multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents’ policies. "
] |
[
"Plagiarism and text reuse become more available with the Internet development.",
"Therefore it is important to check scientific papers for the fact of cheating, especially in Academia.",
"Existing systems of plagiarism detection show the good performance and have a huge source databases.",
"Thus now it is not enough just to copy the text as is from the source document to get the original work.",
"Therefore, another type of plagiarism become popular -- cross-lingual plagiarism.",
"We present a CrossLang system for such kind of plagiarism detection for English-Russian language pair.",
"We introduced CrossLang -a framework for cross-lingual plagiarism detection for English Russian language pair.",
"We decomposed the problem of cross-lingual plagiarism detection into several stages and provide a service, consists of a set of microservices.",
"The CrossLang use a monolingual approachreducing the problem to the one language.",
"For this purpose we trained the neural machine translation system.",
"Another two main algoithmic components are Source Retrieval and Document Comparison stages.",
"For the Source Retrieval problem we used a modification of shingling method that allow us to deal with ambiguity after translation.",
"For the Document Comparison stage we used phrase embeddings that were trained with slight supervision.",
"We evaluated the effectiveness of main stages."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.08695651590824127,
0.1818181723356247,
0,
0.25,
0.380952388048172,
0.4000000059604645,
0.23999999463558197,
0,
0.11764705181121826,
0,
0,
0,
0
] | BkxiG6qqIr | true | [
"A system for cross-lingual (English-Russian) plagiarism detection"
] |
[
"Data parallelism has become a dominant method to scale Deep Neural Network (DNN) training across multiple nodes. ",
"Since the synchronization of the local models or gradients can be a bottleneck for large-scale distributed training, compressing communication traffic has gained widespread attention recently. ",
"Among several recent proposed compression algorithms, \nResidual Gradient Compression (RGC) is one of the most successful approaches---it can significantly compress the transmitting message size (0.1% of the gradient size) of each node and still preserve accuracy.",
"However, the literature on compressing deep networks focuses almost exclusively on achieving good compression rate, while the efficiency of RGC in real implementation has been less investigated.",
"In this paper, we develop an RGC method that achieves significant training time improvement in real-world multi-GPU systems.",
"Our proposed RGC system design called RedSync, introduces a set of optimizations to reduce communication bandwidth while introducing limited overhead.",
"We examine the performance of RedSync on two different multiple GPU platforms, including a supercomputer and a multi-card server.",
"Our test cases include image classification on Cifar10 and ImageNet, and language modeling tasks on Penn Treebank and Wiki2 datasets.",
"For DNNs featured with high communication to computation ratio, which has long been considered with poor scalability, RedSync shows significant performance improvement.",
"For training large-scale deep neural networks (DNNs) on multiple computing nodes, data parallelism has emerged as the most popular choice due to its simplicity and effectiveness BID5 ; BID13 ).",
"However, the communication bandwidth of network fabric has become the bottleneck limiting data parallel performance.",
"On one hand, models of DNNs, which already contain tens to hundreds of layers and totaling 10-20 million parameters today, continue to grow bigger.",
"Therefore, the requirement of communicating model parameter updates among all computing nodes poses a higher challenge to network bandwidth.",
"On the other hand, the development of DNN training accelerators has shifted the bottleneck of training towards communication across models.",
"As the evolution of the inter-connected network bandwidth is not as fast as computing hardware, synchronization overhead has become the bottleneck of data parallelism on distributed systems using new computing hardware.Many recent studies focused on reducing the communication cost between nodes by reducing the size of the gradients to be transmitted.",
"One line of work BID15 ; BID2 ; Wen et al. (2017) ) propose to quantize the gradients to low-precision values.",
"Considering compression ratio (ratio of compressed gradients size to their original size) achieved by quantization is limited, another line of research orthogonal to quantization is to sparsify communication gradients and restrict weight-updates to a small subset of parameters.",
"Residual Gradient Compression (RGC) method (Strom (2015) ; BID0 ; BID4 ; BID9 ; BID14 ) is currently the most promising pruning method to achieve good compression ratio while ensuring no loss of training accuracy.",
"It transmits only a small subset of gradients and maintains the remaining gradients locally as residuals to be added to gradients of the next iteration.",
"The first RGC implementation is proposed by Strom (2015) and uses a threshold-based method to only send gradients larger than a predefined constant threshold for fully-connected layers.",
"Considering a predefined threshold is hard to be chosen appropriately, BID0 improve the robustness of RGC by selecting top 1% gradients to communicate according to their magnitude.",
"Because these two implementations are tuned for some specific network structures, applying them to other DNNs will lead to accuracy loss as indicated in BID4 .",
"Based on their work, the latest RGC variants, such as BID14 ; BID4 ; BID9 ), are able to achieve a 0.1% compression ratio on local gradients while ensuring almost no loss of model accuracy on a variety of DNN structures after introducing some key modifications.Despite of good model accuracy achieved with simulation experiments, no recent studies have discussed the potential performance gain after integrating the latest RCG methods to real distributed training system, especially to the multi-GPU systems equipped with high-quality network infrastructures.",
"The challenges of applying RGC to distributed GPU systems come from two aspects.",
"First, there is no efficient compression algorithm proposed for RGC method.",
"According to our experimental results, selecting top-0.1% elements with the state-of-the-art GPU-based top-k algorithm are so expensive that the overhead of compression is much higher than the benefits of network bandwidth reduction.",
"Second, synchronization of sparse data structures is nontrivial to be supported with existing efficient communication libraries, such as Message Passing Interface (MPI), which are designed for dense data structures.Targeting multi-GPU systems, a highly-efficient RGC implementation called RedSync is proposed.",
"Our contributions are listed as follows:• We combined pruning and quantization techniques together to compress transmitting gradients.",
"A set of parallel-friendly top-0.1% selection methods are designed to support pruning operations inside GPU device memory, which are orders of magnitude faster than the stateof-the-art GPU-based top-k selection method.•",
"Considering the distribution characteristics of communication data, we apply allgather operation using MPI for a sparse synchronization scheme. A",
"cost model is derived to analyze both communication cost and calculation overhead. Based",
"on it, we pointed out potential performance gain and the bottleneck of our implementation.• RedSync",
"is able to ensure almost no accuracy loss to train a set of DNNs after integrating with the latest algorithm improvements. This is the",
"first work, as far as we known, to evaluate the performance of RGC method on the scale of 128 GPUs. RedSync provides",
"significant performance improvements for communication-intensive networks, like VGG, AlexNet and some LSTMs.",
"This paper proposes a distributed implementation called RedSync to accelerate data parallel DNN training by utilizing a type of gradient sparsification method named as Residual Gradient Compression (RGC).",
"We solved two major obstacles to implement RGC on multi-GPU systems : high overhead of compression using GPU and lack of support for collective communication implementation for sparse data structures.",
"We tested the performance of RedSync on two GPU platforms, including a supercomputer system and a multi-GPU server.",
"For AlexNet, VGG16, and LSTM, we observed significant speedup for large-scale DNN training.",
"The left part of FIG3 illustrates how sparse allgather works by recursive doubling method.",
"We assume the compression rate on all of the node is the same as D. If we use threshold binary search for communication-set selection, D here should be the average compression ratio of all nodes for a good approximation.",
"In the first step, nodes that are a distance 1 apart exchange their compressed residuals, the size of which is M × D. In the second step, nodes that are a distance 2 apart exchange their own data as well as the data they received in the previous step, which is 2M × D in total.",
"In the third step, nodes that are a distance 4 apart exchange their own data as well the data they received in the previous two steps.",
"In this way, for a power-of-two number of processes, all processes get all the data in lgp steps.",
"The amount of data exchanged by each node is M × D in the first step, 2M × D in the second step, and so forth, up to 2 lg(p)−1 M × D in the last step.",
"Therefore, The time for message transfer taken by this algorithm is T transf er = lg(p)α + (p − 1)M × Dβ.",
"After including decompressing overhead γ for collected p different compressed residuals and communication selection overhead T select , the time for all-gather based synchronization should be T transf er DISPLAYFORM0 As shown in the right part of FIG3 , the Rabenseifners algorithm is adopted for allreduce operation on messages.",
"It does a reduce-scatter followed by an allgather.",
"Reduce-scatter is a variant of reduce in which the result, instead of being stored at the root, is scattered among all p nodes.",
"We use a recursive halving algorithm, which is analogous to the recursive doubling algorithm used for allgather but in reverse way.",
"In the first step, each node exchanges data with a node that is a distance p/2 away: Each process sends the data needed by all processes in the other half, which is of size M/2.",
"They also receives the data needed by all processes in its own half, and performs the reduction operation on the received data.",
"In the second step, each process exchanges data with a process that is a distance p/4 away.",
"This procedure continues recursively, halving the data communicated at each step, for a total of lgp steps.",
"After reduce-scatter, allgather phase will have the the same bandwidth and latency requirements.",
"The time taken by Rabenseifners algorithm is the sum of the times taken by reduce-scatter (recursive halving), allgather and reduction operations.",
"The total time should be DISPLAYFORM1"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12121211737394333,
0.04999999701976776,
0.0416666641831398,
0.04999999701976776,
0.12121211737394333,
0.22857142984867096,
0.060606054961681366,
0,
0.1111111044883728,
0.13333332538604736,
0.27586206793785095,
0.05405404791235924,
0.1764705777168274,
0.19354838132858276,
0.2142857164144516,
0.05882352590560913,
0.13333332538604736,
0.08695651590824127,
0.05714285373687744,
0.19512194395065308,
0.09999999403953552,
0.05128204822540283,
0.0714285671710968,
0.0714285671710968,
0.07692307233810425,
0.08888888359069824,
0.19230768084526062,
0.1249999925494194,
0.045454539358615875,
0.05882352590560913,
0.14814814925193787,
0.06451612710952759,
0.0555555522441864,
0.05882352590560913,
0,
0.380952388048172,
0.23255813121795654,
0.0624999962747097,
0.1428571343421936,
0.06896550953388214,
0.043478257954120636,
0.04081632196903229,
0.052631575614213943,
0.0624999962747097,
0.1428571343421936,
0.05405404791235924,
0.035087715834379196,
0.17391303181648254,
0,
0.11428570747375488,
0.09090908616781235,
0.11764705181121826,
0.06666666269302368,
0.0624999962747097,
0.07407406717538834,
0.060606054961681366,
0
] | rkxJus0cFX | true | [
"We proposed an implementation to accelerate DNN data parallel training by reducing communication bandwidth requirement."
] |
[
"Backpropagation is driving today's artificial neural networks.",
"However, despite extensive research, it remains unclear if the brain implements this algorithm.",
"Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative.",
"However, the convergence rate of such learning scales poorly with the number of involved neurons.",
"Here we propose a hybrid learning approach, in which each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide.",
"We show that our approach learns to approximate the gradient, and can match the performance of gradient-based learning on fully connected and convolutional networks.",
"Learning feedback weights provides a biologically plausible mechanism of achieving good performance, without the need for precise, pre-specified learning rules.",
"It is unknown how the brain solves the credit assignment problem when learning: how does each neuron know its role in a positive (or negative) outcome, and thus know how to change its activity to perform better next time?",
"Biologically plausible solutions to credit assignment include those based on reinforcement learning (RL) algorithms [4] .",
"In these approaches a globally distributed reward signal provides feedback to all neurons in a network.",
"However these methods have not been demonstrated to operate at scale.",
"For instance, variance in the REINFORCE estimator scales with the number of units in the network.",
"This drives the hypothesis that learning in the brain must rely on additional structures beyond a global reward signal.",
"In artificial neural networks, credit assignment is performed with gradient-based methods computed through backpropagation.",
"This is significantly more efficient than RL-based algorithms.",
"However there are well known problems with implementing backpropagation in biologically realistic neural networks.",
"For instance backpropagation requires a feedback structure with the same weights as the feedforward network to communicate gradients (so-called weight transport).",
"Yet such structures are not observed in neural circuits.",
"Despite this, backpropagation is the only method known to solve learning problems at scale.",
"Thus modifications or approximations to backpropagation that are more plausible have been the focus of significant recent attention [8, 3] .",
"Notably, it turns out that weight transport can be avoided by using fixed, random feedback weights, through a phenomenon called feedback alignment [8] .",
"However feedback alignment does not work in larger, more complicated network architectures (such as convolutional networks).",
"Here we propose to use an RL algorithm to train a feedback system to enable learning.",
"We propose to use a REINFORCE-style perturbation approach to train a feedback signal to approximate what would have been provided by backpropagation.",
"We demonstrate that our model learns as well as regular backpropagation in small models, overcomes the limitations of fixed random feedback weights (\"feedback alignment\") on more complicated feedforward networks, and can be utilized in convolutional networks.",
"Our method illustrates a biologically realistic way the brain could perform gradient descent-like learning.",
"Here we implement a perturbation-based synthetic gradient method to train neural networks.",
"We show that this hybrid approach can be used in both fully connected and convolutional networks.",
"By removing both the symmetric feedforward, feedback weight requirement imposed by backpropagation this approach is a step towards more biologically-plausible deep learning.",
"In contrast to many perturbation-based methods, this hybrid approach can solve large-scale problems.",
"We thus believe this approach can provide powerful and biologically plausible learning algorithms.",
"While previous research has provided some insight and theory for how feedback alignment works [8, 3, 2] the effect remains somewhat mysterious, and not applicable in some network architectures.",
"Recent studies have shown that some of these weaknesses can be addressed by instead imposing sign congruent feedforward and feedback matrices [10] .",
"Yet what mechanism may produce congruence in biological networks is unknown.",
"Here we show that the shortcomings of feedback alignment can be addressed in another way: the system can learn to adjust weights as needed to provide a useful error signal.",
"Our work is closely related to Akrout et al 2019 [1] , which also uses perturbations to learn feedback weights.",
"However our approach does not divide learning into two phases, and training of the feedback weights does not occur in a layer-wise fashion.",
"Here we tested our method in an idealized setting, however it is consistent with neurobiology in two important ways.",
"First, it involves the separate learning of feedforward and feedback weights.",
"This is possible in cortical networks where complex feedback connections exist between layers, and where pyramidal cells have apical and basal compartments that allow for separate integration of feedback and feedforward signals [5] .",
"Second, noisy perturbations are common in neural learning models.",
"There are many mechanisms by which noise can be measured or approximated [4, 7] , or neurons could use a learning rule that does not require knowing the noise [6] .",
"While our model involves the subtraction of a baseline loss to reduce the variance of the estimator, this does not affect the expected value of the estimator; technically the baseline could be removed or approximated [7] .",
"Thus we believe our approach could be implemented in neural circuits.",
"There is a large space of plausible learning rules that can learn feedback signals in order to more efficiently learn.",
"These promise to inform both models of learning in the brain and learning algorithms in artificial networks.",
"Here we take an early step in this direction.",
"It is worth making the following points on each of the assumptions:",
"• A1.",
"In the paper we assume ξ is Gaussian.",
"Here we prove the more general result of convergence for any subgaussian random variable.",
"• A2.",
"In practice this may be a fairly restrictive assumption, since it precludes using relu non-linearities.",
"Other common choices, such as hyperbolic tangent and sigmoid non-linearities with an analytic cost function do satisfy this assumption, however.",
"• A3.",
"It is hard to establish general conditions under whichẽ n (ẽ n ) T will be full rank.",
"While it may be a reasonable assumption in some cases.",
"Extensions of Theorem 2 to a non-linear network may be possible.",
"However, the method of proof used here is not immediately applicable because the continuous mapping theorem can not be applied in such a straightforward fashion as in Equation (10) .",
"In the non-linear case the resulting sums over all observations are neither independent or identically distributed, which makes applying any law of large numbers complicated."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.09090908616781235,
0,
0,
0,
0.09999999403953552,
0.4324324131011963,
0.11428570747375488,
0.0833333283662796,
0.13333332538604736,
0.13333332538604736,
0.07692307233810425,
0,
0.060606054961681366,
0,
0,
0.06896550953388214,
0.17142856121063232,
0,
0.06896550953388214,
0.05714285373687744,
0.1621621549129486,
0.12903225421905518,
0.13793103396892548,
0.11764705181121826,
0.3265306055545807,
0,
0.14814814925193787,
0.5161290168762207,
0.05405404791235924,
0.1428571343421936,
0.1428571343421936,
0.0952380895614624,
0.21621620655059814,
0.07692307233810425,
0.2857142686843872,
0.23529411852359772,
0.1666666567325592,
0,
0.23076923191547394,
0.13636362552642822,
0,
0.09302324801683426,
0.09302324801683426,
0.07692307233810425,
0.29411762952804565,
0.19999998807907104,
0,
0.07692307233810425,
0,
0,
0.06666666269302368,
0.05714285373687744,
0.1249999925494194,
0.07999999821186066,
0.1538461446762085,
0.1463414579629898,
0.05128204822540283
] | ByxfNXF8Ir | true | [
"Perturbations can be used to learn feedback weights on large fully connected and convolutional networks."
] |
[
"Most deep neural networks (DNNs) require complex models to achieve high performance.",
"Parameter quantization is widely used for reducing the implementation complexities.",
"Previous studies on quantization were mostly based on extensive simulation using training data.",
"We choose a different approach and attempt to measure the per-parameter capacity of DNN models and interpret the results to obtain insights on optimum quantization of parameters.",
"This research uses artificially generated data and generic forms of fully connected DNNs, convolutional neural networks, and recurrent neural networks.",
"We conduct memorization and classification tests to study the effects of the number and precision of the parameters on the performance.",
"The model and the per-parameter capacities are assessed by measuring the mutual information between the input and the classified output.",
"We also extend the memorization capacity measurement results to image classification and language modeling tasks.",
"To get insight for parameter quantization when performing real tasks, the training and test performances are compared.",
"Deep neural networks (DNNs) have achieved impressive performance on various machine learning tasks.",
"Several DNN architectures are known, and the most famous ones are fully connected DNNs (FCDNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs).It",
"is known that neural networks do not need full floating-point precision for inference BID10 BID16 BID23 . A",
"32-bit floating-point parameter can be reduced to 8-bit, 4-bit, 2-bit, or 1-bit, but this can incur performance degradation. Therefore",
", precision should be optimized, which is primarily conducted by extensive computer simulations using training data. This not",
"only takes much time for optimization but also can incorrectly predict the performance in real environments when the characteristics of input data are different from the training data.In this study, we attempt to measure the capacity of DNNs, including FCDNN, CNN, and RNN, using a memorization and classification task that applies random binary input data. The per-parameter",
"capacities of various models are estimated by measuring the mutual information between the input data and the classification output. Then, the fixed-point",
"performances of the models are measured to determine the relation between the quantization sensitivity and the per-parameter capacity. The memorization capacity",
"analysis results are extended to real models for performing image classification and language modeling, by which the parameter quantization sensitivity is compared between memorization and generalization tasks.The contributions of this paper are as follows.• We experimentally measure",
"the memorization capacity of DNNs and estimate the perparameter capacity. The capacity per parameter",
"is between 2.3 bits to 3.7 bits, according to the network structure, which is FCDNN, CNN, or RNN. The value is fairly independent",
"of the model size.• We show that the performance of",
"the quantized networks is closely related to the capacity per parameter, and FCDNNs show the most resilient quantization performance while RNNs suffer most from parameter quantization. The network size hardly effects the",
"quantization performance when DNN models are trained to use full capacity.• We explain that severe quantization",
", such as binary or ternary weights, can be employed without much performance degradation when the networks are in the over-parameter region.• We suggest the sufficient number of",
"bits for representing weights of neural networks, which are approximately 6 bits, 8 bits, and 10 bits for FCDNNs, CNNs, and RNNs, respectively. This estimate of the number of bits for",
"implementing neural networks is very important considering that many accelerators are designed without any specific training data or applications.• The study with real-models shows that",
"neural networks are more resilient to quantization when performing generalization tasks than conducting memorization. Thus, the optimum bits obtained with the",
"memorization tasks are conservative and safe estimate when solving real problems.The paper is organized as follows. In Section 2, previous works on neural network",
"capacity and fixedpoint optimization are briefly presented. Section 3 explains the capacity measurement methods",
"for DNN models. Section 4 presents parameter capacity measurement results",
"for FCDNNs, CNNs, and RNNs. The quantization performances measured on DNNs are presented",
"in Section 5. Concluding remarks follow in Section 6."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.13793103396892548,
0,
0.2380952388048172,
0.10810810327529907,
0.2857142686843872,
0.17142856121063232,
0.1764705777168274,
0.3333333134651184,
0,
0.19999998807907104,
0.0555555522441864,
0,
0,
0.2028985470533371,
0.21052631735801697,
0.2222222238779068,
0.25,
0.2666666507720947,
0.09999999403953552,
0.2142857164144516,
0.08695651590824127,
0.17142856121063232,
0.3478260934352875,
0.4285714328289032,
0.045454539358615875,
0.25641024112701416,
0.3181818127632141,
0.1875,
0.06896550953388214,
0.25,
0
] | HylIcj0qFQ | true | [
"We suggest the sufficient number of bits for representing weights of DNNs and the optimum bits are conservative when solving real problems."
] |
[
"Inspired by the combination of feedforward and iterative computations in the visual cortex, and taking advantage of the ability of denoising autoencoders to estimate the score of a joint distribution, we propose a novel approach to iterative inference for capturing and exploiting the complex joint distribution of output variables conditioned on some input variables.",
"This approach is applied to image pixel-wise segmentation, with the estimated conditional score used to perform gradient ascent towards a mode of the estimated conditional distribution.",
"This extends previous work on score estimation by denoising autoencoders to the case of a conditional distribution, with a novel use of a corrupted feedforward predictor replacing Gaussian corruption.",
"An advantage of this approach over more classical ways to perform iterative inference for structured outputs, like conditional random fields (CRFs), is that it is not any more necessary to define an explicit energy function linking the output variables.",
"To keep computations tractable, such energy function parametrizations are typically fairly constrained, involving only a few neighbors of each of the output variables in each clique.",
"We experimentally find that the proposed iterative inference from conditional score estimation by conditional denoising autoencoders performs better than comparable models based on CRFs or those not using any explicit modeling of the conditional joint distribution of outputs.",
"Based on response timing and propagation delays in the brain, a plausible hypothesis is that the visual cortex can perform fast feedforward BID21 inference when an answer is needed quickly and the image interpretation is easy enough (requiring as little as 200ms of cortical propagation for some object recognition tasks, i.e., just enough time for a single feedforward pass) but needs more time and iterative inference in the case of more complex inputs BID23 .",
"Recent deep learning research and the success of residual networks BID9 BID8 point towards a similar scenario BID16 : early computation which is feedforward, a series of non-linear transformations which map low-level features to high-level ones, while later computation is iterative (using lateral and feedback connections in the brain) in order to capture complex dependencies between different elements of the interpretation.",
"Indeed, whereas a purely feedforward network could model a unimodal posterior distribution (e.g., the expected target with some uncertainty around it), the joint conditional distribution of output variables given inputs could be more complex and multimodal.",
"Iterative inference could then be used to either sample from this joint distribution or converge towards a dominant mode of that distribution, whereas a unimodaloutput feedfoward network would converge to some statistic like the conditional expectation, which might not correspond to a coherent configuration of the output variables when the actual conditional distribution is multimodal.This paper proposes such an approach combining a first feedforward phase with a second iterative phase corresponding to searching for a dominant mode of the conditional distribution while tackling the problem of semantic image segmentation.",
"We take advantage of theoretical results BID0 on denoising autoencoder (DAE), which show that they can estimate the score or negative gradient of the energy function of the joint distribution of observed variables: the difference between the reconstruction and the input points in the direction of that estimated gradient.",
"We propose to condition the autoencoder with an additional input so as to obtain the conditional score, Given an input image x, we extract a segmentation candidate y and intermediate feature maps h by applying a pre-trained segmentation network.",
"We add some noise to y and train a DAE that takes as input both y and h by minimizing Eq. 3. Training scenario 1",
"(a) yields the best results and uses the corrupted prediction as input to the DAE during training.",
"Training scenario 2",
"(b) corresponds to the original DAE prescription in the conditional case, and uses a corruption of the ground truth as input to the DAE during training.i.e., the gradient of the energy of the conditional density of the output variables, given the input variables.",
"The autoencoder takes a candidate output y as well as an input x and outputs a valueŷ so thatŷ − y estimates the direction DISPLAYFORM0 .",
"We can then take a gradient step in that direction and update y towards a lower-energy value and iterate in order to approach a mode of the implicit p(y|x) captured by the autoencoder.",
"We find that instead of corrupting the segmentation target as input of the DAE, we obtain better results by training the DAE with the corrupted feedforward prediction, which is closer to what will be seen as the initial state of the iterative inference process.",
"The use of a denoising autoencoder framework to estimate the gradient of the energy is an alternative to more traditional graphical modeling approaches, e.g., with conditional random fields (CRFs) BID14 BID10 , which have been used to model the joint distribution of pixel labels given an image BID13 .",
"The potential advantage of the DAE approach is that it is not necessary to decide on an explicitly parametrized energy function: such energy functions tend to only capture local interactions between neighboring pixels, whereas a convolutional DAE can potentially capture dependencies of any order and across the whole image, taking advantage of the state-of-the-art in deep convolutional architectures in order to model these dependencies via the direct estimation of the energy function gradient.",
"Note that this is different from the use of convolutional networks for the feedforward part of the network, and regards the modeling of the conditional joint distribution of output pixel labels, given image features.The main contributions of this paper are the following: 1. A novel training framework for modeling structured output conditional distributions, which is an alternative to CRFs, inspired by denoising autoencoder estimation of energy gradients.",
"2. Showing how this framework can be used in an architecture for image pixel-wise segmentation, in which the above energy gradient estimator is used to propose a highly probable segmentation through gradient descent in the output space.",
"3. Demonstrating that this approach to image segmentation outperforms or matches classical alternatives such as combining convolutional nets with CRFs and more recent state-of-theart alternatives on the CamVid dataset.",
"We have proposed to use a novel form of denoising autoencoders for iterative inference in structured output tasks such as image segmentation.",
"The autoencoder is trained to map corrupted predictions to target outputs and iterative inference interprets the difference between the output and the input as a direction of improved output configuration, given the input image.Experiments provide positive evidence for the three questions raised at the beginning of Sec. 4: (1) a conditional DAE can be used successfully as the building block of iterative inference for image segmentation, (2) the proposed corruption model (based on the feedforward prediction) works better than the prescribed target output corruption, and (3) the resulting segmentation system outperforms state-of-the-art methods for obtaining coherent outputs."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.20000000298023224,
0.12121211737394333,
0.2702702581882477,
0.12765957415103912,
0,
0.2666666507720947,
0.057971011847257614,
0.032786883413791656,
0.08888888359069824,
0.12820512056350708,
0.0416666641831398,
0.1818181723356247,
0.05882352590560913,
0,
0,
0.0476190447807312,
0,
0.05128204822540283,
0.21276594698429108,
0.11320754140615463,
0,
0.0952380895614624,
0.04651162400841713,
0.10256409645080566,
0.3030303120613098,
0.09756097197532654
] | HklpCzC6- | true | [
"Refining segmentation proposals by performing iterative inference with conditional denoising autoencoders."
] |
[
"Inspired by the success of self attention mechanism and Transformer architecture\n",
"in sequence transduction and image generation applications, we propose novel self\n",
"attention-based architectures to improve the performance of adversarial latent code-\n",
"based schemes in text generation.",
"Adversarial latent code-based text generation\n",
"has recently gained a lot of attention due to their promising results.",
"In this paper,\n",
"we take a step to fortify the architectures used in these setups, specifically AAE\n",
"and ARAE.",
"We benchmark two latent code-based methods (AAE and ARAE)\n",
"designed based on adversarial setups.",
"In our experiments, the Google sentence\n",
"compression dataset is utilized to compare our method with these methods using\n",
"various objective and subjective measures.",
"The experiments demonstrate the\n",
"proposed (self) attention-based models outperform the state-of-the-art in adversarial\n",
"code-based text generation.",
"Text generation is of particular interest in many natural language processing (NLP) applications such as dialogue systems, machine translation, image captioning and text summarization.",
"Recent deep learning-based approaches to this problem can be categorized into three classes: auto-regressive or maximum likelihood estimation (MLE)-based, generative adversarial network (GAN)-based and reinforcement learning (RL)-based approaches.",
"BID26 ) model the text (language) as an auto-regressive process, commonly using RNNs.",
"RNNs compactly represent the samples history in the form of recurrent states.",
"In these models, text is generated by predicting next token (character, word, etc) based on the previously generated ones BID9 .",
"The results of objective and subjective evaluations are presented in Tables 2 to 5.",
"As seen, the proposed self attention-based (SALSA) architectures consistently outperform the non-attention-based benchmarks in terms of diversity (measured by reverse perplexity).",
"Moreover, they often show better performance in terms of output quality (measured by BLEU, self BLEU, preplexity and human evaluations) on the long and complicated sentences of the GSC dataset.As seen in the generated samples TAB0 , human evaluation TAB3 ) and objective metrics (Tables 2 to 4), the original AAE and ARAE setups perform very poorly on GSC with long sentences.",
"With reverse perplexities of over 8000 and high self-BLEU scores close to 0.9, they suffer from a high level of mode collapse (repeated sentences).Human",
"evaluations do not account for lack of diversity. The reason",
"is humans are presented with a number of shuffled sentences and asked to evaluate them independently (without knowing which sentence coming from which model). Hence, in",
"our experiments for the original AAE and ARAE, a model can generate similar sentences (maybe due to mode collapse) and still receives high subjective scores. It seems",
"that, in our experiments, the original ARAE model suffers from mode collapse. We can see",
"that it has slightly higher human evaluation scores, but extremely poor diversity metrics, i.e. very high reverse perplexity and self-BLEU scores. It can also",
"be seen in the randomly selected generated sentences TAB0 , where all the sentences start with \"A man\" and invariably mention he is being arrested or accused of grievous crimes. This is likely",
"because the sentences in the GSC dataset are long and that their structure is elaborate. SALSA-ARAE on",
"the other hand reliably produces sentences of quality with great diversity.SALSA-AAE has both considerably higher individual quality metrics than the original AAE and much better diversity metrics. It is the strongest",
"pure adversarial text model. As seen in TAB3 , SALSA-AAE",
"provides the best grammaticality, semantic consistency and Fluency performance.",
"In this paper, we introduced SALSA-TEXT, a Transformer-based architecture for adversarial codebased text generation.",
"It incorporates self-attention mechanism by utilizing Transformer architecture in autoencoder and GAN setups.",
"Our extensive experiments demonstrate the better performance of our models compared to the state-of-the-art in adversarial code-based text generation (without self-attention).",
"The proposed architectures provide diverse, long and high quality output sentences as confirmed by objective metrics and human evaluations in extensive experiments.As a future direction, it is beneficial to study the performance of self attention in other text generation methods including variational code-based and reinforcement learning-based approaches.",
"Another interesting direction is to experiment with deeper Transformer-based autoencoders to better capture the underlying language model and perform unsupervised pre-training isnpired by the success of BID0 and Radford et al.."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.13793103396892548,
0.20689654350280762,
0.1428571343421936,
0.260869562625885,
0.260869562625885,
0.13333332538604736,
0,
0.0624999962747097,
0.2222222238779068,
0.260869562625885,
0,
0,
0.08695651590824127,
0,
0.07407406717538834,
0.2857142686843872,
0.1428571343421936,
0.08888888359069824,
0.06451612710952759,
0,
0.1621621549129486,
0.1249999925494194,
0,
0.05970148742198944,
0.0952380895614624,
0.0714285671710968,
0.09302324801683426,
0.13636362552642822,
0.060606054961681366,
0.04651162400841713,
0.0416666604578495,
0.11764705181121826,
0.04444443807005882,
0.1428571343421936,
0.07407406717538834,
0.375,
0.25806450843811035,
0.21052631735801697,
0.158730149269104,
0.043478257954120636
] | B1liWh09F7 | true | [
"We propose a self-attention based GAN architecture for unconditional text generation and improve on previous adversarial code-based results."
] |
[
"Watermarks have been used for various purposes.",
"Recently, researchers started to look into using them for deep neural networks.",
"Some works try to hide attack triggers on their adversarial samples when attacking neural networks and others want to watermark neural networks to prove their ownership against plagiarism.",
"Implanting a backdoor watermark module into a neural network is getting more attention from the community.",
"In this paper, we present a general purpose encoder-decoder joint training method, inspired by generative adversarial networks (GANs).",
"Unlike GANs, however, our encoder and decoder neural networks cooperate to find the best watermarking scheme given data samples.",
"In other words, we do not design any new watermarking strategy but our proposed two neural networks will find the best suited method on their own.",
"After being trained, the decoder can be implanted into other neural networks to attack or protect them (see Appendix for their use cases and real implementations).",
"To this end, the decoder should be very tiny in order not to incur any overhead when attached to other neural networks but at the same time provide very high decoding success rates, which is very challenging.",
"Our joint training method successfully solves the problem and in our experiments maintain almost 100\\% encoding-decoding success rates for multiple datasets with very little modifications on data samples to hide watermarks.",
"We also present several real-world use cases in Appendix.",
"Security issues of deep learning have been very actively being studied.",
"It had been already demonstrated that deep learning methods are vulnerable to some carefully devised adversarial attacks BID7 BID4 BID6 BID1 .",
"At the same time, many researchers are also studying about how to make them more robust against such attacks.",
"A couple of recent works, for example, proposed to use watermarks BID9 BID0 to protect neural networks.",
"At the same time, other work wanted to use a similar watermark technique to attack neural networks BID9 .The",
"method of adding watermarks to data samples can be used in various ways to protect deep learning models. First",
", the decoder can be implanted into a trained deep learning model and later one can prove the ownership, when other people copy the model, by showing that the copied model reacts to one's watermarked samples. Second",
", the implanted decoder may allow only legitimately watermarked samples and reject other non-watermarked samples. In this",
"case, only people that have the encoder can access the deep learning model. However",
", there is one very strict requirement that the decoder should be tiny to minimize the incurred overheads by attaching it as part of the main deep learning model. Similar",
"techniques can also be used to attack neural networks.In this paper, we do not propose any specific watermarking techniques. Instead",
", we want the encoder and decoder discuss and decide their watermarking method. Inspired",
"from generative adversarial networks (GANs) BID3 , the encoder and decoder work for the same goal and are jointly trained. They do",
"not perform the adversarial game of GANs. Their relationship",
"is rather cooperative than adversarial in our method. The decoder is a tiny",
"neural network to decode watermarks and the encoder is a high-capacity neural network that can watermark samples in such a way that the tiny neural network can successfully decode. Therefore, those two",
"neural networks should cooperate to find such a watermarking scheme -in GANs, one neural network (generator) tries to fool the other neural network (discriminator). Because the decoder",
"has a limited capacity due to its tiny neural network size, the encoder should not decide the watermarking scheme alone. The encoder should",
"receive feedback from the decoder to revise its watermarking scheme. After training them",
", one should keep the encoder in a secure place but can deploy the decoder to as many places as one wants. We also show that",
"our method can be used for both defences and attacks (refer to Appendix for some of these examples we implemented using our proposed method).We adopt residual",
"blocks BID5 to design the encoder. Each residual block",
"of the encoder is supposed to learn f (x)+x where x is an input to the block. One can consider f",
"(x) as a watermark signal discovered by the joint training of the encoder and the decoder. The signal produced",
"by f (x) should be strong enough to be detected by the decoder but weak enough not to be detected by human eyes. We design our training",
"loss definition to achieve this goal. The encoder should modify",
"original samples to implant watermarks. As more modifications are",
"allowed, stronger watermarks will be implanted but they can be readily detected by human eyes. Our loss definition has a",
"parameter that can be set by user to limit the modifications by the encoder. Our experiments show that",
"we can find a well-balanced watermarking scheme that be detected only by the decoder.We tested many different datasets: face recognition(VGG-Face Data-set), speech recognition BID11 , images with general objects BID7 , and flowers (Flowers Data-set). Two of them are reported",
"in the main paper with the comparison with other watermarking methods and others are introduced in Appendix. During experiments, our",
"methods marked 100% decoding success rates for all datasets (in at least one hyper-parameter configuration). This well outperforms other",
"baseline methods.In addition, we also found that different watermarking schemes are trained for different datasets. For instance, the encoder modified",
"the tone of colors for the face recognition images. For the general object images, however",
", the encoder explicitly marks some dots rather than modifying their color tones (see FIG1 and Figure 4 ). This proves our goal that two neural",
"networks cooperate to find the best suited watermarking method for each dataset.",
"We present a joint training method of the watermark encoder and decoder.",
"Our decoder is a very lowcapacity neural network and the encoder is a very high-capacity neural network.",
"These two skinny and fatty neural networks collaborate to find the best watermarking scheme given data samples.",
"In particular, we use residual blocks to build the encoder because the definition of the residual block is very appropriate for the task of watermarking samples.",
"We demonstrated that two different types of watermarks (one to change the color tone and the other to add dots) are found by them without human interventions.For our experiments with various datasets, our method marked 100% decoding success rates, which means our tiny decoder is able to distinguish watermarked and non-watermarked samples perfectly.We also listed three use cases in Appendix about how to utilize our proposed encoder and decoder for real-world attacks and defenses.",
"Our future research will be to implement those use cases.",
"Figure 4: Examples of watermarking ImageNet images.",
"Some dots are marked explicitly to hide watermarks when γ >= 0.05.",
"Recall that watermarks are hidden in the tone of colors for FR images.",
"This is a very interesting point because our proposed method can discover two very different watermarking schemes for them.",
"This is because adding dots does not make the regularization term greatly exceed the margin γ.",
"When γ = 0.01, a similar watermarking scheme to the FR exmaples will be used.",
"This proves that our method is able to fine the best suited watermarking scheme given data samples.",
"The decoder has 3 convolution layers in these examples.",
"Note that there are more modifications in general as γ increases.",
"For all cases, the trained decoder can successfully decode their watermarks.",
"Figure 5 : The decoding success rate in the ImageNet dataset.",
"We report the decoding success rate for non-watermarked/watermarked cases with our method after varying the convolution numbers in the decoder (i.e. decoder size) and γ.Our method Decoder size = 1 Decoder size = 3 γ = 0.01 γ = 0.05 γ = 0.1 γ = 0.01 γ = 0.05 γ = 0.1 81.2%/100.0% 89.2%/100.0% 92.0%/100.0% 99.0%/100.0% 98.0%/99.4% 99.5%/100.0% A ADDITIONAL EXPERIMENT RESULTS"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.15789473056793213,
0.20408162474632263,
0.19512194395065308,
0.13636362552642822,
0.2222222238779068,
0.3461538553237915,
0.1538461446762085,
0.16949151456356049,
0.035087715834379196,
0.05714285373687744,
0,
0.08510638028383255,
0.08888888359069824,
0.1428571343421936,
0.22727271914482117,
0.045454539358615875,
0.07017543166875839,
0,
0,
0.03703703358769417,
0.3478260934352875,
0.10256409645080566,
0.1304347813129425,
0.17142856121063232,
0.10526315122842789,
0.16326530277729034,
0.25,
0.260869562625885,
0.1538461446762085,
0.12244897335767746,
0.07843136787414551,
0.11428570747375488,
0.04651162400841713,
0.0952380895614624,
0.17391303181648254,
0.0555555522441864,
0.11428570747375488,
0.04444443807005882,
0.04878048226237297,
0.12121211737394333,
0.09302324801683426,
0.04444443807005882,
0.08695651590824127,
0,
0.07692307233810425,
0.15789473056793213,
0.15789473056793213,
0.10526315122842789,
0.23255813121795654,
0.08510638028383255,
0.04444444179534912,
0.0555555522441864,
0.060606058686971664,
0.05128204822540283,
0,
0.09090908616781235,
0.04878048226237297,
0.1904761791229248,
0.1395348757505417,
0,
0.054054051637649536,
0.054054051637649536,
0,
0.02739725634455681
] | ryfDoiR5Ym | true | [
"We propose a novel watermark encoder-decoder neural networks. They perform a cooperative game to define their own watermarking scheme. People do not need to design watermarking methods any more."
] |
[
"We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance as it avoids duplicate samples.",
"We show that our estimator can be derived as the Rao-Blackwellization of three different estimators.",
"Combining our estimator with REINFORCE, we obtain a policy gradient estimator and we reduce its variance using a built-in control variate which is obtained without additional model evaluations.",
"The resulting estimator is closely related to other gradient estimators.",
"Experiments with a toy problem, a categorical Variational Auto-Encoder and a structured prediction problem show that our estimator is the only estimator that is consistently among the best estimators in both high and low entropy settings.",
"Put replacement in your basement!",
"We derive the unordered set estimator: an unbiased (gradient) estimator for expectations over discrete random variables based on (unordered sets of) samples without replacement.",
"In particular, we consider the problem of estimating (the gradient of) the expectation of f (x) where x has a discrete distribution p over the domain D, i.e.",
"This expectation comes up in reinforcement learning, discrete latent variable modelling (e.g. for compression), structured prediction (e.g. for translation), hard attention and many other tasks that use models with discrete operations in their computational graphs (see e.g. Jang et al. (2016) ).",
"In general, x has structure (such as a sequence), but we can treat it as a 'flat' distribution, omitting the bold notation, so x has a categorical distribution over D given by p(x), x ∈ D. Typically, the distribution has parameters θ, which are learnt through gradient descent.",
"This requires estimating the gradient ∇ θ E x∼p θ (x) [f (x)], using a set of samples S. A gradient estimate e(S) is unbiased if",
"The samples S can be sampled independently or using alternatives such as stratified sampling which reduce variance to increase the speed of learning.",
"In this paper, we derive an unbiased gradient estimator that reduces variance by avoiding duplicate samples, i.e. by sampling S without replacement.",
"This is challenging as samples without replacement are dependent and have marginal distributions that are different from p(x).",
"We further reduce the variance by deriving a built-in control variate, which maintains the unbiasedness and does not require additional samples.",
"Related work.",
"Many algorithms for estimating gradients for discrete distributions have been proposed.",
"A general and widely used estimator is REINFORCE (Williams, 1992) .",
"Biased gradients based on a continuous relaxations of the discrete distribution (known as Gumbel-Softmax or Concrete) were jointly introduced by Jang et al. (2016) and Maddison et al. (2016) .",
"These can be combined with the straight through estimator (Bengio et al., 2013) if the model requires discrete samples or be used to construct control variates for REINFORCE, as in REBAR (Tucker et al., 2017) or RELAX (Grathwohl et al., 2018) .",
"Many other methods use control variates and other techniques to reduce the variance of REINFORCE (Paisley et al., 2012; Ranganath et al., 2014; Gregor et al., 2014; Mnih & Gregor, 2014; Gu et al., 2016; Mnih & Rezende, 2016) .",
"Some works rely on explicit summation of the expectation, either for the marginal distribution (Titsias & Lázaro-Gredilla, 2015) or globally summing some categories while sampling from the remainder (Liang et al., 2018; Liu et al., 2019) .",
"Other approaches use a finite difference approximation to the gradient (Lorberbom et al., 2018; 2019) .",
"Yin et al. (2019) introduced ARSM, which uses multiple model evaluations where the number adapts automatically to the uncertainty.",
"In the structured prediction setting, there are many algorithms for optimizing a quantity under a sequence of discrete decisions, using (weak) supervision, multiple samples (or deterministic model evaluations), or a combination both (Ranzato et al., 2016; Shen et al., 2016; He et al., 2016; Norouzi et al., 2016; Bahdanau et al., 2017; Edunov et al., 2018; Leblond et al., 2018; Negrinho et al., 2018) .",
"Most of these algorithms are biased and rely on pretraining using maximum likelihood or gradually transitioning from supervised to reinforcement learning.",
"Using Gumbel-Softmax based approaches in a sequential setting is difficult as the bias accumulates because of mixing errors (Gu et al., 2018) .",
"We introduced the unordered set estimator, a low-variance, unbiased gradient estimator based on sampling without replacement, which can be used as an alternative to the popular biased GumbelSoftmax estimator (Jang et al., 2016; Maddison et al., 2016) .",
"Our estimator is the result of RaoBlackwellizing three existing estimators, which guarantees equal or lower variance, and is closely related to a number of other estimators.",
"It has wide applicability, is parameter free (except for the sample size k) and has competitive performance to the best of alternatives in both high and low entropy regimes.",
"In our experiments, we found that REINFORCE with replacement, with multiple samples and a built-in baseline as inspired by VIMCO (Mnih & Rezende, 2016) , is a simple yet strong estimator which has performance similar to our estimator in the high entropy setting.",
"We are not aware of any recent work on gradient estimators for discrete distributions that has considered this estimator as baseline, while it may be often preferred given its simplicity.",
"This means that F φ (g) is the CDF and f φ (g) the PDF of the Gumbel(φ) distribution.",
"Additionally we will use the identities by Maddison et al. (2014):",
"Also, we will use the following notation, definitions and identities (see Kool et al. (2019c) ):",
"For a proof of equation 30, see Maddison et al. (2014) .",
"We can sample the set S k from the Plackett-Luce distribution using the Gumbel-Top-k trick by drawing Gumbel variables G φi ∼ Gumbel(φ i ) for each element and returning the indices of the k largest Gumbels.",
"If we ignore the ordering, this means we will obtain the set S k if min i∈S k G φi > max i∈D\\S k G φi .",
"Omitting the superscript k for clarity, we can use the Gumbel-Max trick, i.e. that G φ D\\S = max i ∈S G φi ∼ Gumbel(φ D\\S ) (equation 30) and marginalize over G φ D\\S :",
"Here we have used a change of variables u = F φ D\\S (g φ D\\S ).",
"This expression can be efficiently numerically integrated (although another change of variables may be required for numerical stability depending on the values of φ).",
"Exact computation in O(2 k ).",
"The integral in equation 31 can be computed exactly using the identity i∈S",
"Computation of p D\\C (S \\ C).",
"When using the Gumbel-Top-k trick over the restricted domain D \\ C, we do not need to renormalize the log-probabilities φ s , s ∈ D \\ C since the Gumbel-Top-k trick applies to unnormalized log-probabilities.",
"Also, assuming",
"This means that we can compute p D\\C (S \\ C) similar to equation 31:",
"Computation of R(S k , s).",
"Note that, using equation 10, it holds that",
"This means that, to compute the leave-one-out ratio for all s ∈ S k , we only need to compute p D\\{s} (S k \\ {s}) for s ∈ S k .",
"When using the numerical integration or summation in O(2 k ), we can reuse computation, whereas using the naive method, the cost is O(k · (k − 1)!",
") = O(k!), making the total computational cost comparable to computing just p(S k ), and the same holds when computing the 'second-order' leave one out ratios for the built-in baseline (equation 17).",
"Details of numerical integration.",
"For computation of the leave-one-out ratio (equation 35) for large k we can use the numerical integration, where we need to compute equation 34 with C = {s}.",
"For this purpose, we rewrite the integral as",
"Here we have used change of variables v = u exp(−b) and a = b − φ D\\S .",
"This form allows to compute the integrands efficiently, as",
"where the numerator only needs to computed once, and, since C = {s} when computing equation 35, the denominator only consists of a single term.",
"The choice of a may depend on the setting, but we found that a = 5 is a good default option which leads to an integral that is generally smooth and can be accurately approximated using the trapezoid rule.",
"We compute the integrands in logarithmic space and sum the terms using the stable LOGSUMEXP trick.",
"In our code we provide an implementation which also computes all second-order leave-one-out ratios efficiently."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.6666666865348816,
0.12121211737394333,
0.1860465109348297,
0.1428571343421936,
0.08510638028383255,
0.08695651590824127,
0.6666666865348816,
0.1818181723356247,
0.07017543166875839,
0.1071428507566452,
0.1428571343421936,
0.04878048226237297,
0.3499999940395355,
0.11428570747375488,
0.10526315122842789,
0.1428571343421936,
0.0714285671710968,
0.1818181723356247,
0.11320754140615463,
0,
0.11764705181121826,
0.11764705181121826,
0,
0.0952380895614624,
0.051282044500112534,
0.09756097197532654,
0.38461539149284363,
0.0952380895614624,
0.045454539358615875,
0.07017543166875839,
0.25,
0,
0,
0,
0.06896550953388214,
0.11999999731779099,
0,
0.0833333283662796,
0.12121211737394333,
0.14999999105930328,
0,
0,
0,
0.045454539358615875,
0,
0,
0,
0.04878048226237297,
0,
0.04255318641662598,
0,
0.045454539358615875,
0,
0.11428570747375488,
0,
0.04878048226237297,
0.07692307233810425,
0.0624999962747097,
0
] | rklEj2EFvB | true | [
"We derive a low-variance, unbiased gradient estimator for expectations over discrete random variables based on sampling without replacement"
] |
[
"We introduce a parameter sharing scheme, in which different layers of a convolutional neural network (CNN) are defined by a learned linear combination of parameter tensors from a global bank of templates. ",
"Restricting the number of templates yields a flexible hybridization of traditional CNNs and recurrent networks. ",
"Compared to traditional CNNs, we demonstrate substantial parameter savings on standard image classification tasks, while maintaining accuracy.\n",
"Our simple parameter sharing scheme, though defined via soft weights, in practice often yields trained networks with near strict recurrent structure; with negligible side effects, they convert into networks with actual loops.",
"Training these networks thus implicitly involves discovery of suitable recurrent architectures.",
"Though considering only the aspect of recurrent links, our trained networks achieve accuracy competitive with those built using state-of-the-art neural architecture search (NAS) procedures.\n",
"Our hybridization of recurrent and convolutional networks may also represent a beneficial architectural bias. ",
"Specifically, on synthetic tasks which are algorithmic in nature, our hybrid networks both train faster and extrapolate better to test examples outside the span of the training set.",
"The architectural details of convolutional neural networks (CNNs) have undergone rapid exploration and improvement via both human hand-design BID33 BID11 BID13 BID45 and automated search methods BID46 ).",
"Yet, this vast array of work limits itself to a circuit-like view of neural networks.",
"Here, a CNN is regarded as a fixed-depth feed-forward circuit, with a distinct parameter governing each internal connection.",
"These circuits are often trained to perform tasks which, in a prior era, might have been (less accurately) accomplished by running a traditional computer program coded by humans.",
"Programs, and even traditional hardware circuits, have a more reusable internal structure, including subroutines or modules, loops, and associated control flow mechanisms.We bring one aspect of such modularity into CNNs, by making it possible to learn a set of parameters that is reused across multiple layers at different depths.",
"As the pattern of reuse is itself learned, our scheme effectively permits learning the length (iteration count) and content of multiple loops defining the resulting CNN.",
"We view this approach as a first step towards learning neural networks with internal organization reminiscent of computer programs.",
"Though we focus solely on loop-like structures, leaving subroutines and dynamic control flow to future work, this simple change suffices to yield substantial quantitative and qualitative benefits over the standard baseline CNN models.While recurrent neural networks (RNNs) possess a loop-like structure by definition, their loop structure is fixed a priori, rather than learned as part of training.",
"This can actually be a disadvantage in the event that the length of the loop is mismatched to the target task.",
"Our parameter sharing scheme for CNNs permits a mix of loops and feed-forward layers to emerge.",
"For example, trained with our scheme, a 50-layer CNN might learn a 2-layer loop that executes 5 times between layers 10 and 20, a 3-layer loop that runs 4 times from layers 30 to 42, while leaving the remaining layers to assume independent parameter sets.",
"Our approach generalizes both CNNs and RNNs, creating a hybrid.",
"where parameter templates T (1) , T (2) are shared among each layer i, which now only contains a 2-dimensional parameter α",
"(i) .",
"Weights W",
"(i) (no longer parameters, illustrated with dotted boxes) used by layer i are generated from α",
"(i) and templates T (1) , T (2) .",
"Right: If weights W",
"(i) are outputs of a linear function (as in our method), learning parameter templates can be viewed as learning layer templates, offering a new (although equivalent) perspective for the middle diagram.",
"Non-linearities are omitted for simplicity.",
"FIG0 diagrams the parameter sharing scheme facilitating this hybridization.",
"Inspired by dictionary learning, different network layers share, via weighted combination, global parameter templates.",
"This re-parameterization is fully differentiable, allowing learning of sharing weights and template parameters.",
"Section 3 elaborates, and also introduces tools for analyzing learned loop structures.Section 4 demonstrates advantages of our hybrid CNNs across multiple experimental settings.",
"Taking a modern CNN design as a baseline, and re-parameterizing it according to our scheme improves:• Parameter efficiency.",
"Here, we experiment with the standard task of image classification using modern residual networks BID11 BID41 .",
"This task is a good proxy for general usefulness in computer vision, as high-performance classification architectures often serve as a backbone for many other vision tasks, such as semantic segmentation BID1 BID44 .",
"Our parameter sharing scheme drastically reduces the number of unique parameters required to achieve a given accuracy on CIFAR BID20 ) or ImageNet (Russakovsky et al., 2015 classification tasks.",
"Re-parameterizing a standard residual network with our scheme cuts parameters, without triggering any drop in accuracy.",
"This suggests that standard CNNs may be overparameterized in part because, by design (and unlike RNNs), they lack capacity to learn reusable internal operations.•",
"Extrapolation and generalization. Here",
", we explore whether our hybrid networks expand the class of tasks that one can expect to train neural networks to accomplish. This",
"line of inquiry, focusing on synthetic tasks, shares motivations with work on Neural Turing Machines BID5 . Specifically",
", we would like neural networks to be capable of learning to perform tasks for which there are concise traditional solution algorithms. BID5 uses sorting",
"as an example task. As we examine an",
"extension of CNNs, our tasks take the form of queries about planar graphs encoded as image input. On these tasks,",
"we observe improvements to both generalization ability and learning speed for our hybrid CNNs, in comparison to standard CNNs or RNNs. Our parameter sharing",
"scheme, by virtue of providing an architectural bias towards networks with loops, appears to assist in learning to emulate traditional algorithms.An additional side effect, seen in practice in many of our experiments, is that two different learned layers often snap to the same parameter values. That is, layers i and",
"j, learn coefficient vectors α DISPLAYFORM0 and α (j) (see FIG0 ) that",
"converge to be the same (up to scaling). This is a form of architecture",
"discovery, as it permits representation of the CNN as a loopy wiring diagram between repeated layers. Section 4.3 presents example results",
". We also draw comparisons to existing",
"neural architec-ture search (NAS) techniques. By simply learning recurrent structure",
"as byproduct of training with standard stochastic gradient descent, we achieve accuracy competitive with current NAS procedures.Before delving into the details of our method, Section 2 provides additional context in terms of prior work on recurrent models, parameter reduction techniques, and program emulation. Sections 3 and 4 describe our hybrid shared-parameter",
"CNN, experimental setup, and results. Section 5 concludes with commentary on our results and",
"possible future research pathways. 1 2 RELATED WORK Recurrent variants of CNNs are used extensively",
"for visual tasks. Recently, BID42 propose utilizing a convolutional LSTM BID32 as",
"a generic feedback architecture. RNN and CNN combinations have been used for scene labeling BID26",
", image captioning with attention BID39 , and understanding video BID3 , among others. These works combine CNNs and RNNs at a coarse scale, and in a fixed",
"hand-crafted manner. In contrast, we learn the recurrence structure itself, blending it",
"into the inner workings of a CNN.Analysis of residual networks BID11 reveals possible connections to recurrent networks stemming from their design BID21 . BID7 provide evidence that residual networks learn to iteratively",
"refine feature representations, making an analogy between a very deep residual network and an unrolled loop. BID17 further explore this connection, and experiment with training",
"residual networks in which some layers are forced to share identical parameters. This hard parameter sharing scheme again builds a predetermined recurrence",
"structure into the network. It yields successfully trained networks, but does not exhibit the type of",
"performance gains that Section 4 demonstrates for our soft parameter sharing scheme.Closely related to our approach is the idea of hypernetworks BID9 , in which one part of a neural network is parameterized by another neural network. Our shared template-based reparameterization could be viewed as one simple",
"choice of hypernetwork implementation. Perhaps surprisingly, this class of ideas has not been well explored for the",
"purpose of reducing the size of neural networks. Rather, prior work has achieved parameter reduction through explicit representation",
"bottlenecks BID15 , sparsifying connection structure BID27 BID45 , and pruning trained networks .Orthogonal to the question of efficiency, there is substantial interest in extending",
"neural networks to tackle new kinds of tasks, including emulation of computer programs. Some approach this problem using additional supervision in the form of execution traces",
"(Reed & de Freitas, 2016; BID0 , while other focus on development of network architectures that can learn from input-output pairs alone BID5 BID29 BID43 BID36 . Our experiments on synthetic tasks fall into the latter camp. At the level of architectural",
"strategy, BID36 benefit from changing the form of activation",
"function to bias the network towards correctly extrapolating common mathematical formulae. We build in a different implicit bias, towards learning iterative procedures within a CNN,",
"and obtain a boost on correctly emulating programs.",
"In this work, we take a step toward more modular and compact CNNs by extracting recurrences from feed-forward models where parameters are shared among layers.",
"Experimentally, parameter sharing yields models with lower error on CIFAR and ImageNet, and can be used for parameter reduction by training in a regime with fewer parameter templates than layers.",
"Moreover, we observe that parameter sharing often leads to different layers being functionally equivalent after training, enabling us to collapse them into recurrent blocks.",
"Results on an algorithmic task suggest that our shared parameter structure beneficially biases extrapolation.",
"We gain a more flexible form of behavior typically attributed to RNNs, as our networks adapt better to out-of-domain examples.",
"Our form of architecture discovery is also competitive with neural architecture search (NAS) algorithms, while having a smaller training cost than state-of-the-art gradient-based NAS.As the only requirement for our method is for a network to have groups of layers with matching parameter sizes, it can be applied to a plethora of CNN model families, making it a general technique with negligible computational cost.",
"We hope to raise questions regarding the rigid definitions of CNNs and RNNs, and increase interest in models that fall between these definitions.",
"Adapting our method for models with non-uniform layer parameter sizes BID13 BID45 might be of particular future interest.A ADDITIONAL RESULTS FOR IMPLICIT RECURRENCES Section 4.3 presents an example of implicit recurrences and folding of a SWRN 28-10-4 trained on CIFAR-10, where, for example, the last 6 layers in the second stage of the network fold into 2 layers with a self-loop.",
"Figure 6 presents an additional example, where non-trivial recurrences (unlike the one in Figure 4 ) emerge naturally, resulting in a model that is rich in structure.",
"− 2 = 10 layers) trained with soft parameter sharing on CIFAR-10.",
"Each stage (originally with 12 layers -the first two do not participate in parameter sharing) can be folded to yield blocks with complex recurrences.",
"For clarity, we use colors to indicate the computational flow: red takes precedence over green, which in turn has precedence over blue.",
"Colored paths are only taken once per stage.",
"Although not trivial to see, recurrences in each stage's folded form are determined by row/column repetitions in the respective Layer Similarity Matrix.",
"For example, for stage 2 we have S5,3 ≈ S6,4 ≈ 1, meaning that layers 3, 4, 5 and 6 can be folded into layers 3 and 4 with a loop (captured by the red edge).",
"The same holds for S7,1, S8,2, S9,3 and S10,4, hence after the loop with layers 3 and 4, the flow returns to layer 1 and goes all the way to layer 4, which generates the stage's output.",
"Even though there is an approximation when folding the network (in this example, we are tying layers with similarity close to 0.8), the impact on the test error is less than 0.3%.",
"Also note that the folded model has a total of 24 layers (20 in the stage diagrams, plus 4 which are not shown, corresponding to the first layer of the network and three 1 × 1 convolutions in skip-connections), instead of the original 40.",
"Figure 7: LSMs of a SWRN 40-8-8 (composed of 3 stages, each with 10 layers sharing 8 templates) trained on CIFAR-10 for 5 runs with different random seeds.",
"Although the LSMs differ across different runs, hard parameter sharing can be observed in all cases (off-diagonal elements close to 1, depicted by white), characterizing implicit recurrences which would enable network folding.",
"Moreover, the underlying structure is similar across runs, with hard sharing typically happening among layers i and i + 2, leading to a \"chessboard\" pattern."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.10256409645080566,
0.14814814925193787,
0.06666666269302368,
0.04878048226237297,
0.08695651590824127,
0.05405404791235924,
0.14814814925193787,
0.05128204822540283,
0,
0.1538461446762085,
0.1428571343421936,
0.10526315122842789,
0.1355932205915451,
0.05714285373687744,
0.12903225421905518,
0.1230769231915474,
0.19999998807907104,
0.1428571343421936,
0.16326530277729034,
0.09090908616781235,
0.0624999962747097,
0,
0,
0,
0.04878048226237297,
0,
0,
0,
0,
0,
0.20689654350280762,
0,
0.04999999701976776,
0.0952380895614624,
0.0714285671710968,
0.10810810327529907,
0,
0.12121211737394333,
0,
0.0555555522441864,
0,
0,
0.05714285373687744,
0.07017543166875839,
0.0833333283662796,
0.1599999964237213,
0.12121211737394333,
0.2222222238779068,
0.09090908616781235,
0.03389830142259598,
0,
0,
0.1666666567325592,
0.14814814925193787,
0.05714285373687744,
0,
0.307692289352417,
0.05714285373687744,
0.11764705181121826,
0,
0.1111111044883728,
0,
0,
0.0555555522441864,
0.0555555522441864,
0.03999999538064003,
0,
0.17142856121063232,
0.09999999403953552,
0.05405404791235924,
0.052631575614213943,
0.17142856121063232,
0.07692307233810425,
0.19354838132858276,
0.1269841194152832,
0.1818181723356247,
0.09090908616781235,
0.1111111044883728,
0,
0.05714285373687744,
0.0624999962747097,
0,
0.060606054961681366,
0.08888888359069824,
0.04878048226237297,
0.0952380895614624,
0.1249999925494194,
0.052631575614213943,
0.09090908616781235,
0.1111111044883728
] | rJgYxn09Fm | true | [
"We propose a method that enables CNN folding to create recurrent connections"
] |
[
"Gradient clipping is a widely-used technique in the training of deep networks, and is generally motivated from an optimisation lens: informally, it controls the dynamics of iterates, thus enhancing the rate of convergence to a local minimum.",
"This intuition has been made precise in a line of recent works, which show that suitable clipping can yield significantly faster convergence than vanilla gradient descent.",
"In this paper, we propose a new lens for studying gradient clipping, namely, robustness: informally, one expects clipping to provide robustness to noise, since one does not overly trust any single sample.",
"Surprisingly, we prove that for the common problem of label noise in classification, standard gradient clipping does not in general provide robustness.",
"On the other hand, we show that a simple variant of gradient clipping is provably robust, and corresponds to suitably modifying the underlying loss function.",
"This yields a simple, noise-robust alternative to the standard cross-entropy loss which performs well empirically.",
"We established that gradient clipping by itself does not suffice to endow even simple models with label noise robustness; however, a simple variant resolves this issue.",
"Experiments confirm that our composite loss-based gradient clipping performs well on datasets corrupted with label noise.",
"One interesting direction for future work is to analyse the behaviour of gradient-clipping inspired losses for the more general problem of distributionally robust learning (Shafieezadeh-Abadeh et al., 2015; Namkoong & Duchi, 2016; Sinha et al., 2018; Duchi & Namkoong, 2019) ."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.17777776718139648,
0.09999999403953552,
0.27272728085517883,
0.22857142984867096,
0.2631579041481018,
0.13793103396892548,
0.41025641560554504,
0.19999998807907104,
0.04081632196903229
] | rklB76EKPr | true | [
"Gradient clipping doesn't endow robustness to label noise, but a simple loss-based variant does."
] |
[
" Among deep generative models, flow-based models, simply referred as \\emph{flow}s in this paper, differ from other models in that they provide tractable likelihood.",
"Besides being an evaluation metric of synthesized data, flows are supposed to be robust against out-of-distribution~(OoD) inputs since they do not discard any information of the inputs.",
"However, it has been observed that flows trained on FashionMNIST assign higher likelihoods to OoD samples from MNIST.",
"This counter-intuitive observation raises the concern about the robustness of flows' likelihood.",
"In this paper, we explore the correlation between flows' likelihood and image semantics.",
"We choose two typical flows as the target models: Glow, based on coupling transformations, and pixelCNN, based on autoregressive transformations.",
"Our experiments reveal surprisingly weak correlation between flows' likelihoods and image semantics: the predictive likelihoods of flows can be heavily affected by trivial transformations that keep the image semantics unchanged, which we call semantic-invariant transformations~(SITs).",
"We explore three SITs~(all small pixel-level modifications): image pixel translation, random noise perturbation, latent factors zeroing~(limited to flows using multi-scale architecture, e.g. Glow).",
"These findings, though counter-intuitive, resonate with the fact that the predictive likelihood of a flow is the joint probability of all the image pixels.",
"So flows' likelihoods, modeling on pixel-level intensities, is not able to indicate the existence likelihood of the high-level image semantics.",
"We call for attention that it may be \\emph{abuse} if we use the predictive likelihoods of flows for OoD samples detection.",
"Deep generative models have been very successful in image generation (Brock et al., 2018; Kingma & Dhariwal, 2018; Miyato et al., 2018) , natural language generation (Bowman et al., 2015; Yu et al., 2017) , audio synthesis and so on.",
"Among them, generative adversarial networks (GANs) are implicit generative models (Goodfellow et al., 2014) that explicit likelihood function is not required, and are trained by playing a minimax game between the discriminator and the generator; Variational auto-encoders (VAEs, Kingma & Welling (2013) ; Rezende et al. (2014) ) are latent variable generative models optimized by maximizing a lower bound, called evidence lower bound, of the data log-likelihood.",
"Flow-based models (Dinh et al., 2016; differ from them in that they provide exact log-likelihood evaluation with change of variables theorem (Rezende & Mohamed, 2015) .",
"A flow usually starts with a simple base probability distribution, e.g. diagonal Gaussian, then follows a chain of transformations in order to approximate complex distributions.",
"Each transformation is parameterized by specially designed neural networks so that the log-determinant of its Jacobian can be efficiently computed.",
"Most of the previous works focus on how to design more flexible transformations to achieve tighter log-likelihoods, and generate more realistic samples.",
"It is also believed that flows can be used to detect out-of-distribution(OoD) samples by assigning low likelihoods on them.",
"However, it has been observed that flows fail to do so.",
"For example, flows trained on FashionMNIST surprisingly assign higher likelihoods on MNIST samples (Nalisnick et al., 2018; Choi & Jang, 2018) .",
"Though analyses on pixel-level statistics are performed on this phenomenon (Nalisnick et al., 2018) , and density evaluation combined with uncertainty estimation is used to detect OoD samples (Choi & Jang, 2018) , the reasons behind flows' counter-intuitive behaviours are still not clear.",
"Humans easily discriminate MNIST images from FashionMNIST images, since their high-level image semantics are perceptually different.",
"Accordingly, it takes some metrics that can reflect the high-level image semantics for OoD detection.",
"In this paper, we empirically explore the correlation between flows' likelihoods and image semantics, and question the rationality and applicability of using predictive likelihoods of flows for OoD detection.",
"We first introduce a concept of semanticinvariant transformation (SIT).",
"An SIT transforms an input without changing its high-level semantics, e.g. a dog image through an SIT is still supposed to be recognized as a dog.",
"We choose two typical flow-based models as target models: Glow (Kingma & Dhariwal, 2018) , based on coupling transformations, and pixelCNN , based on autoregressive transformations.",
"We evaluate on image datasets MNIST and FashionMNIST under three trivial SITs: image translation, random noise perturbation, and latent factors zeroing (specific to invertible flows using multi-scale architectures, e.g. Glow).",
"We demonstrate that the predictive likelihoods of the target models show weak correlation to the image semantics in the following ways:",
"• Small pixel translations of test images could result in obvious likelihood decreases of Glow.",
"• Perturbing small random noises, unnoticeable to humans, to test images could lead to catastrophic likelihood decreases of target models.",
"This also applies even if we keep the semantic object of a test image intact, and only add noises to the background.",
"• For an invertible flow using multi-scale architecture, e.g. Glow, the inferred latent variables of an image is a list of gaussianized and standardized factors.",
"We find that the contributions of a flow's blocks to the log-likelihood are constant and independent of inputs.",
"Thus, simply zeroing the preceding latent factors of a sample image, and feed them to flow's reverse function.",
"We could obtain new samples with surprisingly higher likelihoods, yet with perceptually unnoticeable changes from the original image.",
"We emphasize that all these SITs are small pixel-level modifications on test images, and undoubtedly have no influences on humans' recognition of the semantic objects in the images.",
"However, they lead to obvious inconsistency of flows' likelihoods on test samples.",
"Considering that the predictive likelihood of a flow is the joint probability of all the image pixels, it may not convincingly indicate the existence of a semantic object in an image.",
"Thus it could be problematic to use flows for downstream tasks which require metrics that can reflect image semantics, e.g. OoD detection.",
"What is the problem of likelihood-based generative models?",
"Discriminative classifiers, trained to extract class-relevant features, are known to be vulnerable to adversarial examples, and give over-confident predictions even for OoD samples.",
"Generative models are supposed to be more robust since they model every pixel information of an image.",
"However, likelihood modeling in high-dimensional space can be hard and lead to counter-intuitive observations.",
"It was observed that likelihood-based generative models can assign even higher likelihoods on OoD samples (Nalisnick et al., 2018; Choi & Jang, 2018) .",
"Nalisnick et al. (2018) observe this phenomenon on both flows and VAEs.",
"They decompose the change-of-variable theorem and investigate the influences of different transformation layers, find that the phenomenon still exists regardless of whether the transformation is volume-preserving or not.",
"Their second-order analysis on pixel statistics suggests that OoD datasets, e.g. MNIST, just sit inside of in-distribution datasets, e.g. FashinMNIST, with roughly the same mean, smaller variance.",
"They suspect that flows may simply fit the pixel intensities without really capture the high-level semantics.",
"Ren (2019) find that the likelihood of an image is mostly dominated by the irrelevant background pixels, and propose a remedy to correct the original likelihood with a likelihood ratio.",
"Though significantly improves the accuracy of OoD detection, but still fail to answer the question: whether the likelihood ratio shows high correlation to high-level semantics.",
"This paper differs from previous works and step further to explore the correlations between the likelihood of flow-based generative models and image semantics.",
"Theoretical analyses in (Theis et al., 2015; van den Oord & Dambre, 2015) point out an important argument that generative models' ability to produce plausible samples is neither sufficient nor necessary for high likelihood.",
"Results in this paper provide more experimental evidences for this simple argument that even for powerful exact likelihood-based generative models-flows, the likelihoods of samples can be largely weakly correlated to the high-level image semantics.",
"Thus, special attention should be paid to this argument before we apply likelihood-based generative models to downstream tasks.",
"For example, considering the weak correlation between flows' likelihoods and image semantics, it may be inappropriate to use them for OoD samples detection.",
"On the other hand, these counter-intuitive behaviours of flows raise our awareness of the gap between the predictive likelihoods of flows and the expectation that these likelihoods can closely relate to the semantics for OoD detection.",
"What is exactly the likelihood of a image?",
"We should keep in mind that the predictive likelihood of a flow is the joint probability of all the image pixels.",
"There is no doubt that flows, trained by maximizing its likelihood, could generate impressive synthesized data.",
"There seem to be no problem that in terms of image generation, we expect that every single generated pixel in a image is the most likely one (hinging on its contextual pixels).",
"However, the likelihood is explicitly modeled on pixels, so can be easily influenced by pixel-level modifications.",
"Images' likelihoods significantly decrease even small noises are added to the pixels of backgrounds.",
"For downstream tasks that need some \"likelihood\" to indicate the object in an image is a cat, rather than a car, the pixels of backgrounds are almost irrelevant.",
"This drive us to think that we may need to model likelihood in some kind of semantic space or with some \"perceptual\" metrics, rather than on raw pixels.",
"One promising direction is to define likelihood of an images on its high-level representation, and successful examples are (Lee, 2018; Nilesh A. Ahuja, 2019) ."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.052631575614213943,
0.06451612710952759,
0.25,
0.5384615063667297,
0.12903225421905518,
0.4000000059604645,
0.05405404791235924,
0.12121211737394333,
0.25,
0.12121211737394333,
0.09090908616781235,
0.08955223858356476,
0,
0,
0.060606054961681366,
0.12121211737394333,
0.0624999962747097,
0,
0.05882352590560913,
0.11538460850715637,
0.13793103396892548,
0.2142857164144516,
0.37837836146354675,
0,
0.0555555522441864,
0.0555555522441864,
0.0952380895614624,
0.4516128897666931,
0,
0,
0.1764705777168274,
0.1621621549129486,
0.13793103396892548,
0.12903225421905518,
0.13333332538604736,
0.10256409645080566,
0.1599999964237213,
0.10810810327529907,
0.0555555522441864,
0.0952380895614624,
0.05882352590560913,
0.06666666269302368,
0.07407406717538834,
0.05405404791235924,
0.07999999821186066,
0.1111111044883728,
0.05128204822540283,
0.1428571343421936,
0.15789473056793213,
0.17142856121063232,
0.29411762952804565,
0,
0.27272728085517883,
0,
0.4444444477558136,
0.25,
0.0952380895614624,
0.12903225421905518,
0,
0.0952380895614624,
0.06896550953388214,
0.14814814925193787,
0.10256409645080566,
0,
0.05405404791235924
] | rkgIllBtwB | true | [
"show experimental evidences about the weak correlation between flows' likelihoods and image semantics."
] |
[
"This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system.",
"Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier.",
"Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, in particular, when the network is trained on transformed images.",
"The strength of those defenses lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent the defenses.",
"Our best defense eliminates 60% of strong gray-box and 90% of strong black-box attacks by a variety of major attack methods.",
"As the use of machine intelligence increases in security-sensitive applications BID2 BID0 , robustness has become a critical feature to guarantee the reliability of deployed machine-learning systems.",
"Unfortunately, recent research has demonstrated that existing models are not robust to small, adversarially designed perturbations of the input BID1 BID31 BID14 BID20 BID6 .",
"Adversarially perturbed examples have been deployed to attack image classification services BID22 , speech recognition systems BID6 , and robot vision BID25 .",
"The existence of these adversarial examples has motivated proposals for approaches that increase the robustness of learning systems to such examples BID28 BID20 BID7 .The",
"robustness of machine learning models to adversarial examples depends both on the properties of the model (i.e., Lipschitzness) and on the nature of the problem considered, e.g., on the input dimensionality and the Bayes error of the problem BID11 . Consequently",
", defenses that aim to increase robustness against adversarial examples fall in one of two main categories. The first category",
"comprises model-specific strategies that enforce model properties such as invariance and smoothness via the learning algorithm or regularization scheme BID30 BID20 BID7 , potentially exploiting knowledge about the adversary's attack strategy BID14 . The second category",
"of defenses are model-agnostic: they try to remove adversarial perturbations from the input. For example, in the",
"context of image classification, adversarial perturbations can be partly removed via JPEG compression BID9 or image re-scaling BID23 . Hitherto, none of these",
"defenses has been shown to be very effective. Specifically, model-agnostic",
"defenses appear too simple to sufficiently remove adversarial perturbations from input images. By contrast, model-specific",
"defenses make strong assumptions about the nature of the adversary (e.g., on the norm that the adversary minimizes or on the number of iterations it uses to generate the perturbation). Consequently, they do not satisfy",
"BID18 principle: the adversary can alter its attack to circumvent such model-specific defenses.In this paper, we focus on increasing the effectiveness of model-agnostic defense strategies by developing approaches that (1) remove the adversarial perturbations from input images, (2) maintain sufficient information in input images to correctly classify them, and (3) are still effective in settings in which the adversary has information on the defense strategy being used. We explore transformations based",
"on image cropping and rescaling BID15 , bit-depth reduction ), JPEG compression (Dziugaite et al., 2016 , total variance minimization BID29 , and image quilting BID10 . We show that these defenses can",
"be surprisingly effective against existing attacks, in particular, when the convolutional network is trained on images that are transformed in a similar way. The image transformations are good",
"at countering the (iterative) fast gradient sign method BID20 ), Deepfool (Moosavi-Dezfooli et al., 2016 , and the BID5 attack, even in gray-box settings in which the model architecture and parameters are public. Our strongest defenses are based on",
"total variation minimization and image quilting: these defenses are non-differentiable and inherently random, which makes it difficult for an adversary to get around them. Our best defenses eliminate 60% of",
"gray-box attacks and 90% of black-box attacks by four major attack methods that perturb pixel values by 8% on average.",
"The results from this study suggest there exists a range of image transformations that have the potential to remove adversarial perturbations while preserving the visual content of the image: one merely has to train the convolutional network on images that were transformed in the same way.",
"A critical property that governs which image transformations are most effective in practice is whether Table 2 : Top-1 classification accuracy on images perturbed using attacks against ResNet-50 models trained on input-transformed images, and an Inception-v4 model trained using ensemble adversarial.",
"Adversarial images are generated by running attacks against the models, aiming for an average normalized L 2 -dissimilarity of 0.06.",
"The best defense against each attack is typeset in boldface.an adversary can incorporate the transformation in its attack.",
"For instance, median filtering likely is a weak remedy because one can backpropagate through the median filter, which is sufficient to perform any of the attacks described in Section 3.",
"A strong input-transformation defense should, therefore, be non-differentiable and randomized, a strategy has been previously shown to be effective BID35 b) .",
"Two of our top defenses possess both properties:1.",
"Both total variation minimization and image quilting are difficult to differentiate through.",
"Specifically, total variation minimization involves solving a complex minimization of a function that is inherently random.",
"Image quilting involves a discrete variable that selects the patch from the database, which is a non-differentiable operation, and the graph-cut optimization complicates the use of differentiable approximations BID24 .2.",
"Both total variation minimization and image quilting give rise to randomized defenses.",
"Total variation minimization randomly selects the pixels it uses to measure reconstruction error on when creating the denoised image.",
"Image quilting randomly selects one of the K nearest neighbors uniformly at random.",
"The inherent randomness of our defenses makes it difficult to attack the model: it implies the adversary has to find a perturbation that alters the prediction for the entire distribution of images that could be used as input, which is harder than perturbing a single image BID27 .Our",
"results with gray-box attacks suggest that randomness is particularly important in developing strong defenses. Therefore",
", we surmise that total variation minimization, image quilting, and related methods BID8 are stronger defenses than deterministic denoising procedures such as bit-depth reduction, JPEG compression, or non-local means BID4 . Defenses",
"based on total variation minimization and image quilting also have an advantage over adversarial-training approaches BID20 : an adversarially trained network is differentiable, which implies that it can be attacked using the methods in Section 3. An additional",
"disadvantage of adversarial training is that it focuses on a particular attack; by contrast, transformation-based defenses generalize well across attack methods because they are model-agnostic.While our study focuses exclusively on image classification, we expect similar defenses to be useful in other domains for which successful attacks have been developed, such as semantic segmentation and speech recognition BID6 BID38 . In speech recognition",
", for example, total variance minimization can be used to remove perturbations from waveforms, and one could develop \"spectrogram quilting\" techniques that reconstruct a spectrogram by concatenating \"spectrogram patches\" along the temporal dimension. We leave such extensions",
"to future work. In future work, we also",
"intend to study combinations of our input-transformation defenses with ensemble adversarial training BID34 , and we intend to investigate new attack methods that are specifically designed to circumvent our input-transformation defenses."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.04999999329447746,
0.09090908616781235,
0.04444443807005882,
0.0476190410554409,
0.4324324131011963,
0.045454539358615875,
0,
0.14999999105930328,
0.09756097197532654,
0.1249999925494194,
0.1538461446762085,
0.11320754140615463,
0.05714285373687744,
0.051282044500112534,
0.06896550953388214,
0.05882352590560913,
0,
0.17721518874168396,
0.08510638028383255,
0.08888888359069824,
0.037735845893621445,
0.08510638028383255,
0.2702702581882477,
0.06896550953388214,
0.14035087823867798,
0.04999999329447746,
0.1666666567325592,
0.04347825422883034,
0.20512819290161133,
0,
0.06451612710952759,
0.060606054961681366,
0.08888888359069824,
0.06451612710952759,
0,
0,
0.06779660284519196,
0,
0.03999999538064003,
0.0357142798602581,
0.1315789371728897,
0.1090909019112587,
0,
0.13636362552642822
] | SyJ7ClWCb | true | [
"We apply a model-agnostic defense strategy against adversarial examples and achieve 60% white-box accuracy and 90% black-box accuracy against major attack algorithms."
] |
[
"\tIn this paper, we propose the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD).",
"We prove A2BCD converges linearly to a solution of the convex minimization problem at the same rate as NU_ACDM, so long as the maximum delay is not too large.",
"This is the first asynchronous Nesterov-accelerated algorithm that attains any provable speedup.",
"Moreover, we then prove that these algorithms both have optimal complexity.",
"Asynchronous algorithms complete much faster iterations, and A2BCD has optimal complexity.",
"Hence we observe in experiments that A2BCD is the top-performing coordinate descent algorithm, converging up to 4-5x faster than NU_ACDM on some data sets in terms of wall-clock time.",
"To motivate our theory and proof techniques, we also derive and analyze a continuous-time analog of our algorithm and prove it converges at the same rate.",
"In this paper, we propose and prove the convergence of the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD), the first asynchronous Nesterovaccelerated algorithm that achieves optimal complexity.",
"No previous attempts have been able to prove a speedup for asynchronous Nesterov acceleration.",
"We aim to find the minimizer x * of the unconstrained minimization problem: DISPLAYFORM0 f (x) = f x (1) , . . . , x (n) ( FORMULA52 where f is σ-strongly convex for σ > 0 with L-Lipschitz gradient ∇f = (∇ 1 f, . . . , ∇ n f ).",
"x ∈ R d is composed of coordinate blocks x (1) , . . . , x (n) .",
"The coordinate blocks of the gradient ∇ i f are assumed L i -Lipschitz with respect to the ith block.",
"That is, ∀x, h ∈ R d : DISPLAYFORM1 where P i is the projection onto the ith block of R d .",
"LetL 1 n n i=1 L i be the average block Lipschitz constant.",
"These conditions on f are assumed throughout this whole paper.",
"Our algorithm can also be applied to non-strongly convex objectives (σ = 0) or non-smooth objectives using the black box reduction techniques proposed in BID1 .",
"Hence we consider only the coordinate smooth, strongly-convex case.",
"Our algorithm can also be applied to the convex regularized ERM problem via the standard dual transformation (see for instance Lin et al. (2014) ): DISPLAYFORM2 Hence A2BCD can be used as an asynchronous Nesterov-accelerated finite-sum algorithm.Coordinate descent methods, in which a chosen coordinate block i k is updated at every iteration, are a popular way to solve equation 1.1.",
"Randomized block coordinate descent (RBCD, Nesterov (2012) ) updates a uniformly randomly chosen coordinate block i k with a gradient-descent-like step: DISPLAYFORM3 The complexity K( ) of an algorithm is defined as the number of iterations required to decrease the error E(f (x k ) − f (x * )) to less than (f (x 0 ) − f (x * )).",
"Randomized coordinate descent has a complexity of K( ) = O(n(L/σ) ln(1/ )).Using",
"a series of averaging and extrapolation steps, accelerated RBCD Nesterov (2012) improves RBCD's iteration complexity K( ) to O(n L /σ ln(1/ )), which leads to much faster convergence whenL σ is large. This",
"rate is optimal when all L i are equal Lan & Zhou (2015) . Finally",
", using a special probability distribution for the random block index i k , the non-uniform accelerated coordinate descent method BID2 (NU_ACDM) can further decrease the complexity to O( DISPLAYFORM4 L i /σ ln(1/ )), which can be up to √ n times faster than accelerated RBCD, since some L i can be significantly smaller than L. NU_ACDM is the current state-of-the-art coordinate descent algorithm for solving equation 1.1.Our A2BCD algorithm generalizes NU_ACDM to the asynchronous-parallel case. We solve",
"equation 1.1 with a collection of p computing nodes that continually read a shared-access solution vector y into local memory then compute a block gradient ∇ i f , which is used to update shared solution vectors (x, y, v) . Proving",
"convergence in the asynchronous case requires extensive new technical machinery.A traditional synchronous-parallel implementation is organized into rounds of computation: Every computing node must complete an update in order for the next iteration to begin. However",
", this synchronization process can be extremely costly, since the lateness of a single node can halt the entire system. This becomes",
"increasingly problematic with scale, as differences in node computing speeds, load balancing, random network delays, and bandwidth constraints mean that a synchronous-parallel solver may spend more time waiting than computing a solution.Computing nodes in an asynchronous solver do not wait for others to complete and share their updates before starting the next iteration. They simply",
"continue to update the solution vectors with the most recent information available, without any central coordination. This eliminates",
"costly idle time, meaning that asynchronous algorithms can be much faster than traditional ones, since they have much faster iterations. For instance, random",
"network delays cause asynchronous algorithms to complete iterations Ω(ln(p)) time faster than synchronous algorithms at scale. This and other factors",
"that influence the speed of iterations are discussed in Hannah & Yin (2017a) . However, since many iterations",
"may occur between the time that a node reads the solution vector, and the time that its computed update is applied, effectively the solution vector is being updated with outdated information. At iteration k, the block gradient",
"∇ i k f is computed at a delayed iterateŷ k defined as 1 : DISPLAYFORM5 for delay parameters j(k, 1), . . . , j(k, n) ∈ N. Here j(k, i) denotes how many iterations out of date coordinate block i is at iteration k. Different blocks may be out of date",
"by different amounts, which is known as an inconsistent read. We assume 2 that j(k, i) ≤ τ for some",
"constant τ < ∞.Asynchronous algorithms were proposed",
"in Chazan & Miranker (1969) to solve linear systems.General convergence results and theory were developed later in BID5 ; Bertsekas & Tsitsiklis (1997); Tseng et al. (1990); Luo & Tseng (1992; 1993); Tseng (1991) There is also a rich body of work on asynchronous SGD. In the distributed setting, Zhou et al.",
"(2018) showed global convergence for stochastic variationally coherent problems even when the delays grow at a polynomial rate. In Lian et al. (2018) , an asynchronous",
"decentralized SGD was proposed with the same optimal sublinear convergence rate as SGD and linear speedup with respect to the number of workers. In Liu et al. (2018) , authors obtained",
"an asymptotic rate of convergence for asynchronous momentum SGD on streaming PCA, which provides insight into the tradeoff between asynchrony and momentum. In Dutta et al. (2018) , authors prove",
"convergence results for asynchronous SGD that highlight the tradeoff between faster iterations and iteration complexity. Further related work is discussed in Section",
"4."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.13333332538604736,
0.24390242993831635,
0.4444444477558136,
0.1538461446762085,
0,
0.1395348757505417,
0.31578946113586426,
0.3414634168148041,
0.27586206793785095,
0.1090909019112587,
0.0714285671710968,
0.12121211737394333,
0.11764705181121826,
0.07407406717538834,
0,
0.10256409645080566,
0.0833333283662796,
0.1428571343421936,
0.1666666567325592,
0.13793103396892548,
0.1666666567325592,
0,
0.13513512909412384,
0.11320754140615463,
0.20408162474632263,
0.17142856121063232,
0.1515151411294937,
0.0624999962747097,
0.1111111044883728,
0.05882352590560913,
0.19354838132858276,
0.13636362552642822,
0.0714285671710968,
0.17142856121063232,
0,
0.16393442451953888,
0.25,
0.1860465109348297,
0.27272728085517883,
0.21621620655059814
] | rylIAsCqYm | true | [
"We prove the first-ever convergence proof of an asynchronous accelerated algorithm that attains a speedup."
] |
[
"A framework for efficient Bayesian inference in probabilistic programs is introduced by embedding a sampler inside a variational posterior approximation.",
"Its strength lies in both ease of implementation and automatically tuning sampler parameters to speed up mixing time.",
"Several strategies to approximate the evidence lower bound (ELBO) computation are introduced, including a rewriting of the ELBO objective.",
"Experimental evidence is shown by performing experiments on an unconditional VAE on density estimation tasks; solving an influence diagram in a high-dimensional space with a conditional variational autoencoder (cVAE) as a deep Bayes classifier; and state-space models for time-series data.",
"We consider a probabilistic program (PP) to define a distribution p(x, z), where x are observations and z, both latent variables and parameters, and ask queries involving the posterior p(z|x).",
"This distribution is typically intractable but, conveniently, probabilistic programming languages (PPLs) provide inference engines to approximate it using Monte Carlo methods (e.g. particle Markov Chain Monte Carlo (MCMC) (Andrieu et al., 2010) or Hamiltonian Monte Carlo (HMC) (Neal et al., 2011) ) or variational approximations (e.g. Automatic Differentiation Variational Inference (ADVI) (Kucukelbir et al., 2017) ).",
"Whereas the latter are biased and underestimate uncertainty, the former may be exceedingly slow depending on the target distribution.",
"For such reason, over the recent years, there has been an increasing interest in developing more efficient posterior approximations (Nalisnick et al., 2016; Salimans et al., 2015; Tran et al., 2015) .",
"It is known that the performance of a sampling method depends on the parameters used (Papaspiliopoulos et al., 2007) .",
"Here we propose a framework to automatically adapt the posterior shape and tune the parameters of a posterior sampler with the aim of boosting Bayesian inference in PPs.",
"Our framework constitutes a principled way to enhance the flexibility of the variational posterior approximation, yet can be seen also as a procedure to tune the parameters of an MCMC sampler.",
"Our contributions are a new flexible and unbiased variational approximation to the posterior, which improves an initial variational approximation with a (learnable via automatic differentiation) stochastic process.",
"Appendix A discusses related work."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.29629629850387573,
0,
0.07692307233810425,
0.09090908616781235,
0.11428570747375488,
0.036363635212183,
0,
0,
0.07407406717538834,
0.06451612710952759,
0.11764705181121826,
0.1875,
0
] | Hkglty2EKH | true | [
"We embed SG-MCMC samplers inside a variational approximation"
] |
[
"The point estimates of ReLU classification networks, arguably the most widely used neural network architecture, have recently been shown to have arbitrarily high confidence far away from the training data.",
"This architecture is thus not robust, e.g., against out-of-distribution data.",
"Approximate Bayesian posteriors on the weight space have been empirically demonstrated to improve predictive uncertainty in deep learning.",
"The theoretical analysis of such Bayesian approximations is limited, including for ReLU classification networks.",
"We present an analysis of approximate Gaussian posterior distributions on the weights of ReLU networks.",
"We show that even a simplistic (thus cheap), non-Bayesian Gaussian distribution fixes the asymptotic overconfidence issue.",
"Furthermore, when a Bayesian method, even if a simple one, is employed to obtain the Gaussian, the confidence becomes better calibrated.",
"This theoretical result motivates a range of Laplace approximations along a fidelity-cost trade-off.",
"We validate these findings empirically via experiments using common deep ReLU networks.",
"As neural networks have been successfully applied in ever more domains, including safety-critical ones, the robustness of their predictions and the calibration of their predictive uncertainty have moved into focus, subsumed under the notion of AI safety (Amodei et al., 2016) .",
"A principal goal of uncertainty calibration is that learning machines (and neural networks in particular) should assign low confidence to test cases not explained well by the training data or prior information (Gal, 2016) .",
"The most obvious such instance are test points that lie \"far away\" from the training data.",
"Many methods to achieve this goal have been proposed, both Bayesian (Gal & Ghahramani, 2016; Blundell et al., 2015; Louizos & Welling, 2017) and non-Bayesian (Lakshminarayanan et al., 2017; Liang et al., 2018; Hein et al., 2019) .",
"ReLU networks are currently among the most widely used neural architectures.",
"This class comprises any network that can be written as a composition of linear layers (including fully-connected, convolutional, and residual layers) and a ReLU activation function.",
"But while ReLU networks often achieve high accuracy, the uncertainty of their predictions has been shown to be miscalibrated (Guo et al., 2017) .",
"Indeed, Hein et al. (2019) demonstrated that ReLU networks are always overconfident \"far away from the data\": scaling a training point x (a vector in a Euclidean input space) with a scalar δ yields predictions of arbitrarily high confidence in the limit δ → ∞.",
"This means ReLU networks are susceptible to adversarial or out-of-distribution (OOD) examples.",
"Bayesian methods have long been known empirically to improve predictive uncertainty calibration.",
"MacKay (1992) demonstrated empirically that the predictive uncertainty of Bayesian neural networks will naturally be high in regions not covered by training data.",
"Results like this raise the hope that the overconfidence problem of ReLU networks, too, might be mitigated by the use of Bayesian methods.",
"This paper offers a theoretical analysis of the binary classification case of ReLU networks with logistic output layer.",
"We show that equipping such networks with virtually any Gaussian probability distribution (i.e. regardless of whether it is motivated in a Bayesian fashion or not) mitigates the aforementioned theoretical problem, so that predictive confidence far away from the training data approaches a known constant, bounded away from one, whose value is controlled by the covariance (cf. Figure 1) .",
"At the same time, this treatment does not change the decision boundary of the trained network, so it has no negative effect on the predictive performance.",
"Figure 1: Binary classification on a toy dataset using a MAP estimate",
"(a) and various Gaussian approximations over the weights, sorted by their complexity of inverting the precision matrix.",
"These approximations are carried out only at the last layer of the network and d denotes the number of hidden units at that layer.",
"The shade of color represents the confidence of the prediction (darker shade means higher confidence).",
"The decision boundary is in thick black.",
"Even an arbitrary (i.e. nonBayesian) isotropic",
"(b) or diagonal",
"(c) covariance makes the confidence bounded away from one.",
"Using the data in a more Bayesian fashion (d) calibrates the uncertainty further, in particular in regions close to the data.",
"A central aspect of our result is that asymptotic overconfidence can be mitigated with an essentially arbitrary Gaussian distribution on the weight space, including one of simple diagonal or even scalar covariance, and one whose covariance need not even depend on the training data.",
"Achieving calibration at finite distances from the training data requires increasing levels of fidelity towards full Bayesian inference, for which our results also give some quantification.",
"Our results thus answer a question about \"how Bayesian\" one needs to be to achieve certain levels of calibration.",
"This is valuable because even approximate Bayesian treatments of deep learning, such as through Laplace approximations, can have high computational cost.",
"We empirically validate our results through a simple Laplace approximation to only the last layer of deep ReLU architectures, and find that this cheap procedure is already competitive to recently proposed non-Bayesian methods specifically constructed to overcome the overconfidence problem of ReLU networks.",
"We also show that this cheap Bayesian approach yields good performance in the multi-class classification setting, indicating that our analysis may carry over to this case.",
"Section 2 begins with a rigorous problem statement and assumptions, then develops the main theoretical results.",
"We discuss related work in Section 3, while empirical results are in Section 4.",
"We have shown that even an extremely approximate and virtually non-Bayesian probabilistic Gaussian treatment mitigates the most extreme aspects of overconfidence in ReLU networks.",
"Our analytical results bound the confidence of the Bayesian prediction of linear classifiers and ReLU networks far away from the training data away from one.",
"This motivates a spectrum of approximations, from ad-hoc isotropic to \"full Bayesian\" Laplace approximations.",
"In the Laplace approximation case, the bound asymptotically converges to a constant whose value can be controlled via the prior.",
"We validated our results experimentally by constructing a simple Laplace method that can still capture the properties we have shown, specifically by only approximating the last-layer's posterior distribution.",
"In contrast to other approximations, this method is cheap and simple to implement, yet already yields competitive performance compared to the more expensive, recently proposed non-Bayesian method for combating the overconfidence problem.",
"While more elaborate Laplace approximations can improve fidelity the further, our results provide virtually any ReLU network with a simple and computationally lightweight way to mitigate overconfidence.",
"1/2 = 0.",
"Notice, the denominator of the l.h.s. is positive.",
"Thus, it follows that µ f must be 0, implying that σ(µ f ) = 0.5.",
"Lemma A.2.",
"Let x ∈ R n be a vector and A ∈ R n×n be an SPD matrix.",
"If λ min (A) is the minimum eigenvalue of A, then x T Ax ≥ λ min x 2 .",
"Proof.",
"Since A is SPD, it admits an eigendecomposition A = QΛQ T and Λ = Λ 1 2 Λ 1 2 makes sense.",
"Therefore, by keeping in mind that Q T x is a vector in R n , we have",
"where the last equality is obtained as Q T x 2 = x T Q T Qx and noting that Q is an orthogonal matrix.",
"Proposition A.3.",
"Let f : R n → R be a binary linear classifier defined by f (x) := w T x and p(w|D) := N (w|µ, Σ) be the distribution over w.",
"Then for any x ∈ R n ,",
"Furthermore, if x ∈ R n then as δ > 0 goes to infinity",
"Proof.",
"The first result follows directly from Lemma A.2 and by noting that the denominator of eq. (4) is positive since Σ is symmetric positive-definite (SPD) by definition.",
"For the second result, let x ∈ R n be arbitrary.",
"By computation and again since the denominator of eq. (4) is positive, we have",
"We would like to inspect the asymptotic behavior of z(δx) with respect to δ.",
"First, for the sake of completeness, we can compute that lim δ→0 |z(δx)| = 0.",
"This reflects the case when δx goes to the decision boundary.",
"Now, for the case when δ → ∞, we can see that",
"since 1/δ 2 → 0 as δ → ∞.",
"Therefore, using Lemma A.2 and Cauchy-Schwarz inequality, we have",
"thus the proof is complete.",
"Under review as a conference paper at ICLR 2020",
"Lemma A.4 (Hein et al. (2019)).",
"Let {Q i } R l=1 be the set of linear regions associated to the ReLU network φ : R n → R n .",
"For any x ∈ R n there exists α ∈ R with α > 0 and t ∈ {1, . . . , R} such that δx ∈ Q t for all β ≥ α.",
"Furthermore, the restriction of φ to Q t can be written as an affine function.",
"Theorem A.5.",
"Let f : R d → R be a binary linear classifier defined by f • φ(x) := w T φ(x) where φ : R n → R d is a ReLU network and let p(w|D) := N (w|µ, Σ) be the distribution over w.",
"Then for any x ∈ R n ,",
"where V ∈ R d×n and a ∈ R d are some matrix and vector that depend on x.",
"Furthermore, as δ > 0 goes to infinity such that x ∈ Q and φ| Q (x) := Vx + a.",
"Applying eq. (4) to φ| Q (x) and following the proof of Proposition 2.3 yield",
"thus the first result is obtained.",
", such that for any δ ≥ α, we have that δx ∈ R and the restriction φ| R can be written as Ux + c.",
"Therefore, for any such δ,",
"Now, notice that as δ → ∞, 1/δ 2 and 1/δ goes to zero.",
"So, in the limit, we have that",
"Again, following the proof of Proposition 2.3 (i.e. using Cauchy-Schwarz and Lemma A.2), we can upper-bound this limit with",
"which concludes the proof.",
"Corollary A.6 (λ min (Σ) from a desired upper confidence bound on ReLU networks).",
"Let f • φ, with φ : R n → R d and f : R d → R, be a ReLU network defined by f • φ(x) := w T φ(x) and N (w|µ, Σ) be the distribution over w where the mean µ is fixed and Σ is any SPD matrix.",
"Then: (i) For any > 0 there exists Σ such that for any x ∈ R n far away from the training data, we have that |z • φ(x)| ≤ .",
"(ii) For any 0.5 < p < 1 there exists Σ such that for any x ∈ R n far away from the training data, we have that σ(|z • φ(x)|) ≤ p.",
"Proof.",
"We begin with (i).",
"Let > 0 and δ = 8 π µ 2 .",
"Pick any Σ SPD with λ min (Σ) = δ.",
"Then, by eq. (12) of Theorem 2.4 and our choice of λ min (Σ), for any z ∈ R n , asymptotically we have that",
"which is the desired result.",
"For",
"(ii), let 0.5 < p < 1 be arbitrary.",
"Observe that the inverse logistic function is given by σ −1 (x) := log x/(1 − x) for 0 < x < 1 and it is positive for 0.5 < x < 1.",
"Therefore by setting in (i) with",
"2 and verify that for any x ∈ R n this gives |z(x)| ≤ σ −1 (p).",
"Thus, for any x ∈ R n far away from the training data, since σ is monotonic, we have that",
"and the proof is complete.",
". Let p(w|D) := N (w|µ, Σ) be the posterior over w, obtained via a Laplace approximation with prior N (w|0, σ 2 0 I). Suppose H is the Hessian w.r.t. w at µ of the negative log-likelihood of the model. Then",
"(ii) For each i = 1, . . . , d, the ith eigenvalue λ i (Σ) of Σ is a non-decreasing function of σ Proof.",
"The negative log-likelihood of Bernoulli distribution is given by",
"Now, observing that σ (x) = σ(x)(1 − σ(x)) for all x ∈ R, we can compute",
"T ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.16393442451953888,
0,
0.11764705181121826,
0.1702127605676651,
0.25531914830207825,
0.2448979616165161,
0.26923075318336487,
0.08888888359069824,
0.08888888359069824,
0.08695651590824127,
0.1492537260055542,
0.08163265138864517,
0.0937499925494194,
0.09090908616781235,
0.21052631735801697,
0.17543859779834747,
0.13698630034923553,
0.08888888359069824,
0.08888888359069824,
0.2142857164144516,
0.30188679695129395,
0.1599999964237213,
0.18823528289794922,
0.1071428507566452,
0.045454543083906174,
0.16326530277729034,
0.1538461446762085,
0.08888888359069824,
0,
0,
0,
0.0476190447807312,
0.20408162474632263,
0.1944444328546524,
0.1355932205915451,
0.15686273574829102,
0.1111111044883728,
0.28169015049934387,
0.21052631735801697,
0.08163265138864517,
0.04444444179534912,
0.24561403691768646,
0.15094339847564697,
0.12765957415103912,
0.15686273574829102,
0.23728813230991364,
0.26229506731033325,
0.23333333432674408,
0,
0.0952380895614624,
0.0833333283662796,
0.08510638028383255,
0.08163265138864517,
0,
0.11999999731779099,
0.07692307233810425,
0.1355932205915451,
0.04878048598766327,
0.04255318641662598,
0.1355932205915451,
0.09090908616781235,
0.08510638028383255,
0.17391304671764374,
0.1666666567325592,
0.09302325546741486,
0.13333332538604736,
0,
0,
0.052631575614213943,
0.0476190447807312,
0,
0.22641508281230927,
0.06779660284519196,
0.1666666567325592,
0.1818181723356247,
0.04878048598766327,
0.08163265138864517,
0.11320754140615463,
0.12244897335767746,
0.05128204822540283,
0.14035087823867798,
0.052631575614213943,
0.08695651590824127,
0.09999999403953552,
0.1090909019112587,
0.054054051637649536,
0.0833333283662796,
0.1690140813589096,
0.09836065024137497,
0.0952380895614624,
0.054054051637649536,
0,
0,
0.13793103396892548,
0.052631575614213943,
0.0476190447807312,
0.1355932205915451,
0.05128204822540283,
0.11999999731779099,
0.11320754140615463,
0.052631575614213943,
0.11428570747375488,
0.1090909019112587,
0.0952380895614624,
0.07999999821186066
] | H1gJ2RVFPH | true | [
"We argue theoretically that by simply assuming the weights of a ReLU network to be Gaussian distributed (without even a Bayesian formalism) could fix this issue; for a more calibrated uncertainty, a simple Bayesian method could already be sufficient."
] |
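The confidence statements quoted in the record above all pass through the same probit-style approximation, E[σ(wᵀφ)] ≈ σ(µᵀφ / sqrt(1 + (π/8) φᵀΣφ)), which is where the bounded-away-from-one behaviour comes from. A minimal NumPy sketch of that quantity, using made-up feature vectors and an arbitrary isotropic covariance (the function name and toy values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def gaussian_logistic_confidence(phi, mu, Sigma):
    """Probit-style approximation of E[sigmoid(w^T phi)] for w ~ N(mu, Sigma):
    sigmoid(mu^T phi / sqrt(1 + (pi/8) * phi^T Sigma phi))."""
    mean = mu @ phi                      # predictive mean of the logit
    var = phi @ Sigma @ phi              # predictive variance of the logit
    z = mean / np.sqrt(1.0 + (np.pi / 8.0) * var)
    return 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1

# Toy check: scaling a point far from the data keeps the confidence bounded,
# even for an isotropic covariance that was never fit to any data.
rng = np.random.default_rng(0)
d = 16
mu = rng.normal(size=d)
Sigma = 0.5 * np.eye(d)
phi = rng.normal(size=d)
for delta in (1.0, 10.0, 1000.0):
    print(delta, gaussian_logistic_confidence(delta * phi, mu, Sigma))
```

As δ grows, z approaches µᵀφ / sqrt((π/8) φᵀΣφ), so the printed confidence converges to a constant strictly below one, in line with the asymptotic claims quoted above for the linear case.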
[
"Word alignments are useful for tasks like statistical and neural machine translation (NMT) and annotation projection.",
"Statistical word aligners perform well, as do methods that extract alignments jointly with translations in NMT.",
"However, most approaches require parallel training data and quality decreases as less training data is available.",
"We propose word alignment methods that require little or no parallel data.",
"The key idea is to leverage multilingual word embeddings – both static and contextualized – for word alignment.",
"Our multilingual embeddings are created from monolingual data only without relying on any parallel data or dictionaries.",
"We find that traditional statistical aligners are outperformed by contextualized embeddings – even in scenarios with abundant parallel data.",
"For example, for a set of 100k parallel sentences, contextualized embeddings achieve a word alignment F1 that is more than 5% higher (absolute) than eflomal.",
"Word alignment is essential for statistical machine translation and useful in NMT, e.g., for imposing priors on attention matrices (Liu et al., 2016; Alkhouli and Ney, 2017; Alkhouli et al., 2018) or for decoding (Alkhouli et al., 2016; Press and Smith, 2018) .",
"Further, word alignments have been successfully used in a range of tasks such as typological analysis (Lewis and Xia, 2008; Östling, 2015) , annotation projection (Yarowsky et al., 2001; Padó and Lapata, 2009 ) and creating multilingual embeddings (Guo et al., 2016) .",
"Statistical word aligners such as the IBM models (Brown et al., 1993) and their successors (e.g., fastalign (Dyer et al., 2013) , GIZA++ (Och and Ney, 2003) , eflomal (Östling and Tiedemann, 2016) ) are widely used for alignment.",
"With the rise of NMT (Bahdanau et al., 2014) , attempts have been made to interpret attention matrices as soft word alignments (Koehn and Knowles, 2017; Ghader and Monz, 2017) .",
"Several methods create alignments from attention matrices (Peter et al., 2017; Li et al., 2018; Zenkel et al., 2019) or pursue a multitask approach for alignment and translation (Chen et al., 2016; Garg et al., 2019) .",
"However, most systems require parallel data and their performance deteriorates when parallel text is scarce (cf.",
"Tables 1-2 in (Och and Ney, 2003) ).",
"Recent unsupervised multilingual embedding algorithms that use only monolingual data provide high quality static and contextualized embeddings (Conneau et al., 2018; Devlin et al., 2019; Pires et al., 2019) .",
"Our key idea is to leverage these embeddings for word alignments -without relying on parallel data.",
"Requiring no or little parallel data is advantageous in many scenarios, e.g., in the low-resource case and in domain-specific settings without parallel data.",
"A lack of parallel data cannot be easily remedied: mining parallel sentences is possible (cf.",
"(Schwenk et al., 2019) ) but assumes that monolingual corpora contain parallel sentences.",
"Contributions: (1) We propose two new alignment methods based on the matrix of embedding similarities.",
"(2) We propose two post-processing algorithms that handle null words and integrate positional information.",
"(3) We show that word alignments obtained from multilingual BERT outperform strong statistical word aligners like eflomal.",
"(4) We investigate the differences between word and subword processing for alignments and find subword processing to be preferable.",
"Upon acceptance we will publish the source code.",
"We presented word aligners based on contextualized (resp. static) embeddings that perform better than (resp. comparably with) statistical word aligners.",
"Our method is the first that does not require parallel data and is particularly useful for scenarios where a medium number of parallel sentences need to be aligned, but no additional parallel data is available.",
"For a set of 100k parallel sentences, contextualized embeddings achieve an alignment F 1 that is 5% higher (absolute) than eflomal."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14814814925193787,
0.1428571343421936,
0.1538461446762085,
0.3333333134651184,
0.1428571343421936,
0.2857142686843872,
0.19354838132858276,
0.17142856121063232,
0.04444444179534912,
0.11999999731779099,
0.08510638028383255,
0.09756097197532654,
0.09999999403953552,
0.14814814925193787,
0,
0.10526315122842789,
0.3571428656578064,
0.1818181723356247,
0.1538461446762085,
0.07692307233810425,
0.07407406717538834,
0.07692307233810425,
0.2142857164144516,
0.2857142686843872,
0,
0.13793103396892548,
0.1428571343421936,
0.060606054961681366
] | SFIxZr3RRpr | true | [
"We use representations trained without any parallel data for creating word alignments."
] |
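The record above builds word alignments from the similarity matrix of multilingual token embeddings. A minimal sketch of that idea, assuming the embeddings have already been computed; cosine similarity plus a mutual-argmax rule is used here as one simple extraction heuristic, since the quoted text names but does not spell out the extraction and post-processing methods:

```python
import numpy as np

def cosine_similarity_matrix(src_vecs, tgt_vecs):
    """src_vecs: (m, d) source-token embeddings, tgt_vecs: (n, d) target-token embeddings."""
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    return src @ tgt.T                       # (m, n) similarity matrix

def mutual_argmax_alignments(sim):
    """Keep (i, j) pairs that are each other's best match in the similarity matrix."""
    fwd = sim.argmax(axis=1)                 # best target token for each source token
    bwd = sim.argmax(axis=0)                 # best source token for each target token
    return [(i, int(fwd[i])) for i in range(sim.shape[0]) if bwd[fwd[i]] == i]

# Usage with made-up 4-dimensional "embeddings" for a 3-token / 3-token sentence pair
src = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.1, 0.0], [0.0, 0.0, 0.0, 1.0]])
tgt = np.array([[0.0, 0.9, 0.2, 0.0], [0.8, 0.1, 0.0, 0.0], [0.1, 0.0, 0.0, 1.1]])
sim = cosine_similarity_matrix(src, tgt)
print(mutual_argmax_alignments(sim))         # [(0, 1), (1, 0), (2, 2)]
```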
[
"Recent studies have shown the vulnerability of reinforcement learning (RL) models in noisy settings.",
"The sources of noises differ across scenarios.",
"For instance, in practice, the observed reward channel is often subject to noise (e.g., when observed rewards are collected through sensors), and thus observed rewards may not be credible as a result.",
"Also, in applications such as robotics, a deep reinforcement learning (DRL) algorithm can be manipulated to produce arbitrary errors.",
"In this paper, we consider noisy RL problems where observed rewards by RL agents are generated with a reward confusion matrix.",
"We call such observed rewards as perturbed rewards.",
"We develop an unbiased reward estimator aided robust RL framework that enables RL agents to learn in noisy environments while observing only perturbed rewards.",
"Our framework draws upon approaches for supervised learning with noisy data.",
"The core ideas of our solution include estimating a reward confusion matrix and defining a set of unbiased surrogate rewards.",
"We prove the convergence and sample complexity of our approach.",
"Extensive experiments on different DRL platforms show that policies based on our estimated surrogate reward can achieve higher expected rewards, and converge faster than existing baselines.",
"For instance, the state-of-the-art PPO algorithm is able to obtain 67.5% and 46.7% improvements in average on five Atari games, when the error rates are 10% and 30% respectively.",
"Designing a suitable reward function plays a critical role in building reinforcement learning models for real-world applications.",
"Ideally, one would want to customize reward functions to achieve application-specific goals (Hadfield-Menell et al., 2017) .",
"In practice, however, it is difficult to design a function that produces credible rewards in the presence of noise.",
"This is because the output from any reward function is subject to multiple kinds of randomness:• Inherent Noise.",
"For instance, sensors on a robot will be affected by physical conditions such as temperature and lighting, and therefore will report back noisy observed rewards.•",
"Application-Specific Noise. In",
"machine teaching tasks BID13 Loftin et al., 2014) , when an RL agent receives feedback/instructions from people, different human instructors might provide drastically different feedback due to their personal styles and capabilities. This",
"way the RL agent (machine) will obtain reward with bias.• Adversarial",
"Noise. Adversarial perturbation",
"has been widely explored in different learning tasks and shows strong attack power against different machine learning models. For instance, Huang et al.",
"(2017) has shown that by adding adversarial perturbation to each frame of the game, they can mislead RL policies arbitrarily.Assuming an arbitrary noise model makes solving this noisy RL problem extremely challenging. Instead, we focus on a specific",
"noisy reward model which we call perturbed rewards, where the observed rewards by RL agents are generated according to a reward confusion matrix. This is not a very restrictive",
"setting to start with, even considering that the noise could be adversarial: Given that arbitrary pixel value manipulation attack in RL is not very practical, adversaries in the real-world have high incentives to inject adversarial perturbation to the reward value by slightly modifying it. For instance, adversaries can",
"manipulate sensors via reversing the reward value.In this paper, we develop an unbiased reward estimator aided robust framework that enables an RL agent to learn in a noisy environment with observing only perturbed rewards. Our solution framework builds",
"on existing reinforcement learning algorithms, including the recently developed DRL ones (Q-Learning BID19 BID18 , Cross-Entropy Method (CEM) BID11 , Deep SARSA BID10 , Deep Q-Network (DQN) (Mnih et al., 2013; BID6 , Dueling DQN (DDQN) BID17 , Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015) , Continuous DQN (NAF) (Gu et al., 2016) and Proximal Policy Optimization (PPO) BID4",
"Only an underwhelming amount of reinforcement learning studies have focused on the settings with perturbed and noisy rewards, despite the fact that such noises are common when exploring a realworld scenario, that faces sensor errors or adversarial examples.",
"We adapt the ideas from supervised Er |r (r) = Pr |r (r =r − )r − + Pr |r (r =r + )r + .When",
"r = r + , from the definition in Lemma 1:Pr |r (r =r − ) = e + , Pr |r (r =r + ) = 1 − e + . Taking",
"the definition of surrogate rewards Eqn. FORMULA2",
"DISPLAYFORM0 Similarly, when r = r − , it also verifies Er |r [r(s t , a t , s t+1 )] = r(s t , a t , s t+1 ).Proof of",
"Lemma 2. The idea",
"of constructing unbiased estimator is easily adapted to multi-outcome reward settings via writing out the conditions for the unbiasedness property (s.t. Er |r [r] = r.). For simplicity",
", we shorthandr(r = R i ) asR i in the following proofs. Similar to Lemma",
"1, we need to solve the following set of functions to obtainr: DISPLAYFORM1 whereR i denotes the value of the surrogate reward when the observed reward is R i . Define R := [R 0",
"; R 1 ; · · · ; R M −1 ], andR := [R 0 ,R 1 , ...,R M −1 ], then the above equations are equivalent to: R = C ·R. If the confusion",
"matrix C is invertible, we obtain the surrogate reward: DISPLAYFORM2 According to above definition, for any true reward level R i , i = 0, 1, · · · , M − 1, we have DISPLAYFORM3 Furthermore, the probabilities for observing surrogate rewards can be written as follows: DISPLAYFORM4 wherep i = j p j c j,i , andp i , p i represent the probabilities of occurrence for surrogate rewardR i and true reward R i respectively. Corollary 1. Letp",
"i and p i denote",
"the probabilities of occurrence for surrogate rewardr(r = R i ) and true reward R i . Then the surrogate reward",
"satisfies, DISPLAYFORM5 Proof of Corollary 1. From Lemma 2, we have, DISPLAYFORM6",
"Consequently, DISPLAYFORM7 To establish Theorem 1, we need an auxiliary result (Lemma 3) from stochastic process approximation, which is widely adopted for the convergence proof for Q-Learning (Jaakkola et al., 1993; BID14 . Lemma 3. The random process {∆ t }",
"taking values",
"in R n and defined as DISPLAYFORM8 converges to zero w.p.1 under the following assumptions: DISPLAYFORM9 Here F t = {∆ t , ∆ t−1 , · · · , F t−1 · · · , α t , · · · } stands for the past at step t, α t (x) is allowed to depend on the past insofar as the above conditions remain valid. The notation || · || W refers to some weighted",
"maximum norm.Proof of Lemma 3. See previous literature (Jaakkola et al., 1993",
"; BID14 .Proof of Theorem 1. For simplicity, we abbreviate",
"s t , s t+1 , Q t ,",
"Q t+1 , r t ,r t and α t as s, s , Q, Q , r,r, and α, respectively.Subtracting from both sides the quantity Q * (s, a) in Eqn. (3): DISPLAYFORM10 In consequence, DISPLAYFORM11",
"Finally, DISPLAYFORM12 Becauser is bounded, it can be clearly verified that DISPLAYFORM13 for some constant C. Then, due to the Lemma 3, ∆ t converges to zero w.p.1, i.e., Q (s, a) converges to Q * (s, a).The procedure of Phased Q-Learning is described as",
"Algorithm 2: DISPLAYFORM14 DISPLAYFORM15 Note thatP here is the estimated transition probability, which is different from P in Eqn. FORMULA22 .To obtain the sample complexity results",
", the range",
"of our surrogate reward needs to be known. Assuming reward r is bounded in [0, R max ], Lemma",
"4 below states that the surrogate reward is also bounded, when the confusion matrices are invertible:Lemma 4. Let r ∈ [0, R max ] be bounded, where R max is a constant",
"; suppose C M ×M , the confusion matrix, is invertible with its determinant denoting as det(C). Then the surrogate reward satisfies DISPLAYFORM16 Proof",
"of Lemma 4. From Eqn. FORMULA4 , we have, DISPLAYFORM17 where adj(C",
") is the",
"adjugate matrix of C; det(C) is the determinant of C. It is known from linear algebra that, DISPLAYFORM18 where M ji is the determinant of the (M − 1) × (M − 1) matrix that results from deleting row j and column i of C. Therefore, M ji is also bounded: DISPLAYFORM19 where the sum is computed over all permutations σ of the set {0, 1, · · · , M − 2}; c is the element of M ji ; sgn(σ) returns a value that is +1 whenever the reordering given by σ can be achieved by successively interchanging two entries an even number of times, and −1 whenever it can not.Consequently, DISPLAYFORM20 Proof of Theorem 2. From Hoeffding's inequality, we obtain: DISPLAYFORM21",
"In the same way,r t is bounded by M det(C) · R max from Lemma 4. We then have, DISPLAYFORM22 Further, due to the unbiasedness",
"of surrogate rewards, we have st+1∈S P a (s t , s t+1 )r t = st+1∈S;rt∈R P a (s t , s t+1 ,r t )r t .As a result, DISPLAYFORM23 In the same way, DISPLAYFORM24 Recursing",
"the two equations in two directions (0 → T ), we get DISPLAYFORM25 Combining these two inequalities above we have: DISPLAYFORM26 For arbitrarily small , by choosing m appropriately, there always exists 1 = 2 =(1−γ) 2(1+γ) such that the policy error is bounded within . That is to say, the Phased Q-Learning algorithm can converge to the",
"near optimal policy within finite steps using our proposed surrogate rewards.Finally, there are |S||A|T transitions under which these conditions must hold, where | · | represent the number of elements in a specific set. Using a union bound, the probability of failure in any condition is",
"smaller than DISPLAYFORM27 We set the error rate less than δ, and m should satisfy that DISPLAYFORM28 In consequence, after m|S||A|T calls, which is, O DISPLAYFORM29 , the value function converges to the optimal one for every state s, with probability greater than 1 − δ.The above bound is for discounted MDP setting with 0 ≤ γ < 1. For undiscounted setting γ = 1, since the total error (for entire trajectory",
"of T time-steps) has to be bounded by , therefore, the error for each time step has to be bounded by T . Repeating our anayslis, we obtain the following upper bound: DISPLAYFORM30 Proof",
"of Theorem 3. DISPLAYFORM31 Using the CauchySchwarz inequality, DISPLAYFORM32 So we get, Var(r",
") − Var(r) ≥ 0. In addition, DISPLAYFORM33"
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3333333432674408,
0,
0.09756097197532654,
0.20689654350280762,
0.20000000298023224,
0.11764705181121826,
0.1818181723356247,
0.380952388048172,
0.0714285671710968,
0.09999999403953552,
0,
0.05128204822540283,
0.307692289352417,
0,
0.13793103396892548,
0,
0.11764705181121826,
0,
0,
0.09090908616781235,
0,
0.12903225421905518,
0.0416666641831398,
0.10810810327529907,
0.03999999538064003,
0.17391304671764374,
0.06896551698446274,
0.17391304671764374,
0,
0.06896550953388214,
0.11764705181121826,
0,
0,
0.05128204822540283,
0.07999999821186066,
0,
0,
0.0615384578704834,
0,
0.07999999821186066,
0,
0.04081632196903229,
0.06451612710952759,
0,
0,
0,
0.04878048226237297,
0.037735845893621445,
0.05882352590560913,
0,
0.0714285671710968,
0,
0.060606054961681366,
0,
0,
0,
0,
0,
0.03389830142259598,
0.07692307233810425,
0.0555555522441864,
0.0555555522441864,
0,
0
] | BkMWx309FX | true | [
"A new approach for learning with noisy rewards in reinforcement learning"
] |
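The unbiased surrogate reward quoted above is defined through R = C · R̃, i.e. R̃ = C⁻¹R, so that the expected surrogate under the noisy observation channel equals the true reward. A minimal NumPy sketch of that step; the 2×2 confusion matrix and its error rates are made up for illustration:

```python
import numpy as np

def surrogate_rewards(reward_levels, confusion):
    """Unbiased surrogate rewards for perturbed-reward RL.
    reward_levels: (M,) true reward values R_0..R_{M-1}
    confusion:     (M, M) matrix with confusion[i, j] = P(observe R_j | true reward R_i)
    Returns r_tilde, where r_tilde[j] is the value to substitute when R_j is observed,
    chosen so that E[r_tilde | true reward R_i] = R_i for every i."""
    return np.linalg.solve(confusion, reward_levels)   # solves C @ r_tilde = R

# Binary example: true rewards -1 / +1, flipped with probability 0.1 and 0.3
R = np.array([-1.0, 1.0])
C = np.array([[0.9, 0.1],
              [0.3, 0.7]])
r_tilde = surrogate_rewards(R, C)
print(r_tilde)        # surrogate values used in place of the observed rewards
print(C @ r_tilde)    # recovers [-1, 1]: the unbiasedness check
```

In practice the confusion matrix is not given and has to be estimated from observed rewards, as the quoted text notes; the inversion step above stays the same once an estimate is available.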
[
"Training recurrent neural networks (RNNs) on long sequences using backpropagation through time (BPTT) remains a fundamental challenge. \n",
"It has been shown that adding a local unsupervised loss term into the optimization objective makes the training of RNNs on long sequences more effective. \n",
"While the importance of an unsupervised task can in principle be controlled by a coefficient in the objective function, the gradients with respect to the unsupervised loss term still influence all the hidden state dimensions, which might cause important information about the supervised task to be degraded or erased. \n",
"Compared to existing semi-supervised sequence learning methods, this paper focuses upon a traditionally overlooked mechanism -- an architecture with explicitly designed private and shared hidden units designed to mitigate the detrimental influence of the auxiliary unsupervised loss over the main supervised task.\n",
"We achieve this by dividing RNN hidden space into a private space for the supervised task and a shared space for both the supervised and unsupervised tasks.",
"We present extensive experiments with the proposed framework on several long sequence modeling benchmark datasets.",
"Results indicate that the proposed framework can yield performance gains in RNN models where long term dependencies are notoriously challenging to deal with.",
"Recurrent neural networks (RNNs) are widely considered the de facto tool for modeling sequences with a deep learning approach.",
"Training RNNs usually relies on the use of backpropagation through time (BPTT).",
"It is well known that unfortunately, it becomes difficult for BPTT to transmit gradients through very long computational graphs, as gradients tend to explode or vanish BID6 .",
"FIG0 -(a",
") gives an example in an oversimplified setting, where the hidden state at the first time-step does not receive gradients. To",
"make the BPTT-based training more effective, architectures such as the long short-term memory (LSTM) BID5 ) RNN and gated recurrent unit (GRU) BID2 ) RNNs, use parameterized gates which can make gradient flow more effective over long sequences.Recently, strong evidence in BID14 suggests that simultaneously learning supervised and unsupervised tasks can also enhance an RNN's ability to capture long-term dependencies. By",
"injecting unsupervised tasks locally along the sequence the unsupervised tasks can be harnessed to provide local and reliable gradients to more effectively optimize RNNs for long sequence learning tasks. Recent",
"work using this strategy BID10 BID4 BID14 , could be characterized as employing semi-supervised learning architectures consisting of two distinct types of RNNs, one for the primary supervised task and another for the auxiliary unsupervised tasks which are injected locally along the sequence. More concretely",
", the RNN for an unsupervised task updates is instantiated periodically along the sequence and its hidden states are reinitialized occasionally; whereas, the RNN for the supervised tasks operates at every time-step. FIG0 -(b) shows",
"how gradients",
"flow in this architecture.Despite the ability of these new semi-supervised architectures to mitigate the problem of long-distance BPTT, these approaches risk impairing the training of the main task by contaminating the entire representation-space with the unsupervised loss gradients.The challenge we address here is how to properly coordinate supervised and unsupervised tasks. Common wisdom for semi-supervised",
"learning BID10 typically follows one of the two procedures discussed below. The first widely used approach is",
"to weight supervised and unsupervised loss functions with varying coefficients empirically. However this method cannot radically",
"address aforementioned problem since representations for supervised and unsupervised learning are still entangled in same space. It is true that the contribution of",
"the unsupervised task can in principle be controlled by a coefficient in the objective function, but the gradients with respect to the unsupervised loss term still influence all the hidden state dimensions, which might cause important information about the supervised task to be erased accidentally. The second approach coordinates these",
"two types of learning by specifying a training order and separating them into different learning phases. For example, these approaches usually",
"first pre-train a model under unsupervised setting, then use the model for supervised learning BID11 .While these methods can provide rich auxiliary",
"knowledge which are potentially useful for the main task, there is no guarantee that this asynchronous learning fashion could let the main task utilize the auxiliary information well, and therefore long-term dependencies are still difficult to capture. It is thus crucial to ask: how exactly can auxiliary",
"unsupervised tasks best serve the main supervised learning task for long-term dependencies learning?On the other hand, it has been demonstrated that dividing",
"an RNN's representational space into different groups is useful for modeling long-term dependencies. One such example is clockwork RNNs BID7 , where each group",
"is responsible for a subset of hidden states and each processes input at different clock speeds. It is also possible to let each layer represent a group, and",
"each group may run at different time scales BID12 BID3 .With the above analysis in mind, we propose to solve the long-term",
"dependency problem by enabling the two RNNs to have a shared feature space for both supervised and unsupervised tasks, and allowing an RNN to have a private space dedicated for the supervised task. The key insight is to associate different time-scale updating operations",
"of distinct RNNs with different representational spaces. Through the shared feature space, the RNNs form an interface to exchange",
"features useful for both of them with less inference. As a side-product, the proposed variant of RNNs trains and evaluates slightly",
"faster since the architecture by design introduced an inductive bias that the modules for auxiliary tasks should have less parameters. FIG0 -(c) shows how the gradients flow through the hidden states during the backward",
"pass of",
"BPTT for the proposed architecture. It is clear that the lower (blue) space is not allowed to receive gradients from the unsupervised",
"task.Our primary contribution is introducing a private-and-shared feature space architecture for semisupervised sequential learning tasks, which is motivated through the lens of gradient flows. While the modification is simple, its application on modeling long-term dependencies has shown significant",
"improvement over other state-of-the-art algorithms to our knowledge, and thus we believe it will be of broad interest to the community. In Section 3, we describe the proposed method. In section 4, we present the experiments. In section 5, we",
"give an analysis of our method and experiments.2",
"RELATED WORK BID1 show that a generic temporal",
"convolutional network (TCN) outperforms some RNN variants on several",
"benchmark datasets. However, compared with TCNs, RNNs require lower memory for inference and can handle potential parameter change for",
"a transfer of domain BID1 . Furthermore, BID14 show that RNNs with an auxiliary unsupervised loss still outperform TCNs in terms of accuracy on",
"long sequence learning tasks. More importantly, RNNs can model, in principle, infinitely long dependencies with a finite number of parameters.Unsupervised",
"learning is often introduced in a pre-training fashion. For example, BID11 show that for natural language understanding tasks, generative pre-training of a language model on a large",
"amount of unlabeled text followed by discriminative fine-tuning leads to significant improvement. As another example, in BID10 , after pretraining the model with unlabeled data, the authors fix the weights and add additional",
"task-specific model capacity, making it possible to leverage large, rich and universal representations for the downstream tasks. It should be noted that BID10 utilizes additional datasets, whereas we do not. BID14 propose RNN-AE (AutoEncoder) to form an auxiliary",
"unsupervised task to aid RNNs in handling long sequencse, i.e. r-RNN (reconstruction",
") and p-RNN (prediction). The r-RNN approach tries to reconstruct the original input from the internal representation. Skim-RNN BID13 ) dynamically decides to update",
"only a small fraction of the hidden state for relatively unimportant input tokens. A skim-RNN",
"contains multiple RNNs that have a shared space and also have private spaces on which they operate. However a skim-RNN only considers",
"supervised tasks and aims at accelerating the inference speed. In contrast, our method is specifically designed",
"for long sequence learning problems with an unsupervised loss. Furthermore, a skim-RNN only uses",
"one RNN at each time-step which is determined by a reinforcement learning agent, whereas ours always use multiple",
"RNNs. As a side-effect of not relying on reinforcement learning algorithms, our method is easier to train in practice.The hidden state of our proposed RNN",
"also has multiple subsets, but they run at the same clock speed. Even more importantly, we introduce an inductive bias that different hidden sub-spaces should be responsible for different tasks.",
"In this paper, we have presented a semi-supervised RNN architecture with explicitly designed private and shared hidden representations.",
"This architecture allows information transfer between the supervised and unsupervised tasks in a hitherto unexplored way.",
"Compared with other similar semi-supervised RNN techniques, our experiments on widely used and competitive benchmark data sets suggest that our formulation indeed yields performance gains.",
"We conjecture that these gains come from the desirable properties of both gradient and information flow in architectures with shared and private representations.",
"As a side-product, our proposed architecture trains and evaluates faster than the related alternatives that we have explored since the architecture introduces an inductive bias that the modules for auxiliary tasks should have fewer parameters."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.03999999538064003,
0.17543859779834747,
0.3333333432674408,
0.8732394576072693,
0.3461538553237915,
0.08510638028383255,
0.1090909019112587,
0.11764705181121826,
0.09090908616781235,
0.035087715834379196,
0.11764705181121826,
0.16091953217983246,
0.1428571343421936,
0.19178082048892975,
0.22580644488334656,
0.307692289352417,
0.08163265138864517,
0.25,
0.1818181723356247,
0.277777761220932,
0.11538460850715637,
0.18867923319339752,
0.1666666567325592,
0.18867923319339752,
0.0363636314868927,
0.178571417927742,
0.07547169178724289,
0.3030303120613098,
0.2448979616165161,
0.19230768084526062,
0.16129031777381897,
0.15686273574829102,
0.14705881476402283,
0.158730149269104,
0.1463414579629898,
0.04999999701976776,
0,
0.07999999821186066,
0.25925925374031067,
0.11538460850715637,
0.072727270424366,
0.16129031777381897,
0.14492753148078918,
0.13333332538604736,
0.11320754140615463,
0.1702127605676651,
0.1538461446762085,
0.1666666567325592,
0.21739129722118378,
0.03999999538064003,
0.1428571343421936,
0.09836065024137497,
0.35999998450279236,
0.2916666567325592,
0.0714285671710968,
0.2222222238779068,
0.19354838132858276
] | r1lcM3AcKm | true | [
"This paper focuses upon a traditionally overlooked mechanism -- an architecture with explicitly designed private and shared hidden units designed to mitigate the detrimental influence of the auxiliary unsupervised loss over the main supervised task."
] |
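A minimal PyTorch sketch of the private-and-shared hidden space described in the record above: the shared block drives the auxiliary reconstruction loss and never reads the private block, so gradients of the unsupervised loss cannot reach the private units, while the supervised head reads both blocks. The module names, sizes, GRU cells, and per-step reconstruction objective are illustrative assumptions rather than the authors' exact architecture:

```python
import torch
import torch.nn as nn

class PrivateSharedRNN(nn.Module):
    """Hidden state split into a private block (supervised task only) and a
    shared block (supervised + auxiliary unsupervised task)."""
    def __init__(self, in_dim, private_dim, shared_dim, n_classes):
        super().__init__()
        self.shared_cell = nn.GRUCell(in_dim, shared_dim)                  # shared space
        self.private_cell = nn.GRUCell(in_dim + shared_dim, private_dim)   # private space
        self.decoder = nn.Linear(shared_dim, in_dim)                       # auxiliary head (shared only)
        self.classifier = nn.Linear(private_dim + shared_dim, n_classes)   # supervised head (both)

    def forward(self, x):                                   # x: (T, B, in_dim)
        T, B, _ = x.shape
        h_s = x.new_zeros(B, self.shared_cell.hidden_size)
        h_p = x.new_zeros(B, self.private_cell.hidden_size)
        aux_loss = 0.0
        for t in range(T):
            h_s = self.shared_cell(x[t], h_s)               # does not depend on the private state
            aux_loss = aux_loss + ((self.decoder(h_s) - x[t]) ** 2).mean()
            h_p = self.private_cell(torch.cat([x[t], h_s], dim=1), h_p)
        logits = self.classifier(torch.cat([h_p, h_s], dim=1))
        return logits, aux_loss / T

# Usage: weight the auxiliary term; its gradients only reach the shared cell and decoder.
model = PrivateSharedRNN(in_dim=8, private_dim=16, shared_dim=16, n_classes=3)
x = torch.randn(20, 4, 8)                   # sequence length 20, batch size 4
y = torch.randint(0, 3, (4,))
logits, aux = model(x)
loss = nn.functional.cross_entropy(logits, y) + 0.1 * aux
loss.backward()
```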
[
"We investigate multi-task learning approaches which use a shared feature representation for all tasks.",
"To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task.",
"We study the theory of this setting on linear and ReLU-activated models.",
"Our key observation is that whether or not tasks' data are well-aligned can significantly affect the performance of multi-task learning.",
"We show that misalignment between task data can cause negative transfer (or hurt performance) and provide sufficient conditions for positive transfer.",
"Inspired by the theoretical insights, we show that aligning tasks' embedding layers leads to performance gains for multi-task training and transfer learning on the GLUE benchmark and sentiment analysis tasks; for example, we obtained a 2.35% GLUE score average improvement on 5 GLUE tasks over BERT LARGE using our alignment method.",
"We also design an SVD-based task re-weighting scheme and show that it improves the robustness of multi-task training on a multi-label image dataset.",
"Multi-task learning has recently emerged as a powerful paradigm in deep learning to obtain language (Devlin et al. (2018) ; Liu et al. (2019a; b) ) and visual representations (Kokkinos (2017) ) from large-scale data.",
"By leveraging supervised data from related tasks, multi-task learning approaches reduce the expensive cost of curating the massive per-task training data sets needed by deep learning methods and provide a shared representation which is also more efficient for learning over multiple tasks.",
"While in some cases, great improvements have been reported compared to single-task learning (McCann et al. (2018) ), practitioners have also observed problematic outcomes, where the performances of certain tasks have decreased due to task interference (Alonso and Plank (2016) ; Bingel and Søgaard (2017) ).",
"Predicting when and for which tasks this occurs is a challenge exacerbated by the lack of analytic tools.",
"In this work, we investigate key components to determine whether tasks interfere constructively or destructively from theoretical and empirical perspectives.",
"Based on these insights, we develop methods to improve the effectiveness and robustness of multi-task training.",
"There has been a large body of algorithmic and theoretical studies for kernel-based multi-task learning, but less is known for neural networks.",
"The conceptual message from the earlier work (Baxter (2000) ; Evgeniou and Pontil (2004) ; Micchelli and Pontil (2005) ; Xue et al. (2007) ) show that multi-task learning is effective over \"similar\" tasks, where the notion of similarity is based on the single-task models (e.g. decision boundaries are close).",
"The work on structural correspondence learning (Ando and Zhang (2005) ; Blitzer et al. (2006) ) uses alternating minimization to learn a shared parameter and separate task parameters.",
"Zhang and Yeung (2014) use a parameter vector for each task and learn task relationships via l 2 regularization, which implicitly controls the capacity of the model.",
"These results are difficult to apply to neural networks: it is unclear how to reason about neural networks whose feature space is given by layer-wise embeddings.",
"To determine whether two tasks interfere constructively or destructively, we investigate an architecture with a shared module for all tasks and a separate output module for each task (Ruder (2017) ).",
"See Figure 1 for an illustration.",
"Our motivating observation is that in addition to model similarity which affects the type of interference, task data similarity plays a second-order effect after controlling model similarity.",
"To illustrate the idea, we consider three tasks with the same number of data samples where task 2 and 3 have the same decision boundary but different data distributions (see Figure 2 for an illustration).",
"We observe that training task 1 with task 2 or task 3 can either improve or hurt task 1's performance, depending on the amount of contributing data along the decision boundary!",
"This observation shows that by measuring the similarities of the task data and the models separately, we can analyze the interference of tasks and attribute the cause more precisely.",
"Motivated by the above observation, we study the theory of multi-task learning through the shared module in linear and ReLU-activated settings.",
"Our theoretical contribution involves three components: the capacity of the shared module, task covariance, and the per-task weight of the training procedure.",
"The capacity plays a fundamental role because, if the shared module's capacity is too large, there is no interference between tasks; if it is too small, there can be destructive interference.",
"Then, we show how to determine interference by proposing a more fine-grained notion called task covariance which can be used to measure the alignment of task data.",
"By varying task covariances, we observe both positive and negative transfers from one task to another!",
"We then provide sufficient conditions which guarantee that one task can transfer positively to another task, provided with sufficiently many data points from the contributor task.",
"Finally, we study how to assign per-task weights for settings where different tasks share the same data but have different labels.",
"Our theory leads to the design of two algorithms with practical interest.",
"First, we propose to align the covariances of the task embedding layers and present empirical evaluations on well-known benchmarks and tasks.",
"On 5 tasks from the General Language Understanding Evaluation (GLUE) benchmark (Wang et al. (2018b) ) trained with the BERT LARGE model by Devlin et al. (2018) , our method improves the result of BERT LARGE by a 2.35% average GLUE score, which is the standard metric for the benchmark.",
"Further, we show that our method is applicable to transfer learning settings; we observe up to 2.5% higher accuracy by transferring between six sentiment analysis tasks using the LSTM model of Lei et al. (2018) .",
"Second, we propose an SVD-based task reweighting scheme to improve multi-task training for settings where different tasks have the same data but different labels.",
"On the ChestX-ray14 image classification dataset, we compare our method to the unweighted scheme and observe an improvement of 5.6 AUC score for all tasks.",
"In conclusion, these evaluations confirm that our theoretical insights are applicable to a broad range of settings and applications.",
"In this work, we studied the theory of multi-task learning in linear and ReLU-activated settings.",
"We verified our theory and its practical implications through extensive synthetic and real world experiments.",
"Our work opens up many interesting future questions.",
"First, could we extend the guarantees for choosing optimization schemes to non-linear settings?",
"Second, a limitation of our SVD-based optimization scheduler is that it only applies to settings with the same data.",
"Could we extend the method for heterogeneous task data?",
"More broadly, we hope our work inspires further studies to better understand multi-task learning in neural networks and to guide its practice.",
"Hard parameter sharing vs soft parameter sharing.",
"The architecture that we study in this work is also known as the hard parameter sharing architecture.",
"There is another kind of architecture called soft parameter sharing.",
"The idea is that each task has its own parameters and modules.",
"The relationships between these parameters are regularized in order to encourage the parameters to be similar.",
"Other architectures that have been studied before include the work of Misra et al. (2016), where the authors explore trainable architectures for convolutional neural networks.",
"Domain adaptation.",
"Another closely related line of work is on domain adaptation.",
"The acute reader may notice the similarity between our study in Section 2.3 and domain adaptation.",
"The crucial difference here is that we are minimizing the multi-task learning objective, whereas in domain adaptation the objective is typically to minimize the objective on the target task.",
"See Ben"
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0714285671710968,
0.21621620655059814,
0.1538461446762085,
0.05882352590560913,
0.11764705181121826,
0.06779660284519196,
0.10810810327529907,
0.04444444179534912,
0.11538460850715637,
0.0714285671710968,
0.1875,
0.05882352590560913,
0.13333332538604736,
0.17142856121063232,
0.06896551698446274,
0.04878048226237297,
0.15789473056793213,
0,
0.1463414579629898,
0.09999999403953552,
0.052631575614213943,
0.1818181723356247,
0.09999999403953552,
0.10810810327529907,
0.12121211737394333,
0.1249999925494194,
0,
0.05128204822540283,
0.06896550953388214,
0.05128204822540283,
0.05882352590560913,
0.1538461446762085,
0.12121211737394333,
0.1090909019112587,
0.0416666641831398,
0.05405404791235924,
0.1538461446762085,
0.12121211737394333,
0.13793103396892548,
0.0714285671710968,
0,
0.07407406717538834,
0.12121211737394333,
0.08695651590824127,
0.05714285373687744,
0,
0,
0.0833333283662796,
0.07692307233810425,
0,
0.10810810327529907,
0.0833333283662796,
0.06451612710952759,
0
] | SylzhkBtDB | true | [
"A Theoretical Study of Multi-Task Learning with Practical Implications for Improving Multi-Task Training and Transfer Learning"
] |
[
"We review three limitations of BLEU and ROUGE – the most popular metrics\n",
"used to assess reference summaries against hypothesis summaries, come up with\n",
"criteria for what a good metric should behave like and propose concrete ways to\n",
"assess the performance of a metric in detail and show the potential of Transformers-based Language Models to assess reference summaries against hypothesis summaries.",
"Evaluation metrics play a central role in the machine learning community.",
"They direct the efforts of the research community and are used to define the state of the art models.",
"In machine translation and summarization, the two most common metrics used for evaluating similarity between candidate and reference texts are BLEU [Papineni et al., 2002] and ROUGE [Lin, 2004] .",
"Both approaches rely on counting the matching n-grams in the candidates summary to n-grams in the reference text.",
"BLEU is precision focused while ROUGE is recall focused.",
"These metrics have posed serious limitations and have already been criticized by the academic community [Reiter, 2018] [Callison-Burch et al., 2006] [Sulem et al., 2018] [Novikova et al., 2017] .",
"In this work, we formulate an empirical criticism of BLEU and ROUGE, establish a criteria that a sound evaluation metric should have and propose concrete ways to test any metric towards these criteria.",
"We also use recent advances in NLP to design a data-driven metric addressing the weaknesses found in BLEU and ROUGE and scoring high on the criteria for a sound evaluation metric.",
"2 Related Work 2.1 BLEU, ROUGE and n-gram matching approaches BLEU (Bilingual Evaluation Understudy) [Papineni et al., 2002] and ROUGE (Recall-Oriented Understudy for Gisting Evaluation) [Lin, 2004] have been used to evaluate many NLP tasks for almost two decades.",
"The general acceptance of these methods depend on many factors including their simplicity and the intuitive interpretability.",
"Yet the main factor is the claim that they highly correlate with human judgement [Papineni et al., 2002] .",
"This has been criticised extensively by the literature and the shortcomings of these methods have been widely studied.",
"Reiter [Reiter, 2018] , in his structured review of BLEU, finds a low correlation between BLEU and human judgment.",
"Callison et al [Callison-Burch et al., 2006] examines BLEU in the context of machine translation and find that BLEU does neither correlate with human judgment on adequacy(whether the hypothesis sentence adequately captures the meaning of the reference sentence) nor fluency(the quality of language in a sentence).",
"Sulem et al [Sulem et al., 2018] examines BLEU in the context of text simplification on grammaticality, meaning preservation and simplicity and report BLEU has very low or in some cases negative correlation with human judgment.",
"In this work, we have established a framework to assess metrics comparing the quality of reference and hypothesis summary/translations.",
"Based on these criteria, we compare evaluators using recent Transformers to BLEU and ROUGE and highlight their potential to replace BLEU and ROUGE."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3125,
0,
0.12121211737394333,
0.2631579041481018,
0.13333332538604736,
0.23529411852359772,
0.25531914830207825,
0.12121211737394333,
0.1538461446762085,
0.09302324801683426,
0.1249999925494194,
0.2666666507720947,
0.1428571343421936,
0.1666666567325592,
0.05405404791235924,
0.17142856121063232,
0.21052631735801697,
0.2142857164144516,
0.19607841968536377,
0.15789473056793213,
0.2702702581882477
] | S1xkQac9LB | true | [
"New method for assessing the quaility of similarity evaluators and showing potential of Transformer-based language models in replacing BLEU and ROUGE."
] |
[
"In this paper, we explore meta-learning for few-shot text classification.",
"Meta-learning has shown strong performance in computer vision, where low-level patterns are transferable across learning tasks.",
"However, directly applying this approach to text is challenging–lexical features highly informative for one task maybe insignificant for another.",
"Thus, rather than learning solely from words, our model also leverages their distributional signatures, which encode pertinent word occurrence patterns.",
"Our model is trained within a meta-learning framework to map these signatures into attention scores, which are then used to weight the lexical representations of words.",
"We demonstrate that our model consistently outperforms prototypical networks learned on lexical knowledge (Snell et al., 2017) in both few-shot text classification and relation classification by a significant margin across six benchmark datasets (19.96% on average in 1-shot classification)."
] | [
0,
0,
1,
0,
0,
0
] | [
0.11764705181121826,
0.09999999403953552,
0.1428571343421936,
0.09090908616781235,
0.12244897335767746,
0.032258059829473495
] | H1emfT4twB | false | [
"Meta-learning methods used for vision, directly applied to NLP, perform worse than nearest neighbors on new classes; we can do better with distributional signatures."
] |
[
"The description of neural computations in the field of neuroscience relies on two competing views:",
"(i) a classical single-cell view that relates the activity of individual neurons to sensory or behavioural variables, and focuses on how different cell classes map onto computations;",
"(ii) a more recent population view that instead characterises computations in terms of collective neural trajectories, and focuses on the dimensionality of these trajectories as animals perform tasks.",
"How the two key concepts of cell classes and low-dimensional trajectories interact to shape neural computations is however currently not understood.",
"Here we address this question by combining machine-learning tools for training RNNs with reverse-engineering and theoretical analyses of network dynamics.",
"We introduce a novel class of theoretically tractable recurrent networks: low-rank, mixture of Gaussian RNNs.",
"In these networks, the rank of the connectivity controls the dimensionality of the dynamics, while the number of components in the Gaussian mixture corresponds to the number of cell classes.",
"Using back-propagation, we determine the minimum rank and number of cell classes needed to implement neuroscience tasks of increasing complexity.",
"We then exploit mean-field theory to reverse-engineer the obtained solutions and identify the respective roles of dimensionality and cell classes.",
"We show that the rank determines the phase-space available for dynamics that implement input-output mappings, while having multiple cell classes allows networks to flexibly switch between different types of dynamics in the available phase-space.",
"Our results have implications for the analysis of neuroscience experiments and the development of explainable AI.",
"With recent advances in deep-learning, the novel approach of training and reverse-engineering RNNs on neuroscience tasks has led to insights on the implementation of cognitive processes (see [1] for a review).",
"Reverse-engineering methods have however provided only partial understanding so far, by either focusing on the characterization of neural dynamics and leaving aside the description of learnt connectivity [2] , or the converse [3] .",
"Taking advantage of recent theoretical results on low-rank networks [4] , we present a reverse-engineering approach that leads to analytically tractable classes of RNNs performing various tasks.",
"Crucially, these classes of models exhibit well defined dimensionality and number of cell classes allowing us to identify the roles of these two properties on neural computation.",
"In this work we have provided an abstract description of computations performed in recurrent neural networks.",
"By focusing on simple tasks this has allowed us to identify the complementary roles of the two important aspects that are dimensionality (section 3.1) and cell classes (section 3.2).",
"Beyond these simple tasks, we have been able to use this understanding to build networks solving multiple tasks (section 3.3), and we expect these principles of neural computation to be important results for the development of procedures aiming at reverse-engineering networks trained on more complex, real-world tasks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.3499999940395355,
0.30188679695129395,
0.3396226465702057,
0.3404255211353302,
0.1304347813129425,
0.14999999105930328,
0.30434781312942505,
0.31111109256744385,
0.3636363446712494,
0.25925925374031067,
0.25,
0.29629629850387573,
0.178571417927742,
0.23076923191547394,
0.44897958636283875,
0.1904761791229248,
0.37037035822868347,
0.23880596458911896
] | SklZVQtLLr | true | [
"A theoretical analysis of a new class of RNNs, trained on neuroscience tasks, allows us to identify the role of dynamical dimensionality and cell classes in neural computations."
] |
[
"We propose the fusion discriminator, a single unified framework for incorporating conditional information into a generative adversarial network (GAN) for a variety of distinct structured prediction tasks, including image synthesis, semantic segmentation, and depth estimation.",
"Much like commonly used convolutional neural network - conditional Markov random field (CNN-CRF) models, the proposed method is able to enforce higher-order consistency in the model, but without being limited to a very specific class of potentials.",
"The method is conceptually simple and flexible, and our experimental results demonstrate improvement on several diverse structured prediction tasks.",
"Convolutional neural networks (CNNs) have demonstrated groundbreaking results on a variety of different learning tasks.",
"However, on tasks where high dimensional structure in the data needs to be preserved, per-pixel regression losses typically result in unstructured outputs since they do not take into consideration non-local dependencies in the data.",
"Structured prediction frameworks such as graphical models and joint CNN-graphical model-based architectures e.g. CNN-CRFs have been used for imposing spatial contiguity using non-local information BID13 BID2 BID25 .",
"The motivation to use CNN-CRF models stems from their ability to capture some structured information from second order statistics using the pairwise part.",
"However, statistical interactions beyond the second-order are tedious to incorporate and render the models complicated BID0 BID12 ).Generative",
"models provide another way to represent the structure and spacial contiguity in large high-dimensional datasets with complex dependencies. Implicit generative",
"models specify a stochastic procedure to produce outputs from a probability distribution. Such models are appealing",
"because they do not demand parametrization of the probability distribution they are trying to model. Recently, there has been",
"great interest in CNN-based implicit generative models using autoregressive BID4 and adversarial training frameworks BID16 .Generative adversarial networks",
"(GANs) BID7 can be seen as a two player minimax game where the first player, the generator, is tasked with transforming a random input to a specific distribution such that the second player, the discriminator, can not distinguish between the true and synthesized distributions. The most distinctive feature of",
"adversarial networks is the discriminator that assesses the discrepancy between the current and target distributions. The discriminator acts as a progressively",
"precise critic of an increasingly accurate generator. Despite their structured prediction capabilities",
", such a training paradigm is often unstable. However, recent work on spectral normalization",
"(SN) and gradient penalty has significantly increased training stability BID8 . Conditional GANs (cGANs) BID19 incorporate conditional",
"image information in the discriminator and have been widely used for class conditioned image generation . To that effect, unlike in standard GANs, a discriminator",
"for cGANs discriminates between the generated distribution and the target distribution on pairs of samples y and conditional information x.For class conditioning, several unique strategies have been presented to incorporate class information in the discriminator BID24 BID23 . DISPLAYFORM0 Adversarial loss (a) Concatenated Image Conditioning",
"x y Adversarial loss DISPLAYFORM1 Discriminator models for image conditioning. We propose fusing the features of the input and the ground truth",
"or generated image rather than concatenating.However, a cGAN can also be conditioned by structured data such as an image. Such conditioning is much more useful for structured prediction",
"problems. Since the discriminator in an image conditioned-GAN has access",
"to large portions of the image the adversarial loss can be interpreted as a learned loss that incorporates higher order statistics, essentially eliminating the need to manually design higher order loss functions. This variation of cGANs has extensively been used for image-to-image",
"translation tasks . However, the best way of incorporating conditional image information",
"into a GAN is not clear, and methods of feeding generated and conditional images to the discriminator tend to use a naive concatenation approach. In this work we address this gap by proposing a discriminator architecture",
"specifically designed for image conditioning. Such a discriminator contributes to the promise of generalization that GANs",
"bring to structured prediction problems by providing a singular and simplistic setup for capturing higher order non-local structural information from higher dimensional data without complicated modeling of energy functions.Contributions. We propose an approach to incorporating conditional information into a cGAN",
"using a fusion discriminator architecture (Fig. 1b) . In particular, we make the following key contributions:1. We propose a novel",
"discriminator architecture designed to incorporating conditional",
"information for structured prediction tasks. The method is designed to incorporate conditional information in feature space in a",
"way that allows the discriminator to enforce higher-order consistency in the model, and is conceptually simpler than alternative structured prediction methods such as CNN-CRFs where higher-order potentials have to be manually incorporated in the loss function.2. We demonstrate the effectiveness of this method on a variety of distinct structured",
"prediction tasks including semantic segmentation, depth estimation, and generating real images from semantic masks. Our empirical study demonstrates that the fusion discriminator is effective in preserving",
"high-order statistics and structural information in the data and is flexible enough to be used successfully for many structured prediction tasks.2 RELATED WORK 2.1 CNN-CRF MODELS Models for structured prediction have been extensively studied",
"in computer vision. In the past these models often entailed the construction of hand-engineered features. In 2015,",
"BID15 demonstrated that a fully convolutional approach to semantic segmentation could",
"yield state-ofthe-art results at that time with no need for hand-engineering features. BID1 showed that post-processing the results of a CNN with a conditional Markov random field led",
"to significant improvements. Subsequent work by many authors have refined this approach by incorporating the CRF as a layer within",
"a deep network and thereby enabling the parameters of both models to be learnt simultaneously BID11 . Many researchers have used this approach for other structured prediction problems, including image-to-image",
"translation and depth estimation BID14 .In most cases CNN-CRF models only incorporate unary and pairwise potentials. BID0 investigated incorporating",
"higher-order potentials into CNN-based models for semantic segmentation, and",
"found that while it is possible to learn the parameters of these potentials, they can be tedious to incorporate and render the model quite complex. Thus there is a need to develop methods that can incorporate higher-order statistical information without requiring",
"manual modeling of higher order potentials.",
"Structured prediction problems can be posed as image conditioned GAN problems.",
"The discriminator plays a crucial role in incorporating non-local information in adversarial training setups for structured prediction problems.",
"Image conditioned GANs usually feed concatenated input and output pairs to the discriminator.",
"In this research, we proposed a model for the discriminator of cGANs that involves fusing features from both the input and the output image in feature space.",
"This method provides the discriminator a hierarchy of features at different scales from the conditional data, and thereby allows the discriminator to capture higher-order statistics from the data.",
"We qualitatively demonstrate and empirically validate that this simple modification can significantly improve the general adversarial framework for structured prediction tasks.",
"The results presented in this paper strongly suggest that the mechanism of feeding paired information into the discriminator in image conditioned GAN problems is of paramount importance.6",
"SUPPLEMENTARY MATERIAL"
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.5490195751190186,
0.14814814925193787,
0.1621621549129486,
0.1764705777168274,
0.12244897335767746,
0.12765957415103912,
0.14999999105930328,
0.05405404791235924,
0.051282044500112534,
0.060606054961681366,
0.10810810327529907,
0,
0.13333332538604736,
0.1621621549129486,
0.19354838132858276,
0.060606054961681366,
0.11428570747375488,
0.25,
0.20689654350280762,
0.25641024112701416,
0.17391303181648254,
0.13793103396892548,
0.15094339847564697,
0.4000000059604645,
0.2857142686843872,
0.34285715222358704,
0.3928571343421936,
0.4324324131011963,
0.3199999928474426,
0.4000000059604645,
0.2295081913471222,
0.22727271914482117,
0.23999999463558197,
0.11764705181121826,
0.06666666269302368,
0.23255813121795654,
0.15789473056793213,
0.25,
0.05405404791235924,
0.1428571343421936,
0.15094339847564697,
0.07999999821186066,
0.06896550953388214,
0.3888888955116272,
0.1875,
0.22727271914482117,
0.2380952388048172,
0.29999998211860657,
0.22727271914482117
] | SJlf488Y_4 | true | [
"We propose the fusion discriminator, a novel architecture for incorporating conditional information into the discriminator of GANs for structured prediction tasks."
] |
[
"Model-based reinforcement learning (MBRL) has been shown to be a powerful framework for data-efficiently learning control of continuous tasks.",
"Recent work in MBRL has mostly focused on using more advanced function approximators and planning schemes, leaving the general framework virtually unchanged since its conception.",
"In this paper, we identify a fundamental issue of the standard MBRL framework -- what we call the objective mismatch issue.",
"Objective mismatch arises when one objective is optimized in the hope that a second, often uncorrelated, metric will also be optimized.",
"In the context of MBRL, we characterize the objective mismatch between training the forward dynamics model w.r.t. the likelihood of the one-step ahead prediction, and the overall goal of improving performance on a downstream control task.",
"For example, this issue can emerge with the realization that dynamics models effective for a specific task do not necessarily need to be globally accurate, and vice versa globally accurate models might not be sufficiently accurate locally to obtain good control performance on a specific task.",
"In our experiments, we study this objective mismatch issue and demonstrate that the likelihood of the one-step ahead prediction is not always correlated with downstream control performance.",
"This observation highlights a critical flaw in the current MBRL framework which will require further research to be fully understood and addressed.",
"We propose an initial method to mitigate the mismatch issue by re-weighting dynamics model training.",
"Building on it, we conclude with a discussion about other potential directions of future research for addressing this issue."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.1818181723356247,
0.14999999105930328,
0.24242423474788666,
0.22857142984867096,
0.17391303181648254,
0.1538461446762085,
0.24390242993831635,
0.21621620655059814,
0.3333333134651184,
0.05882352590560913
] | Sked_0EYwB | false | [
"We define, explore, and begin to address the objective mismatch issue in model-based reinforcement learning."
] |
[
"There has recently been a heated debate (e.g. Schwartz-Ziv & Tishby (2017), Saxe et al. (2018), Noshad et al. (2018), Goldfeld et al. (2018)) about measuring the information flow in Deep Neural Networks using techniques from information theory.",
"It is claimed that Deep Neural Networks in general have good generalization capabilities since they not only learn how to map from an input to an output but also how to compress information about the training data input (Schwartz-Ziv & Tishby, 2017).",
"That is, they abstract the input information and strip down any unnecessary or over-specific information.",
"If so, the message compression method, Information Bottleneck (IB), could be used as a natural comparator for network performance, since this method gives an optimal information compression boundary.",
"This claim was then later denounced as well as reaffirmed (e.g. Saxe et al. (2018), Achille et al. (2017), Noshad et al. (2018)), as the employed method of mutual information measuring is not actually measuring information but clustering of the internal layer representations (Goldfeld et al. (2018)).",
"In this paper, we will present a detailed explanation of the development in the Information Plain (IP), which is a plot-type that compares mutual information to judge compression (Schwartz-Ziv & Tishby (2017)), when noise is retroactively added (using binning estimation). ",
"We also explain why different activation functions show different trajectories on the IP.",
"Further, we have looked into the effect of clustering on the network loss through early and perfect stopping using the Information Plane and how clustering can be used to help network pruning.",
"Deep Neural Networks (DNNs) have recently achieved promising results in many areas especially computer vision and natural language processing.",
"Yet, the learning process and design principles of configuring DNN architecture are under-investigated (Tishby & Zaslavsky, 2015) .",
"There are some recent attempts towards addressing this challenge.",
"From an information theoretic viewpoint, Schwartz-Ziv & Tishby (2017) have investigated the learning dynamics of DNN -how the mutual information (MI) of the layer activation with input and target develops over the course of training.",
"The finding is that DNNs generally first increase the MI of the layers with both, but then reduce the MI with the input.",
"This perceived compression has led to promising results of DNN in many applications 1 .",
"This compression behaviour resembles the IB-method, a constraint method which aims to retain maximum information content for given compression levels (Tishby et al. (1999) ) and these possible maxima are depicted by the IB-bound.",
"2 Through the similarity, the IB-bound could be used as a way to judge network architecture (Schwartz-Ziv & Tishby (2017) ).",
"The closer to the IB-bound the better the NN is likely to perform.",
"However, this finding is controversial, which has been supported by e.g. ; ; Noshad et al. (2018) and denied.",
"Most prominently, Saxe et al. (2018) have argued that this does not generalize for all activation functions and that compression does not necessarily lead to good generalization.",
"Nonetheless Alemi et al. (2016) , Kolchinsky et al. (2017) , Nguyen & Choi (2018) , Banerjee & Montúfar (2018) and Alemi et al. (2018) have tried to implement the IB-constraint as optimization parameter for DNN training leading to promising results.",
"Amjad & Geiger (2018) criticize these attempt claiming that they were not really sticking to the IB for their optimization process since in deterministic NNs the mutual information is either infinite or constant.",
"Hence, the IB cannot produce optimizeable gradients.",
"They therefore reason, that the results of these authors were only possible by giving up a hard IB constraint.",
"Recent success with fully invertible neural networks (which cannot experience any form of compression) cast doubt on the notion of compression being a necessary factor for good generalization (e.g. (Jacobsen et al., 2018) , (Chang et al., 2017) , (Ardizzone et al., 2019) , (Song et al., 2019) ).",
"Finally, a recent paper by Goldfeld et al. (2018) assessed that measuring MI in this scenario is actually tracking how clustered the internal layer representations of the samples are.",
"Building on Goldfeld et al. (2018) , this work attempts to explain the trajectories in the IP created through MI estimation using binning.",
"Through this we will shed more light on the investigation of the learning dynamics of DNNs through usage of the IP.",
"Section 2.2.1 shows that the smaller the bin size for the binning estimator, the more the layers drift towards a fixed point in the IP.",
"Section 2.2.2 highlights that higher layers strongly influence the shape of lower layers in the IP.",
"Section 2.3 explains why the IP looks the way it does.",
"3 Clustering is then examined as a design parameter.",
"This is done by investigating the connection between the loss function and clustering through the usage of early and perfect stopping in section 2.4.1.",
"Here, no clear connection is found.",
"Lastly, network pruning is attempted in section 2.5 using the IP, where slightly positive indications are found.",
"At first though, the experimental setup is outlined.",
"This paper studies the information plane as a neural network analysis tool.",
"We have looked into the influence of different bin sizes for binning estimation, which has led to a detailed explanation of why certain behaviour is happening in the information plane.",
"Thus, finding new strong evidence that the information plane only tracks clustering as Goldfeld et al. (2018) suggested.",
"The usage of measuring clustering has been investigated using early stopping and perfect stopping, which we have not been able to generalise the finding across different datasets.",
"Clustering can be used to design a NN in terms of pruning, which might be worthy of further investigation.",
"The information plane holds value as a measure of clustering and could potentially lead to advancements in Deep Learning.",
"One aspect that has not been part of the discussion so far is that in contrast to non-linearly saturating activation functions like TanH, which has no binning during the real training process, ReLU in fact has a bin.",
"The 0-bin could actually lead to a loss in mutual information because the injectiveness of the activation function gets lost (not invertible anymore) and mutual information is not bound to be constant or infinite.",
"Therefore, networks with ReLU could experience a form of compression.",
"ReLU does in general show better generalization capabilities than TanH, which could partially support the claim that compressed neural networks generalize better Schwartz-Ziv & Tishby (2017) .",
"A well known problem of ReLUs is called \"dying ReLUS\" which could be a case of \"too high\" compression.",
"Which would disturb the mapping between input and output.",
"Taking out the binning of ReLUs, like in LeakyReLUs, is almost always favorable compared to standard ReLUs in terms of generalization (Xu et al. (2015) ).",
"Since LeakyReLUs restore the invertibility of the activation function and therefore prevent compression, this also indicates that compression does not necessarily generalizes better in DNNs.",
"It remains a task for future investigations, how this can be explained in detail."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.15094339847564697,
0.10526315122842789,
0.1764705777168274,
0.21276594698429108,
0.1071428507566452,
0.24137930572032928,
0.1875,
0.1702127605676651,
0.10256409645080566,
0.21621620655059814,
0,
0.16326530277729034,
0.10526315122842789,
0.11764705181121826,
0.19230768084526062,
0.14999999105930328,
0.06666666269302368,
0.051282044500112534,
0.09090908616781235,
0.11999999731779099,
0.1538461446762085,
0.07407406717538834,
0.1538461446762085,
0.1666666567325592,
0.1666666567325592,
0.1428571343421936,
0.1621621549129486,
0.19512194395065308,
0.1764705777168274,
0.06451612710952759,
0.13793103396892548,
0.23255813121795654,
0,
0.15789473056793213,
0.0714285671710968,
0.375,
0.4166666567325592,
0.15789473056793213,
0.17391303181648254,
0.21621620655059814,
0.307692289352417,
0.15094339847564697,
0.23999999463558197,
0.13333332538604736,
0.13333332538604736,
0.10526315122842789,
0.13793103396892548,
0.13636362552642822,
0.1818181723356247,
0.1764705777168274
] | Hyljn1SFwr | true | [
"We give a detailed explanation of the trajectories in the information plane and investigate its usage for neural network design (pruning)"
] |
[
"Formal understanding of the inductive bias behind deep convolutional networks, i.e. the relation between the network's architectural features and the functions it is able to model, is limited.",
"In this work, we establish a fundamental connection between the fields of quantum physics and deep learning, and use it for obtaining novel theoretical observations regarding the inductive bias of convolutional networks.",
"Specifically, we show a structural equivalence between the function realized by a convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which facilitates the use of quantum entanglement measures as quantifiers of a deep network's expressive ability to model correlations.",
"Furthermore, the construction of a deep ConvAC in terms of a quantum Tensor Network is enabled.",
"This allows us to perform a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in its underlying graph.",
"We demonstrate a practical outcome in the form of a direct control over the inductive bias via the number of channels (width) of each layer.",
"We empirically validate our findings on standard convolutional networks which involve ReLU activations and max pooling.",
"The description of a deep convolutional network in well-defined graph-theoretic tools and the structural connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work.",
"A central factor in the application of machine learning to a given task is the restriction of the hypothesis space of learned functions known as inductive bias.",
"In deep convolutional networks, inductive bias manifests itself in architectural features such as number of layers, number of channels per layer, and more BID17 .",
"Formal understanding of the inductive bias behind convolutional networks is limited -the assumptions encoded into these models, which seem to form an excellent prior knowledge for different types of data (e.g. BID16 ; BID15 ; van den Oord et al. (2016) ), are for the most part a mystery.An important aspect of the influence that a certain architectural feature has on the inductive bias, is its effect on the network's ability to model correlations between regions of its input.",
"In this regard, one typically considers partitions that divide input regions into disjoint sets, and asks how far the function realized by the network is from being separable with respect to these partitions BID5 Levine et al., 2017) .",
"For example, BID5 show that when separability is measured through the algebraic notion of separation-rank, deep Convolutional Arithmetic Circuits (ConvACs) BID7 support exponential (in network size) separation-ranks for certain input partitions, while being limited to polynomial separation-ranks for others.",
"ConvACs are a special class of convolutional networks, characterized by linear activations and product pooling, which served a key role in theoretical analyses of convolutional networks, in virtue of their algebraic structure.In this work, we draw upon formal similarities between how physicists describe a system of manyparticles as a quantum mechanical wave function, and how machine learning practitioners map a high-dimensional input (e.g. image) to a set of output labels through a deep network.",
"In particular, we show that there is a structural equivalence between a function modeled by a ConvAC and a many-body quantum wave function, which relies on their underlying tensorial structure.",
"This allows employment of the well-established physical notion of quantum entanglement measures (Plenio and Virmani, 2007) , which subsumes other algebraic notions of separability such as the separation-rank mentioned above, for the analysis of correlations modeled by deep convolutional networks.Importantly, quantum entanglement is used by physicists as prior knowledge to form compact representations of many-body wave functions in what is known as Tensor Networks (TNs) (Östlund and Rommer, 1995; Verstraete and Cirac, 2004; Vidal, 2008; BID11 .",
"In the domain of machine learning, a network in the form of a ConvAC is effectively a compact representation of a multi-dimensional array related to the convolutional weights.",
"This has been analyzed to date via tensor decompositions -where the representations are based on linear combinations of outer-products between lower-order tensors BID7 .",
"A TN, on the other hand, is a way to compactly represent a higher-order tensor through inner-products among lower-order tensors, which allows a natural representation of TNs through an underlying graph.",
"Although the fundamental language is different, we show that a ConvAC can be mapped to a TN, and thus a graph-theoretic setting for studying functions modeled by deep convolutional networks is brought forth.",
"In particular, notions of max-flow/min-cut are shown to convey important meaning.The results we present, connect the inductive bias of deep convolutional networks to the number of channels in each layer, and indicate how these should be set in order to satisfy prior knowledge on the task at hand.",
"Specifically, the ability of a ConvAC to represent correlations between input regions is shown to be related to a min-cut over all edge-cut sets that separate the corresponding input nodes in the associated TN.",
"Such results enable one to avoid bottle-necks and adequately tailor the network architecture through application of prior knowledge.",
"Our results are theoretically proven for a deep ConvAC architecture; their applicability to a conventional deep convolutional network architecture, which involves ReLU activations and max pooling, is demonstrated through experiments.Some empirical reasoning regarding the influence of the channel numbers on the network's performance has been suggested (e.g. Szegedy et al. (2016) ), mainly regarding the issue of bottle-necks which is naturally explained via our theoretical analysis below.",
"Such insights on the architectural design of deep networks are new to machine learning literature, and rely on TN bounds recently derived in physics literature, referred to as 'quantum min-cut max-flow' BID8 .",
"The mapping we present between ConvACs and TNs indicates new possibilities for the use of graphtheory in deep networks, where min-cut analysis could be just the beginning.",
"Additionally, the connections we derive to quantum entanglement and quantum TNs may open the door to further well-established physical insights regarding correlation structures modeled by deep networks.The use of TNs in machine learning has appeared in an empirical context where Stoudenmire and Schwab (2016) trained a matrix product state (MPS) TN architecture to perform supervised learning tasks on the MNIST data-set.",
"Additionally, there is a growing interest in the physics community in RBM based forms for variational many-body wave functions (e.g. BID1 ).",
"BID2 present a theoretical mapping between RBMs and TNs which allows them to connect the entanglement bounds of a TN state to the expressiveness of the corresponding RBM.",
"We provide below the minimal tensor analysis background required for following the analyses of ConvACs and TNs that are carried out in this paper.",
"The core concept in tensor analysis is a tensor, which may be thought of as a multi-dimensional array.",
"The order of a tensor is defined to be the number of indexing entries in the array, which are referred to as modes.",
"The dimension of a tensor in a particular mode is defined as the number of values that may be taken by the index in that mode.",
"If A is a tensor of order N and dimension M i in each mode i ∈ [N ], its entries are denoted A d1...",
"d N , where the index in each mode takes values between 1 and the appropriate dimension, d i ∈ [M i ]. Suppose A is a tensor of order N , and let (A, B) be a partition of [N ] := {1, . . . , N }, i.e. A and B are disjoint subsets of [N ] whose union covers the entire set. The matricization of A w.r.t. the partition (A, B), denoted A A,B , is essentially the arrangement of the tensor elements as a matrix whose rows correspond to A and columns to B (see appendix A for exact definition).",
"The construction of a deep ConvAC in terms of a TN brought forth the main theoretical achievements of this paper.",
"This method enabled us to carry a graph-theoretic analysis of a convolutional network, and tie its expressiveness to a minimal cut in the graph characterizing it.",
"Our construction began with a structural equivalence between the function realized by a ConvAC and a quantum many-body wave function.",
"This facilitated the transfer of mathematical and conceptual tools employed by physicists, such as the tool of TNs and the concept of 'entanglement measures', providing well-defined quantifiers for a deep network's expressive ability to model correlations between regions of its input.",
"By employing these tools, we were able to present theoretical observations regarding the role that the number of channels in each layer fulfills in the overall expressiveness of a deep convolutional network, and how they affect its ability to model given input correlations.",
"Furthermore, practical implications were presented for the construction of a deep network architecture when there is prior knowledge regarding the input correlations.Apart from the direct results discussed above, two important interdisciplinary bridges emerge from this work.",
"The results we drew between min-cut in the graph representation of a ConvAC to network expressivity measures, may constitute an initial example for employing the connection to TNs for the application of graph-theoretic measures and tools to the analysis of the function realized by a deep convolutional network.",
"The second bridge, is the mathematical connection between the two fields of quantum physics and deep learning.",
"The field of quantum TNs is a rapidly evolving one, and the established construction of a successful deep learning architecture in the language of TNs may allow applications and insights to be transferred between the two domains.",
"For example, the tree shaped TN that was shown in this work to be equivalent to a deep convolutional network, has been known in the physics community for nearly a decade to be inferior to another deep TN architecture by the name of MERA (Vidal, 2008) , in its expressiveness and in its ability to model correlations.The MERA TN constitutes an exemplar case of how the TNs/deep-learning connection established in this work allows a bi-directional flow of tools and intuition.",
"MERA architecture introduces over-laps by adding 'disentangling' operations prior to the pooling operations, which, in translation to terms of deep learning, effectively mix activations that are intended to be pooled in different pooling windows.",
"Physicists have a good grasp of how these specific overlapping operations allow a most efficient representation of functions that exhibit high correlations at all length scales (Vidal, 2007) .",
"Accordingly, a new view of the role of overlaps in the high expressivity of deep networks as effectively 'disentangling' intricate correlations in the data can be established.",
"In the other direction, as deep convolutional networks are the most empirically successful machine learning architectures to date, physicists may benefit from trading their current 'overlaps by disentangling' scheme to the use of overlapping convolutional windows (proven to contribute exponentially to the expressive capacity of neural networks by Sharir and Shashua FORMULA1 ), in their search for expressive representations of quantum wave functions.",
"Overall, We view this work as an exciting bridge for transfer of tools and ideas between fields, and hope it will reinforce a fruitful interdisciplinary discourse.",
"DISPLAYFORM0",
"We provide below a short introduction to the notation used by physicists when describing quantum mechanical properties of a many-body system.",
"We follow relevant derivations in Preskill (1998) and Hall FORMULA1 , referring the interested reader to these sources for a more comprehensive mathematical introduction to quantum mechanics.A state of a system, which is a complete description of a physical system, is given in quantum mechanics as a ray in a Hilbert space (to be defined below).",
"Relevant Hilbert spaces in quantum mechanics are vector spaces over the complex numbers.",
"We restrict our discussion to vector spaces over R, as the properties related to complex numbers are not required for our analysis and do not affect it.",
"Physicists denote such vectors in the 'ket' notation, in which a vector ψ is denoted by: |ψ ∈ H. The Hilbert space H has an inner product denoted by φ|ψ , that maps a pair of two vectors in H to a scalar.",
"This inner product operation is also referred to as 'projecting |ψ onto |φ '.",
"A ray is an equivalence class of vectors that differ by multiplication by a nonzero scalar.",
"For any nonzero ray, a representative of the class, |ψ , is conventionally chosen to have a unit norm: ψ|ψ",
"= 1. A 'bra' notation φ|, is used for the 'dual vector' which formally is a linear mapping between vectors to scalars defined as |ψ → φ|ψ .",
"We can intuitively think of a 'ket' as a column vector and 'bra' as a row vector.Relevant Hilbert spaces can be infinite dimensional or finite dimensional.",
"We limit our discussion to quantum states which reside in finite dimensional Hilbert spaces, as these lie at the heart of our analogy to convolutional networks.",
"Besides being of interest to us, these spaces are extensively investigated in the physics community as well.",
"For example, the spin component of a spinful particle's wave function resides in a finite dimensional Hilbert space.",
"One can represent a general single particle state |ψ ∈ H1, where dim(H1) = M , as a linear combination of some orthonormal basis vectors: DISPLAYFORM0 where v ∈ R M is the vector of coefficients compatible with the basis {|ψ d } M d=1 of H1, each entry of which can be calculated by the projection: DISPLAYFORM1 The extension to the case of N particles, each with a wave function residing in a local finite dimensional Hilbert space Hj for j ∈ [N ] (e.g. N spinful particles), is readily available through the tool of a tensor product.",
"In order to define a Hilbert space which is the tensor product of the local Hilbert spaces: H := ⊗ N j=1 Hj, we will specify its scalar product.",
"Denote the scalar product in each Hj by ·|· j , then the scalar product in the tensor product finite dimensional Hilbert space DISPLAYFORM2 For simplicity, we set the dimensions of the local Hilbert spaces Hj to be equal for all j, i.e. ∀j : dim(Hj) = M .",
"In the spin example, this means that the particles have the same spin, e.g. for N electrons (spin 1/2), M = 2. Denoting as above the orthonormal basis of the local Hilbert space by DISPLAYFORM3 , the many-body quantum wave function |ψ ∈ H = ⊗ N j=1 Hj can be written as: DISPLAYFORM4 Reproducing eq. 2.",
"A Tensor Network (TN) is formally represented by an underlying undirected graph that has some special attributes, we elaborate on this formal definition in appendix E.1.",
"In the following, we give a more intuitive description of a TN, which is nonetheless exact and required for our construction of the ConvAC TN.",
"The basic building blocks of a TN are tensors, which are represented by nodes in the network.",
"The order of a tensor represented by a node, is equal to its degree -the number of edges incident to it, also referred to as its legs.",
"FIG6 (a) shows three examples:",
"1) A vector, which is a tensor of order 1, is represented by a node with one leg.",
"2) A matrix, which is a tensor of order 2, is represented by a node with two legs.",
"3) Accordingly, a tensor of order N is represented in the TN as a node with N legs.",
"In a TN, each edge is associated with a number called the bond dimension.",
"The bond dimension assigned to a specific leg of a node, is simply the dimension of the corresponding mode of the tensor represented by this node (see definitions for a mode and its dimension in section 2).A",
"TN is a collection of such tensors represented by nodes, with edges that can either be connected to a node on one end and loose on the other end or connect between two nodes. Each",
"edge in a TN is represented by an index that runs between 1 and its bond dimension. An index",
"representing an edge which connects between two tensors is called a contracted index, while an index representing an edge with one loose end is called an open index. The set",
"of contracted indices will be denoted by K = {k1, ..., kP } and the set of open indices will be denoted by D = {d1, ..., dN }. The operation",
"of contracting the network is defined by summation over all of the P contracted indices An example for a contraction of a simple TN is depicted in FIG6 . There, a TN corresponding",
"to the operation of multiplying a vector v ∈ R r 1 by a matrix M ∈ R r 2 ×r 1 is performed by summing over the only contracted index, k. As there is only one open",
"index, d, the result of contracting the network is an order 1 tensor (a vector): u ∈ R r 2 which upholds u = M v. In FIG6 (c) a somewhat more elaborate example is illustrated, where a TN composed of order 2 and 3 tensors represents a tensor of order 5. This network represents a",
"decomposition known as a tensor train (Oseledets (2011)) in the tensor analysis community or a matrix product state (MPS) (see overview in e.g. Orús (2014)) in the condensed matter physics community, which arranges order 3 tensors in such a 'train' architecture and allows the representation of an order N tensor with a linear (in N ) amount of parameters. The MPS exemplifies a typical",
"desired quality of TNs. The decomposition of a higher",
"order tensor into a set of sparsely interconnected lower order tensors, was shown (Oseledets and Tyrtyshnikov (2009); BID0 ) to greatly diminish effects related to the curse of dimensionality discussed above. 8"
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.22727271914482117,
0.2916666567325592,
0.3333333432674408,
0.24242423474788666,
0.10810810327529907,
0.10256409645080566,
0.05714285373687744,
0.2978723347187042,
0.1428571343421936,
0.1463414579629898,
0.14117646217346191,
0.1071428507566452,
0.1428571343421936,
0.125,
0.08695651590824127,
0.24390242993831635,
0.19999998807907104,
0.0952380895614624,
0.08510638028383255,
0.20408162474632263,
0.16393442451953888,
0.1702127605676651,
0.21621620655059814,
0.15189872682094574,
0.2083333283662796,
0.2222222238779068,
0.2222222238779068,
0.1463414579629898,
0.1904761791229248,
0.1904761791229248,
0.0555555522441864,
0.1538461446762085,
0.10256409645080566,
0.0952380895614624,
0.11363635957241058,
0.1666666567325592,
0.1904761791229248,
0.1666666567325592,
0.25925925374031067,
0.21052631735801697,
0.18867924809455872,
0.2857142686843872,
0.2857142686843872,
0.2857142686843872,
0.2368420958518982,
0.2448979616165161,
0.04444443807005882,
0.24390242993831635,
0.2028985470533371,
0.09090908616781235,
0.1538461446762085,
0.1904761791229248,
0.19354838132858276,
0.1860465109348297,
0.1111111044883728,
0.060606054961681366,
0,
0.10526315122842789,
0.13333332538604736,
0.04999999329447746,
0.1860465109348297,
0.1666666567325592,
0.1111111044883728,
0.08510638028383255,
0.08888888359069824,
0.13793103396892548,
0.08695651590824127,
0.04347825422883034,
0.1463414579629898,
0.11428570747375488,
0.04878048226237297,
0,
0,
0,
0.11428570747375488,
0.0624999962747097,
0.2083333283662796,
0.11764705181121826,
0.10810810327529907,
0,
0.09756097197532654,
0.13636362552642822,
0.0833333283662796,
0.06557376682758331,
0.11428570747375488,
0,
0.1249999925494194
] | SywXXwJAb | true | [
"Employing quantum entanglement measures for quantifying correlations in deep learning, and using the connection to fit the deep network's architecture to correlations in the data."
] |
[
"Deep learning algorithms are increasingly used in modeling chemical processes.",
"However, black box predictions without rationales have limited used in practical applications, such as drug design.",
"To this end, we learn to identify molecular substructures -- rationales -- that are associated with the target chemical property (e.g., toxicity).",
"The rationales are learned in an unsupervised fashion, requiring no additional information beyond the end-to-end task.",
"We formulate this problem as a reinforcement learning problem over the molecular graph, parametrized by two convolution networks corresponding to the rationale selection and prediction based on it, where the latter induces the reward function.",
"We evaluate the approach on two benchmark toxicity datasets.",
"We demonstrate that our model sustains high performance under the additional constraint that predictions strictly follow the rationales.",
"Additionally, we validate the extracted rationales through comparison against those described in chemical literature and through synthetic experiments.",
"Recently, deep learning has been successfully applied to the development of predictive models relating chemical structures to physical or biological properties, outperforming existing methods BID8 BID14 .",
"However, these gains in accuracy have come at the cost of interpretability.",
"Often, complex neural models operate as black boxes, offering little transparency concerning their inner workings.Interpretability plays a critical role in many areas including cheminformatics.",
"Consider, for example, the problem of toxicity prediction.",
"Over 90% of small molecule drug candidates entering Phase I trials fail due to lack of efficacy or due to adverse side effects.",
"In order to propose a modified compound with improved properties, medicinal chemists must know which regions of the molecule are responsible for toxicity, not only the overall level of toxicity BID1 .",
"We call the key molecular substructures relating to the outcome rationales.",
"In traditional cheminformatics approaches such as pharmacophore mapping, obtaining such a rationale behind the prediction is an intrinsic part of the model BID24 BID7 BID12 In this paper, we propose a novel approach to incorporate rationale identification as an integral part of the overall property prediction problem.",
"We assume access to the same training data as in the original prediction task, without requiring annotated rationales.",
"At the first glance, the problem seems solvable using existing tools.",
"For instance, attention-based models offer the means to highlight the importance of individual atoms for the target prediction.",
"However, it is challenging to control how soft selections are exploited by later processing steps towards the prediction.",
"In this sense, the soft weighting can be misleading.",
"In contrast, hard selection confers the guarantee that the excluded atoms are not relied upon for prediction.",
"The hard selection of substructures in a molecule is, however, a hard combinatorial problem.",
"Prior approaches circumvent this challenge by considering a limited set of predefined substructures (typically of 1-6 atoms), like the ones encoded in some molecular fingerprints BID7 .",
"Ideally, we would like the model to derive these structures adaptively based on their utility for the target prediction task.We formulate the problem of selecting important regions of the molecule as a reinforcement learning problem.",
"The model is parametrized by a convolutional network over a molecular graph in which the atoms and bonds are the nodes and edges of the graph, respectively.",
"Different from traditional reinforcement learning methods that have a reward function provided by the environment, our model seeks to learn such a reward function alongside the reinforcement learning algorithm.",
"More generally, our model works as a search mechanism for combinatorial sets, which readily expands to applications beyond chemistry or graphs.Our iterative construction of rationales provides several advantages over standard architectures.",
"First, sequential selection enables us to incorporate contextual features associated with past selections, as well as global properties of the whole molecule.",
"Second, we can explicitly enforce desirable rationale properties (e.g., number of substructures) by including appropriate regularization terms in the reward function.",
"We test our model on two toxicity datasets: the Tox21 challenge dataset, which is a series of 12 toxicity tests, and the human ether-a-go-go-related gene (hERG) channel blocking.",
"The reinforcement learning model identifies the structural components of the molecule that are relevant to these toxicity prediction tasks while simultaneously highlighting opportunities for molecular modification at these sites.",
"We show that by only selecting about 40-50% of the atoms in the molecules, we can create models that nearly match the performance of models that use the entire molecule.",
"By comparing selected regions with rationales described in chemical literature, we further validate the rationales extracted by the model.",
"We present a model that treats the problem of selecting rationales from molecules as a reinforcement learning problem.",
"By creating an auxiliary prediction network, we use a learned reward structure to facilitate the selection of atoms in the molecule that are relevant to the prediction task, without significant loss in predictive performance.",
"In this work, we explore the applicability of rationales in the chemistry domain.",
"Through various experiments on the Tox21 and hERG datasets, we demonstrate that our model successfully learns to select important substructures in an unsupervised manner, requiring the same data as an end-to-end prediction task, which is relevant to many applications including drug design and discovery.",
"Molecules are far more complicated to reason about as compared to images or text due to complex chemical theories and a lack of definitive ground truth rationale labels.",
"As deep learning algorithms continue to permeate the chemistry domain, it will be ever more important to consider the interpretability of such models."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07999999821186066,
0.06451612710952759,
0.21052631735801697,
0.06451612710952759,
0.3478260934352875,
0.0833333283662796,
0.12903225421905518,
0.0624999962747097,
0.09999999403953552,
0,
0.04999999701976776,
0.17391303181648254,
0.05714285373687744,
0.13636362552642822,
0.3199999928474426,
0.15686273574829102,
0.25,
0,
0.19354838132858276,
0.12121211737394333,
0,
0.12903225421905518,
0.07407406717538834,
0.09999999403953552,
0.30434781312942505,
0.15789473056793213,
0.21052631735801697,
0.25531914830207825,
0.0555555522441864,
0,
0.09756097197532654,
0.2857142686843872,
0.10526315122842789,
0.0624999962747097,
0.32258063554763794,
0.1818181723356247,
0.07407406717538834,
0.072727270424366,
0.09756097197532654,
0.1111111044883728
] | HkJ1rgbCb | true | [
"We use a reinforcement learning over molecular graphs to generate rationales for interpretable molecular property prediction."
] |
[
"In recent years, substantial progress has been made on graph convolutional networks (GCN).",
"In this paper, for the first time, we theoretically analyze the connections between GCN and matrix factorization (MF), and unify GCN as matrix factorization with co-training and unitization.",
"Moreover, under the guidance of this theoretical analysis, we propose an alternative model to GCN named Co-training and Unitized Matrix Factorization (CUMF).",
"The correctness of our analysis is verified by thorough experiments.",
"The experimental results show that CUMF achieves similar or superior performances compared to GCN.",
"In addition, CUMF inherits the benefits of MF-based methods to naturally support constructing mini-batches, and is more friendly to distributed computing comparing with GCN.",
"The distributed CUMF on semi-supervised node classification significantly outperforms distributed GCN methods.",
"Thus, CUMF greatly benefits large scale and complex real-world applications.",
"In recent years, works on graph convolutional networks (GCN) (Kipf & Welling, 2017) have achieved great success in many graph-based tasks, e.g., semi-supervised node classification (Kipf & Welling, 2017) , link prediction (Zhang & Chen, 2018) and recommendation systems (Ying et al., 2018) .",
"GCN defines a graph convolution operation, which generates the embedding of each node by aggregating the representations of its neighbors.",
"Given a graph, GCN performs the graph convolution operation layer by layer to obtain the final node representations, which will be passed to neural networks to support various tasks.",
"To perform GCN on large scale graphs in constrained memory or distributed computing environments, different sampling methods have been proposed, such as neighbor sampling (Hamilton et al., 2017) and importance sampling (Chen et al., 2018b) .",
"Instead of sampling, Cluster-GCN (Chiang et al., 2019) proposes an approach to convert computation on a huge matrix to computing on a set of small matrices.",
"However, these methods still suffer from performance loss when conducting distributed computing.",
"To take use of various contextual information on edges in a graph, Relational GCN (RGCN) (Schlichtkrull et al., 2018) extends neighbor aggregation by using edge types in link prediction.",
"Besides the edge types, Edge-enhanced Graph Neural Networks (EGNNs) (Gong & Cheng, 2019) takes more contextual features into consideration.",
"However, in general, GCN still has the efficiency problem when facing complex forms of contextual information.",
"Besides GCN, graph embedding methods (Perozzi et al., 2014; Tang et al., 2015b; a; Grover & Leskovec, 2016) are also widely applied.",
"In general, these methods rely on first-order and secondorder proximity to embed very large information networks into low-dimensional vector spaces.",
"The first-order proximity in a graph is the local pairwise proximity between two vertices, and the secondorder proximity between a pair of vertices in a graph is the similarity between their neighborhood structures.",
"As for GCN, previous work shows that the graph convolution operation is actually a special form of Laplacian smoothing .",
"Thus, as the converging of the model, the smoothing process can keep the final representation of a node more and more similar to those of its neighbors.",
"Therefore, GCN is consistent with graph embedding methods in capturing the structural information.",
"According to previous work (Qiu et al., 2018) , graph embedding methods have been successfully unified as matrix factorization (MF).",
"Thus, we believe that there might be some connections between GCN and MF.",
"Meanwhile, comparing with GCN, MF-based methods are extremely flexible and suitable for distributed computing (Gemulla et al., 2011; Zhuang et al., 2013; Yu et al., 2014) .",
"MF-based methods are also easy and efficient to be extended to tasks with complex forms of contextual information on graph edges (Rendle et al., 2011; Rendle, 2012; Jamali & Lakshmanan, 2013; Shi et al., 2014; Liu et al., 2015) .",
"Thus, if we can unify the GCN model as a special form of MF, large scale and complex real-world applications will benefit from this.",
"In this paper, we theoretically reveal the connections between GCN and MF, and unify GCN as matrix factorization with co-training and unitization in section",
"2. Here, the co-training process means co-training with the classification task of labeled nodes as in (Weston et al., 2012; Yang et al., 2016) , and the unitization indicates conducting vector unitization on node representations.",
"Then, under the guidance of our theoretical analysis, we formally propose an alternative model to GCN named Co-training and Unitized Matrix Factorization (CUMF) in section",
"3. Extensive experiments are conducted on several real-world graphs, and show co-training and unitization are two essential components of CUMF.",
"Under centralized computing settings, CUMF achieves similar or superior performances comparing with GCN.",
"These observations strongly verify the correctness of our theoretical analysis.",
"Moreover, GCN performs poor on dense graphs, while CUMF has great performances.",
"This is may caused by the over-smoothing of graph convolution on dense graphs, while CUMF can balance the smoothing of neighbours and the classification of labeled nodes through the co-training process.",
"Experiments under distributed computing settings are also conducted, and distributed CUMF significantly outperforms the state-of-the-art distributed GCN method, i.e., cluster-GCN (Chiang et al., 2019) .",
"Thus, CUMF is extremely friendly to large scale real-world graphs.",
"Meanwhile, lots of works have been done to model contextual information in MF-based methods (Rendle et al., 2011; Rendle, 2012; Jamali & Lakshmanan, 2013; Shi et al., 2014; Liu et al., 2015) , which have shown great effectiveness, efficiency and flexibility.",
"To the best of our knowledge, CUMF is the first work that connects GCN to MF.",
"We theoretically unify GCN as co-training and unitized matrix factorization, and a CUMF model is therefore proposed.",
"We conduct thorough and empirical experiments, which strongly verify the correctness of our theoretical analysis.",
"The experimental results show that CUMF achieve similar or superior performances compared to GCN.",
"We also observe that GCN performs poor on dense graphs, while CUMF has great performances.",
"This is may caused by the over-smoothing of graph convolution on dense graphs, while CUMF can balance the smoothing of neighbours and the classification of labeled nodes via co-training.",
"Moreover, due to the MF-based architecture, CUMF is extremely flexible and easy to be applied to distributed computing for large scale real-world applications, and significantly outperforms state-of-the-art distributed GCN methods."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.25,
0.3636363446712494,
0.060606054961681366,
0,
0,
0.05882352590560913,
0,
0.0952380895614624,
0.16326530277729034,
0.06896550953388214,
0.1111111044883728,
0.09302325546741486,
0.05882352590560913,
0,
0,
0,
0,
0.0624999962747097,
0.12903225421905518,
0.12121211737394333,
0.06666666269302368,
0.1249999925494194,
0.0833333283662796,
0.25806450843811035,
0.0833333283662796,
0.05882352590560913,
0.08695651590824127,
0.17142856121063232,
0.375,
0.14999999105930328,
0.0555555522441864,
0.13793103396892548,
0,
0,
0,
0.1621621549129486,
0.05714285373687744,
0,
0.04255318641662598,
0,
0.5185185074806213,
0.1538461446762085,
0,
0.07692307233810425,
0.1666666567325592,
0.054054051637649536
] | HJxf53EtDr | true | [
"We unify graph convolutional networks as co-training and unitized matrix factorization."
] |
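The record above summarizes CUMF, which unifies GCN as matrix factorization with co-training and unitization. As a rough illustration only (not the authors' code), the sketch below combines the three ingredients the abstract names: an MF-style reconstruction of the adjacency matrix, a co-training cross-entropy term on labeled nodes, and unitization of node embeddings. All names (`cumf_loss`, `unitize`, `alpha`) are hypothetical.

```python
# Hypothetical sketch (not the authors' code): a CUMF-style objective combining
# matrix factorization of the graph structure, co-training with a classifier on
# labeled nodes, and unitization (L2-normalization) of node embeddings.
import numpy as np

def unitize(Z, eps=1e-8):
    # Project each node embedding onto the unit sphere.
    return Z / (np.linalg.norm(Z, axis=1, keepdims=True) + eps)

def cumf_loss(Z, W, A, labeled_idx, y, alpha=1.0):
    """Z: (n, d) node embeddings, W: (d, c) classifier weights, A: (n, n) adjacency
    matrix, labeled_idx: indices of labeled nodes, y: their integer labels,
    alpha: weight of the co-training (classification) term."""
    Zu = unitize(Z)
    # Matrix-factorization term: reconstruct edges from embedding inner products.
    mf = np.mean((A - Zu @ Zu.T) ** 2)
    # Co-training term: softmax cross-entropy on the labeled nodes.
    logits = Zu[labeled_idx] @ W
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    ce = -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))
    return mf + alpha * ce
```

The weight `alpha` plays the role the record attributes to co-training: balancing neighbour smoothing against the classification of labeled nodes.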
[
"Molecular graph generation is a fundamental problem for drug discovery and has been attracting growing attention.",
"The problem is challenging since it requires not only generating chemically valid molecular structures but also optimizing their chemical properties in the meantime.",
"Inspired by the recent progress in deep generative models, in this paper we propose a flow-based autoregressive model for graph generation called GraphAF.",
"GraphAF combines the advantages of both autoregressive and flow-based approaches and enjoys: (1) high model flexibility for data density estimation; (2) efficient parallel computation for training; (3) an iterative sampling process, which allows leveraging chemical domain knowledge for valency checking.",
"Experimental results show that GraphAF is able to generate 68\\% chemically valid molecules even without chemical knowledge rules and 100\\% valid molecules with chemical rules.",
"The training process of GraphAF is two times faster than the existing state-of-the-art approach GCPN.",
"After fine-tuning the model for goal-directed property optimization with reinforcement learning, GraphAF achieves state-of-the-art performance on both chemical property optimization and constrained property optimization.",
"Designing novel molecular structures with desired properties is a fundamental problem in a variety of applications such as drug discovery and material science.",
"The problem is very challenging, since the chemical space is discrete by nature, and the entire search space is huge, which is believed to be as large as 10 33 (Polishchuk et al., 2013) .",
"Machine learning techniques have seen a big opportunity in molecular design thanks to the large amount of data in these domains.",
"Recently, there are increasing efforts in developing machine learning algorithms that can automatically generate chemically valid molecular structures and meanwhile optimize their properties.",
"Specifically, significant progress has been achieved by representing molecular structures as graphs and generating graph structures with deep generative models, e.g., Variational Autoencoders (VAEs) (Kingma & Welling, 2013) , Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) and Autoregressive Models .",
"For example, Jin et al. (2018) proposed a Junction Tree VAE (JT-VAE) for molecular structure encoding and decoding.",
"De Cao & Kipf (2018) studied how to use GANs for molecular graph generation.",
"You et al. (2018a) proposed an approach called Graph Convolutional Policy Network (GCPN), which formulated molecular graph generation as a sequential decision process and dynamically generates the nodes and edges based on the existing graph substructures.",
"They used reinforcement learning to optimize the properties of generated graph structures.",
"Recently, another very related work called MolecularRNN (MRNN) (Popova et al., 2019) proposed to use an autoregressive model for molecular graph generation.",
"The autoregressive based approaches including both GCPN and MRNN have demonstrated very competitive performance in a variety of tasks on molecular graph generation.",
"Recently, besides the aforementioned three types of generative models, normalizing flows have made significant progress and have been successfully applied to a variety of tasks including density estimation (Dinh et al., 2016; Papamakarios et al., 2017) , variational inference (Kingma et al., 2016; Louizos & Welling, 2017; Rezende & Mohamed, 2015) , and image generation (Kingma & Dhariwal, 2018) .",
"Flow-based approaches define invertible transformations between a latent base distribution (e.g. Gaussian distribution) and real-world high-dimensional data (e.g. images and speech).",
"Such an JT-VAE ------RVAE ------GCPN -----MRNN -----GraphNVP ------GraphAF ----- invertible mapping allows the calculation of the exact data likelihood.",
"Meanwhile, by using multiple layers of non-linear transformation between the hidden space and observation space, flows have a high capacity to model the data density.",
"Moreover, different architectures can be designed to promote fast training (Papamakarios et al., 2017) or fast sampling (Kingma et al., 2016 ) depending on the requirement of different applications.",
"Inspired by existing work on autoregressive models and recent progress of deep generative models with normalizing flows, we propose a flow-based autoregressive model called GraphAF for molecular graph generation.",
"GraphAF effectively combines the advantages of autoregressive and flow-based approaches.",
"It has a high model capacity and hence is capable of modeling the density of real-world molecule data.",
"The sampling process of GraphAF is designed as an autoregressive model, which dynamically generates the nodes and edges based on existing sub-graph structures.",
"Similar to existing models such as GCPN and MRNN, such a sequential generation process allows leveraging chemical domain knowledge and valency checking in each generation step, which guarantees the validity of generated molecular structures.",
"Meanwhile, different from GCPN and MRNN as an autoregressive model during training, GraphAF defines a feedforward neural network from molecular graph structures to the base distribution and is therefore able to compute the exact data likelihood in parallel.",
"As a result, the training process of GraphAF is very efficient.",
"We conduct extensive experiments on the standard ZINC (Irwin et al., 2012) dataset.",
"Results show that the training of GraphAF is significantly efficient, which is two times faster than the state-of-theart model GCPN.",
"The generated molecules are 100% valid by incorporating the chemical rules during generation.",
"We are also surprised to find that even without using the chemical rules for valency checking during generation, the percentage of valid molecules generated by GraphAF can be still as high as 68%, which is significantly higher than existing state-of-the-art GCPN.",
"This shows that GraphAF indeed has the high model capability to learn the data distribution of molecule structures.",
"We further fine-tune the generation process with reinforcement learning to optimize the chemical properties of generated molecules.",
"Results show that GraphAF significantly outperforms previous state-of-the-art GCPN on both property optimization and constrained property optimization tasks.",
"We proposed GraphAF, the first flow-based autoregressive model for generating realistic and diverse molecular graphs.",
"GraphAF is capable to model the complex molecular distribution thanks to the flexibility of normalizing flow, as well as generate novel and 100% valid molecules in empirical experiments.",
"Moreover, the training of GraphAF is very efficient.",
"To optimize the properties of generated molecules, we fine-tuned the generative process with reinforcement learning.",
"Experimental results show that GraphAF outperforms all previous state-of-the-art baselines on the standard tasks.",
"In the future, we plan to train our GraphAF model on larger datasets and also extend it to generate other types of graph structures (e.g., social networks).",
"10:",
"end for 19:",
"In this section, we elaborate the network architecture and the implementation details of three tasks.",
"Network architecture.",
"The network architecture is fixed among all three tasks.",
"More specifically, the R-GCN is implemented with 3 layers and the embedding dimension is set as 128.",
"We use batch normalization before graph pooling to accelerate the convergence and choose sum-pooling as the readout function for graph representations.",
"Both node MLPs and edge MLPs have two fullyconnected layers equipped with tanh non-linearity.",
"Density Modeling and Generation.",
"To achieve the results in Table 2 , we train GraphAF on ZINC250K with a batch size of 32 on 1 Tesla V100 GPU and 32 CPU cores for 10 epochs.",
"We optimize our model with Adam with a fixed learning rate of 0.001.",
"Property Optimization.",
"For both property optimization and constrained property optimization, we first pretrain a GraphAF network via the density modeling task for 300 epochs, and then finetune the network toward desired molecular distribution through RL process.",
"Following are details about the reward design for property optimization.",
"The reward of each step consists of step-wise validity rewards and the final rewards discounted by a fixed factor γ.",
"The step-wise validity penalty is fixed as -1.",
"The final reward of a molecule m includes both property-targeted reward and chemical validation reward.",
"We adopt the same chemical validation rewards as GCPN.",
"We define propertytargeted reward as follows:",
"γ is set to 0.97 for QED optimization and 0.9 for penalized logP optimization respectively.",
"We fine-tune the pretrained model for 200 iterations with a fixed batch size of 64 using Adam optimizer.",
"We also adopt a linear learning rate warm-up to stabilize the training.",
"We perform the grid search to determine the optimal hyperparameters according to the chemical scoring performance.",
"The search space is summarised in Table 7 .",
"Constrained Property Optimization.",
"We first introduce the way we sample sub-graphs from 800 ZINC molecules.",
"Given a molecule, we first randomly sample a BFS order and then drop the last m nodes in BFS order as well as edges induced by these nodes, where m is randomly chosen from {0, 1, 2, 3, 4, 5} each time.",
"Finally, we reconstruct the sub-graph from the remaining nodes in the BFS sequence.",
"Note that the final sub-graph is connected due to the nature of BFS order.",
"For reward design, we set it as the improvement of the target score.",
"We fine-tune the pretrained model for 200 iterations with a batch size of 64.",
"We also use Adam with a learning rate of 0.0001 to optimize the model.",
"Finally, each molecule is optimized for 200 times by the tuned model."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25,
0.10256409645080566,
0.31578946113586426,
0.18867924809455872,
0.10810810327529907,
0.06451612710952759,
0.3333333432674408,
0.15789473056793213,
0.04444443807005882,
0.0555555522441864,
0.1538461446762085,
0.1090909019112587,
0.1764705777168274,
0.2666666507720947,
0.20408162474632263,
0.1428571343421936,
0.307692289352417,
0.307692289352417,
0.0634920597076416,
0.0555555522441864,
0,
0.09999999403953552,
0.0476190410554409,
0.41860464215278625,
0.23076923191547394,
0.1818181723356247,
0.1538461446762085,
0.12765957415103912,
0.20000000298023224,
0,
0.06666666269302368,
0.05882352590560913,
0.06896550953388214,
0.072727270424366,
0.12121211737394333,
0.1249999925494194,
0.25,
0.3870967626571655,
0.1463414579629898,
0,
0.06666666269302368,
0.19999998807907104,
0.1818181723356247,
0.10526315867900848,
0.06666666269302368,
0,
0.06451612710952759,
0.17142856121063232,
0.06896550953388214,
0.09999999403953552,
0.17777776718139648,
0.06896550953388214,
0.17391303181648254,
0.1538461446762085,
0.05882352590560913,
0,
0.13793103396892548,
0,
0,
0.19999998807907104,
0.11764705181121826,
0,
0,
0,
0,
0,
0.038461532443761826,
0,
0,
0,
0.13333332538604736,
0.06451612710952759,
0.2142857164144516
] | S1esMkHYPr | true | [
"A flow-based autoregressive model for molecular graph generation. Reaching state-of-the-art results on molecule generation and properties optimization."
] |
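The GraphAF record above rests on one mechanism: an invertible affine autoregressive transformation between base Gaussian noise and node/edge variables, which gives both an iterative sampling process and an exact, parallelizable log-likelihood. The toy below shows only that single flow step in isolation; it is an assumed simplification, not the paper's implementation, and the conditioner network that would produce `mu` and `log_s` from the current sub-graph is omitted.

```python
# Illustrative sketch only (assumed, not the GraphAF implementation): a single
# affine autoregressive flow step. A conditioner network (omitted here) would
# predict mu and log_s from the current sub-graph; sampling maps base noise
# z ~ N(0, I) to x = z * exp(log_s) + mu, and the exact log-likelihood is
# obtained through the inverse transform and the log-determinant -sum(log_s).
import numpy as np

def affine_flow_sample(z, mu, log_s):
    # Forward direction, used during the iterative sampling process.
    return z * np.exp(log_s) + mu

def affine_flow_logprob(x, mu, log_s):
    # Inverse direction, used for (parallelizable) exact likelihood computation.
    z = (x - mu) * np.exp(-log_s)
    log_base = -0.5 * np.sum(z ** 2 + np.log(2.0 * np.pi))
    return log_base - np.sum(log_s)
```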
[
"We study two types of preconditioners and preconditioned stochastic gradient descent (SGD) methods in a unified framework.",
"We call the first one the Newton type due to its close relationship to the Newton method, and the second one the Fisher type as its preconditioner is closely related to the inverse of Fisher information matrix.",
"Both preconditioners can be derived from one framework, and efficiently estimated on any matrix Lie groups designated by the user using natural or relative gradient descent minimizing certain preconditioner estimation criteria.",
"Many existing preconditioners and methods, e.g., RMSProp, Adam, KFAC, equilibrated SGD, batch normalization, etc., are special cases of or closely related to either the Newton type or the Fisher type ones.",
"Experimental results on relatively large scale machine learning problems are reported for performance study.",
"This paper investigates the use of preconditioner for accelerating gradient descent, especially in large scale machine learning problems.",
"Stochastic gradient descent (SGD) and its variations, e.g., momentum BID11 BID9 , RMSProp and Adagrad BID5 , Adam BID6 , etc., are popular choices due to their simplicity and wide applicability.",
"These simple methods do not use well normalized step size, could converge slow, and might involve more controlling parameters requiring fine tweaking.",
"Convex optimization is a well studied field BID2 .",
"Many off-the-shelf methods there, e.g., (nonlinear) conjugate gradient descent, quasi-Newton methods, Hessian-free optimizations, etc., can be applied to small and middle scale machine learning problems without much modifications.",
"However, these convex optimization methods may have difficulty in handling gradient noise and scaling up to problems with hundreds of millions of free parameters.",
"For a large family of machine learning problems, natural gradient with the Fisher information metric is equivalent to a preconditioned gradient using inverse of the Fisher information matrix as the preconditioner BID1 .",
"Natural gradient and its variations, e.g., Kronecker-factored approximate curvature (KFAC) BID8 and the one in BID10 , all use such preconditioners.",
"Other less popular choices are the equilibrated preconditioner BID4 and the one proposed in BID7 .",
"Momentum or the heavy-ball method provides another independent way to accelerate converge BID9 BID11 .",
"Furthermore, momentum and preconditioner can be combined to further accelerate convergence as shown in Adam BID6 .",
"This paper groups the above mentioned preconditioners and preconditioned SGD methods into two classes, the Newton type and the Fisher type.",
"The Newton type is closely related to the Newton method, and is suitable for general purpose optimizations.",
"The Fisher type preconditioner relates to the inverse of Fisher information matrix, and is limited to a large subclass of stochastic optimization problems where the Fish information metric can be well defined.",
"Both preconditioners can be derived from one framework, and estimated on any matrix Lie groups designated by the user with almost the same natural or relative gradient descent methods minimizing specific preconditioner estimation criteria.",
"Two types of preconditioners and preconditioned SGD methods are studied.",
"The one requiring Hessian-vector product for preconditioner estimation is suitable for general purpose optimization.",
"We call it the Newton type preconditioned SGD due to its close relationship to the Newton method.",
"The other one only requires gradient for preconditioner estimation.",
"We call it the Fisher type preconditioned SGD as its preconditioner is closely related to the inverse of Fisher information matrix.",
"Both preconditioners can be efficiently learned using natural or relative gradient descent on any matrix Lie groups designated by the user.",
"The Fisher type preconditioned SGD has lower computational complexity per iteration, but may require more tuning efforts on selecting its step size and damping factor.",
"The Newton type preconditioned SGD has higher computational complexity per iteration, but is more user friendly due to its use of normalized step size and built-in gradient noise damping ability.",
"Both preconditioners, even with very sparse representations, are shown to considerably accelerate convergence on relatively large scale problems."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.30434781312942505,
0.25925925374031067,
0.13333332538604736,
0.4067796468734741,
0.09302324801683426,
0.21276594698429108,
0.10526315122842789,
0.07843136787414551,
0.054054051637649536,
0.20338982343673706,
0.1538461446762085,
0.2222222238779068,
0.11999999731779099,
0.1395348757505417,
0.09302324801683426,
0.13333332538604736,
0.17391303181648254,
0.1818181723356247,
0.2142857164144516,
0.16129031777381897,
0.20512820780277252,
0.0952380895614624,
0.1860465109348297,
0.10526315122842789,
0.2083333283662796,
0.07999999821186066,
0.03703703358769417,
0.10169491171836853,
0.04255318641662598
] | Bye5SiAqKX | true | [
"We propose a new framework for preconditioner learning, derive new forms of preconditioners and learning methods, and reveal the relationship to methods like RMSProp, Adam, Adagrad, ESGD, KFAC, batch normalization, etc."
] |
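The record above contrasts Newton-type and Fisher-type preconditioned SGD. As a hedged illustration of the general update theta <- theta - lr * P * grad, the sketch below uses a diagonal Fisher-type preconditioner estimated from gradients alone, which reduces to RMSProp-style scaling, one of the special cases the abstract mentions. The class name and hyperparameters are assumptions for the example, not the paper's method.

```python
# Minimal sketch under stated assumptions: a diagonal Fisher-type preconditioner
# estimated from gradients alone, applied as theta <- theta - lr * P * grad.
import numpy as np

class DiagonalFisherSGD:
    def __init__(self, dim, lr=1e-2, beta=0.99, damping=1e-8):
        self.v = np.zeros(dim)                  # running estimate of E[g * g]
        self.lr, self.beta, self.damping = lr, beta, damping

    def step(self, theta, grad):
        self.v = self.beta * self.v + (1.0 - self.beta) * grad ** 2
        precond = 1.0 / np.sqrt(self.v + self.damping)   # P ~ diag(F)^(-1/2)
        return theta - self.lr * precond * grad
```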
[
"We present EDA: easy data augmentation techniques for boosting performance on text classification tasks.",
"EDA consists of four simple but powerful operations: synonym replacement, random insertion, random swap, and random deletion.",
"On five text classification tasks, we show that EDA improves performance for both convolutional and recurrent neural networks.",
"EDA demonstrates particularly strong results for smaller datasets; on average, across five datasets, training with EDA while using only 50% of the available training set achieved the same accuracy as normal training with all available data.",
"We also performed extensive ablation studies and suggest parameters for practical use.",
"Text classification is a fundamental task in natural language processing (NLP).",
"Machine learning and deep learning have achieved high accuracy on tasks ranging from sentiment analysis (Tang et al., 2015) to topic classification BID24 , but high performance is often dependent on the size and quality of training data, which is often tedious to collect.",
"Automatic data augmentation is commonly used in vision BID20 BID22 BID10 and speech (Cui et al., 2015; BID7 and can help train more robust models, particularly when using smaller datasets.",
"However, because it is difficult to come up with generalized rules for language transformation, universal data augmentation techniques in NLP have not been explored.Previous work has proposed techniques for data augmentation in NLP.",
"One popular study generated new data by translating sentences into French and back into English BID28 .",
"Other works have used predictive language models for synonym replacement BID8 and data noising as smoothing BID27 .",
"Although these techniques are valid, they are not often used in practice because they have a high cost of implementation relative to performance gain.In this paper, we present a simple set of universal data augmentation techniques for NLP called EDA (easy data augmentation).",
"To the best of our knowledge, we are the first to comprehensively explore text editing techniques for data augmentation.",
"We systematically evaluate EDA on five benchmark classification tasks, and results show that EDA provides substantial improvements on all five tasks and is particularly helpful for smaller datasets.",
"Code will be made publicly available.Operation Sentence None A sad, superior human comedy played out on the back roads of life.",
"SR A lamentable, superior human comedy played out on the backward road of life.",
"RI A sad, superior human comedy played out on funniness the back roads of life.",
"RS A sad, superior human comedy played out on roads back the of life.",
"RD A sad, superior human out on the roads of life.",
"We have shown that simple data augmentation operations can boost performance on text classification tasks.",
"Although improvement is at times marginal, EDA substantially boosts performance and reduces overfitting when training on smaller datasets.",
"Continued work on this topic could include exploring the theoretical underpinning of the EDA operations.",
"We hope that EDA's simplicity makes a compelling case for its widespread use in NLP."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.48275861144065857,
0,
0.3030303120613098,
0.08888888359069824,
0.07407406717538834,
0.07692307233810425,
0.11538460850715637,
0.13333332538604736,
0.1395348757505417,
0,
0.0624999962747097,
0.15094339847564697,
0.24242423474788666,
0.25641024112701416,
0.05405404791235924,
0.06896550953388214,
0.06666666269302368,
0.06896550953388214,
0.07692307233810425,
0.46666666865348816,
0.1818181723356247,
0.06896550953388214,
0.06666666269302368
] | BJelsDvo84 | true | [
"Simple text augmentation techniques can significantly boost performance on text classification tasks, especially for small datasets."
] |
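The EDA record above names four concrete operations: synonym replacement, random insertion, random swap, and random deletion. The sketch below implements simplified versions of each; a faithful implementation would draw synonyms from WordNet, whereas here a tiny hand-made synonym map keeps the example self-contained, so treat it as an illustration rather than the authors' code.

```python
# Toy sketch of the four EDA operations described above. A faithful version
# would draw synonyms from WordNet; a tiny hand-made synonym map is used here
# so the example stays self-contained.
import random

SYNONYMS = {"sad": ["unhappy"], "superior": ["better"], "roads": ["paths"]}

def synonym_replacement(words, n=1):
    out = words[:]
    candidates = [i for i, w in enumerate(out) if w in SYNONYMS]
    for i in random.sample(candidates, min(n, len(candidates))):
        out[i] = random.choice(SYNONYMS[out[i]])
    return out

def random_insertion(words, n=1):
    out = words[:]
    for _ in range(n):
        synonyms = [s for w in out for s in SYNONYMS.get(w, [])]
        if synonyms:
            out.insert(random.randrange(len(out) + 1), random.choice(synonyms))
    return out

def random_swap(words, n=1):
    out = words[:]
    for _ in range(n):
        i, j = random.randrange(len(out)), random.randrange(len(out))
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(words, p=0.1):
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]
```

Applied to the example sentence in the record above, these operations produce variants of the same kind as those listed under SR, RI, RS and RD.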
[
"We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available.",
"Existing physical scene understanding methods require either object state supervision, or do not integrate with differentiable physics to learn interpretable system parameters and states.",
"We address this problem through a \\textit{physics-as-inverse-graphics} approach that brings together vision-as-inverse-graphics and differentiable physics engines, where objects and explicit state and velocity representations are discovered by the model.",
"This framework allows us to perform long term extrapolative video prediction, as well as vision-based model-predictive control.",
"Our approach significantly outperforms related unsupervised methods in long-term future frame prediction of systems with interacting objects (such as ball-spring or 3-body gravitational systems), due to its ability to build dynamics into the model as an inductive bias.",
"We further show the value of this tight vision-physics integration by demonstrating data-efficient learning of vision-actuated model-based control for a pendulum system.",
"We also show that the controller's interpretability provides unique capabilities in goal-driven control and physical reasoning for zero-data adaptation.",
"System identification or physical parameter estimation is commonly required for control or state estimation for physical modelling, and typically relies on dedicated sensing equipment and carefully constructed experiments.",
"Current machine learning approaches to physical modeling from video either require training by supervised regression from video to object coordinates before estimating explicit physics (Watters et al., 2017; Wu et al., 2017b; Belbute-Peres et al., 2018) , or are able to discover and segment objects from video in an unsupervised manner, but do not naturally integrate with a physics engine for long-term predictions or generation of interpretable locations and physical parameters for physical reasoning (Xu et al., 2019; van Steenkiste et al., 2018) .",
"In this work, we bridge the gap between unsupervised discovery of objects from video and learning the physical dynamics of a system, by learning unknown physical parameters and explicit trajectory coordinates.",
"Our approach, called physics-as-inverse-graphics, solves the physical modeling problem via a novel vision-as-inverse-graphics encoder-decoder system that can render and de-render image components using Spatial Transformers (ST) (Jaderberg et al., 2015) in a way that makes it possible for the latent representation to generate disentangled interpretable states (position/velocity).",
"These can be used directly by a differentiable physics engine (Degrave et al., 2016; Belbute-Peres et al., 2018) to learn the parameters of a scene where the family of differential equations governing the system are known (e.g. objects connected by a spring), but the corresponding parameters are not (e.g. spring constant).",
"This allows us to to identify physical parameters and learn vision components of the model jointly in an end-to-end fashion.",
"Our contribution is a solution to unsupervised learning of physical parameters from video, without having access to ground-truth appearance, position or velocities of the objects, a task that had so far remained unsolved (Wu et al., 2015; Belbute-Peres et al., 2018) .",
"In addition to showing that our model can learn physical parameters without object or state supervision (a task with intrinsic scientific interest in and of itself), we show that incorporating dynamics priors in the form of known physical equations of motion with learnable parameters together with learnable vision and graphics can improve model performance in two challenging tasks: long term video prediction and visual model predictive control.",
"We first evaluate physical parameter estimation accuracy and future video frame prediction on 4 datasets with different non-linear interactions and visual difficulty.",
"We then demonstrate the value of our method by applying it for data-efficient learning of vision-based control of an under-actuated pendulum.",
"Notably our unique ability to extract interpretable states and parameters from pixels without supervision enables end-to-end vision-based control to exploit goal-paramaterized policies and physical reasoning for zero-shot adaptation.",
"Physics-as-inverse graphics provides a valuable mechanism to integrate inductive bias about physical data generating processes into learning.",
"This allows unsupervised object tracking and system identification, in addition to sample efficient, generalisable and flexible control.",
"However, incorporating this structure into lightly supervised deep learning models has proven challenging to date.",
"We introduced a model that accomplishes this, relying on a coordinate-consistent decoder that enables image reconstruction from physics.",
"We have shown that our model is able to perform accurate long term prediction and that it can be used to learn the dynamics of an actuated system, allowing us to perform vision-based model-predictive control."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
1,
0.2142857164144516,
0.2711864411830902,
0.0833333283662796,
0.23529411852359772,
0.15094339847564697,
0.15686273574829102,
0.1818181723356247,
0.23404255509376526,
0.24137930572032928,
0.15584415197372437,
0.3561643660068512,
0.19607841968536377,
0.28985506296157837,
0.2142857164144516,
0.15094339847564697,
0.11764705181121826,
0.13793103396892548,
0.12244897335767746,
0.0416666604578495,
0.04255318641662598,
0.2083333283662796,
0.317460298538208
] | BJeKwTNFvB | true | [
"We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available."
] |
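The record above couples unsupervised vision ("inverse graphics") with a differentiable physics engine whose equations are known but whose parameters are not. The toy below isolates the physics side only: it fits an unknown spring constant from an observed trajectory by gradient descent through an explicit Euler rollout, using a finite-difference gradient so the example stays dependency-free. It is a stand-in sketch, not the paper's engine, and the function names are hypothetical.

```python
# Stand-in sketch (not the paper's engine): fit an unknown spring constant k
# from an observed trajectory by gradient descent through an explicit Euler
# rollout, using a central-difference gradient to stay dependency-free.
import numpy as np

def rollout(k, x0=1.0, v0=0.0, dt=0.05, steps=50):
    xs, x, v = [], x0, v0
    for _ in range(steps):
        v -= dt * k * x          # F = -k * x, unit mass
        x += dt * v
        xs.append(x)
    return np.array(xs)

def fit_spring_constant(observed, k=0.5, lr=0.3, iters=500, eps=1e-4):
    loss = lambda kk: np.mean((rollout(kk) - observed) ** 2)
    for _ in range(iters):
        grad = (loss(k + eps) - loss(k - eps)) / (2.0 * eps)
        k -= lr * grad
    return k

true_k = 2.0
print(fit_spring_constant(rollout(true_k)))   # should move close to 2.0
```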
[
"This paper proposes ASAL, a new pool based active learning method that generates high entropy samples.",
"Instead of directly annotating the synthetic samples, ASAL searches similar samples from the pool and includes them for training.",
"Hence, the quality of new samples is high and annotations are reliable. ",
"ASAL is particularly suitable for large data sets because it achieves a better run-time complexity (sub-linear) for sample selection than traditional uncertainty sampling (linear).",
"We present a comprehensive set of experiments on two data sets and show that ASAL outperforms similar methods and clearly exceeds the established baseline (random sampling). ",
"In the discussion section we analyze in which situations ASAL performs best and why it is sometimes hard to outperform random sample selection.",
"To the best of our knowledge this is the first adversarial active learning technique that is applied for multiple class problems using deep convolutional classifiers and demonstrates superior performance than random sample selection.",
"The goal of active learning (AL) algorithms is to train a model most efficiently, i.e. achieving the best performance with as few labelled samples as possible.",
"Typical AL algorithms operate in an iterative fashion, where in each AL-cycle a query strategy selects samples that the oracle should annotate.",
"These samples are expected to improve the model most effectively when added to the training set.",
"This procedure continues until a predefined stopping criteria is met.In this paper we will mainly focus on pool based active learning, because a pool of unlabelled samples is often available beforehand or can easily be build.",
"Furthermore, annotating all pool samples serves as an ideal evaluation environment for active learning algorithms.",
"It enables to train a fullysupervised model that establishes a performance upper bound on this data set.",
"Similarly, randomly selecting instead of actively choosing samples establishes a lower bound.",
"Then, the goal of an active learning algorithm is to approximate the performance of the fully supervised model with as few labelled samples as possible, while exceeding the performance of random sampling.Uncertainty sampling is an effective query strategy that identifies samples that are more informative than random ones.",
"The heuristic is, that samples for which the model is most uncertain contain new information and improve the model.",
"However, to identify such samples an exhaustive search over the full pool is required and the uncertainty score needs to be recomputed as soon as the model is updated (each AL cycle).",
"Thus, uncertainty sampling has a linear run-time complexity such that scanning very large unlabelled data sets is impractical even for inexpensive score functions.Our contributions are as follows:• We propose Adversarial Sampling for Active Learning (ASAL) that allows to approximate the performance of uncertainty sampling with a sub-linear run-time complexity.•",
"We conduct an extensive set of experiments using four different benchmarks (two and ten classes) and discuss the limitations of ASAL and how to overcome them.•",
"We demonstrate ASAL with different CNN based classifiers and three different feature sets to compare samples: raw pixel values, compressed representations of an auto-encoder and the features used to discriminate between real and fake samples in GANs.",
"Our experiments and results show that ASAL clearly outperforms random sampling and approximates exhaustive uncertainty sampling on three out of four benchmarks.",
"Compared to GAAL, ASAL outperforms random sampling, enables annotating real samples, handles multiple class problems and uses CNN based classifiers.",
"ASAL allows to update the feature maps of a classifier in each AL cycle and still achieves sub-linear run-time complexity whereas the hashing based methods of Jain et al. FORMULA1 has a linear run-time complexity if the feature maps are updated.",
"Updating the classifier and keeping the features for matching fixed, leads to sub-linear run-times but without guaranteeing that newly added samples have the highest entropy of all samples available in the pool.To achieve a sub-linear run-time complexity, ASAL requires to train a GAN and potentially an autoencoder beforehand.",
"Nonetheless, this initial cost pays off for extremely large data sets.",
"Although, it might be impractical to consider each sample during training of the GAN, it can generate representative samples and ASAL allows to select samples from the pool that were not used to train the GAN.",
"Thus, ASAL favours large data sets with similar samples, where it is only possible to train the GAN for a fixed number of iterations but contains a close match for any synthetic sample.",
"Conversely, small data sets with diverse samples allow to train the GANs for many epochs such that it is align to the data distribution.",
"However, real samples are sparsely distributed in feature space such that even the closest matches of a synthetic sample are significantly different.We observed in FIG1 that ASAL performs similar as random sampling.",
"Although ASAL enables to generate uncertain samples, it fails to select similar samples from the pool that have high entropy.",
"One explanation is the aforementioned situation, where the images are diverse but the data set is comparatively small.",
"Note, that CIFAR-10 is clearly more diverse than MNIST but has the same amount of samples.",
"Furthermore, the top row in Fig. 5 shows that synthetic images still look unrealistic and identifying a similar real sample is a challenging problem.",
"Another reason for poor performance is using low level features to compare different samples.",
"To achieve state-of-the-art results on CIFAR-10, we had to use a much deeper network than for all other experiments but kept the architectures of the feature extractors almost identical.",
"This can lead to a mismatch where the entropy of a sample mainly depends on high-level features but the matching method uses only low-level features to compare samples.",
"FIG2 in the appendix shows for example that exhaustive uncertainty sampling selects most frequently images with the category cat exactly a class that ASAL selects least frequently.",
"This is a sign that ASAL considers low-level features to find similar samples instead of more complex properties that characterize class information.",
"Fig.",
"5 provides again such an indication.",
"The last column shows a synthetic image with a white horse on a gray background and ASAL proposes matches with white object on a gray background but contain either a ship or an airplane.",
"This means, that the classifier requires samples of a specific class it is uncertain about, ASAL generates these samples but fails to retrieve matches showing theses categories.On CIFAR-10 -two classes we reported for ASAL-Autoencoder similar or slightly higher accuracy than for exhaustive uncertainty sampling.",
"Although we consider uncertainty sampling as a performance reference that we try to approximate, it is always possible to exceed its performance.",
"Note, entropy is one particular property that can identify informative samples.",
"Nonetheless, it is possible that samples with lower entropy are more effective for training the classifier.",
"We proposed and evaluated a new pool-based active learning method that uses sample generation and matching.",
"However, the sub-linear run-time complexity requires relaxing the guarantee, that selected samples have the highest entropy of all pool samples.",
"We showed, that the success of ASAL depends on different factors: the structure of the data set, the quality of the trained GAN and the relevance of the feature used to compare samples.",
"A poor GAN can generate high entropy samples but poor quality samples are impractical to match.",
"Small data sets that contain very different samples complicate both, training GANs and finding similar matches.",
"Less representative features might not contain the properties needed to find similar samples, where both have a high entropy.",
"Nonetheless, we demonstrated that ASAL outperforms random sample selection and approximates exhaustive uncertainty sampling in three out of four cases.",
"Furthermore, the sub-linear run-time complexity makes ASAL suitable for large data set.",
"We pointed out that ASAL uses low-level feature but there are signs that high-level features might be more suitable to match samples.",
"Thus, one particular direction of future research includes identifying such high-level features.",
"Possible candidates are VGG (Simonyan & Zisserman FORMULA1 ) or AlexNet (Krizhevsky et al. FORMULA1 features.",
"Training the model on the unlabelled pool and the small initial data set might lead to features covering the needed properties.",
"In addition, sample generation allows adding other scores beside information entropy.",
"Thus, an interesting direction of future research is designing other scores that will be used during sample generation i.e. measuring sample diversity (Zhu et al. FORMULA2 We keep the suggested splitting into training, validation and testing for each benchmark."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.5945945978164673,
0.307692289352417,
0.29411762952804565,
0.13636362552642822,
0.21276594698429108,
0.22727271914482117,
0.23076923191547394,
0.25531914830207825,
0.2380952388048172,
0.11428570747375488,
0.2181818187236786,
0.2222222238779068,
0.10810810327529907,
0.12121211737394333,
0.21052631735801697,
0.2631579041481018,
0.2083333283662796,
0.1538461446762085,
0.13333332538604736,
0.2222222238779068,
0.1463414579629898,
0.1463414579629898,
0.25925925374031067,
0.35483869910240173,
0,
0.2745097875595093,
0.1538461446762085,
0.1904761791229248,
0.23529411852359772,
0.4000000059604645,
0.1111111044883728,
0.21621620655059814,
0.27272728085517883,
0.11428570747375488,
0.08163265138864517,
0.2666666507720947,
0.22727271914482117,
0.2380952388048172,
0,
0.1304347813129425,
0.21875,
0.14999999105930328,
0.25,
0.2702702581882477,
0.3888888955116272,
0.31578946113586426,
0.2222222238779068,
0.17142856121063232,
0.1621621549129486,
0.19999998807907104,
0.19512194395065308,
0.1818181723356247,
0.1428571343421936,
0,
0,
0.1538461446762085,
0.0624999962747097,
0.13333332538604736
] | r1GB5jA5tm | true | [
"ASAL is a pool based active learning method that generates high entropy samples and retrieves matching samples from the pool in sub-linear time."
] |
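The ASAL record above hinges on two steps: generating a high-entropy synthetic sample with a GAN, then retrieving similar real samples from the pool instead of annotating the synthetic one. The fragment below sketches only the scoring and matching; it assumes the synthetic sample and the pool have already been mapped to a common feature space (raw pixels, autoencoder codes, or GAN discriminator features, as in the record), and a brute-force nearest-neighbour search stands in for whatever sub-linear index a real system would use.

```python
# Rough sketch of the ASAL selection step described above. The high-entropy
# synthetic sample is assumed to come from a GAN; only the entropy score and
# the matching of that sample back to its nearest real pool samples are shown.
import numpy as np

def entropy(probs, eps=1e-12):
    # Shannon entropy of the classifier's predictive distribution.
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def match_to_pool(synthetic_feat, pool_feats, k=5):
    # Indices of the k pool samples closest to the synthetic sample.
    distances = np.linalg.norm(pool_feats - synthetic_feat, axis=1)
    return np.argsort(distances)[:k]
```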
[
"We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws.",
"Each time a model is trained, one gets a different result due to random factors in the training process, which include random parameter initialization and random data shuffling.",
"Reporting the best single model performance does not appropriately address this stochasticity.",
"We propose a normalized expected best-out-of-n performance (Boo_n) as a way to correct these problems.",
"Replicating results in deep learning research is often hard.",
"This harms their usefulness to industry, leads to a waste of effort by other researchers, and limits the scientific value of such results.One reason is that many papers provide information insufficient for replication.",
"Details of the experimental setup can significantly influence the results BID13 BID10 BID23 , so the details should be provided at least in appendices, ideally alongside the source code, as was strongly emphasized e.g. by BID17 .However",
", an important second factor hinders replicability: most deep learning training methods are inherently stochastic. This randomness",
"usually comes from random data ordering in stochastic gradient descent and from random parameter initialization, though there can be additional sources of randomness such as dropout or gradient noise. Consequently, even",
"if we fix the model architecture and the experimental setup (including the hyperparameters), we obtain a different result each time we run an experiment. Statistical techniques",
"are needed to handle this variability. However, in deep learning",
"research, they are heavily underused. What is usually done instead?Most",
"empirical deep learning papers",
"simply report the performance of the best single model (sometimes calling it just \"single model\" performance). We will later show this is the case",
"at least for some sub-domains. Given the result stochasticity, such",
"method is statistically flawed. The best model performance is not robust",
"under experiment replication, and its expected value improves with an increasing number of experiments performed, among other problems. Since many deep learning publications largely",
"ignore these issues, we dedicate the first part of this article to explaining them in some detail, and later run experiments to quantify them.Appropriate statistical techniques are hence necessary for evaluating (and comparing) the performance of machine learning (ML) architectures. Some well-developed methods exist for such comparisons",
"(a great introduction is given for instance by BID5 ). However, most existing methods focus on comparing the",
"mean performance. This may be one of the reasons why statistical methods",
"are being underused, since mean may be unattractive to researchers in certain situations.There are multiple possible reasons for this. The one that we do consider sound 1 is that when deploying",
"models in practice, it is often natural to train multiple instances of a model and then deploy the best one to production based on a validation set evaluation.2 Underperforming models can be discarded, so the final deployed",
"model does come from 1 Other reasons why researchers resort to the best performance as opposed to the mean may come from the current highly competitive atmosphere in the field with (possibly excessive) focus on performance on standard datasets (see BID4 or BID26 for further discussion), which may motivate researchers to publish only their best results. Also, statistically sound estimation of performance does require",
"repeatedly re-running experiments, which does incur additional cost, which researchers may prefer to invest in additional model tuning, especially in the present situation where reviewers seem not to require statistically sound evaluation of models and on the other hand may favour high-performing models. Of course, these motives should instead give place to effort to",
"do good science, as opposed to a race on standard benchmarks. 2 In some applications there is focus on speed of training and",
"on reducing computational costs -there it does make sense to focus on the performance of the typical model as opposed to the best out of n, so the use of mean or median is appropriate. the higher tier of the model performance population, and the use",
"of mean may be inappropriate.Hence, rather than to completely abandon reporting the performance of the best model, we propose a way to fix its flaws. We do this by estimating the expected best-out-of-n (Boo n ) performance",
"by running more than n experiments, which gives the estimate statistical validity if a sufficient number of experiments are run. We discuss how this measure relates to the performance distribution of the",
"model, and we also give a method to empirically estimate Boo n .The paper proceeds as follows: First, we give a high-level explanation of why",
"reporting performance of the best single model is problematic. We also give some evidence that it is widely used in the deep learning community",
", which is why this explanation may be needed. We proceed by presenting Boo n as a way to fix the above problems. We then give",
"some experimental evidence for the flaws of best-singlemodel reporting",
"and show that Boo n does not suffer from them. We wrap up by discussing the place of Boo n in a ML researcher's toolbox alongside",
"traditional measures such as mean or median.",
"Boo n does fix the main flaws of reporting the best single model performance.",
"However, let us have a look at some of its limitations.Hyperparameter tuning This work does not fully compensate for improved expected results due to hyperparameter tuning, nor was it its primary aim.",
"Boo n is appropriate in the case of random hyperparameter sampling, where the performances in different runs are independent.",
"However, this is not the case for more advanced hyperparameter optimization methods.",
"The primary focus of this work was on tackling variability due to random initialization, data shuffling, and similar sources, which we have shown to be significant in itself.",
"Compensation for more advanced hyperparameter tuning (and ensuring the comparability of models in that case) is certainly a worthwhile area for future research.Mean, median, and other alternatives We do not claim our method to be strictly superior to traditional ways of aggregating results, such as mean or quantiles.",
"However, we have outlined a case where using Boo n is justified -situations where a final model to be deployed can be chosen from a pool of trained candidates.",
"In such case, Boo n is easily interpretable and more informative than a performance of a typical model, expressed by mean or median.",
"Hence, we think Boo n is a useful addition to the methodological toolbox along existing methods.Just a single number Boo n is still just a single number whose ability to characterize the performance distribution is limited by its single dimension.",
"Paper authors should try to characterise the performance distribution as fully as possible, which may involve a histogram, mean, standard deviation, ideally along a dataset containing the results of all experiments, from which an interested reader may be able to deduce whichever characteristic she finds interesting.",
"Unfortunately, such characterization is usually lacking.However, alongside this detailed characterization, describing an architecture's performance by a single number still has its appeal, especially for the purpose of comparison among architectures and choosing the best one according to some criterion (in fact, each quantitative score can be understood as a proxy for ordering architectures with respect to such criterion of interest, such as expected performance of the best model out of n).",
"We have explained why, in some cases, Boo n may be useful for such purpose.Computational cost Some may deem Boo n impractical due to its requirement to train architectures many times, which may be very expensive in some cases.",
"However, result stochasticity needs to be addressed to produce reliable results, and it is hard to imagine a general method to do so without repeated evaluation 12 .",
"Researchers should focus on architectures which they can evaluate properly given their resources.",
"However, the main target of our criticism is not projects whose resources are stretched by a single training; it is projects that do have the necessary resources for multiple evaluations but use them to produce better-looking results rather than results that are more informative and robust.",
"Reporting just the best single model performance is not statistically sound.",
"This practice in machine learning research needs to change if the research is to have lasting value.",
"Reviewers can play an important role in bringing this change.Still, asking for the performance of a best model out of n can have valid reasons.",
"For the situations where the best-model performance is indeed a good metric, we are suggesting Boo n as a way to evaluate it properly.",
"DISPLAYFORM0 where Φ is the c.d.f. of a standard normal random variable.",
"DISPLAYFORM1 (the first integrand has the form of the p.d.f. found above and hence integrates to one) so the expected maximum is neatly expressed in terms of a maximum of a standard normal and is linearly proportional to both the mean and the standard deviation.",
"Once n is fixed for comparison purposes, Boo n (N (0, 1)) is just a constant, e.g. Boo 5 FIG2 ) ≈ 1.163, Boo 10 (N (0, 1)) ≈ 1.539."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
1,
0.14814814925193787,
0.24390242993831635,
0.2790697515010834,
0.10526315122842789,
0.19672130048274994,
0.0624999962747097,
0.1304347813129425,
0.07017543166875839,
0.19607841968536377,
0.10256409645080566,
0,
0.12121211737394333,
0.2745097875595093,
0.10256409645080566,
0.20512820780277252,
0.22641508281230927,
0.2222222238779068,
0.12765957415103912,
0.1463414579629898,
0.10169491171836853,
0.1904761791229248,
0.17721518874168396,
0.13513512909412384,
0.11764705181121826,
0.2295081913471222,
0.29032257199287415,
0.17241378128528595,
0.19607841968536377,
0.3921568691730499,
0.14814814925193787,
0.21052631735801697,
0.22641508281230927,
0,
0.3333333432674408,
0.09836065024137497,
0.08695651590824127,
0.09756097197532654,
0.1071428507566452,
0.21333332359790802,
0.18518517911434174,
0.15686273574829102,
0.16949151456356049,
0.11594202369451523,
0.25581395626068115,
0.06557376682758331,
0.11320754140615463,
0,
0.20588234066963196,
0.25,
0.13636362552642822,
0.3396226465702057,
0.15686273574829102,
0.1395348757505417,
0.1269841194152832,
0.07547169178724289
] | Skx5txzb0W | true | [
"We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws."
] |
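The Boo_n record above proposes reporting the expected best-out-of-n performance estimated from more than n runs. A minimal way to estimate it, assuming only a list of per-run scores, is Monte Carlo resampling of the maximum of n draws; the helper name `boo_n_empirical` and the example scores are made up for illustration. The sanity check against a standard normal reproduces the constant quoted in the record (Boo_5(N(0,1)) ≈ 1.163).

```python
# Sketch (assumed helper name and example scores): empirically estimating the
# expected best-out-of-n (Boo_n) performance from m >= n experiment results.
import numpy as np

def boo_n_empirical(results, n, trials=10000, seed=0):
    rng = np.random.default_rng(seed)
    results = np.asarray(results, dtype=float)
    # Expected maximum of n results resampled (with replacement) from the observed runs.
    draws = rng.choice(results, size=(trials, n), replace=True)
    return draws.max(axis=1).mean()

runs = [0.71, 0.73, 0.70, 0.74, 0.72, 0.69, 0.73, 0.75]   # hypothetical validation scores
print(boo_n_empirical(runs, n=5))

# Sanity check against the analytic constant for a standard normal: Boo_5 ~ 1.163.
normal_runs = np.random.default_rng(1).standard_normal(100_000)
print(boo_n_empirical(normal_runs, n=5))   # roughly 1.16
```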
[
"Unsupervised domain adaptation aims to generalize the hypothesis trained in a source domain to an unlabeled target domain.",
"One popular approach to this problem is to learn domain-invariant embeddings for both domains.",
"In this work, we study, theoretically and empirically, the effect of the embedding complexity on generalization to the target domain.",
"In particular, this complexity affects an upper bound on the target risk; this is reflected in experiments, too.",
"Next, we specify our theoretical framework to multilayer neural networks.",
"As a result, we develop a strategy that mitigates sensitivity to the embedding complexity, and empirically achieves performance on par with or better than the best layer-dependent complexity tradeoff.",
"Domain adaptation is critical in many applications where collecting large-scale supervised data is prohibitively expensive or intractable, or where conditions at prediction time can change.",
"For instance, self-driving cars must be robust to different weather, change of landscape and traffic.",
"In such cases, the model learned from limited source data should ideally generalize to different target domains.",
"Specifically, unsupervised domain adaptation aims to transfer knowledge learned from a labeled source domain to similar but completely unlabeled target domains.",
"One popular approach to unsupervised domain adaptation is to learn domain-invariant representations (Ben-David et al., 2007; Long et al., 2015; Ganin et al., 2016) , by minimizing a divergence between the representations of source and target domains.",
"The prediction function is learned on these \"aligned\" representations with the aim of making it domain-independent.",
"A series of theoretical works justifies this idea (Ben-David et al., 2007; Mansour et al., 2009; Ben-David et al., 2010; Cortes & Mohri, 2011) .",
"Despite the empirical success of domain-invariant representations, exactly matching the representations of source and target distribution can sometimes fail to achieve domain adaptation.",
"For example, Wu et al. (2019) show that exact matching may increase target error if label distributions are different between source and target domain, and propose a new divergence metric to overcome this limitation.",
"Zhao et al. (2019) establish lower and upper bounds on the risk when label distributions between source and target domains differ.",
"Johansson et al. (2019) point out the information lost in non-invertible embeddings, and propose different generalization bounds based on the overlap of the supports of source and target distribution.",
"In contrast to previous analyses that focus on changes in the label distributions or joint support, we study the effect of embedding complexity.",
"In particular, we show a general bound on the target risk that reflects a tradeoff between embedding complexity and the divergence of source and target domains.",
"A too powerful class of embeddings can result in overfitting the source data and the matching of source and target distributions, resulting in arbitrarily high target risk.",
"Hence, a restriction is needed.",
"We observe that indeed, without appropriately constraining the embedding complexity, the performance of state-of-the-art methods such as domain-adversarial neural networks (Ganin et al., 2016) can deteriorate significantly.",
"Next, we tailor the bound to multilayer neural networks.",
"In a realistic scenario, one may have a total depth budget and divide the network into an encoder (embedding) and predictor by aligning the representations of source and target in a chosen layer, which defines the division.",
"In this case, a more complex encoder necessarily implies a weaker predictor, and vice versa.",
"This tradeoff is reflected in the bound and, we see that, in practice, there is an \"optimal\" division.",
"To better optimize the tradeoff between encoder and predictor without having to tune the division, we propose to optimize the tradeoffs in all layers jointly via a simple yet effective objective that can easily be combined with most current approaches for learning domain-invariant representations.",
"Implicitly, this objective restricts the more powerful deeper encoders by encouraging a simultaneous alignment across layers.",
"In practice, the resulting algorithm achieves performance on par with or better than standard domain-invariant representations, without tuning of the division.",
"Empirically, we examine our theory and learning algorithms on sentiment analysis (Amazon review dataset), digit classification (MNIST, MNIST-M, SVHN) and general object classification (Office-31).",
"In short, this work makes the following contributions:",
"• General upper bounds on target error that capture the effect of embedding complexity when learning domain-invariant representations; • Fine-grained analysis for multilayer neural networks, and a new objective with implicit regularization that stabilizes and improves performance; • Empirical validation of the analyzed tradeoffs and proposed algorithm on several datasets.",
"In this paper, we theoretically and empirically analyze the effect of embedding complexity on the target risk in domain-invariant representations.",
"We find a complexity tradeoff that has mostly been overlooked by previous work.",
"In fact, without carefully selecting and restricting the encoder class, learning domain invariant representations might even harm the performance.",
"We further develop a simple yet effective algorithm to approximately optimize the tradeoff, achieving performance across tasks that matches the best network division, i.e., complexity tradeoff.",
"Interesting future directions of work include other strategies for model selection, and a more refined analysis and exploitation of the effect of inductive bias."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.22857142984867096,
0.12121211737394333,
0.3684210479259491,
0.1621621549129486,
0.06666666269302368,
0.4680851101875305,
0.0476190410554409,
0.17142856121063232,
0.10810810327529907,
0.10256409645080566,
0.2745097875595093,
0.2222222238779068,
0.04878048226237297,
0.2926829159259796,
0.1538461446762085,
0.09999999403953552,
0.17777776718139648,
0.4285714328289032,
0.3333333134651184,
0.19512194395065308,
0.07999999821186066,
0.21276594698429108,
0.13793103396892548,
0.23529411852359772,
0.11764705181121826,
0.1111111044883728,
0.29999998211860657,
0.1111111044883728,
0.14999999105930328,
0.0952380895614624,
0.0714285671710968,
0.32258063554763794,
0.4615384638309479,
0.24242423474788666,
0.21052631735801697,
0.2978723347187042,
0.24390242993831635
] | SkgOzlrKvH | true | [
"We study the effect of the embedding complexity in learning domain-invariant representations and develop a strategy that mitigates sensitivity to it."
] |
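The record above argues that, given a fixed-depth network split into an encoder and a predictor, one can avoid tuning the split by aligning source and target representations in all layers jointly. The sketch below writes down that joint objective in its simplest form, with a linear-kernel MMD standing in for whatever divergence is actually minimized; the weights and function names are assumptions, not the paper's implementation.

```python
# Hedged sketch of a joint multi-layer alignment objective: penalize a
# divergence between source and target activations at every layer.
import numpy as np

def linear_mmd(src, tgt):
    # Squared distance between the mean embeddings of the two batches.
    return float(np.sum((src.mean(axis=0) - tgt.mean(axis=0)) ** 2))

def multilayer_alignment_loss(src_layers, tgt_layers, weights=None):
    """src_layers, tgt_layers: lists of (batch, dim_l) activations, one per layer."""
    weights = weights or [1.0] * len(src_layers)
    return sum(w * linear_mmd(s, t)
               for w, s, t in zip(weights, src_layers, tgt_layers))
```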
[
"We propose a new architecture termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER).",
"Specifically, two variants of DATNet, i.e., DATNet-F and DATNet-P, are proposed to explore effective feature fusion between high and low resource.",
"To address the noisy and imbalanced training data, we propose a novel Generalized Resource-Adversarial Discriminator (GRAD).",
"Additionally, adversarial training is adopted to boost model generalization.",
"We examine the effects of different components in DATNet across domains and languages and show that significant improvement can be obtained especially for low-resource data.",
"Without augmenting any additional hand-crafted features, we achieve new state-of-the-art performances on CoNLL and Twitter NER---88.16% F1 for Spanish, 53.43% F1 for WNUT-2016, and 42.83% F1 for WNUT-2017.",
"Named entity recognition (NER) is an important step in most natural language processing (NLP) applications.",
"It detects not only the type of named entity, but also the entity boundaries, which requires deep understanding of the contextual semantics to disambiguate the different entity types of same tokens.",
"To tackle this challenging problem, most early studies were based on hand-crafted rules, which suffered from limited performance in practice.",
"Current methods are devoted to developing learning based algorithms, especially neural network based methods, and have been advancing the state-of-the-art consecutively BID7 BID23 BID6 BID33 .",
"These end-to-end models generalize well on new entities based on features automatically learned from the data.",
"However, when the annotated corpora is small, especially in the low resource scenario BID56 , the performance of these methods degrades significantly since the hidden feature representations cannot be learned adequately.Recently, more and more approaches have been proposed to address low-resource NER.",
"Early works BID5 BID24 primarily assumed a large parallel corpus and focused on exploiting them to project information from high-to low-resource.",
"Unfortunately, such a large parallel corpus may not be available for many low-resource languages.",
"More recently, cross-resource word embedding BID9 BID0 BID52 was proposed to bridge the low and high resources and enable knowledge transfer.",
"Although the aforementioned transferbased methods show promising performance in low-resource NER, there are two issues deserved to be further investigated on:",
"1) Representation Difference -they did not consider the representation difference across resources and enforced the feature representation to be shared across languages/domains;",
"2) Resource Data Imbalance -the training size of high-resource is usually much larger than that of low-resource.",
"The existing methods neglect such difference in their models, resulting in poor generalization.In this work, we present an approach termed Dual Adversarial Transfer Network (DATNet) to address the above issues in a unified framework for low-resource NER.",
"Specifically, to handle the representation difference, we first investigate on two architectures of hidden layers (we use bidirectional long-short term memory (BiLSTM) model as hidden layer) for transfer.",
"The first one is that all the units in hidden layers are common units shared across languages/domains.",
"The second one is composed of both private and common units, where the private part preserves the independent language/domain information.",
"Extensive experiments are conducted to show their advantages over each other in different situations.",
"On top of common units, the adversarial discriminator (AD) loss is introduced to encourage the resource-agnostic representation so that the knowledge from high resource can be more compatible with low resource.",
"To handle the resource data imbalance issue, we further propose a variant of the AD loss, termed Generalized Resource-Adversarial Discriminator (GRAD), to impose the resource weight during training so that low-resource and hard samples can be paid more attention to.",
"In addition, we create adversarial samples to conduct the Adversarial Training (AT), further improving the generalization and alleviating over-fitting problem.",
"We unify two kinds of adversarial learning, i.e., GRAD and AT, into one transfer learning model, termed Dual Adversarial Transfer Network (DATNet), to achieve end-to-end training and obtain the state-of-the-art performance on a series of NER tasks-88.16% F1 for CoNLL-2002 Spanish, 53.43% and 42.83% F1 for WNUT-2016 .",
"Different from prior works, we do not use additional hand-crafted features and do not use cross-lingual word embeddings while addressing the cross-language tasks.",
"In this paper we develop a transfer learning model DATNet for low-resource NER, which aims at addressing two problems remained in existing work, namely representation difference and resource data imbalance.",
"We introduce two variants of DATNet, DATNet-F and DATNet-P, which can be chosen for use according to the cross-language/domain user case and the target dataset size.",
"To improve model generalization, we propose dual adversarial learning strategies, i.e., AT and GRAD.",
"Extensive experiments show the superiority of DATNet over existing models and it achieves new state-of-the-art performance on CoNLL NER and WNUT NER benchmark datasets."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.8181818127632141,
0.0416666604578495,
0.1428571343421936,
0,
0.1599999964237213,
0.3461538553237915,
0.09756097197532654,
0,
0.04347825422883034,
0.07999999821186066,
0.09756097197532654,
0.0923076868057251,
0.1702127605676651,
0.14999999105930328,
0.04347825422883034,
0.04255318641662598,
0.04444443807005882,
0.0476190410554409,
0.32258063554763794,
0.07547169178724289,
0,
0.045454539358615875,
0,
0,
0.16129031777381897,
0.08888888359069824,
0.3561643660068512,
0.08695651590824127,
0.178571417927742,
0.11999999731779099,
0.0952380895614624,
0.25
] | HkGzUjR5tQ | true | [
"We propose a new architecture termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER) and achieve new state-of-the-art performances on CoNLL and Twitter NER."
] |
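The GRAD idea summarized in the row above can be sketched as a resource discriminator placed on the shared sentence representation, trained through gradient reversal and weighted so that the low-resource side and hard examples count more. The detach-based reversal trick, the weight alpha, and the focal exponent gamma below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a resource-weighted adversarial discriminator loss (PyTorch).
import torch

def grad_reverse(x, lam=1.0):
    # identity value in the forward pass, gradient scaled by -lam in the backward pass
    return (1.0 + lam) * x.detach() - lam * x

def grad_loss(shared_repr, resource_label, discriminator, alpha=0.75, gamma=2.0, lam=1.0):
    """resource_label: 1 for low-resource sentences, 0 for high-resource ones.
    alpha up-weights the low-resource side; (1 - p_correct)**gamma emphasises
    samples the discriminator still gets wrong (hard samples)."""
    logits = discriminator(grad_reverse(shared_repr, lam)).squeeze(-1)
    p = torch.sigmoid(logits)
    p_correct = torch.where(resource_label.bool(), p, 1.0 - p)
    weight = torch.where(resource_label.bool(),
                         torch.full_like(p, alpha),
                         torch.full_like(p, 1.0 - alpha))
    focal = (1.0 - p_correct) ** gamma
    return -(weight * focal * torch.log(p_correct + 1e-8)).mean()

# toy usage with mean-pooled BiLSTM states standing in for the shared encoder output
discriminator = torch.nn.Linear(200, 1)
shared = torch.randn(8, 200)
labels = torch.tensor([1, 1, 0, 0, 0, 0, 0, 0], dtype=torch.float)
loss = grad_loss(shared, labels, discriminator)
```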
[
"Generative adversarial networks (GANs) evolved into one of the most successful unsupervised techniques for generating realistic images.",
"Even though it has recently been shown that GAN training converges, GAN models often end up in local Nash equilibria that are associated with mode collapse or otherwise fail to model the target distribution.",
"We introduce Coulomb GANs, which pose the GAN learning problem as a potential field, where generated samples are attracted to training set samples but repel each other.",
"The discriminator learns a potential field while the generator decreases the energy by moving its samples along the vector (force) field determined by the gradient of the potential field.",
"Through decreasing the energy, the GAN model learns to generate samples according to the whole target distribution and does not only cover some of its modes.",
"We prove that Coulomb GANs possess only one Nash equilibrium which is optimal in the sense that the model distribution equals the target distribution.",
"We show the efficacy of Coulomb GANs on LSUN bedrooms, CelebA faces, CIFAR-10 and the Google Billion Word text generation.",
"Generative adversarial networks (GANs) excel at constructing realistic images BID28 BID24 BID3 and text BID18 .",
"In GAN learning, a discriminator network guides the learning of another, generative network.",
"This procedure can be considered as a game between the generator which constructs synthetic data and the discriminator which separates synthetic data from training set data BID16 .",
"The generator's goal is to construct data which the discriminator cannot tell apart from training set data.",
"GAN convergence points are local Nash equilibria.",
"At these local Nash equilibria neither the discriminator nor the generator can locally improve its objective.Despite their recent successes, GANs have several problems.",
"First (I), until recently it was not clear if in general gradient-based GAN learning could converge to one of the local Nash equilibria BID38 BID15 .",
"It is even possible to construct counterexamples BID16 .",
"Second (II), GANs suffer from \"mode collapsing\", where the model generates samples only in certain regions which are called modes.",
"While these modes contain realistic samples, the variety is low and only a few prototypes are generated.",
"Mode collapsing is less likely if the generator is trained with batch normalization, since the network is bound to create a certain variance among its generated samples within one batch .",
"However batch normalization introduces fluctuations of normalizing constants which can be harmful BID16 .",
"To avoid mode collapsing without batch normalization, several methods have been proposed BID5 BID38 .",
"Third (III), GANs cannot assure that the density of training samples is correctly modeled by the generator.",
"The discriminator only tells the generator whether a region is more likely to contain samples from the training set or synthetic samples.",
"Therefore the discriminator can only distinguish the support of the model distribution from the support of the target distribution.",
"Beyond matching the support of distributions, GANs with proper objectives may learn to locally align model and target densities via averaging over many training examples.",
"On a global scale, however, GANs fail to equalize model and target densities.",
"The discriminator does not inform the generator globally where probability mass is missing.",
"Consequently, standard GANs are not assured to capture the global sample density and are prone to neglect large parts of the target distribution.",
"The next paragraph gives an example of this.",
"Fourth (IV), the discriminator of GANs may forget previous modeling errors of the generator which then may reappear, a property that leads to oscillatory behavior instead of convergence BID16 .Recently",
", problem (I) was solved by proving that GAN learning does indeed converge when discriminator and generator are learned using a two time-scale learning rule BID20 . Convergence",
"means that the expected SGD-gradient of both the discriminator objective and the generator objective are zero. Thus, neither",
"the generator nor the discriminator can locally improve, i.e., learning has reached a local Nash equilibrium. However, convergence",
"alone does not guarantee good generative performance. It is possible to converge",
"to sub-optimal solutions which are local Nash equilibria. Mode collapse is a special",
"case of a local Nash equilibrium associated with suboptimal generative performance. For example, assume a two",
"mode real world distribution where one mode contains too few and the other mode too many generator samples. If no real world samples",
"are between these two distinct modes, then the discriminator penalizes to move generated samples outside the modes. Therefore the generated",
"samples cannot be correctly distributed over the modes. Thus, standard GANs can",
"not capture the global sample density such that the resulting generators are prone to neglect large parts of the real world distribution. A more detailed example",
"is listed in the Appendix in Section A.1. In this paper, we introduce a novel GAN model, the Coulomb GAN, which has only one Nash equilibrium. We are later going to show",
"that this Nash equilibrium is optimal, i.e., the model distribution matches the target distribution. We propose Coulomb GANs to",
"avoid the GAN shortcoming (II) to (IV) by using a potential field created by point charges analogously to the electric field in physics. The next section will introduce",
"the idea of learning in a potential field and prove that its only solution is optimal. We will then show how learning",
"the discriminator and generator works in a Coulomb GAN and discuss the assumptions needed for our optimality proof. In Section 3 we will then see",
"that the Coulomb GAN does indeed work well in practice and that the samples it produces have very large variability and appear to capture the original distribution very well.Related Work. Several GAN approaches have been",
"suggested for bringing the target and model distributions in alignment using not just local discriminator information: Geometric GANs combine samples via a linear support vector machine which uses the discriminator outputs as samples, therefore they are much more robust to mode collapsing BID31 . Energy-Based GANs BID41 and their",
"later improvement BEGANs BID3 ) optimize an energy landscape based on auto-encoders. McGANs match mean and covariance",
"of synthetic and target data, therefore are more suited than standard GANs to approximate the target distribution BID34 . In a similar fashion, Generative",
"Moment Matching Networks BID30 and MMD nets BID12 directly optimize a generator network to match a training distribution by using a loss function based on the maximum mean discrepancy (MMD) criterion BID17 . These approaches were later expanded",
"to include an MMD criterion with learnable kernels and discriminators . The MMD criterion that these later approaches",
"optimize has a form similar to the energy function that Coulomb GANs optimize (cf. Eq. (33)). However, all MMD approaches end up",
"using",
"either",
"Gaussian or Laplace kernels, which are not guaranteed to find the optimal solution where the model distribution matches the target distribution. In contrast, the Plummer kernel which is employed",
"in this work has been shown to lead to the optimal solution BID22 . We show that even a simplified version of the Plummer",
"kernel, the low-dimensional Plummer kernel, ensures that gradient descent convergences to the optimal solution as stated by Theorem 1. Furthermore, most MMD GAN approaches use the MMD directly",
"as loss function though the number of possible samples in a mini-batch is limited. Therefore MMD approaches face a sampling problem in high-dimensional",
"spaces. The Coulomb GAN instead learns a discriminator network that gradually",
"improves its approximation of the potential field via learning Figure 1 : The vector field of a Coulomb GAN. The basic idea behind the Coulomb GAN: true samples (blue) and generated",
"samples (red) create a potential field (scalar field). Blue samples act as sinks that attract the red samples, which repel each",
"other. The superimposed vector field shows the forces acting on the generator samples",
"to equalize potential differences, and the background color shows the potential at each position. Best viewed in color.on many mini-batches. The discriminator network also tracks",
"the slowly changing generator distribution",
"during learning. Most importantly however, our approach is, to the best of our knowledge, the first",
"one for which optimality, i.e., ability to perfectly learn a target distribution, can be proved.The use of the Coulomb potential for learning is not new. Coulomb Potential Learning was proposed to store arbitrary many patterns in a potential",
"field with perfect recall and without spurious patterns BID35 . Another related work is the Potential Support Vector Machine (PSVM), which minimizes Coulomb",
"potential differences BID21 BID23 . BID22 also used a potential function based on Plummer kernels for optimal unsupervised learning",
", on which we base our work on Coulomb GANs.",
"The Coulomb GAN is a generative adversarial network with strong theoretical guarantees.",
"Our theoretical results show that the Coulomb GAN will be able to approximate the real distribution perfectly if the networks have sufficient capacity and training does not get stuck in local minima.",
"Our results show that the potential field used by the Coulomb GAN far outperforms MMD based approaches due to its low-dimensional Plummer kernel, which is better suited for modeling probability density functions, and is very effective at eliminating the mode collapse problem in GANs.",
"This is because our loss function forces the generated samples to occupy different regions of the learned distribution.",
"In practice, we have found that Coulomb GANs are able to produce a wide range of different samples.",
"However, in our experience, this sometimes leads to a small number of generated samples that are non-sensical interpolations of existing data modes.",
"While these are sometimes also present in other GAN models , we found that our model produces such images at a slightly higher rate.",
"This issue might be solved by finding better ways of learning the discriminator, as learning the correct potential field is crucial for the Coulomb GAN's performance.",
"We also observed that increasing the capacity of the discriminator seems to always increase the generative performance.",
"We thus hypothesize that the largest issue in learning Coulomb GANs is that the discriminator needs to approximate the potential field Φ very well in a high-dimensional space.",
"In summary, instead of directly optimizing a criterion based on local differences of densities which can exhibit many local minima, Coulomb GANs are based on a potential field that has no local minima.",
"The potential field is created by point charges in an analogy to electric field in physics.",
"We have proved that if learning converges then it converges to the optimal solution if the samples can be moved freely.",
"We showed that Coulomb GANs avoid mode collapsing, model the target distribution more truthfully than standard GANs, and do not overlook high probability regions of the target distribution.A APPENDIX"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.060606054961681366,
0.0833333283662796,
0.3333333432674408,
0.2702702581882477,
0.10256409645080566,
0.2222222238779068,
0.17142856121063232,
0,
0.2142857164144516,
0.21052631735801697,
0.0624999962747097,
0,
0.1538461446762085,
0.09756097197532654,
0,
0.1111111044883728,
0.12121211737394333,
0.0952380895614624,
0.06896550953388214,
0,
0.1875,
0.1111111044883728,
0.2142857164144516,
0.1463414579629898,
0.13793103396892548,
0.06896550953388214,
0.1666666567325592,
0,
0.1428571343421936,
0.1904761791229248,
0.06451612710952759,
0.22857142984867096,
0,
0.06896550953388214,
0.06451612710952759,
0.12121211737394333,
0.060606054961681366,
0.2142857164144516,
0.09999999403953552,
0.12765957415103912,
0.22857142984867096,
0.25,
0.2702702581882477,
0.1538461446762085,
0.13636362552642822,
0.13333332538604736,
0,
0.21052631735801697,
0.15686273574829102,
0,
0.21621620655059814,
0.09999999403953552,
0.10810810327529907,
0.14999999105930328,
0.21621620655059814,
0.14814814925193787,
0.2857142686843872,
0.277777761220932,
0.1428571343421936,
0.10256409645080566,
0.190476194024086,
0.13333332538604736,
0.26923075318336487,
0.15789473056793213,
0.1818181723356247,
0.1599999964237213,
0.1428571343421936,
0.1304347813129425,
0.24561403691768646,
0.12121211737394333,
0.1764705777168274,
0.05405404791235924,
0.04999999701976776,
0.3589743673801422,
0.06451612710952759,
0.3499999940395355,
0.3255814015865326,
0.19999998807907104,
0.1764705777168274,
0.1860465109348297
] | SkVqXOxCb | true | [
"Coulomb GANs can optimally learn a distribution by posing the distribution learning problem as optimizing a potential field"
] |
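The potential-field view in the row above can be made concrete with a short NumPy sketch: real samples act as attracting charges, generated samples as repelling charges, and the potential at a point is a difference of averaged Plummer-kernel terms. The kernel dimension d and smoothing eps are placeholders rather than the paper's tuned values, and in the actual model a discriminator network regresses onto this quantity instead of it being evaluated directly.

```python
# NumPy sketch of a Coulomb-style potential under a (low-dimensional) Plummer kernel.
import numpy as np

def plummer_kernel(x, y, d=3, eps=1.0):
    """k(x, y) = 1 / (||x - y||^2 + eps^2)^(d/2); x: [n, dim], y: [m, dim]."""
    sq_dist = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return 1.0 / (sq_dist + eps ** 2) ** (d / 2.0)

def potential(points, real, fake, d=3, eps=1.0):
    """Attraction from real samples minus repulsion from generated samples."""
    attract = plummer_kernel(points, real, d, eps).mean(axis=1)
    repel = plummer_kernel(points, fake, d, eps).mean(axis=1)
    return attract - repel

rng = np.random.default_rng(0)
real = rng.normal(loc=2.0, size=(128, 2))
fake = rng.normal(loc=-2.0, size=(128, 2))
phi_at_fake = potential(fake, real, fake)   # generator samples would move up this potential
```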
[
"Predicting properties of nodes in a graph is an important problem with applications in a variety of domains.",
"Graph-based Semi Supervised Learning (SSL) methods aim to address this problem by labeling a small subset of the nodes as seeds, and then utilizing the graph structure to predict label scores for the rest of the nodes in the graph.",
"Recently, Graph Convolutional Networks (GCNs) have achieved impressive performance on the graph-based SSL task.",
"In addition to label scores, it is also desirable to have a confidence score associated with them.",
"Unfortunately, confidence estimation in the context of GCN has not been previously explored.",
"We fill this important gap in this paper and propose ConfGCN, which estimates labels scores along with their confidences jointly in GCN-based setting.",
"ConfGCN uses these estimated confidences to determine the influence of one node on another during neighborhood aggregation, thereby acquiring anisotropic capabilities.",
"Through extensive analysis and experiments on standard benchmarks, we find that ConfGCN is able to significantly outperform state-of-the-art baselines.",
"We have made ConfGCN’s source code available to encourage reproducible research.",
"Graphs are all around us, ranging from citation and social networks to knowledge graphs.",
"Predicting properties of nodes in such graphs is often desirable.",
"For example, given a citation network, we may want to predict the research area of an author.",
"Making such predictions, especially in the semisupervised setting, has been the focus of graph-based semi-supervised learning (SSL) BID25 .",
"In graph-based SSL, a small set of nodes are initially labeled.",
"Starting with such supervision and while utilizing the rest of the graph structure, the initially unlabeled nodes are labeled.",
"Conventionally, the graph structure has been incorporated as an explicit regularizer which enforces a smoothness constraint on the labels estimated on nodes BID36 BID2 BID31 .",
"Recently proposed Graph Convolutional Networks (GCN) BID8 BID14 provide a framework to apply deep neural networks to graphstructured data.",
"GCNs have been employed successfully for improving performance on tasks such as semantic role labeling , machine translation BID1 , relation extraction BID28 BID35 , event extraction BID20 , shape segmentation BID34 , and action recognition BID12 .",
"GCN formulations for graph-based SSL have also attained state-of-the-art performance BID14 BID18 BID29 .",
"In this paper, we also focus on the task of graphbased SSL using GCNs.GCN iteratively estimates embedding of nodes in the graph by aggregating embeddings of neighborhood nodes, while backpropagating errors from a target loss function.",
"Finally, the learned node embeddings are used to estimate label scores on the nodes.",
"In addition to the label scores, it is desirable to also have confidence estimates associated with them.",
"Such confidence scores may be used to determine how much to trust the label scores estimated on a given node.",
"While methods to estimate label score confidence in non-deep graph-based SSL has been previously proposed BID21 , confidence-based GCN is still unexplored.",
"Figure 1: Label prediction on node a by Kipf-GCN and ConfGCN (this paper).",
"L 0 is a's true label.",
"Shade intensity of a node reflects the estimated score of label L 1 assigned to that node.",
"Since Kipf-GCN is not capable of estimating influence of one node on another, it is misled by the dominant label L 1 in node a's neighborhood and thereby making the wrong assignment.",
"ConfGCN, on the other hand, estimates confidences (shown by bars) over the label scores, and uses them to increase influence of nodes b and c to estimate the right label on a.",
"Please see Section 1 for details.",
"In this paper we present ConfGCN, a confidence based Graph Convolutional Network which estimates label scores along with their confidences jointly in GCN-based setting.",
"In ConfGCN, the influence of one node on another during aggregation is determined using the estimated confidences and label scores, thus inducing anisotropic behavior to GCN.",
"We demonstrate the effectiveness of ConfGCN against recent methods for semi-supervised node classification task and analyze its performance in different settings.",
"We make ConfGCN's source code available."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.07692307233810425,
0.1395348757505417,
0.1599999964237213,
0.14814814925193787,
0.0833333283662796,
0.1249999925494194,
0,
0,
0.09090908616781235,
0,
0,
0.0714285671710968,
0,
0.09090908616781235,
0,
0.05882352590560913,
0.20689654350280762,
0.04651162400841713,
0.0833333283662796,
0.04444444179534912,
0,
0.07407406717538834,
0.13793103396892548,
0.060606054961681366,
0.0833333283662796,
0,
0.07692307233810425,
0,
0.054054051637649536,
0.11764705181121826,
0.34285715222358704,
0,
0.1249999925494194,
0.11764705181121826
] | HklUN3RcFX | true | [
"We propose a confidence based Graph Convolutional Network for Semi-Supervised Learning."
] |
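A simplified sketch of confidence-weighted neighborhood aggregation follows. In ConfGCN the influence of a neighbor is derived from estimated label distributions together with their confidences; here a single learnable scalar confidence per node stands in for that machinery, so the layer illustrates anisotropic aggregation rather than reproducing the paper's exact update.

```python
# Simplified PyTorch sketch: each neighbour's contribution is scaled by a per-node confidence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceWeightedGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_nodes):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.log_conf = nn.Parameter(torch.zeros(num_nodes))   # one confidence per node

    def forward(self, h, adj):
        """h: [N, in_dim] node features; adj: [N, N] adjacency with self-loops."""
        conf = torch.exp(self.log_conf)                         # positive confidences
        weights = adj * conf.unsqueeze(0)                       # neighbour v scaled by its confidence
        weights = weights / weights.sum(dim=1, keepdim=True).clamp(min=1e-8)
        return F.relu(self.lin(weights @ h))

layer = ConfidenceWeightedGCNLayer(16, 8, num_nodes=5)
h = torch.randn(5, 16)
adj = torch.eye(5) + torch.bernoulli(torch.full((5, 5), 0.3))
adj = ((adj + adj.t()) > 0).float()
out = layer(h, adj)
```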
[
"In this paper we establish rigorous benchmarks for image classifier robustness.",
"Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications.",
"Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.",
"Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations.",
"We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers.",
"Afterward we discover ways to enhance corruption and perturbation robustness.",
"We even find that a bypassed adversarial defense provides substantial common perturbation robustness.",
"Together our benchmarks may aid future work toward networks that robustly generalize.",
"The human vision system is robust in ways that existing computer vision systems are not BID50 BID1 .",
"Unlike current deep learning classifiers BID36 BID21 BID60 , the human vision system is not fooled by small changes in query images.",
"Humans are also not confused by many forms of corruption such as snow, blur, pixelation, and novel combinations of these.",
"Humans can even deal with abstract changes in structure and style.",
"Achieving these kinds of robustness is an important goal for computer vision and machine learning.",
"It is also essential for creating deep learning systems that can be deployed in safety-critical applications.Most work on robustness in deep learning methods for vision has focused on the important challenges of robustness to adversarial examples BID54 BID5 , unknown unknowns BID25 BID23 BID41 , and model or data poisoning BID53 .",
"In contrast, we develop and validate datasets for two other forms of robustness.",
"Specifically, we introduce the IMAGETNET-C dataset for input corruption robustness and the IMAGENET-P dataset for input perturbation robustness.To create IMAGENET-C, we introduce a set of 75 common visual corruptions and apply them to the ImageNet object recognition challenge BID7 .",
"We hope that this will serve as a general dataset for benchmarking robustness to image corruptions and prevent methodological problems such as moving goal posts and result cherry picking.",
"We evaluate the performance of current deep learning systems and show that there is wide room for improvement on IMAGENET-C.",
"We also introduce a total of three methods and architectures that improve corruption robustness without losing accuracy.To create IMAGENET-P, we introduce a set of perturbed or subtly differing ImageNet images.",
"Using metrics we propose, we measure the stability of the network's predictions on these perturbed images.",
"Although these perturbations are not chosen by an adversary, currently existing networks exhibit surprising instability on common perturbations.",
"Then we then demonstrate that approaches which enhance corruption robustness can also improve perturbation robustness.",
"For example, some recent architectures can greatly improve both types of robustness.",
"More, we show that the Adversarial Logit Pairing ∞ adversarial example defense can yield substantial robustness gains on diverse and common perturbations.",
"By defining and benchmarking perturbation and corruption robustness, we facilitate research that can be overcome by future networks which do not rely on spurious correlations or cues inessential to the object's class.",
"In this paper, we introduced what are to our knowledge the first comprehensive benchmarks for corruption and perturbation robustness.",
"This was made possible by introducing two new datasets, IMAGENET-C and IMAGENET-P.",
"The first of which showed that many years of architectural advancements corresponded to minuscule changes in relative corruption robustness.",
"Therefore benchmarking and improving robustness deserves attention, especially as top-1 clean ImageNet accuracy nears its ceiling.",
"We also saw that classifiers exhibit unexpected instability on simple perturbations.",
"Thereafter we found that methods such as histogram equalization, multiscale architectures, and larger featureaggregating models improve corruption robustness.",
"These larger models also improve perturbation robustness.",
"However, we found that even greater perturbation robustness can come from an adversarial defense designed for adversarial ∞ perturbations, indicating a surprising interaction between adversarial and common perturbation robustness.",
"In this work, we found several methods to increase robustness, introduced novel experiments and metrics, and created new datasets for the rigorous study of model robustness, a pressing necessity as models are unleashed into safety-critical real-world settings."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.19354838132858276,
0.2857142686843872,
0.14814814925193787,
0.29629629850387573,
0.4761904776096344,
0.25,
0,
0,
0,
0.13333332538604736,
0.09090908616781235,
0.1538461446762085,
0.1071428507566452,
0.1666666567325592,
0.2380952388048172,
0.21052631735801697,
0.12903225421905518,
0.20512820780277252,
0.07999999821186066,
0,
0.23999999463558197,
0.08695651590824127,
0.12121211737394333,
0.190476194024086,
0.3333333432674408,
0.08695651590824127,
0.20689654350280762,
0.14814814925193787,
0.09090908616781235,
0.20689654350280762,
0.2222222238779068,
0.1666666567325592,
0.08695651590824127
] | HJz6tiCqYm | true | [
"We propose ImageNet-C to measure classifier corruption robustness and ImageNet-P to measure perturbation robustness"
] |
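The benchmark in the row above aggregates a model's top-1 errors over the five severities of each corruption and normalizes by a baseline model (AlexNet) before averaging into a mean Corruption Error. A small sketch of that bookkeeping follows; every number in it is a made-up placeholder, not a result from the paper.

```python
# Sketch of Corruption Error (CE) and mean CE (mCE) aggregation; all figures are placeholders.
import numpy as np

def corruption_error(model_err, baseline_err):
    """model_err, baseline_err: top-1 error rates over severities 1..5 for one corruption."""
    return np.sum(model_err) / np.sum(baseline_err)

def mean_corruption_error(model_errs, baseline_errs):
    """Dicts mapping corruption name -> per-severity error array."""
    return float(np.mean([corruption_error(model_errs[c], baseline_errs[c]) for c in model_errs]))

baseline = {"gaussian_noise": np.array([0.60, 0.70, 0.80, 0.88, 0.94]),
            "motion_blur":    np.array([0.45, 0.55, 0.70, 0.80, 0.86])}
model    = {"gaussian_noise": np.array([0.40, 0.50, 0.62, 0.75, 0.85]),
            "motion_blur":    np.array([0.30, 0.40, 0.55, 0.68, 0.78])}
mce = mean_corruption_error(model, baseline)   # values below 1.0 indicate more robustness than the baseline
```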
[
"The paper explores a novel methodology in source code obfuscation through the application of text-based recurrent neural network network (RNN) encoder-decoder models in ciphertext generation and key generation.",
"Sequence-to-sequence models\n",
"are incorporated into the model architecture to generate obfuscated code, generate the deobfuscation key, and live execution.",
"Quantitative benchmark comparison to existing obfuscation methods indicate significant improvement in stealth and execution cost for the proposed solution, and experiments regarding the model’s properties yield positive results regarding its character variation, dissimilarity to the original codebase, and consistent length of obfuscated code.",
"The field of code obfuscation has aimed to tackle reverse-engineering of code bases for years.",
"The entire basis of this methodology is that if a program is constructed with logic not easily recognizable by a reader, the logic would be preserved intact and the software would be intrinsically protected.",
"Traditional tactics include creative uses of whitespace, redundant logical operations, unnecessary conditional operations, amongst others.",
"The common issue with obfuscation is that it can be reverseengineered, the only factor for a malicious actor would be the amount of time needed to discover the logic.",
"DeepObfusCode is a proposed methodology to use neural networks to convert the plaintext source code into a cipher text by using the propagating architecture of neural networks to compound the randomness factor in the creation of the ciphertext.",
"Yet at the same time, neural networks have the ability to learn statistical patterns and generate weights to convert one text to another, in our case from the ciphertext to plaintext.",
"This would eventually permit users to simply load the ciphertext and the key to self-execute the program without foreign users viewing the inner workings of the code.",
"From an academic standpoint, this methodology redirects obfuscation methodology towards complete obfuscation in contrary of incremental obfuscation, and suggests the usage and development of asymmetric key infrastructure in obfuscation.",
"Beyond sequence-to-sequence network models, further obfuscation models could be built with greater degree of resilience, and other deep learning methods could be harnessed to develop alternative techniques to obfuscate code.",
"The methodology can be adopted for more efficient, more effective obfuscation for developers protecting their proprietary codebase or cloud computing services wishing to guarantee confidentiality to customers.",
"The algorithmic architecture could be further incorporated into larger frameworks or infrastructures to render homomorphic encryption and ensure complete anonymity of the codebase during execution from the execution provider.",
"This paper presents a novel obfuscation methodology using RNN encoder-decoder models to generate ciphertext from source code and generating and utilizing model weights as keys.",
"Figure 7 : Correlation matrix of properties obfuscation methods, it is at least on par in terms of stealth and is expected to outperform for larger code bases in terms of obscurity and readability, and though key generation may take a significant amount of time for larger code bases or require more computational resources, it would be less time-intensive than to manually obfuscate the source code.",
"This would be a good use case application for services that have confidential source code in plaintext but would prefer ciphertext yet require the ability to execute."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2222222238779068,
0.23076923191547394,
0.1666666567325592,
0.0833333283662796,
0.10256409645080566,
0,
0.054054051637649536,
0.1538461446762085,
0.10810810327529907,
0.25,
0.1764705777168274,
0.10526315122842789,
0,
0.10526315122842789,
0.17142856121063232,
0.12903225421905518,
0.1621621549129486
] | SygQlT4FwS | true | [
"Obfuscate code using seq2seq networks, and execute using the obfuscated code and key pair"
] |
[
"We propose a method for joint image and per-pixel annotation synthesis with GAN.",
"We demonstrate that GAN has good high-level representation of target data that can be easily projected to semantic segmentation masks.",
"This method can be used to create a training dataset for teaching separate semantic segmentation network.",
"Our experiments show that such segmentation network successfully generalizes on real data.",
"Additionally, the method outperforms supervised training when the number of training samples is small, and works on variety of different scenes and classes.",
"The source code of the proposed method will be publicly available."
] | [
1,
0,
0,
0,
0,
0
] | [
0.7272727489471436,
0,
0.1599999964237213,
0,
0.1428571343421936,
0.09999999403953552
] | SJllFpVYwS | false | [
"GAN-based method for joint image and per-pixel annotation synthesis"
] |
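One simple way to realize joint image and per-pixel annotation synthesis is to decode the generator's final feature map with two heads, one for RGB values and one for class logits whose argmax gives the mask. The toy module below illustrates that idea; the layer sizes and the two-head design are assumptions for illustration, not the architecture used in the row above.

```python
# Toy PyTorch head emitting an image and a per-pixel annotation from shared generator features.
import torch
import torch.nn as nn

class JointImageMaskHead(nn.Module):
    def __init__(self, feat_ch=64, num_classes=3):
        super().__init__()
        self.to_rgb = nn.Conv2d(feat_ch, 3, kernel_size=3, padding=1)
        self.to_mask = nn.Conv2d(feat_ch, num_classes, kernel_size=3, padding=1)

    def forward(self, feats):
        image = torch.tanh(self.to_rgb(feats))   # [B, 3, H, W] in [-1, 1]
        mask_logits = self.to_mask(feats)        # [B, C, H, W]
        return image, mask_logits

head = JointImageMaskHead()
feats = torch.randn(2, 64, 32, 32)               # stand-in for the generator's last feature map
image, mask_logits = head(feats)
labels = mask_logits.argmax(dim=1)               # synthetic per-pixel annotation
```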
[
"Vision-Language Navigation (VLN) is the task where an agent is commanded to navigate in photo-realistic unknown environments with natural language instructions.",
"Previous research on VLN is primarily conducted on the Room-to-Room (R2R) dataset with only English instructions. ",
"The ultimate goal of VLN, however, is to serve people speaking arbitrary languages.",
"Towards multilingual VLN with numerous languages, we collect a cross-lingual R2R dataset, which extends the original benchmark with corresponding Chinese instructions.",
"But it is time-consuming and expensive to collect large-scale human instructions for every existing language.",
"Based on the newly introduced dataset, we propose a general cross-lingual VLN framework to enable instruction-following navigation for different languages.",
"We first explore the possibility of building a cross-lingual agent when no training data of the target language is available.",
"The cross-lingual agent is equipped with a meta-learner to aggregate cross-lingual representations and a visually grounded cross-lingual alignment module to align textual representations of different languages.",
"Under the zero-shot learning scenario, our model shows competitive results even compared to a model trained with all target language instructions.",
"In addition, we introduce an adversarial domain adaption loss to improve the transferring ability of our model when given a certain amount of target language data.",
"Our methods and dataset demonstrate the potentials of building a cross-lingual agent to serve speakers with different languages.",
"Recently, the Vision-Language Navigation (VLN) task (Anderson et al., 2018) , which requires the agent to follow natural language instructions and navigate in houses, has drawn significant attention.",
"In contrast to some existing navigation tasks (Mirowski et al., 2016; Zhu et al., 2017) , where the agent has an explicit representation of the target to know if the goal has been reached or not, an agent in the VLN task can only infer the target from natural language instructions.",
"Therefore, in addition to normal visual challenges in navigation tasks, language understanding and cross-modal alignment are essential to complete the VLN task.",
"However, existing benchmarks (Anderson et al., 2018; Chen et al., 2019) for the VLN task are all monolingual in that they only contain English instructions.",
"Specifically, the navigation agents are trained and tested with only English corpus and thus unable to serve non-English speakers.",
"To fill this gap, one can collect the corresponding instructions in the language that the agent is expected to execute.",
"But it is not scalable and practical as there are thousands of languages on this planet and collecting large-scale data for each language would be very expensive and time-consuming.",
"Therefore, in this paper, we study the task of cross-lingual VLN to endow an agent the ability to understand multiple languages.",
"First, can we learn a model that has been trained on existing English instructions but is still able to perform reasonably well on a different language (e.g. Chinese)?",
"This is indeed a zero-shot learning scenario where no training data of target language is available.",
"An intuitive approach is to train the agent with English data, and at test time, use a machine translation system to translate the target language instructions to English, which are then fed into the agent for testing (see the above part of Figure 1 ).",
"The inverse solution is also rational: we can translate all English instructions into the target language and train the agent on the translated data, so it can be directly tested with target language instructions (see the below part of Figure 1) .",
"The former agent is tested on translated instructions while the latter is trained on translated instructions.",
"Both solutions suffer from translation errors and deviation from the corresponding human-annotated instructions.",
"But meanwhile, the former is trained on human-annotated English instructions (which we view as \"golden\" data) and the latter is tested on \"golden\" target language instructions.",
"Motivated by this fact, we design a cross-lingual VLN framework that learns to benefit from both solutions.",
"As shown in Figure 1 , we combine these two principles and introduce a meta-learner, which learns to produce beliefs for human-annotated instruction and its translation pair and dynamically fuse the cross-lingual representations for better navigation.",
"In this case, however, the training and inference are mismatched.",
"During training, the agent takes source human language and target machine translation (MT) data as input, while during inference, it needs to navigate with target human instructions and source MT data.",
"To better align the source and target languages, we propose a visually grounded cross-lingual alignment module to align the paired instructions via the same visual feature because they are describing the same demonstration path.",
"The cross-lingual alignment loss can also implicitly alleviate the translation errors by aligning the human language and its MT pair in the latent visual space.",
"After obtaining an efficient zero-shot agent, we investigate the question that, given a certain amount of data for the target language, can we learn a better adaptation model to improve source-to-target knowledge transfer?",
"The meta-learner and visually grounded cross-lingual alignment module provide a foundation for solving the circumstances that the agent has access to the source language and (partial) target language instructions for training.",
"To further leverage the fact that the agent has access to the target language training data, we introduce an adversarial domain adaption loss to alleviate the domain shifting issue between human-annotated and MT data, thus enhancing the model's transferring ability.",
"To validate our methods, we collect a cross-lingual VLN dataset (XL-R2R) by extending complimentary Chinese instructions for the English instructions in the R2R dataset.",
"Overall, our contributions are four-fold: (1) We collect the first cross-lingual VLN dataset to facilitate navigation models towards accomplishing instructions of various languages such as English and Chinese, and conduct analysis between English and Chinese corpus.",
"(2) We introduce the task of cross-lingual visionlanguage navigation and propose a principled meta-learning method that dynamically utilizes the augmented MT data for zero-shot cross-lingual VLN.",
"(3) We propose a visually grounded crosslingual alignment module for better cross-lingual knowledge transfer.",
"(4) We investigate how to transfer knowledge between human-annotated and MT data and introduce an adversarial domain adaption loss to improve the navigation performance given a certain amount of human-annotated target language data.",
"In this paper, we introduce a new task, namely cross-lingual vision-language navigation, to study cross-lingual representation learning situated in the navigation task where cross-modal interaction with the real world is involved.",
"We collect a cross-lingual R2R dataset and conduct pivot studies towards solving this challenging but practical task.",
"The proposed cross-lingual VLN framework is proven effective in cross-lingual knowledge transfer.",
"There are still lots of promising future directions for this task and dataset, e.g. to incorporate recent advances in VLN and greatly improve the model capacity.",
"It would also be valuable to extend the cross-lingual setting to support numerous different languages in addition to English and Chinese."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.10810810327529907,
0.24242423474788666,
0,
0.21621620655059814,
0.1249999925494194,
0.4864864945411682,
0.22857142984867096,
0.15789473056793213,
0.10810810327529907,
0.1428571343421936,
0.2857142686843872,
0.13636362552642822,
0.10526315122842789,
0.21621620655059814,
0.19512194395065308,
0.11428570747375488,
0.05714285373687744,
0.13636362552642822,
0.2222222238779068,
0.09090908616781235,
0.0624999962747097,
0.1428571343421936,
0.11764705181121826,
0.13793103396892548,
0.13793103396892548,
0.15789473056793213,
0.23529411852359772,
0.23999999463558197,
0.14814814925193787,
0.09302324801683426,
0.21739129722118378,
0.14999999105930328,
0.12765957415103912,
0.23255813121795654,
0.11999999731779099,
0.31578946113586426,
0.23999999463558197,
0.4878048598766327,
0.32258063554763794,
0.21739129722118378,
0.3478260934352875,
0.3529411852359772,
0.2142857164144516,
0.23255813121795654,
0.1666666567325592
] | rkeZO1BFDB | true | [
"We introduce a new task and dataset on cross-lingual vision-language navigation, and propose a general cross-lingual VLN framework for the task."
] |
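The meta-learner described in the row above produces beliefs over a human-written instruction and its machine-translated counterpart and fuses their encodings before navigation. The sketch below shows a hypothetical two-way gate over pooled instruction encodings; the gating network, the pooling, and the dimensionality are all assumptions rather than the paper's architecture.

```python
# Hypothetical PyTorch gate fusing a human instruction encoding with its MT counterpart.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstructionFusionGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 2)   # one belief score per instruction source

    def forward(self, enc_human, enc_mt):
        """enc_human, enc_mt: [B, dim] pooled instruction encodings."""
        beliefs = F.softmax(self.gate(torch.cat([enc_human, enc_mt], dim=-1)), dim=-1)
        fused = beliefs[:, :1] * enc_human + beliefs[:, 1:] * enc_mt
        return fused, beliefs

gate = InstructionFusionGate(dim=256)
enc_human = torch.randn(4, 256)   # e.g. a Chinese instruction at test time
enc_mt = torch.randn(4, 256)      # its machine translation into the source language
fused, beliefs = gate(enc_human, enc_mt)
```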
[
"Deep generative models have advanced the state-of-the-art in semi-supervised classification, however their capacity for deriving useful discriminative features in a completely unsupervised fashion for classification in difficult real-world data sets, where adequate manifold separation is required has not been adequately explored.",
"Most methods rely on defining a pipeline of deriving features via generative modeling and then applying clustering algorithms, separating the modeling and discriminative processes.",
"We propose a deep hierarchical generative model which uses a mixture of discrete and continuous distributions to learn to effectively separate the different data manifolds and is trainable end-to-end.",
"We show that by specifying the form of the discrete variable distribution we are imposing a specific structure on the model's latent representations.",
"We test our model's discriminative performance on the task of CLL diagnosis against baselines from the field of computational FC, as well as the Variational Autoencoder literature.",
"Variational Autoencoders (VAEs) have recently shown remarkable performance in unsupervised generative modeling of high-dimensional data generated by complex distributions BID19 , as well as semi-supervised classification where only a small subset of the data set is labeled .",
"While the interaction between the generative and the classification capabilities of semi-supervised models has been recently explored in literature , there has been little investigation of the discriminative capabilities of a purely unsupervised framework with most works focusing on the task of unsupervised clustering BID17 BID36 BID16 .",
"Furthermore, most of these works have been evaluated on mostly benchmark data sets which do not capture the difficulties that are often encountered on real-world data.",
"For instance, there has been no investigation of the performance of these methods on data sets with significant class imbalance.",
"The question that is, then, posed is whether deep generative models can be used effectively as unsupervised classifiers, which can, in essence, be cast into a question of what type of features and architectural choices are required to achieve good classification performance in an unsupervised manner.To examine the aforementioned questions we propose a deep hierarchical generative model and evaluate its performance on a difficult real world data set.",
"In principle, we train our model in a completely unsupervised fashion, however in our experiments we rely on labeled data to measure our model's performance using suitable metrics for the problem domain, as well as derive a stopping criterion for training.",
"Our model outperforms established state-of-the-art baselines used in the field of the problem domain.",
"Our contributions are summarized in the following:• A framework which utilizes a hierarchy of continuous representations which conclude in a discrete variable explicitly representing categories, resulting in complex, expressive, invariant and interpretable representations BID1 , which are crucial in separating widely overlapping manifolds and achieve good classification results in significantly imbalanced data sets.•",
"Controllable representation structure through specification of the form of the aforementioned discrete variable which better suits the task at hand given a problem scenario."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.20512819290161133,
0.1395348757505417,
0.052631575614213943,
0.04999999329447746,
0.19607841968536377,
0.22641508281230927,
0.04878048226237297,
0.0555555522441864,
0.21333332359790802,
0.07843136787414551,
0.06666666269302368,
0.09836065024137497,
0.10526315122842789
] | ryb83alCZ | true | [
"Unsupervised classification via deep generative modeling with controllable feature learning evaluated in a difficult real world task"
] |
[
"Representation Learning over graph structured data has received significant attention recently due to its ubiquitous applicability.",
"However, most advancements have been made in static graph settings while efforts for jointly learning dynamic of the graph and dynamic on the graph are still in an infant stage.",
"Two fundamental questions arise in learning over dynamic graphs:",
"(i) How to elegantly model dynamical processes over graphs?",
"(ii) How to leverage such a model to effectively encode evolving graph information into low-dimensional representations?",
"We present DyRep - a novel modeling framework for dynamic graphs that posits representation learning as a latent mediation process bridging two observed processes namely -- dynamics of the network (realized as topological evolution) and dynamics on the network (realized as activities between nodes).",
"Concretely, we propose a two-time scale deep temporal point process model that captures the interleaved dynamics of the observed processes.",
"This model is further parameterized by a temporal-attentive representation network that encodes temporally evolving structural information into node representations which in turn drives the nonlinear evolution of the observed graph dynamics.",
"Our unified framework is trained using an efficient unsupervised procedure and has capability to generalize over unseen nodes.",
"We demonstrate that DyRep outperforms state-of-the-art baselines for dynamic link prediction and time prediction tasks and present extensive qualitative insights into our framework.",
"Representation learning over graph structured data has emerged as a keystone machine learning task due to its ubiquitous applicability in variety of domains such as social networks, bioinformatics, natural language processing, and relational knowledge bases.",
"Learning node representations to effectively encode high-dimensional and non-Euclidean graph information is a challenging problem but recent advances in deep learning has helped important progress towards addressing it BID4 BID17 Perozzi et al., 2014; Tang et al., 2015; Wang et al., 2016a; BID15 BID14 , with majority of the approaches focusing on advancing the state-of-the-art in static graph setting.",
"However, several domains now present highly dynamic data that exhibit complex temporal properties in addition to earlier cited challenges.",
"For instance, social network communications, financial transaction graphs or longitudinal citation data contain fine-grained temporal information on nodes and edges that characterize the dynamic evolution of a graph and its properties over time.These recent developments have created a conspicuous need for principled approaches to advance graph embedding techniques for dynamic graphs (Hamilton et al., 2017b) .",
"We focus on two pertinent questions fundamental to representation learning over dynamic graphs:",
"(i) What can serve as an elegant model for dynamic processes over graphs?",
"-A key modeling choice in existing representation learning techniques for dynamic graphs BID16 Zhou et al., 2018; BID14 Ngyuyen et al., 2018; Yu et al., 2018) assume that graph dynamics evolve as a single time scale process.",
"In contrast to these approaches, we observe that most real-world graphs exhibit at least two distinct dynamic processes that evolve at different time scales -Topological Evolution: where the number of nodes and edges are expected to grow (or shrink) over time leading to structural changes in the graph; and Node Interactions: which relates to activities between nodes that may or may not be structurally connected.",
"Modeling interleaved dependencies between these non-linearly evolving dynamic processes is a crucial next step for advancing the formal models of dynamic graphs.",
"(c) Communication Events (k=1) where nodes interact with each other.",
"For both these processes, t p,k=0 < (t 1 , t 2 , t 3 , t 4 , t 5 ) k=1 < t q,k=0 < (t 6 , t 7 ) k=1 < t r,k=0 .",
"(b) Evolving Representations.(ii",
") How can one leverage such a model to learn dynamic node representations that are effectively able to capture evolving graph information over time? -Existing",
"techniques in this direction can be divided into two approaches: a.) Discrete-Time",
"Approach, where the evolution of a dynamic graph is observed as collection of static graph snapshots over time (Zhu et al., 2016; BID16 Zhou et al., 2018) . These approaches",
"tend to preserve (encode) very limited structural information and capture temporal information at a very coarse level which leads to loss of information between snapshots and lack of ability to capture fine-grained temporal dynamics. Another challenge",
"in such approaches is the selection of appropriate aggregation granularity which is often misspecified. b.) Continuous-Time",
"Approach, where evolution is modeled at finer time granularity in order to address the above challenges. While existing approaches",
"have demonstrated to be very effective in specific settings, they either model simple structural and complex temporal properties in a decoupled fashion BID14 or use simple temporal models (exponential family in (Ngyuyen et al., 2018) ). But several domains exhibit",
"highly nonlinear evolution of structural properties coupled with complex temporal dynamics and it remains an open problem to effectively model and learn informative representations capturing various dynamical properties of such complex systems.As noted in BID5 , an important requirement to effectively learn over such dynamical systems is the ability to express the dynamical processes at different scales. We propose that any dynamic",
"graph must be minimally expressed as a result of two fundamental processes evolving at different time scales: Association Process (dynamics of the network), that brings change in the graph structure and leads to long lasting information exchange between nodes; and Communication Process (dynamics on the network), that relates to activities between (not necessarily connected) nodes which leads to temporary information flow between them BID15 BID1 . We, then, posit our goal of",
"learning node representations as modeling a latent mediation process that bridges the above two observed processes such that learned representations drive the complex temporal dynamics of both processes and these processes subsequently lead to the nonlinear evolution of node representations. Further, the information propagated",
"across the graph is governed by the temporal dynamics of communication and association histories of nodes with its neighborhood. For instance, in a social network,",
"when a node's neighborhood grows, it changes that node's representation which in turn affects her social interactions (association → embedding → communication). Similarly, when node's interaction",
"behavior changes, it affects the representation of her neighbors and herself which in turn changes the structure and strength of her connections due to link addition or deletion (communication → embedding → association). We call this phenomenon -evolution",
"through mediation and illustrate it graphically in FIG0 .In this work, we propose a novel representation",
"learning framework for dynamic graphs, DyRep, to model interleaved evolution of two observed processes through latent mediation process expressed above and effectively learn richer node representations over time. Our framework ingests dynamic graph information",
"in the form of association and communication events over time and updates the node representations as they appear in these events. We build a two-time scale deep temporal point process",
"approach to capture the continuous-time fine-grained temporal dynamics of the two observed processes. We further parameterize the conditional intensity function",
"of the temporal point process with a deep inductive representation network that learns functions to compute node representations. Finally, we couple the structural and temporal components",
"of our framework by designing a novel Temporal Attention Mechanism, which induces temporal attentiveness over neighborhood nodes using the learned intensity function. This allows to capture highly interleaved and nonlinear dynamics",
"governing node representations over time. We design an efficient unsupervised training procedure for end-to-end",
"training of our framework. We demonstrate consistent and significant improvement over state-of-the-art",
"representative baselines on two real-world dynamic graphs for the tasks of dynamic link prediction and time prediction. We further present an extensive qualitative analysis through embedding visualization",
"and ablation studies to discern the effectiveness of our framework.",
"We introduced a novel modeling framework for dynamic graphs that effectively and efficiently learns node representations by posing representation learning as latent mediation process bridging dynamic processes of topological evolution and node interactions.",
"We proposed a deep temporal point process model parameterized by temporally attentive representation network that models these complex and nonlinearly evolving dynamic processes and learns to encode structural-temporal information over graph into low dimensional representations.",
"Our superior evaluation performance demonstrates the effectiveness of our approach compared to state-of-the-art methods.",
"We present this work as the first generic and unified representation learning framework that adopts a novel modeling paradigm for dynamic graphs and support wide range of dynamic graph characteristics which can potentially have many exciting adaptations.",
"As a part of our framework, we also propose a novel temporal point process based attention mechanism that can attend over neighborhood based on the history of communications and association events in the graph.",
"Currently, DyRep does not support network shrinkage due to following reasons:",
"(i) It is difficult to procure data with fine grained deletion time stamps and",
"(ii) The temporal point process model requires more sophistication to support deletion.",
"For example, one can augment the model with a survival process formulation to account for lack of node/edge at future time.",
"Another interesting future direction could be to support encoding higher order dynamic structures.",
"contains h struct which is computed for updating each node involved in the event.",
"For node u, the update will come from h v struct (green flow) and for node v, the update will come from h u struct (red flow).",
"Please note all embeddings are dynamically evolving hence the information flow after every event is different and evolves in a complex fashion.",
"With this mechanism, the information is passed from neighbors of node u to node v and neighbors of node v to node u.",
"(i) Interaction events lead to temporary pathway -such events can occur between nodes which are not connected.",
"In that case, this flow will occur only once but it will not make u and v neighbors of each other (e.g. meeting at a conference).",
"(ii) Topological events lead to permanent pathway -in this case u and v becomes neighbor of each other and hence will contribute to structural properties moving forward (e.g. being academic friends).",
"The difference in number of blue arrows on each side signify different importance of each node to node u and node v respectively.",
"Overall Embedding Update Process.",
"As a starting point, neighborhood only includes nodes connected by a structural edge.",
"On observing an event, we update the embeddings of two nodes involved in the event using Eq 4.",
"For a node u, the first term of Eq 4 (Localized Embedding Propagation) requires h struct which is the information that is passed from neighborhood (N v ) of node v to node u via node v (one can visualize v as being the message passer from its neighborhood to u).",
"This information is used to update the embedding of node u.",
"However, we posit that node v does not relay equal amount of information from its neighbors to node u.",
"Rather, node v receives its information to be relayed based on its communication and association history with its neighbors (which relates to importance of each neighbor).",
"This requires to compute the attention coefficients on the structural edges between node v and its neighbors.",
"For any edge, we want this coefficient to be dependent on rate of events between the two nodes (thereby emulating real world phenomenon that one gains more information from people one interacts more with).",
"Hence, we parameterize our attention module with the temporal point process parameter S uv .",
"Algorithm 1 outlines the process of computing the value of this parameter.",
"where ∈ * ̅ is the node in neighborhood of node u.",
"DISPLAYFORM0 Figure 6: Temporal Point Process based Self-Attention: This figure illustrates the computation of h u struct for node u to pass to node v for the same event described before between nodes u and v at time t with any k.",
"h u struct is computed by aggregating information from neighbors (1,2,3) of u.",
"However, Nodes that are closely connected or has higher interactions tend to attend more to each other compared to nodes that are not connected or nodes between which interactions is less even in presence of connection.",
"Further, every node has a specific attention span for other node and therefore attention itself is a temporally evolving quantity.",
"DyRep computes the temporally evolving attention based on association and communication history between connected nodes.",
"The attention coefficient function (q's) is parameterized by S which is computed using the intensity of events between connected nodes.",
"Such attention mechanism allows the evolution of importance of neighbors to a particular node (u in this case) which aligns with real-world phenomenon."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1666666567325592,
0.17777776718139648,
0.13793103396892548,
0.13793103396892548,
0,
0.42105263471603394,
0.20512819290161133,
0.07999999821186066,
0.10526315122842789,
0.09756097197532654,
0.18867923319339752,
0.1111111044883728,
0.051282044500112534,
0.1690140813589096,
0.24242423474788666,
0.24242423474788666,
0.15094339847564697,
0.18918918073177338,
0.19512194395065308,
0,
0,
0,
0.09090908616781235,
0.060606054961681366,
0.21739129722118378,
0.08510638028383255,
0.0555555522441864,
0,
0.035087715834379196,
0.14492753148078918,
0.1599999964237213,
0.29629629850387573,
0.09302324801683426,
0,
0.07547169178724289,
0.0555555522441864,
0.3461538553237915,
0.21739129722118378,
0.21052631735801697,
0.13636362552642822,
0.11764705181121826,
0.05882352590560913,
0.1875,
0.2666666507720947,
0.13333332538604736,
0.35999998450279236,
0.18518517911434174,
0.05882352590560913,
0.1818181723356247,
0.19999998807907104,
0,
0.05882352590560913,
0.0624999962747097,
0.09756097197532654,
0.060606054961681366,
0,
0.052631575614213943,
0.0476190410554409,
0.11428570747375488,
0,
0.08695651590824127,
0.11999999731779099,
0.1538461446762085,
0,
0,
0.10810810327529907,
0.06896550953388214,
0.06451612710952759,
0.052631575614213943,
0.1395348757505417,
0.1111111044883728,
0.11538460850715637,
0.05882352590560913,
0.13333332538604736,
0.06451612710952759,
0.072727270424366,
0.0624999962747097,
0.0416666604578495,
0.05405404791235924,
0.11428570747375488,
0.051282044500112534,
0.0476190410554409
] | HyePrhR5KX | true | [
"Models Representation Learning over dynamic graphs as latent hidden process bridging two observed processes of Topological Evolution of and Interactions on dynamic graphs."
] |
[
"With the success of modern machine learning, it is becoming increasingly important to understand and control how learning algorithms interact.",
"Unfortunately, negative results from game theory show there is little hope of understanding or controlling general n-player games.",
"We therefore introduce smooth markets (SM-games), a class of n-player games with pairwise zero sum interactions.",
"SM-games codify a common design pattern in machine learning that includes some GANs, adversarial training, and other recent algorithms.",
"We show that SM-games are amenable to analysis and optimization using first-order methods.",
"As artificial agents proliferate, it is increasingly important to analyze, predict and control their collective behavior (Parkes and Wellman, 2015; Rahwan et al., 2019) .",
"Unfortunately, despite almost a century of intense research since von Neumann (1928) , game theory provides little guidance outside a few special cases such as two-player zero-sum, auctions, and potential games (Monderer and Shapley, 1996; Nisan et al., 2007; Vickrey, 1961; von Neumann and Morgenstern, 1944) .",
"Nash equilibria provide a general solution concept, but are intractable in almost all cases for many different reasons (Babichenko, 2016; Daskalakis et al., 2009; Hart and Mas-Colell, 2003) .",
"These and other negative results (Palaiopanos et al., 2017) suggest that understanding and controlling societies of artificial agents is near hopeless.",
"Nevertheless, human societies -of billions of agents -manage to organize themselves reasonably well and mostly progress with time, suggesting game theory is missing some fundamental organizing principles.",
"In this paper, we investigate how markets structure the behavior of agents.",
"Market mechanisms have been studied extensively (Nisan et al., 2007) .",
"However, prior work has restricted to concrete examples, such as auctions and prediction markets, and strong assumptions, such as convexity.",
"Our approach is more abstract and more directly suited to modern machine learning where the building blocks are neural nets.",
"Markets, for us, encompass discriminators and generators trading errors in GANs (Goodfellow et al., 2014) and agents trading wins and losses in StarCraft (Vinyals et al., 2019) .",
"Machine learning has got a lot of mileage out of treating differentiable modules like plug-and-play lego blocks.",
"This works when the modules optimize a single loss and the gradients chain together seamlessly.",
"Unfortunately, agents with differing objectives are far from plug-and-play.",
"Interacting agents form games, and games are intractable in general.",
"Worse, positive feedback loops can cause individually well-behaved agents to collectively spiral out of control.",
"It is therefore necessary to find organizing principles -constraints -on how agents interact that ensure their collective behavior is amenable to analysis and control.",
"The pairwise zero-sum condition that underpins SM-games is one such organizing principle, which happens to admit an economic interpretation.",
"Our main result is that SM-games are legible: changes in aggregate forecasts are the sum of how individual firms expect their forecasts to change.",
"It follows that we can translate properties of the individual firms into guarantees on collective convergence, stability and boundedness in SM-games, see theorems 4-6.",
"Legibility is a local-to-global principle, whereby we can draw qualitative conclusions about the behavior of collectives based on the nature of their individual members.",
"Identifying and exploiting games that embed local-to-global principles will become increasingly important as artificial agents become more common."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12903225421905518,
0.20689654350280762,
0.5185185074806213,
0.06666666269302368,
0.25,
0.05714285373687744,
0.11538460850715637,
0.04999999701976776,
0.0624999962747097,
0.10526315122842789,
0.08695651590824127,
0,
0.0714285671710968,
0.13333332538604736,
0,
0.14814814925193787,
0.07999999821186066,
0,
0.0952380895614624,
0.1538461446762085,
0.060606054961681366,
0.06666666269302368,
0.12121211737394333,
0.05714285373687744,
0.12121211737394333,
0.0714285671710968
] | B1xMEerYvB | true | [
"We introduce a class of n-player games suited to gradient-based methods."
] |
[
"While counter machines have received little attention in theoretical computer science since the 1960s, they have recently achieved a newfound relevance to the field of natural language processing (NLP).",
"Recent work has suggested that some strong-performing recurrent neural networks utilize their memory as counters.",
"Thus, one potential way to understand the sucess of these networks is to revisit the theory of counter computation.",
"Therefore, we choose to study the abilities of real-time counter machines as formal grammars.",
"We first show that several variants of the counter machine converge to express the same class of formal languages.",
"We also prove that counter languages are closed under complement, union, intersection, and many other common set operations.",
"Next, we show that counter machines cannot evaluate boolean expressions, even though they can weakly validate their syntax.",
"This has implications for the interpretability and evaluation of neural network systems: successfully matching syntactic patterns does not guarantee that a counter-like model accurately represents underlying semantic structures.",
"Finally, we consider the question of whether counter languages are semilinear.",
"This work makes general contributions to the theory of formal languages that are of particular interest for the interpretability of recurrent neural networks.",
"It is often taken for granted that modeling natural language syntax well requires a hierarchically structured grammar formalism.",
"Early work in linguistics established that finite-state models are insufficient for describing the dependencies in natural language data (Chomsky, 1956 ).",
"Instead, a formalism capable of expressing the relations in terms of hierarchical constituents ought to be necessary.",
"Recent advances in deep learning and NLP, however, challenge this long-held belief.",
"Neural network formalisms like the long short-term memory network (LSTM) (Hochreiter & Schmidhuber, 1997) have been shown to perform well on tasks requiring structure sensitivity (Linzen et al., 2016) , even though it is not obvious that such models have the capacity to represent hierarchical structure.",
"This mismatch raises interesting questions for both linguists and practitioners of NLP.",
"It is unclear what about the LSTM's structure lends itself towards good linguistic representations, and under what conditions these representations might fall short of grasping the structure and meaning of language.",
"Recent work has suggested that the expressive capacity of LSTMs resembles that of counter machines (Merrill, 2019; Suzgun et al., 2019; Weiss et al., 2018) .",
"Weiss et al. (2018) studied LSTMs with fully saturated weights (i.e. the activation functions evaluate to their asymptotic values instead of intermediate rational values) and showed that such models can express simplified counter languages.",
"Merrill (2019) , on the other hand, showed that the general counter languages are an upper bound on the expressive capacity of saturated LSTMs.",
"Thus, there seems to be a strong theoretical connection between LSTMs and the counter automata.",
"Merrill (2019) ; Suzgun et al. (2019) ; Weiss et al. (2018) also all report experimental results suggesting that some class of counter languages matches the learnable capacity of LSTMs trained by gradient descent.",
"Taking the counter machine as a simplified formal model of the LSTM, we study the formal properties of counter machines as grammars.",
"We do this with the hope of understanding to what degree counter machines, and LSTMs by extension, have computational properties well-suited for representing the structure of natural language.",
"The contributions of this paper are as follows:",
"• We prove that general counter machines, incremental counter machines, and stateless counter machines have equivalent expressive capacity, whereas simplified counter machines (Weiss et al., 2018) are strictly weaker than the general class.",
"• We demonstrate that counter languages are closed under complement, union, intersection, and many other common operations.",
"• We show that counter machines are incapable of representing the deep syntactic structure or semantics of boolean expressions, even though they can validate whether a boolean expression is well-formed.",
"• We prove that a certain subclass of the counter languages are semilinear, and conjecture that this result holds for all counter languages.",
"We have shown that many variants of the counter machine converge to express the same class of formal languages, which supports that CL is a robustly defined class.",
"We also proved that real-time counter languages are closed under a large number of common set operations.",
"This provides tools for future work investigating real-time counter automata.",
"We also showed that counter automata are incapable of evaluating boolean expressions, even though they are capable of verifying that boolean expressions are syntactically well-formed.",
"This result has a clear parallel in the domain of natural language, where deciding whether a sentence is grammatical is a different task than representing its deep syntactic or semantic structure.",
"A general take-away from our results is that just because a counter machine (or LSTM) is sensitive to surface patterns in linguistic data does not mean it can build correct semantic representations.",
"Counter memory can be exploited to weakly match patterns in language, which might provide the wrong kinds of inductive bias for achieving sophisticated natural language understanding.",
"Finally, we asked whether counter languages are semilinear as another way of studying their linguistic capacity.",
"We concluded only that a quite weak subclass of the counter languages are semilinear, and encourage future work to address the general case."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.20408162474632263,
0.21621620655059814,
0.31578946113586426,
0.3888888955116272,
0.41025641560554504,
0.14999999105930328,
0.04999999329447746,
0.19999998807907104,
0.24242423474788666,
0.380952388048172,
0.04999999329447746,
0.0476190410554409,
0.21052631735801697,
0,
0.0634920597076416,
0.05882352590560913,
0.0833333283662796,
0.1395348757505417,
0.17543859779834747,
0.1860465109348297,
0.21621620655059814,
0.2745097875595093,
0.3684210479259491,
0.25,
0.06666666269302368,
0.1599999964237213,
0.1538461446762085,
0.19999998807907104,
0.2857142686843872,
0.3478260934352875,
0.307692289352417,
0.1249999925494194,
0.1428571343421936,
0.11999999731779099,
0.11320754140615463,
0.1249999925494194,
0.15789473056793213,
0.3181818127632141
] | rylMgCNYvS | true | [
"We study the class of formal languages acceptable by real-time counter automata, a model of computation related to some types of recurrent neural networks."
] |
[
"Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences.",
"For longer documents and summaries however these models often include repetitive and incoherent phrases.",
"We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). \n",
"Models trained only with supervised learning often exhibit \"exposure bias\" - they assume ground truth is provided at each step during training.\n",
"However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable.\n",
"We evaluate this model on the CNN/Daily Mail and New York Times datasets.",
"Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.",
"Human evaluation also shows that our model produces higher quality summaries.",
"Text summarization is the process of automatically generating natural language summaries from an input document while retaining the important points.",
"By condensing large quantities of information into short, informative summaries, summarization can aid many downstream applications such as creating news digests, search, and report generation.There are two prominent types of summarization algorithms.",
"First, extractive summarization systems form summaries by copying parts of the input BID5 BID22 .",
"Second, abstractive summarization systems generate new phrases, possibly rephrasing or using words that were not in the original text BID4 .Neural",
"network models based on the attentional encoder-decoder model for machine translation BID0 were able to generate abstractive summaries with high ROUGE scores. However",
", these systems have typically been used for summarizing short input sequences (one or two sentences) to generate even shorter summaries. For example",
", the summaries on the DUC-2004 dataset generated by the state-of-the-art system by BID40 are limited to 75 characters. also applied",
"their abstractive summarization model on the CNN/Daily Mail dataset BID8 , which contains input sequences of up to 800 tokens and multisentence summaries of up to 100 tokens. But their analysis",
"illustrates a key problem with attentional encoder-decoder models: they often generate unnatural summaries consisting of repeated phrases.We present a new abstractive summarization model that achieves state-of-the-art results on the CNN/Daily Mail and similarly good results on the New York Times dataset (NYT) BID30 . To our knowledge,",
"this is the first end-to-end model for abstractive summarization on the NYT dataset. We introduce a key",
"attention mechanism and a new learning objective to address the FIG2 : Illustration of the encoder and decoder attention functions combined. The two context vectors",
"(marked \"C\") are computed from attending over the encoder hidden states and decoder hidden states. Using these two contexts",
"and the current decoder hidden state (\"H\"), a new word is generated and added to the output sequence. repeating phrase problem",
": (i) we use an intra-temporal",
"attention in the encoder that records previous attention weights for each of the input tokens while a sequential intra-attention model in the decoder takes into account which words have already been generated by the decoder. (ii) we propose a new objective",
"function by combining the maximum-likelihood cross-entropy loss used in prior work with rewards from policy gradient reinforcement learning to reduce exposure bias.Our model achieves 41.16 ROUGE-1 on the CNN/Daily Mail dataset. Moreover, we show, through human",
"evaluation of generated outputs, that our model generates more readable summaries compared to other abstractive approaches.",
"We presented a new model and training procedure that obtains state-of-the-art results in text summarization for the CNN/Daily Mail, improves the readability of the generated summaries and is better suited to long output sequences.",
"We also run our abstractive model on the NYT dataset for the first time.",
"We saw that despite their common use for evaluation, ROUGE scores have their shortcomings and should not be the only metric to optimize on summarization model for long sequences.",
"Our intra-attention decoder and combined training objective could be applied to other sequence-tosequence tasks with long inputs and outputs, which is an interesting direction for further research.A NYT DATASET"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.21621620655059814,
0.060606054961681366,
0.31372547149658203,
0.04651162400841713,
0,
0.12121211737394333,
0.10256409645080566,
0.12903225421905518,
0.051282044500112534,
0.07843136787414551,
0.05882352590560913,
0.09756097197532654,
0.23255813121795654,
0.1395348757505417,
0.052631575614213943,
0.21739129722118378,
0.158730149269104,
0.2222222238779068,
0.2380952388048172,
0.05405404791235924,
0.20512819290161133,
0,
0.18518517911434174,
0.17543859779834747,
0.1111111044883728,
0.3529411852359772,
0.12121211737394333,
0.3829787075519562,
0.2448979616165161
] | HkAClQgA- | true | [
"A summarization model combining a new intra-attention and reinforcement learning method to increase summary ROUGE scores and quality for long sequences."
] |
[
"Knowledge Distillation (KD) is a common method for transferring the ``knowledge'' learned by one machine learning model (the teacher) into another model (the student), where typically, the teacher has a greater capacity (e.g., more parameters or higher bit-widths).",
"To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated.",
"Due to the difference in model capacities, the student may not benefit fully from the same data points on which the teacher is trained.",
"On the other hand, a human teacher may demonstrate a piece of knowledge with individualized examples adapted to a particular student, for instance, in terms of her cultural background and interests.",
"Inspired by this behavior, we design data augmentation agents with distinct roles to facilitate knowledge distillation.",
"Our data augmentation agents generate distinct training data for the teacher and student, respectively.",
"We focus specifically on KD when the teacher network has greater precision (bit-width) than the student network.\n\n",
"We find empirically that specially tailored data points enable the teacher's knowledge to be demonstrated more effectively to the student.",
"We compare our approach with existing KD methods on training popular neural architectures and demonstrate that role-wise data augmentation improves the effectiveness of KD over strong prior approaches.",
"The code for reproducing our results will be made publicly available.",
"Background and Motivation.",
"In the educational psychology literature, it is generally considered beneficial if teachers can adapt curricula based upon students' prior experiences (Bandura, 2002; Brumfiel, 2005; Gurlitt et al., 2006; Slotta & Chi, 2006) .",
"These vary widely depending on students' cultural backgrounds, previous educational experiences, interests, and motivations.",
"To help students with different prior experiences to comprehend, memorise, and consolidate a piece of knowledge, teachers may provide extra and customized teaching material during their teaching processes.",
"For instance, when teaching the concept of the color pink, a teacher may choose flamingos, sakura (cherry blossoms), or ice cream cones as the example, depending on a student's background.",
"Knowledge distillation (KD) (Bucilua et al., 2006; Hinton et al., 2014 ) is a common framework for training machine learning models.",
"It works by transferring knowledge from a higher-capacity teacher model to a lower-capacity student model.",
"Most KD methods can be categorized by how they define the knowledge stored in the teacher (i.e., the \"soft targets\" of training as defined in existing literature).",
"For instance, Hinton et al. (2014) originally proposed KD for neural networks, and they define the output class probabilities (i.e., soft labels) generated by the teacher as the targets for assisting the training of students.",
"In a follow up work, Romero et al. (2015) defined the soft targets via the feature maps in the teacher model's hidden layers.",
"To train a student network with KD effectively, it is important to distill as much knowledge from the teacher as possible.",
"However, previous methods overlook the importance of the medium by which the teacher's knowledge is demonstrated: the training data points.",
"We conjecture that there exist examples, not necessarily seen and ingested by the teacher, that might make it easier for the student to absorb the teacher's knowledge.",
"Blindly adding more training examples may not be beneficial because it may slow down training and introduce unnecessary biases (Ho et al., 2019) .",
"The analogy with how human teachers adjust their teaching to their students' particular situations (e.g., with the feedback gathered from the students during teaching) suggests that a reasonable yet uninvestigated approach might be to augment the training data for both the teacher and student according to distinct policies.",
"In this paper, we study whether and how adaptive data augmentation and knowledge distillation can be leveraged synergistically to better train student networks.",
"We propose a two-stage, rolewise data augmentation process for KD.",
"This process consists of: (1) training a teacher network till convergence while learning a schedule of policies to augment the training data specifically for the teacher; (2) distilling the knowledge from the teacher into a student network while learning another schedule of policies to augment the training data specifically for the student.",
"It is worth noting that this two-stage framework is orthogonal to existing methods for KD, which focus on how the knowledge to be distilled is defined; thus, our approach can be combined with previous methods straighforwardly.",
"Although our proposed method can in principle be applied to any models trained via KD, we focus specifically on how to use it to transfer the knowledge from a full-precision teacher network into a student network with lower bit-width.",
"Network quantization is crucial when deploying trained models on embedded devices, or in data centers to reduce energy consumption (Strubell et al., 2019) .",
"KD-based quantization (Zhuang et al., 2018; Polino et al., 2018) jointly trains a full-precision model, which acts as the teacher, alongside a low-precision model, which acts as the student.",
"Previous work has shown that distilling a full-precision teacher's knowledge into a low-precision student, followed by fine-tuning, incurs noticeable performance degradation, especially when the bit-widths are below four (Zhuang et al., 2018; Polino et al., 2018) .",
"We show that it is advantageous to use adaptive data augmentation to generate more training data for the low-precision network based on its specific weaknesses.",
"For example, low-precision networks may have difficulties learning rotationrelated patterns, 1 and the data augmentation agent should be aware of this and generate more such data points.",
"One positive side-effect for demonstrating the effectiveness of our method is that the improvement brought by our proposed method is more significant compared to the experiments on all full-precision models."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.06779660284519196,
0.21052631735801697,
0.13636362552642822,
0.19607841968536377,
0.20512819290161133,
0.277777761220932,
0.10256409645080566,
0.24390242993831635,
0.19999998807907104,
0.11764705181121826,
0.07692307233810425,
0.0357142798602581,
0.05405404791235924,
0.08163265138864517,
0.03999999538064003,
0.1860465109348297,
0.1666666567325592,
0.2448979616165161,
0.1071428507566452,
0.09090908616781235,
0.1395348757505417,
0.14999999105930328,
0.21276594698429108,
0.13333332538604736,
0.24242423474788666,
0.6666666865348816,
0.3030303120613098,
0.22641508281230927,
0.18518517911434174,
0.24137930572032928,
0.08510638028383255,
0.08888888359069824,
0.07017543166875839,
0.260869562625885,
0.2083333283662796,
0.0416666604578495
] | rJeidA4KvS | true | [
"We study whether and how adaptive data augmentation and knowledge distillation can be leveraged simultaneously in a synergistic manner for better training student networks."
] |
[
"Neural network models have shown excellent fluency and performance when applied to abstractive summarization.",
"Many approaches to neural abstractive summarization involve the introduction of significant inductive bias, such as pointer-generator architectures, coverage, and partially extractive procedures, designed to mimic human summarization.",
"We show that it is possible to attain competitive performance by instead directly viewing summarization as language modeling.",
"We introduce a simple procedure built upon pre-trained decoder-transformers to obtain competitive ROUGE scores using a language modeling loss alone, with no beam-search or other decoding-time optimization, and instead rely on efficient nucleus sampling and greedy decoding.",
"Neural network approaches to abstractive summarization generally encode the source document into some hidden state or representation, then decode this representation into a summarized, abstracted version of the source document [17] .",
"These approaches usually rely on a sequence-to-sequence [20] style architecture, and tend to produce fluent, well formed natural language summaries when coupled with beam search or other decoding techniques.",
"A weakness of traditional sequence-to-sequence learning when applied to summarization is the lack of a direct copy mechanism, leading to missing or misrepresented details in decoded summaries [2, 17] .",
"Though attention helps ameliorate this issue by directly learning to focus on specific words or phrases in a source document [2] , many have allowed for an explicit copy mechanism inspired by Pointer Networks [22] , by optimizing a differentiable decision whether to generate new text or directly copy from the source [5, 18] .",
"Peters et al. [15] , Devlin et al. [3] , Radford et al. [16] , and many others have shown the benefits of large-scale pretraining on large, unlabeled corpora on a variety of downstream tasks in transfer learning settings.",
"In particular, it has been shown that large-scale, attention-only language modeling via decoder-only transformers [11] as an unsupervised pretraining task admits the ability to perform zero-shot learning on meaningful tasks involving natural language generation [16] .",
"Motivated by this, we propose a simple method that exhibits competitive performance on abstractive summarization without using sequence-to-sequence architectures or other standard tools in the neural abstractive summarization toolbox, and instead using a decoder-only transformer language model with transfer learning.",
"This further illustrates the utility of finetuning language models trained on open domain text.",
"This work puts forward a simple approach to abstractive summarization by viewing sequence transduction as a language modeling problem.",
"We show the effectiveness of using decoder-only transformers for this task, in particular, when coupled with recent advances in large-scale language modeling and transfer learning.",
"We show that competitive performance on two benchmark datasets is possible without many of the standard tools in neural abstractive summarization, such as sequence-tosequence modeling, coverage mechanisms, direct ROUGE optimization via reinforcement learning, or beam search, instead relying on a purely language modeling loss and simple decoding mechanisms such as nucleus sampling and greedy decoding.",
"This approach yields highly fluent text, and illustrates the power of unsupervised representation learning-based transfer learning for downstream tasks.",
"Table 3 : Comparison of with existing methods on XSum, reported in Narayan et al. [13] ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.27586206793785095,
0.14999999105930328,
0.24242423474788666,
0.3199999928474426,
0.1904761791229248,
0.1818181723356247,
0.1428571343421936,
0.06666666269302368,
0.043478257954120636,
0.12244897335767746,
0.19607843458652496,
0.13793103396892548,
0.3636363446712494,
0.10256409645080566,
0.1538461446762085,
0,
0
] | BJx5zpc58r | true | [
"We introduce a simple procedure to repurpose pre-trained transformer-based language models to perform abstractive summarization well."
] |
[
"This paper addresses the problem of incremental domain adaptation (IDA).",
"We assume each domain comes sequentially, and that we could only access data in the current domain.",
"The goal of IDA is to build a unified model performing well on all the encountered domains.",
"We propose to augment a recurrent neural network (RNN) with a directly parameterized memory bank, which is retrieved by an attention mechanism at each step of RNN transition.",
"The memory bank provides a natural way of IDA: when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the model capacity.",
"We learn the new memory slots and fine-tune existing parameters by back-propagation. \n",
"Experiments show that our approach significantly outperforms naive fine-tuning and previous work on IDA, including elastic weight consolidation and the progressive neural network. ",
"Compared with expanding hidden states, our approach is more robust for old domains, shown by both empirical and theoretical results.",
"Domain adaptation aims to transfer knowledge from a source domain to a target domain in a machine learning system.",
"This is important for neural networks, which are data-hungry and prone to overfitting.",
"In this paper, we focus on incremental domain adaptation (IDA), where we assume different domains come one after another.",
"We only have access to the data in the current domain, but hope to build a unified model that performs well on all the domains that we have encountered (Xu et al., 2014; Rusu et al., 2016; Kirkpatrick et al., 2017) .Incremental",
"domain adaptation is useful in various scenarios. Suppose a company",
"is doing business with different partners over a long period of time. The company can only",
"access the data of the partner with a current contract. However, the machine",
"learning model is the company's property (if complying with the contract). Therefore, it is desired",
"to preserve as much knowledge as possible in the model and not to rely on the availability of the data.Another important application of IDA is a quick adaptation to new domains. If the environment of a",
"deployed machine learning system changes frequently, traditional methods like jointly training all domains require the learning machine to be re-trained from scratch every time. Fine-tuning a neural network",
"by a few steps of gradient updates does transfer quickly, but it suffers from the catastrophic forgetting problem (Kirkpatrick et al., 2017) . Suppose we do not know the domain",
"of a data point when predicting; the (single) finetuned model cannot predict well for samples in previous domains, as it tends to \"forget\" quickly during fine-tuning.A recent trend of domain adaptation in the deep learning regime is the progressive neural network (Rusu et al., 2016) , which progressively grows the network capacity if a new domain comes. Typically, this is done by enlarging",
"the model with new hidden states and a new predictor ( FIG0 ). To avoid interfering with existing knowledge",
", the newly added hidden states are not fed back to the previously trained states. During training, they fix all existing parameters",
", and only train the newly added ones. For inference, they use the new predictor for all",
"domains. This is sometimes undesired as the new predictor",
"is trained with only the last domain.In this paper, we propose a progressive memory bank for incremental domain adaptation. Our model augments a recurrent neural network (RNN",
") with a memory bank, which is a set of distributed, real-valued vectors capturing domain knowledge. The memory is retrieved by an attention mechanism",
". When our model is adapted to new domains, we progressively",
"increase the slots in the memory bank. But different from (Rusu et al., 2016) , we fine-tune all",
"the parameters, including RNN and the existing memory slots. Empirically, when the model capacity increases, the RNN does",
"not forget much even if the entire network is fine-tuned. Compared with expanding RNN hidden states, the newly added memory",
"slots do not contaminate existing knowledge in RNN states, as will be shown by a theorem. We evaluate our approach 1 on Natural Language Inference and Dialogue",
"Response Generation. Experiments support our hypothesis that the proposed approach adapts",
"well to target domains without catastrophic forgetting of the source. Our model outperforms the naïve fine-tuning method, the original progressive",
"neural network, as well as other IDA techniques including elastic weight consolidation (EWC) (Kirkpatrick et al., 2017) .Detailed related work is provided in Appendix A.",
"In this paper, we propose a progressive memory network for incremental domain adaptation (IDA).",
"We augment an RNN with an attention-based memory bank.",
"During IDA, we add new slots to the memory bank and tune all parameters by back-propagation.",
"Empirically, the progressive memory network does not suffer from the catastrophic forgetting problem as in naïve fine-tuning.",
"Our intuition is that the new memory slots increase the neural network's model capacity, and thus, the new knowledge less overrides the existing network.",
"Compared with expanding hidden states, our progressive memory bank provides a more robust way of increasing model capacity, shown by both a theorem and experiments.",
"We also outperform previous work for IDA, including elastic weight consolidation (EWC) and the original progressive neural network."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1599999964237213,
0.19354838132858276,
0.0624999962747097,
0.1428571343421936,
0.04999999701976776,
0.1428571343421936,
0.10526315122842789,
0.2857142686843872,
0.13333332538604736,
0.2142857164144516,
0.12121211737394333,
0.08163265138864517,
0.1599999964237213,
0.06451612710952759,
0.07692307233810425,
0,
0.09302324801683426,
0.09756097197532654,
0.09090908616781235,
0.11428570747375488,
0.1249999925494194,
0,
0.12903225421905518,
0,
0.24390242993831635,
0.1111111044883728,
0,
0,
0.06896550953388214,
0,
0.1428571343421936,
0,
0,
0.04999999701976776,
0.27586206793785095,
0.08695651590824127,
0.06451612710952759,
0,
0.11428570747375488,
0.10256409645080566,
0.24242423474788666
] | rJgkE5HsnV | true | [
"We present a neural memory-based architecture for incremental domain adaptation, and provide theoretical and empirical results."
] |
[
"In compressed sensing, a primary problem to solve is to reconstruct a high dimensional sparse signal from a small number of observations.",
"In this work, we develop a new sparse signal recovery algorithm using reinforcement learning (RL) and Monte CarloTree Search (MCTS).",
"Similarly to orthogonal matching pursuit (OMP), our RL+MCTS algorithm chooses the support of the signal sequentially.",
"The key novelty is that the proposed algorithm learns how to choose the next support as opposed to following a pre-designed rule as in OMP.",
"Empirical results are provided to demonstrate the superior performance of the proposed RL+MCTS algorithm over existing sparse signal recovery algorithms.",
"We consider the compressed sensing (CS) problem [1; 2; 3] , where for a given matrix A ∈ R m×n , m n, and a (noiseless) observation vector y = Ax 0 , we want to recover a k-sparse vector/signal x 0 (k < m).",
"Formally, it can be formulated as:",
"subject to Ax = Ax 0 (2)",
"Related work There is a large collection of algorithms for solving the CS problem.",
"Some foundational and classic algorithms include convex relaxation, matching and subspace pursuit [4; 5; 6] and iterative thresholding [7; 8] .",
"In particular, two well-established methods are",
"(i) Orthogonal Matching Pursuit (OMP) and",
"(ii) Basis Pursuit (BP).",
"OMP recovers x 0 by choosing the columns of A iteratively until we choose k columns [9] .",
"BP recovers x 0 by solving min Ax=y ||x|| 1 [2] .",
"Because OMP and BP are extremely well studied theoretically [1; 2] and empirically [10] , we use these two algorithms as the main baseline methods to compare against when evaluating the proposed RL+MCTS algorithm.",
"Recent advancements in machine learning have opened a new frontier for signal recovery algorithms.",
"Specifically, these algorithms take a deep learning approach to CS and the related error correction problem.",
"The works in [11] , [12] , [13] and [14] apply ANNs and RNNs for encoding and/or decoding of signals x 0 .",
"Modern generative models such as Autoencoder, Variational Autoencoder, and Generative Adversarial Networks have also been used to tackle the CS problem with promising theoretical and empirical results [15; 16; 17] .",
"These works involve using generative models for encoding structured signals, as well as for designing the measurement matrix A. Notably, the empirical results in these works typically use structured signals in x 0 .",
"For example, in [16] and [17] , MNIST digits and celebrity images are used for training and testing.",
"Our contribution Differently from the above learning-based works, our innovation with machine learning is on signal recovery algorithms (as opposed to signal encoding or measurement matrix design).",
"We do not assume the signals to be structured (such as images), but cope with general sparse signals.",
"This underlying model for x 0 is motivated by the same assumptions in the seminal work on universal phase transitions by Donoho and Tanner in [10] .",
"Moreover, we assume the measurement matrix A is given.",
"Extending to varying matrices A is left for future investigation.",
"In this work, we approach the signal recovery problem using reinforcement learning (RL).",
"Specifically, we leverage the Monte Carlo Tree Search (MCTS) technique with RL, which was shown to achieve outstanding performance in the game of Go [18; 19] .",
"We further introduce special techniques to reduce the computational complexity for dealing with higher signal sparsity in CS.",
"Experimental results show that the proposed RL+MCTS algorithm significantly outperforms OMP and BP for matrix A of various sizes.",
"We have shown that the proposed RL+MCTS algorithm is a highly effective sparse signal decoder for the compressed sensing problem assuming no signal structure other than sparsity.",
"Even without using MCTS in testing, the RL+MCTS algorithm's performance exceeds that of existing sparse signal recovery algorithms such as OMP and BP.",
"The flexibility in the RL+MCTS algorithm's design further offers many interesting avenues for future research.",
"For one, it is possible that the features chosen in our model can be further improved.",
"Secondly, since the true signal x 0 is known in training, one may be able to leverage the information about x 0 to increase training sample efficiency.",
"The training hyper-parameters may also be further tuned to improve performance.",
"Broader settings of problems such as noisy observations and varying observation matrices A are under active investigation."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.21739129722118378,
0.2978723347187042,
0.2380952388048172,
0.2857142686843872,
0.260869562625885,
0.1492537260055542,
0,
0.060606058686971664,
0.1463414579629898,
0.04444443807005882,
0,
0.060606058686971664,
0,
0.1395348757505417,
0,
0.16949151456356049,
0.1463414579629898,
0.1860465109348297,
0.08510638028383255,
0.145454540848732,
0.07407406717538834,
0.04651162400841713,
0.18867923319339752,
0.1818181723356247,
0.11999999731779099,
0.1111111044883728,
0.054054051637649536,
0.20000000298023224,
0.1538461446762085,
0.13333332538604736,
0.17391303181648254,
0.19230768084526062,
0.35999998450279236,
0.0476190447807312,
0.09302324801683426,
0.11999999731779099,
0.052631575614213943,
0.13636362552642822
] | Sygfa739LS | true | [
"Formulating sparse signal recovery as a sequential decision making problem, we develop a method based on RL and MCTS that learns a policy to discover the support of the sparse signal. "
] |
[
"Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics.",
"A promising approach is to embed the high-dimensional observations into a lower-dimensional latent representation space, estimate the latent dynamics model, then utilize this model for control in the latent space.",
"An important open question is how to learn a representation that is amenable to existing control algorithms?",
"In this paper, we focus on learning representations for locally-linear control algorithms, such as iterative LQR (iLQR).",
"By formulating and analyzing the representation learning problem from an optimal control perspective, we establish three underlying principles that the learned representation should comprise: 1) accurate prediction in the observation space, 2) consistency between latent and observation space dynamics, and 3) low curvature in the latent space transitions.",
"These principles naturally correspond to a loss function that consists of three terms: prediction, consistency, and curvature (PCC).",
"Crucially, to make PCC tractable, we derive an amortized variational bound for the PCC loss function.",
"Extensive experiments on benchmark domains demonstrate that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance. ",
"Further ablation studies give support to the importance of all three PCC components for learning a good latent space for control.",
"Decomposing the problem of decision-making in an unknown environment into estimating dynamics followed by planning provides a powerful framework for building intelligent agents.",
"This decomposition confers several notable benefits.",
"First, it enables the handling of sparse-reward environments by leveraging the dense signal of dynamics prediction.",
"Second, once a dynamics model is learned, it can be shared across multiple tasks within the same environment.",
"While the merits of this decomposition have been demonstrated in low-dimensional environments (Deisenroth & Rasmussen, 2011; Gal et al., 2016) , scaling these methods to high-dimensional environments remains an open challenge.",
"The recent advancements in generative models have enabled the successful dynamics estimation of high-dimensional decision processes (Watter et al., 2015; Ha & Schmidhuber, 2018; Kurutach et al., 2018) .",
"This procedure of learning dynamics can then be used in conjunction with a plethora of decision-making techniques, ranging from optimal control to reinforcement learning (RL) (Watter et al., 2015; Banijamali et al., 2018; Finn et al., 2016; Chua et al., 2018; Ha & Schmidhuber, 2018; Kaiser et al., 2019; Hafner et al., 2018; Zhang et al., 2019) .",
"One particularly promising line of work in this area focuses on learning the dynamics and conducting control in a low-dimensional latent embedding of the observation space, where the embedding itself is learned through this process (Watter et al., 2015; Banijamali et al., 2018; Hafner et al., 2018; Zhang et al., 2019) .",
"We refer to this approach as learning controllable embedding (LCE).",
"There have been two main approaches to this problem:",
"1) to start by defining a cost function in the high-dimensional observation space and learn the embedding space, its dynamics, and reward function, by interacting with the environment in a RL fashion (Hafner et al., 2018; Zhang et al., 2019) , and",
"2) to first learn the embedding space and its dynamics, and then define a cost function in this low-dimensional space and conduct the control (Watter et al., 2015; Banijamali et al., 2018) .",
"This can be later combined with RL for extra fine-tuning of the model and control.",
"In this paper, we take the second approach and particularly focus on the important question of what desirable traits should the latent embedding exhibit for it to be amenable to a specific class of control/learning algorithms, namely the widely used class of locally-linear control (LLC) algorithms?",
"We argue from an optimal control standpoint that our latent space should exhibit three properties.",
"The first is prediction: given the ability to encode to and decode from the latent space, we expect the process of encoding, transitioning via the latent dynamics, and then decoding, to adhere to the true observation dynamics.",
"The second is consistency: given the ability to encode a observation trajectory sampled from the true environment, we expect the latent dynamics to be consistent with the encoded trajectory.",
"Finally, curvature: in order to learn a latent space that is specifically amenable to LLC algorithms, we expect the (learned) latent dynamics to exhibit low curvature in order to minimize the approximation error of its first-order Taylor expansion employed by LLC algorithms.",
"Our contributions are thus as follows: (1) We propose the Prediction, Consistency, and Curvature (PCC) framework for learning a latent space that is amenable to LLC algorithms and show that the elements of PCC arise systematically from bounding the suboptimality of the solution of the LLC algorithm in the latent space.",
"(2) We design a latent variable model that adheres to the PCC framework and derive a tractable variational bound for training the model.",
"(3) To the best of our knowledge, our proposed curvature loss for the transition dynamics (in the latent space) is novel.",
"We also propose a direct amortization of the Jacobian calculation in the curvature loss to help training with curvature loss more efficiently.",
"(4) Through extensive experimental comparison, we show that the PCC model consistently outperforms E2C (Watter et al., 2015) and RCE (Banijamali et al., 2018) on a number of control-from-images tasks, and verify via ablation, the importance of regularizing the model to have consistency and low-curvature.",
"In this paper, we argue from first principles that learning a latent representation for control should be guided by good prediction in the observation space and consistency between latent transition and the embedded observations.",
"Furthermore, if variants of iterative LQR are used as the controller, the low-curvature dynamics is desirable.",
"All three elements of our PCC models are critical to the stability of model training and the performance of the in-latent-space controller.",
"We hypothesize that each particular choice of controller will exert different requirement for the learned dynamics.",
"A future direction is to identify and investigate the additional bias for learning an effective embedding and latent dynamics for other type of model-based control and planning methods.",
"where D TV is the total variation distance of two distributions.",
"The first inequality is based on the result of the above lemma, the second inequality is based on Pinsker's inequality (Ordentlich & Weinberger, 2005) , and the third inequality is based on Jensen's inequality (Boyd & Vandenberghe, 2004) of (·) function.",
"Now consider the expected cumulative KL cost:",
"t=0 KL(P (·|x t , u t )|| P (·|x t , u t )) | P, x 0 with respect to some arbitrary control action sequence {u t } T −1 t=0 .",
"Notice that this arbitrary action sequence can always be expressed in form of deterministic policy u t = π (x t , t) with some nonstationary state-action mapping π .",
"Therefore, this KL cost can be written as:",
"where the expectation is taken over the state-action occupation measure",
"t=0 P(x t = x, u t = u|x 0 , U ) of the finite-horizon problem that is induced by data-sampling policy U .",
"The last inequality is due to change of measures in policy, and the last inequality is due to the facts that",
"(i) π is a deterministic policy,",
"(ii) dU (u t ) is a sampling policy with lebesgue measure 1/U over all control actions,",
"(iii) the following bounds for importance sampling factor holds:",
"To conclude the first part of the proof, combining all the above arguments we have the following inequality for any model P and control sequence U :",
"For the second part of the proof, consider the solution of (SOC3), namely (U * 3 , P * 3 ).",
"Using the optimality condition of this problem one obtains the following inequality:",
"Using the results in (11) and (12), one can then show the following chain of inequalities:",
"where U * 1 is the optimizer of (SOC1) and (U * 3 , P *",
"3 ) is the optimizer of (SOC3).",
"Therefore by letting λ 3 = √ 2T 2 · c max U and R 3 ( P ) = E x,u KL(P (·|x, u)|| P (·|x, u)) and by combining all of the above arguments, the proof of the above lemma is completed.",
"A.2",
"PROOF OF LEMMA 2",
"For the first part of the proof, at any time-step t ≥ 1, for any arbitrary control action sequence {u t } T −1 t=0 , and any model P , consider the following decomposition of the expected cost :",
". Now consider the following cost function: E[c(x t−1 , u t−1 ) + c(x t , u t ) | P , x 0 ] for t > 2. Using the above arguments, one can express this cost as",
"By continuing the above expansion, one can show that",
"where the last inequality is based on Jensen's inequality of (·) function.",
"For the second part of the proof, following similar arguments as in the second part of the proof of Lemma 1, one can show the following chain of inequalities for solution of (SOC3) and (SOC2):",
"where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2).",
"This completes the proof."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3333333432674408,
0.24242423474788666,
0.09090908616781235,
0.1666666567325592,
0.04444444179534912,
0,
0.09090908616781235,
0.060606058686971664,
0.14814814925193787,
0.06666666269302368,
0,
0,
0,
0.054054051637649536,
0.05882352590560913,
0.0833333283662796,
0.08695651590824127,
0.11764705181121826,
0,
0.14999999105930328,
0.11764705926179886,
0.27272728085517883,
0.1304347813129425,
0.09090908616781235,
0,
0.06451612710952759,
0,
0.043478257954120636,
0.07407406717538834,
0.07999999821186066,
0.07692307233810425,
0,
0.15789473056793213,
0,
0,
0.08695651590824127,
0.1875,
0,
0,
0,
0.125,
0.05882352590560913,
0,
0,
0,
0,
0,
0.1666666567325592,
0.1249999925494194,
0.12903225421905518,
0,
0,
0,
0,
0,
0,
0,
0.10256409645080566,
0.054054051637649536,
0,
0,
0.06451612710952759,
0,
0
] | BJxG_0EtDS | true | [
"Learning embedding for control with high-dimensional observations"
] |
[
"The interplay between inter-neuronal network topology and cognition has been studied deeply by connectomics researchers and network scientists, which is crucial towards understanding the remarkable efficacy of biological neural networks.",
"Curiously, the deep learning revolution that revived neural networks has not paid much attention to topological aspects.",
"The architectures of deep neural networks (DNNs) do not resemble their biological counterparts in the topological sense.",
"We bridge this gap by presenting initial results of Deep Connectomics Networks (DCNs) as DNNs with topologies inspired by real-world neuronal networks.",
"We show high classification accuracy obtained by DCNs whose architecture was inspired by the biological neuronal networks of C. Elegans and the mouse visual cortex.",
"Recent advancements in neural network models have emerged through research in network architectures (He et al., 2016; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014) , optimization (Kingma & Ba, 2014; Liu et al., 2019; Luo et al., 2019) , and generalization techniques (Srivastava et al., 2014; Ioffe & Szegedy, 2015; Zhang et al., 2019) , with convolutional layers inspired from receptive fields and functional architecture in cats' visual cortex.",
"However, the field of deep neural networks, with all its neuro-biologically inspired building blocks, has mostly left the topology story out.",
"1 Curiously, in the Cambrian explosion of neural network architectures in the post-AlexNet era, none seem to be inspired by the ideas prevalent in the domain of brain connectomics.",
"We demonstrated initial findings from applying networks inspired by real-world neuronal network topologies to deep learning.",
"Our experiments show the trainability of a DNN based on the neuronal network C.Elegans and the visual cortex of a mouse with and without freezing the parameters of the graph modules, which outperforms WS graphs with good theoretical small-world properties.",
"In future work, we will examine more principled methods for constructing a DAG from these networks and examine the impact of spectral properties of the graph topologies used both in the architectures we proposed and in the architecture proposed by (Xie et al., 2019) , while extending to other connectomes."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.1599999964237213,
0.20512819290161133,
0.20512819290161133,
0.04651162400841713,
0.35555556416511536,
0.138888880610466,
0.1428571343421936,
0.2222222238779068,
0.2631579041481018,
0.37037035822868347,
0.1904761791229248
] | BJg6EmYL8B | true | [
"Initial findings in the intersection of network neuroscience and deep learning. C. Elegans and a mouse visual cortex learn to recognize handwritten digits."
] |
[
"Convolutional neural networks and recurrent neural networks are designed with network structures well suited to the nature of spacial and sequential data respectively.",
"However, the structure of standard feed-forward neural networks (FNNs) is simply a stack of fully connected layers, regardless of the feature correlations in data.",
"In addition, the number of layers and the number of neurons are manually tuned on validation data, which is time-consuming and may lead to suboptimal networks.",
"In this paper, we propose an unsupervised structure learning method for learning parsimonious deep FNNs.",
"Our method determines the number of layers, the number of neurons at each layer, and the sparse connectivity between adjacent layers automatically from data.",
"The resulting models are called Backbone-Skippath Neural Networks (BSNNs).",
"Experiments on 17 tasks show that, in comparison with FNNs, BSNNs can achieve better or comparable classification performance with much fewer parameters.",
"The interpretability of BSNNs is also shown to be better than that of FNNs.",
"Deep neural networks have made breakthroughs in all kinds of machine learning tasks BID13 BID22 , specifically with convolutional neural networks (CNNs) for tasks with spacial data BID17 and recurrent neural networks (RNNs) for tasks with sequential data .",
"One of the key reasons for the effectiveness of CNNs and RNNs is the well-designed network structures together with the parameter sharing schemes.",
"For example, in the convolution layers of CNNs, each neuron is connected to a local region in the input volume instead of all the input neurons.",
"Besides, the neurons in the same channel share the same set of weights.",
"This design utilizes the local and \"stationary\" properties of spacial data and consequently forms effective feature extractors.",
"In addition, it also prevents CNNs from having an exploding number of parameters when the networks become deeper and deeper.However, in practice, there are also many data which are neither spacial nor sequential, and hence the only applicable neural networks are the standard feed-forward neural networks (FNNs).",
"In contrast to CNN and RNN, FNN's network structure is simple.",
"It consists of multiple layers of neurons and each layer is fully connected to the next layer up, without considering any correlations in data or among neurons.",
"The network structure has two main shortcomings.",
"The first is that, there can be high connection redundancies.",
"As the number of layers and the number of neuron at each layer increase, the number of parameters increases quickly, which can cause severe overfitting.",
"The other shortcoming is that, ignoring all the correlations existing in data weakens the model's strength (as a feature extractor) and hurts the model's interpretability.We are interested in learning parsimonious deep feed-forward neural networks.",
"The goal is to learn FNNs which contain as few parameters as possible.",
"Parsimonious FNNs are desirable for several reasons.",
"Firstly, fewer parameters can ease overfitting.",
"Secondly, parsimonious FNNs require less storage and computation than FNNs, which makes it possible to be run on devices like mobile phones.",
"Lastly, parsimonious FNNs can have very flexible and different structures from each other depending on the specific tasks and data.",
"This would help the models fit the data well and also have good interpretability.",
"In general, it is desirable to solve a problem using the simplest model possible because it implies a good understanding of the problem.",
"connections (x − h 1 , h 1 − h 2 ) form the Backbone path.",
"The narrow fully-connected layers (x − h 3 , h 1 − h 3 , h 2 − h 3 ) are the Skip-paths.",
"The number of units at h 3 is relatively smaller than that at x, h 1 and h 2 .Learning",
"parsimonious FNNs is challenging mainly because we need to determine the sparse connectivity between layers. Network",
"pruning is a potential way to achieve this. However",
", it requires to start from a network which is much larger than necessary for the task at hand. This can",
"cause a lot of computations wasted on those useless connections. In addition",
", network pruning is not able to learn the number of units and number of layers.In this paper, we assume that data are generated by a sparse probabilistic model with multiple layers of latent variables, and view the feed-forward network to be built as a way to approximate the relationships between the observed variables and the top-level latent variables in the probabilistic model. The level",
"1 latent variables induce correlations among the observed variables. Therefore",
", it is possible to determine them by analysing how the observed variables are correlated. Similarly",
", by analysing how the level 1 latent variables are correlated, we can determine the level 2 latent variables, and so on. We empirically",
"show that our method can significantly reduce the number of parameters in FNNs, and the resulting model still achieves better or comparable results than FNNs in 17 classification tasks.",
"Structure learning for deep neural network is a challenging and interesting research problem.",
"We have proposed an unsupervised structure learning method which utilizes the correlation information in data for learning parsimonious deep feed-forward networks.",
"In comparison with standard FNN, although the resulting model of our method contains much fewer parameters, it achieves better or comparable classification performance in all kinds of tasks.",
"Our method is also shown to learn models with better interpretability, which is also an important problem in deep learning.",
"In the future, we will generalize our method to other networks like RNNs and CNNs."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.06451612710952759,
0,
0.4166666567325592,
0.06666666269302368,
0.10526315122842789,
0,
0,
0.15789473056793213,
0.06896550953388214,
0,
0,
0,
0,
0.0952380895614624,
0,
0.11764705181121826,
0,
0,
0.04878048226237297,
0,
0.23529411852359772,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0.06451612710952759,
0,
0,
0,
0,
0,
0.054054051637649536,
0.17391303181648254,
0.3333333432674408,
0.054054051637649536,
0.1428571343421936,
0.07999999821186066
] | HJMN-xWC- | true | [
"An unsupervised structure learning method for Parsimonious Deep Feed-forward Networks."
] |
[
"Bayesian inference is used extensively to infer and to quantify the uncertainty in a field of interest from a measurement of a related field when the two are linked by a mathematical model.",
"Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to characterize mathematically.",
"In this work we demonstrate how the approximate distribution learned by a generative adversarial network (GAN) may be used as a prior in a Bayesian update to address both these challenges.",
"We demonstrate the efficacy of this approach by inferring and quantifying uncertainty in a physics-based inverse problem and an inverse problem arising in computer vision.",
"In this latter example, we also demonstrate how the knowledge of the spatial variation of uncertainty may be used to select an optimal strategy of placing the sensors (i.e. taking measurements), where information about the image is revealed one sub-region at a time.",
"Bayesian inference is a principled approach to quntify uncertainty in inverse problems that are constrained by mathematical model (Kaipio and Somersalo [2006] , Dashti and Stuart [2016] , Polpo et al. [2018] ).",
"It has found applications in diverse fields such as geophysics (Gouveia and Scales [1997] , Martin et al. [2012] , Isaac et al. [2015] ), climate modeling (Jackson et al. [2004] ), chemical kinetics ), heat conduction (Wang and Zabaras [2004] ), astrophysics (Loredo [1990] , Asensio Ramos et al. [2007] ), materials modeling (Sabin et al. [2000] ) and the detection and diagnosis of disease (Siltanen et al. [2003] , Kolehmainen et al. [2006] ).",
"The two critical ingredients of a Bayesian inference problem are -an informative prior representing the prior belief about the parameters and an efficient method for sampling from the posterior distribution.",
"In this manuscript we describe how a deep generative model (generative adversarial networks (GANs)) can be used in these roles.",
"In a typical inverse problem, we wish to infer a vector of parameters x ∈ R N from the measurement of a related vector y ∈ R P , where the two are related through a forward model y = f (x).",
"A noisy measurement of y is denoted byŷ = f (x) + η, where η ∈ R P represents noise.",
"While the forward map is typically well-posed, its inverse is not, and hence to infer x from the measurementŷ requires techniques that account for this ill-posedness.",
"Classical techniques based on regularization tackle this ill-posedness by using additional information about the sought parameter field explicitly or implicitly (Tarantola [2005] ).",
"Bayesian inference offers a different solution to this problem by modeling the unknown parameter and the measurements as random variables and allows for the characterization of the uncertainty in the inferred parameter field.",
"For additive noise, the posterior distribution of x, determined using Bayes' theorem after accounting for the observationŷ is given by",
"where Z is the prior-predictive distribution of y, p prior X (x) is the prior distribution of x, and p l (y|x) is the likelihood, often determined by the distribution of the error in the model, denoted by p η .",
"Despite its numerous applications, Bayesian inference faces significant challenges.",
"These include constructing a reliable and informative prior distribution from a collection of prior measurements denoted by the S = {x",
"(1) , · · · , x (S) }, and efficiently sampling from the posterior distribution when the dimension of x is large.",
"In this work we consider the use of GANs (Goodfellow et al. [2014] ) in addressing these challenges.",
"These networks are useful in this role because of",
"(a) they are able to generate samples of x from p gen X (x) while ensuring closeness (in an appropriate measure) between p gen X (x) and the true distribution, and (b) because they accomplish this by sampling from the much simpler distribution of the latent vector z, whose dimension is much smaller than that of x.",
"Related work and our contribution: The main idea in this work involves training a GAN using the sample set S, and then using the distribution learned by the GAN as the prior distribution in Bayesian inference.",
"This leads to a useful method for representing complex prior distributions and an efficient approach for sampling from the posterior distribution in terms of the latent vector z.",
"The solution of inverse problems using sample-based priors has a rich history (see Vauhkonen et al. [1997] , Calvetti and Somersalo [2005] for example).",
"As does the idea of dimension reduction in parameter space , Lieberman et al. [2010] ).",
"However, the use of GANs in these tasks is novel.",
"Recently, a number of authors have considered the use machine learning-based methods for solving inverse problems.",
"These include the use of convolutional neural networks (CNNs) to solve physics-driven inverse problems (Adler and Öktem [2017] , Jin et al. [2017] , Patel et al. [2019] ), and GANs to solve problems in computer vision (Chang et al., Kupyn et al. [2018] , Yang et al. [2018] , Ledig et al., Anirudh et al. [2018] , Isola et al. [2016] , Zhu et al. [2017] , Kim et al. [2017] ).",
"There is also a growing body of work on using GANs to learn regularizers in inverse problems (Lunz et al. [2018] ) and in compressed sensing (Bora et al. [2017 (Bora et al. [ , 2018 , Kabkab et al. [2018] , Wu et al. [2019] , Shah and Hegde [2018] ).",
"However, these approaches differ from ours in that they solve the inverse problem as an optimization problem and do not quantify uncertainty in a Bayesian framework .",
"More recently, the approach described in (Adler and Öktem [2018] ) utilizes GANs in a Bayesian setting; however the GAN is trained to approximate the posterior distribution, and training is done in a supervised fashion with paired samples of the measurementŷ and the corresponding true solution x."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1621621549129486,
0.21621620655059814,
0.09999999403953552,
0.0624999962747097,
0.03999999538064003,
0.0952380895614624,
0.09677419066429138,
0.2631579041481018,
0,
0.04651162400841713,
0.06451612710952759,
0.05714285373687744,
0,
0.2631579041481018,
0.13333332538604736,
0.0555555522441864,
0.19999998807907104,
0.06666666269302368,
0.06896550953388214,
0.13793103396892548,
0.09999999403953552,
0.0363636314868927,
0.15789473056793213,
0.21621620655059814,
0.17142856121063232,
0.07407406717538834,
0.1904761791229248,
0.14814814925193787,
0.07843136787414551,
0.08695651590824127,
0.11428570747375488,
0.125
] | HJlL2Q2qLS | true | [
"Using GANs as priors for efficient Bayesian inference of complex fields."
] |
[
"We argue that the widely used Omniglot and miniImageNet benchmarks are too simple because their class semantics do not vary across episodes, which defeats their intended purpose of evaluating few-shot classification methods.",
"The class semantics of Omniglot is invariably “characters” and the class semantics of miniImageNet, “object category”.",
"Because the class semantics are so similar, we propose a new method called Centroid Networks which can achieve surprisingly high accuracies on Omniglot and miniImageNet without using any labels at metaevaluation time.",
"Our results suggest that those benchmarks are not adapted for supervised few-shot classification since the supervision itself is not necessary during meta-evaluation.",
"The Meta-Dataset, a collection of 10 datasets, was recently proposed as a harder few-shot classification benchmark.",
"Using our method, we derive a new metric, the Class Semantics Consistency Criterion, and use it to quantify the difficulty of Meta-Dataset.",
"Finally, under some restrictive assumptions, we show that Centroid Networks is faster and more accurate than a state-of-the-art learning-to-cluster method (Hsu et al., 2018).",
"Supervised few-shot classification, sometimes simply called few-shot learning, consists in learning a classifier from a small number of examples.",
"Being able to quickly learn new classes from a small number of labeled examples is desirable from a practical perspective because it removes the need to label large datasets.",
"Typically, supervised few-shot classification is formulated as meta-learning on episodes, where each episode corresponds to two small sets of labeled examples called support and query sets.",
"The goal is to train a classifier on the support set and to classify the query set with maximum accuracy.",
"The Omniglot (Lake et al., 2011) and miniImageNet (Vinyals et al., 2016; Ravi & Larochelle, 2017) benchmarks have been heavily used to evaluate and compare supervised few-shot classification methods in the last few years (Vinyals et al., 2016; Ravi & Larochelle, 2017; Snell et al., 2017; Finn et al., 2017; Sung et al., 2018) .",
"Despite their popularity and their important role in pioneering the few-shot learning field, we argue that the Omniglot and miniImageNet benchmarks should not be taken as gold standards for evaluating supervised few-shot classification because they rely on consistent class semantics across episodes.",
"Specifically, Omniglot classes always correspond to alphabet characters, while miniImageNet classes always correspond to object categories as defined by the WordNet taxonomy (Miller, 1995; Russakovsky et al., 2015) .",
"One consequence is that benchmarks with consistent class semantics have similar class semantics between meta-training and meta-evaluation 1 .",
"Therefore, they are too \"easy\" because they do not test the ability of supervised few-shot classification methods to adapt to new class semantics.",
"From an applications perspective, being able to adapt to changing class semantics is a desirable feature.",
"For instance, if the application is to organize users' personal photo gallery, different users might want to sort their personal photo gallery according to the different semantics, such as person identity, place or time.",
"From a methodological perspective, we argue that supervised few-shot classification becomes an awkward task in the ideal case where the class semantics are perfectly consistent.",
"Indeed, if the end goal of every episode is to classify the query set according to the same class semantics, do we even need the support set to define the classes, once the semantics are learned ?",
"Consider the characters below, extracted from the \"Mongolian\" alphabet of Omniglot.",
"How would you group the characters below?",
"This task is not particularly hard, even if the reader was never shown labeled examples prior to the task, simply because the reader was already familiar with the class semantics of interest (characters), and can generalize them to new classes.",
"This simple observation suggests that when class semantics are consistent, few-shot learning algorithms might not actually need labels during metaevaluation.",
"To show this, we introduce a new learning-to-cluster 2 method called Centroid Networks which achieves surprisingly high accuracies on Omniglot and miniImageNet without using any labels at meta-evaluation time.",
"3 The method is very similar to Prototypical Networks (Snell et al., 2017) , but the key difference is that the labels of the support set can be reliably recovered through clustering whenever the cluster semantics are consistent across tasks.",
"A harder benchmark would involve selecting different cluster semantics across episodes.",
"For example, consider the following set of shapes:",
"In this case, the task remains ambiguous because clustering semantics (e.g. shape, color, border style) have not been specified.",
"To classify such a set requires either supervision, such as a labeled support set, or to somehow know the class semantics beforehand.",
"Following that spirit, the Meta-Dataset, a collection of 10 datasets, was recently proposed as a harder and more realistic few-shot classification benchmark (Triantafillou et al., 2019) .",
"Among other things such as variable numbers of ways and shots, a key difficulty of the Meta-Dataset is that class semantics vary across episodes, since episodes are generated from a randomly selected dataset.",
"We propose to use Centroid Networks to benchmark how hard this dataset is.",
"In particular, we suggest looking at the gap between the performance of Prototypical Networks and Centroid Networks, which we call the class semantics consistency criterion (CSCC).",
"We proposed Centroid Networks for performing clustering without labels at meta-evaluation time, and with the idea of using it to assess the difficulty of few-shot classification benchmarks.",
"First, we validate our method by beating a state-of-the-art few-shot clustering method (Hsu et al., 2018) in the setting of a known number of equally-sized clusters, with the advantage that our method is easier to train and orders of magnitude faster to run.",
"Then, we define the CSCC metric from the unsupervised accuracy of Centroid Networks, and use it for quantifying the difficulty of current few-shot learning benchmarks in terms of class semantics consistency.",
"We find that Omniglot has extremely consistent class semantics (CSCC close to 1), and that miniImageNet has fairly high CSCC as well (CSCC close to 0.8), which backs the intuition that its class semantics invariably correspond to object categories.",
"Our results on the Meta-Dataset benchmark show that it has much lower CSCCs than Omniglot in all settings, and lower CSCCs than miniImageNet in the ILSVRC only setting, which confirms that Meta-Dataset has harder and more diverse class semantics.",
"As future work, we would like to improve the CSCC by making it more interpretable and less dependent on the backbone architectures.",
"A APPENDIX : BACKGROUND AND IMPLEMENTATION DETAILS A.1",
"SINKHORN DISTANCES The Wasserstein-2 distance is a distance between two probability masses p and q.",
"Given a base distance d(x, x ), we define the cost of transporting one unit of mass from x to x as d(x, x ) 2 .",
"The Wasserstein-2 distance is defined as the cheapest cost for transporting all mass from p to q.",
"When the transportation plan is regularized to have large entropy, we obtain Sinkhorn distances, which can be computed very efficiently for discrete distributions (Cuturi, 2013; Cuturi & Doucet, 2014) (entropy-regularization makes the problem strongly convex).",
"Sinkhorn distances are the basis of the Sinkhorn K-Means algorithm, which is the main component of Centroid Networks.",
"In Algorithm 1, we describe the Sinkhorn algorithm in the particular case where we want to transport mass from the weighted data points (x i , R j ) to the weighted centroids (c j , C j ), where R j and C j are the weights of the data points and centroids, respectively.",
"In practice, we leverage the log-sum-exp trick in the to avoid numerical underflows.",
"A.2",
"DATA SPLITS AND ARCHITECTURE FOR OMNIGLOT AND miniIMAGENET EXPERIMENTS For the embedding network for the Omniglot and miniImageNet, we reuse exactly the same simple convolutional architecture as in Prototypical Networks (Snell et al., 2017) , which consists of four stacked blocks (2D convolution with 3 × 3 kernel and stride 1, BatchNorm, ReLU, and 2 × 2 max-pooling), the output of which is flattened.",
"This results in a 64-dimensional embedding for Omniglot and 1600-dimensional embedding for miniImageNet.",
"For miniImageNet, we pretrain the embedding function using prototypical networks to solve 30-way problems instead of 5, which is the recommended trick in the paper (Snell et al., 2017) .",
"For the other settings, we train from scratch.",
"Omniglot (Lake et al., 2011) consists of a total of 1623 classes of handwritten characters from 50 alphabets, with 20 examples per class.",
"Images are grayscale with size 28 × 28.",
"We follow the same protocol as in Prototypical Networks and use the \"Vinyals\" train/validation/test splits.",
"We consider 5-way 5-shot and 20-way 5-shot settings (15 query points per class).",
"miniImageNet (Vinyals et al., 2016) consists of 100 classes, each containing 600 color images of size 84 × 84.",
"We follow the \"Ravi\" splits: 64 classes for training, 16 for validation, and 20 for testing.",
"We consider the 5-way 5-shot setting (15 query points per class)."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.27586206793785095,
0.09999999403953552,
0.4067796468734741,
0.1666666567325592,
0.1428571343421936,
0.1249999925494194,
0.1538461446762085,
0.1818181723356247,
0.07547169178724289,
0.1538461446762085,
0.13636362552642822,
0.1249999925494194,
0.2769230604171753,
0.11538460850715637,
0.09302324801683426,
0.1666666567325592,
0.0476190447807312,
0.0363636314868927,
0.15686273574829102,
0.0714285671710968,
0.054054051637649536,
0,
0.16393442451953888,
0.25531914830207825,
0.3571428656578064,
0.1269841194152832,
0,
0,
0.04255318641662598,
0.08510638028383255,
0.15094339847564697,
0.13793103396892548,
0,
0.07999999821186066,
0.26923075318336487,
0.1904761791229248,
0.18518517911434174,
0.13793103396892548,
0.10526315122842789,
0.0833333283662796,
0,
0.09756097197532654,
0.1249999925494194,
0.09090908616781235,
0.09836065024137497,
0.04878048226237297,
0.0952380895614624,
0.05128204822540283,
0.17499999701976776,
0.2631579041481018,
0.145454540848732,
0.05714285373687744,
0.12244897335767746,
0.11764705926179886,
0.09756097197532654,
0.05128204822540283,
0.04444443807005882,
0.09756097197532654,
0
] | SygeY1SYvr | true | [
"Omniglot and miniImageNet are too simple for few-shot learning because we can solve them without using labels during meta-evaluation, as demonstrated with a method called centroid networks"
] |
[
"We learn to identify decision states, namely the parsimonious set of states where decisions meaningfully affect the future states an agent can reach in an environment.",
"We utilize the VIC framework, which maximizes an agent’s `empowerment’, ie the ability to reliably reach a diverse set of states -- and formulate a sandwich bound on the empowerment objective that allows identification of decision states. Unlike previous work, our decision states are discovered without extrinsic rewards -- simply by interacting with the world.",
"Our results show that our decision states are:",
"1) often interpretable, and",
"2) lead to better exploration on downstream goal-driven tasks in partially observable environments."
] | [
0,
0,
1,
0,
0
] | [
0.19999998807907104,
0.1269841194152832,
0.23999999463558197,
0,
0
] | SJeQGJrKwH | false | [
"Identify decision states (where agent can take actions that matter) without reward supervision, use it for transfer."
] |
[
"Inverse problems are ubiquitous in natural sciences and refer to the challenging task of inferring complex and potentially multi-modal posterior distributions over hidden parameters given a set of observations.",
"Typically, a model of the physical process in the form of differential equations is available but leads to intractable inference over its parameters.",
"While the forward propagation of parameters through the model simulates the evolution of the system, the inverse problem of finding the parameters given the sequence of states is not unique.",
"In this work, we propose a generalisation of the Bayesian optimisation framework to approximate inference.",
"The resulting method learns approximations to the posterior distribution by applying Stein variational gradient descent on top of estimates from a Gaussian process model.",
"Preliminary results demonstrate the method's performance on likelihood-free inference for reinforcement learning environments.",
"We consider the problem of estimating parameters θ of a physical system according to observed data y.",
"The forward model of the system is approximated by a computational model that generates dataŷ θ based on the given parameter settings θ.",
"In many cases, the corresponding likelihood function p(ŷ θ |θ) is not available, and one resorts to likelihoodfree methods, such as approximate Bayesian computation (ABC) (Robert, 2016) , conditional density estimation (Papamakarios and Murray, 2016) , etc.",
"For certain applications in robotics and reinforcement learning, however, the number of simulations might be limited by resource constraints, imposing challenges to current approaches.",
"Recent methods address the problem of efficiency in the use of simulations by either constructing conditional density estimators from joint data {θ i ,ŷ i } N i=1 , using, for example, mixture density networks (Papamakarios and Murray, 2016; Ramos et al., 2019) , or by sequentially learning approximations to the likelihood function (Gutmann and Corander, 2016; Papamakarios et al., 2019) and then running Markov chain Monte Carlo (MCMC).",
"In particular, Gutmann and Corander (2016) derive an active learning approach using Bayesian optimisation (BO) (Shahriari et al., 2016) to propose parameters for simulations.",
"Their approach reduces the number of simulator runs from the typical thousands to a few hundreds.",
"This paper investigates an approach to combine the flexible representative power of variational inference methods (Liu and Wang, 2016) with the data efficiency of Bayesian optimisation.",
"We present a Thompson sampling strategy (Russo and Van Roy, 2016) to sequentially refine variational approximations to a black-box posterior.",
"Parameters for new simulations are proposed by running Stein variational gradient descent (SVGD) (Liu and Wang, 2016) over samples from a Gaussian process (GP) (Rasmussen and Williams, 2006) .",
"The approach is also equipped with a method to optimally subsample the variational approximations for batch evaluations of the simulator models at each round.",
"In the following, we present the derivation of our approach and preliminary experimental results.",
"This paper presented a Bayesian optimisation approach to inverse problems on simulator parameters.",
"Preliminary results demonstrated the potential of the method for reinforcement learning applications.",
"In particular, results show that distributional Bayesian optimisation is able to provide a more sample-efficient approach than other likelihood-free inference methods when inferring parameters of a classical reinforcement learning environment.",
"Future work includes further scalability and theoretical analysis of the method.",
"3. OpenAI Gym: https://gym.openai.com",
"4. Code available at: https://github.com/rafaol/dbo-aabi2019 , instead."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.14999999105930328,
0.11764705181121826,
0.060606054961681366,
0.2857142686843872,
0.10810810327529907,
0.07692307233810425,
0.06896550953388214,
0,
0.12765957415103912,
0.10810810327529907,
0.057971011847257614,
0.2631579041481018,
0.1428571343421936,
0.4324324131011963,
0.19354838132858276,
0.09999999403953552,
0.1666666567325592,
0.1538461446762085,
0.4615384638309479,
0,
0.2380952388048172,
0.0833333283662796,
0,
0
] | HJxvcJhVYS | true | [
"An approach to combine variational inference and Bayesian optimisation to solve complicated inverse problems"
] |