doi (string, 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k) | id (string, 12–14) | title (string, 8–162) | summary (string, 228–1.92k) | source (string, 31) | authors (string, 7–6.97k) | categories (string, 5–107) | comment (string, 4–398, nullable) | journal_ref (string, 8–194, nullable) | primary_category (string, 5–17) | published (string, 8) | updated (string, 8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1711.02255 | 39 | Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Rezende, Danilo Jimenez and Mohamed, Shakir. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 1530–1538, 2015.
Sønderby, Casper Kaae, Raiko, Tapani, Maaløe, Lars, Sønderby, Søren Kaae, and Winther, Ole. Ladder variational autoencoders. In Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 3738–3746, 2016.
Yu, Fisher and Koltun, Vladlen. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
# A. Conditions for Invertibility
Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. | 1711.02255#39 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution; however, its expressive power is limited and so is the accuracy of the
resulting approximation. Recently, there has been a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architectures. One way to construct a flexible variational
distribution is to warp a simple density into a complex one by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of a normalizing flow and the computation cost of
an efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of the random input vector. Experiments on synthetic and real-world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 40 | Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Kingma, Diederik P., Salimans, Tim, Józefowicz, Rafal, Chen, Xi, Sutskever, Ilya, and Welling, Max. Improving variational autoencoders with inverse autoregressive flow. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 4736–4744, 2016.
Lake, Brenden M., Salakhutdinov, Ruslan, and Tenenbaum, Joshua B. One-shot learning by inverting a compositional causal process. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pp. 2526–2534, 2013.
The ConvFlow proposed in Section 3 is invertible, as long as every term in the main diagonal of the Jacobian specified in Eq. (10) is non-zero, i.e., for all i = 1, 2, ..., d, | 1711.02255#40 | Convolutional Normalizing Flows |
1711.02255 | 41 | $u_i w_1 h'(\mathrm{conv}(z, w)_i) + 1 \neq 0 \qquad (21)$
where $u_i$ is the $i$-th entry of the scaling vector $u$ and $w_1$ is the first entry of the convolution filter $w$. When using $h(x) = \tanh(x)$, since $h'(x) = 1 - \tanh^2(x) \in (0, 1]$, a sufficient condition for invertibility is to ensure $w_1 u_i > -1$. Thus a new scaling vector $u'$ can be created from the free parameter $u$ to satisfy the condition as
$$u'_i = \begin{cases} u_i & \text{if } w_1 = 0 \\ -\frac{1}{w_1} + \mathrm{softplus}(u_i) & \text{if } w_1 > 0 \\ -\frac{1}{w_1} - \mathrm{softplus}(u_i) & \text{if } w_1 < 0 \end{cases} \qquad (22)$$
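To make the condition concrete, here is a minimal NumPy sketch, assuming the ConvFlow transform has the form z' = z + u ⊙ h(conv(z, w)) implied by the Jacobian diagonal above, with a causal 1-d convolution whose first filter tap w[0] multiplies z_i; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def softplus(x):
    # numerically stable softplus: log(1 + exp(x))
    return np.logaddexp(0.0, x)

def constrain_u(u, w1):
    """Map a free vector u to u_hat with w1 * u_hat > -1 elementwise (Eq. 22 as reconstructed above),
    so every Jacobian diagonal term 1 + u_hat_i * w1 * h'(conv(z, w)_i) stays positive when h = tanh."""
    if w1 == 0:
        return u
    if w1 > 0:
        return -1.0 / w1 + softplus(u)
    return -1.0 / w1 - softplus(u)

def convflow_diag_jacobian(z, w, u_hat):
    """Diagonal Jacobian terms of z' = z + u_hat * tanh(conv(z, w)) for an assumed causal 1-d
    convolution in which w[0] multiplies z_i inside conv(z, w)_i."""
    conv = np.convolve(np.pad(z, (len(w) - 1, 0)), w, mode="valid")  # causal conv, length d
    return 1.0 + u_hat * w[0] * (1.0 - np.tanh(conv) ** 2)

# usage: any real-valued u is mapped to a scaling vector that keeps the flow invertible
rng = np.random.default_rng(0)
z, w, u = rng.normal(size=8), rng.normal(size=3), rng.normal(size=8)
u_hat = constrain_u(u, w1=w[0])
assert np.all(convflow_diag_jacobian(z, w, u_hat) > 0.0)
```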
Larochelle, Hugo and Murray, Iain. The neural autoregressive distribution estimator. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2011, Fort Lauderdale, USA, April 11-13, 2011, pp. 29–37, 2011. | 1711.02255#41 | Convolutional Normalizing Flows |
1711.01239 | 0 |
ROUTING NETWORKS: ADAPTIVE SELECTION OF NON-LINEAR FUNCTIONS FOR MULTI-TASK LEARNING
Clemens Rosenbaum College of Information and Computer Sciences University of Massachusetts Amherst 140 Governors Dr., Amherst, MA 01003 [email protected]
Tim Klinger & Matthew Riemer IBM Research AI 1101 Kitchawan Rd, Yorktown Heights, NY 10598 {tklinger,mdriemer}@us.ibm.com
# ABSTRACT | 1711.01239#0 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 1 | Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network - for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper | 1711.01239#1 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 3 | # INTRODUCTION
Multi-task learning (MTL) is a paradigm in which multiple tasks must be learned simultaneously. Tasks are typically separate prediction problems, each with their own data distribution. In an early formulation of the problem, (Caruana, 1997) describes the goal of MTL as improving generalization performance by "leveraging the domain-specific information contained in the training signals of related tasks." This means a model must leverage commonalities in the tasks (positive transfer) while minimizing interference (negative transfer). In this paper we propose a new architecture for MTL problems called a routing network, which consists of two trainable components: a router and a set of function blocks. Given an input, the router selects a function block from the set, applies it to the input, and passes the result back to the router, recursively up to a fixed recursion depth. If the router needs fewer iterations then it can decide to take a PASS action which leaves the current state unchanged. Intuitively, the architecture allows the network to dynamically self-organize in response to the input, sharing function blocks for different tasks when positive transfer is possible, and using separate blocks to prevent negative transfer. | 1711.01239#3 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 4 | The architecture is very general allowing many possible router implementations. For example, the router can condition its decision on both the current activation and a task label or just one or the other. It can also condition on the depth (number of router invocations), filtering the function module choices to allow layering. In addition, it can condition its decision for one instance on what was historically decided for other instances, to encourage re-use of existing functions for improved compression. The function blocks may be simple fully-connected neural network layers or whole
networks as long as the dimensionality of each function block allows composition with the previous function block choice. They needn't even be the same type of layer. Any neural network or part of a network can be "routed" by adding its layers to the set of function blocks, making the architecture applicable to a wide range of problems. Because the routers make a sequence of hard decisions, which are not differentiable, we use reinforcement learning (RL) to train them. We discuss the training algorithm in Section 3.1, but one way we have modeled this as an RL problem is to create a separate RL agent for each task (assuming task labels are available in the dataset). Each such task agent learns its own policy for routing instances of that task through the function blocks. | 1711.01239#4 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 5 | To evaluate we have created a "routed" version of the convnet used in (Ravi & Larochelle, 2017) and use three image classification datasets adapted for MTL learning: a multi-task MNIST dataset that we created, a Mini-imagenet data split as introduced in (Vinyals et al., 2016), and CIFAR-100 (Krizhevsky, 2009), where each of the 20 label superclasses are treated as different tasks.1 We conduct extensive experiments comparing against cross-stitch networks (Misra et al., 2016) and the popular strategy of joint training with layer sharing as described in (Caruana, 1997). Our results indicate a significant improvement in accuracy over these strong baselines with a speedup in convergence and often orders of magnitude improvement in training time over cross-stitch networks.
# 2 RELATED WORK | 1711.01239#5 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 6 | # 2 RELATED WORK
Work on multi-task deep learning (Caruana, 1997) traditionally includes significant hand design of neural network architectures, attempting to find the right mix of task-specific and shared parameters. For example, many architectures share low-level features like those learned in shallow layers of deep convolutional networks or word embeddings across tasks and add task-specific architectures in later layers. By contrast, in routing networks, we learn a fully dynamic, compositional model which can adjust its structure differently for each task.
Routing networks share a common goal with techniques for automated selective transfer learning using attention (Rajendran et al., 2017) and learning gating mechanisms between representations (Stollenga et al., 2014), (Misra et al., 2016), (Ruder et al., 2017). In the latter two papers, experiments are performed on just 2 tasks at a time. We consider up to 20 tasks in our experiments and compare directly to (Misra et al., 2016). | 1711.01239#6 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 7 | Our work is also related to mixtures of experts architectures (Jacobs et al., 1991), (Jordan & Jacobs, 1994) as well as their modern attention based (Riemer et al., 2016) and sparse (Shazeer et al., 2017) variants. The gating network in a typical mixtures of experts model takes in the input and chooses an appropriate weighting for the output of each expert network. This is generally implemented as a soft mixture decision as opposed to a hard routing decision, allowing the choice to be differentiable. Although the sparse and layer-wise variant presented in (Shazeer et al., 2017) does save some computational burden, the proposed end-to-end differentiable model is only an approximation and doesn't model important effects such as exploration vs. exploitation tradeoffs, despite their impact on the system. Mixtures of experts have recently been considered in the transfer learning setting (Aljundi et al., 2016), however, the decision process is modelled by an autoencoder-reconstruction-error-based heuristic and is not scaled to a large number of tasks. | 1711.01239#7 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 8 | In the use of dynamic representations, our work is also related to single task and multi-task models that learn to generate weights for an optimal neural network (Ha et al., 2016), (Ravi & Larochelle, 2017), (Munkhdalai & Yu, 2017). While these models are very powerful, they have trouble scaling to deep models with a large number of parameters (Wichrowska et al., 2017) without tricks to simplify the formulation. In contrast, we demonstrate that routing networks can be applied to create dynamic network architectures for architectures like convnets by routing some of their layers.
Our work extends an emerging line of recent research focused on automated architecture search. In this work, the goal is to reduce the burden on the practitioner by automatically learning black box algorithms that search for optimal architectures and hyperparameters. These include techniques based on reinforcement learning (Zoph & Le, 2017), (Baker et al., 2017), evolutionary algorithms (Miikkulainen et al., 2017), approximate random simulations (Brock et al., 2017), and adaptive growth (Cortes et al., 2016). To the best of our knowledge we are the first to apply this idea to multi-task learning. Our technique can learn to construct a very general class of architectures without the
1All dataset splits and the code will be released with the publication of this paper.
| 1711.01239#8 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 9 | 1All dataset splits and the code will be released with the publication of this paper.
need for human intervention to manually choose which parameters will be shared and which will be kept task-specific.
Also related to our work is the literature on minimizing computation cost for single-task problems by conditional routing. These include decisions trained with REINFORCE (Denoyer & Gallinari, 2014), (Bengio et al., 2015), (Hamrick et al., 2017), Q Learning (Liu & Deng, 2017), and actor-critic methods (McGill & Perona, 2017). Our approach differs however in the introduction of several novel elements. Specifically, our work explores the multi-task learning setting, it uses a multi-agent reinforcement learning training algorithm, and it is structured as a recursive decision process.
There is a large body of related work which focuses on continual learning, in which tasks are presented to the network one at a time, potentially over a long period of time. One interesting recent paper in this setting, which also uses the notion of routes ("paths"), but uses evolutionary algorithms instead of RL is Fernando et al. (2017). | 1711.01239#9 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 10 | While a routing network is a novel artificial neural network formulation, the high-level idea of task specific "routing" as a cognitive function is well founded in biological studies and theories of the human brain (Gurney et al., 2001), (Buschman & Miller, 2010), (Stocco et al., 2010).
3 ROUTING NETWORKS
[Figure 1 layout: router(v, t, 1), router(v, t, 2), router(v, t, 3) select among function blocks f11-f33 for input (v, t); the depicted route computes ŷ = f32(f21(f13(v, t))).]
Figure 1: Routing (forward) Example | 1711.01239#10 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 11 | Figure 1: Routing (forward) Example
A routing network consists of two components: a router and a set of function blocks, each of which can be any neural network layer. The router is a function which selects from among the function blocks given some input. Routing is the process of iteratively applying the router to select a sequence of function blocks to be composed and applied to the input vector. This process is illustrated in Figure 1. The input to the routing network is an instance to be classified (v, t), v ∈ R^d is a representation vector of dimension d and t is an integer task identifier. The router is given v, t and a depth (=1), the depth of the recursion, and selects from among a set of function block choices available at depth 1, {f13, f12, f11}, picking f13 which is indicated with a dashed line. f13 is applied to the input (v, t) to produce an output activation. The router again chooses a function block from those available at depth 2 (if the function blocks are of different dimensions then the router is constrained to select dimensionally matched blocks to apply) and so on. Finally the router chooses a function block from the last (classification) layer function block set and produces the classification ŷ. | 1711.01239#11 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 12 | Algorithm 1 gives the routing procedure in detail. The algorithm takes as input a vector v, task label t and maximum recursion depth n. It iterates n times choosing a function block on each iteration and applying it to produce an output representation vector. A special PASS action (see Appendix Section 7.2 for details) just skips to the next iteration. Some experiments don't require a task label and in that case we just pass a dummy value. For simplicity we assume the algorithm has access to the router function and function blocks and don't include them explicitly in the input. The router decision function router : R^d × Z+ × Z+ → {1, 2, . . . , k, PASS} (for d the input representation dimension and k the number of function blocks) maps the current representation v, task label t ∈ Z+, and current depth i ∈ Z+ to the index of the function block to route next in the ordered set function block.
Algorithm 1: Routing Algorithm
input : x, t, n: x ∈ R^d, d the representation dim; t integer task id; n max depth
output: v - the vector result of applying the composition of the selected functions to the input x
v ← x
for i in 1...n do
    a ← router(v, t, i)
    if a ≠ PASS then
        v ← function_block_a(v) | 1711.01239#12 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 13 | v ← x
for i in 1...n do
    a ← router(v, t, i)
    if a ≠ PASS then
        v ← function_block_a(v)
return v
If the routing network is run for d invocations then we say it has depth d. For N function blocks a routing network run to a depth d can select from N^d distinct trainable functions (the paths in the network). Any neural network can be represented as a routing network by adding copies of its layers as routing network function blocks. We can group the function blocks for each network layer and constrain the router to pick from layer 0 function blocks at depth 0, layer 1 blocks at depth 1, and so on. If the number of function blocks differs from layer to layer in the original network, then the router may accommodate this by, for example, maintaining a separate decision function for each depth.
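As a concrete illustration of Algorithm 1, the following PyTorch-style sketch routes a single instance through one group of dimensionally compatible function blocks per depth, with a router head conditioned on the current representation, a task embedding, and the depth; the module layout and names are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

PASS = -1  # sentinel action that leaves the representation unchanged

class RoutingNetwork(nn.Module):
    def __init__(self, dim, num_blocks, depth, num_tasks):
        super().__init__()
        # one group of candidate function blocks per routing depth (layer)
        self.blocks = nn.ModuleList(
            nn.ModuleList(nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_blocks))
            for _ in range(depth)
        )
        # a simple router head per depth: scores for each block plus PASS,
        # conditioned on the current representation and a task embedding
        self.task_emb = nn.Embedding(num_tasks, dim)
        self.router = nn.ModuleList(nn.Linear(2 * dim, num_blocks + 1) for _ in range(depth))
        self.depth = depth

    def route(self, v, t, i):
        """Return the chosen action at depth i: an index into self.blocks[i], or PASS."""
        scores = self.router[i](torch.cat([v, self.task_emb(t)], dim=-1))
        a = torch.argmax(scores, dim=-1).item()      # greedy here; sampling is used during RL training
        return PASS if a == scores.shape[-1] - 1 else a

    def forward(self, x, t):
        v, trace = x, []
        for i in range(self.depth):                  # Algorithm 1: iterate to the fixed recursion depth
            a = self.route(v, t, i)
            trace.append(a)
            if a != PASS:
                v = self.blocks[i][a](v)
        return v, trace                              # the action trace is kept for the RL update

# usage on a single instance (v, t)
net = RoutingNetwork(dim=16, num_blocks=3, depth=3, num_tasks=10)
out, trace = net(torch.randn(1, 16), torch.tensor([0]))
```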
3.1 ROUTER TRAINING USING RL
Algorithm 2: Router-Trainer: Training of a Routing Network. input: A dataset D of samples (v, t, y), v the input representation, t an integer task label, y a
ground-truth target label
for each sample s = (v, t, y) ∈ D do | 1711.01239#13 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 14 | ground-truth target label
for each sample s = (v, t, y) ∈ D do
Do a forward pass through the network, applying Algorithm 1 to sample s. Store a trace T = (S, A, R, r_final), where S = sequence of visited states (s_i); A = sequence of actions taken (a_i); R = sequence of immediate action rewards (r_i) for action a_i; and the final reward r_final. The last output is taken as the network's prediction ŷ, and the final reward r_final is +1 if the prediction ŷ is correct; -1 if not. Compute the loss L(ŷ, y) between prediction ŷ and ground truth y and backpropagate along the function blocks on the selected route to train their parameters. Use the trace T to train the router using the desired RL training algorithm.
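A minimal sketch of this Router-Trainer step for a single sample, assuming the network returns its class logits together with the log-probabilities of the sampled routing actions, and using a plain REINFORCE update with the ±1 final reward as one possible choice of RL algorithm (the paper also considers Q-Learning and multi-agent variants); immediate collaboration rewards are omitted here, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def train_step(net, optimizer, x, t, y):
    """One pass of Algorithm 2 on a single sample (x, t, y).

    Assumes net(x, t) routes the input as in Algorithm 1 and returns
    (class_logits, action_log_probs), where action_log_probs holds the log
    probability of each routing action sampled along the way.
    """
    logits, action_log_probs = net(x, t)

    # train the function blocks on the selected route with ordinary backprop
    loss = F.cross_entropy(logits, y)

    # final reward: +1 for a correct prediction, -1 otherwise
    r_final = 1.0 if logits.argmax(dim=-1).item() == y.item() else -1.0

    # REINFORCE-style router update: scale the log-probabilities of the
    # taken actions by the (undiscounted) final reward
    router_loss = -r_final * torch.stack(action_log_probs).sum()

    optimizer.zero_grad()
    (loss + router_loss).backward()
    optimizer.step()
    return loss.item(), r_final
```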
| 1711.01239#14 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 15 |
We can view routing as an RL problem in the following way. The states of the MDP are the triples (v, t, i) where v ∈ R^d is a representation vector (initially the input), t is an integer task label for v, and i is the depth (initially 1). The actions are function block choices (and PASS) in {1, ..., k, PASS} for k the number of function blocks. Given a state s = (v, t, i), the router makes a decision about which action to take. For the non-PASS actions, the state is then updated s' = (v', t, i + 1) and the process continues. The PASS action produces the same representation vector again but increments the depth, so s' = (v, t, i + 1). We train the router policy using a variety of RL algorithms and settings which we will describe in detail in the next section.
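The state transition can be written down directly; a small sketch, with the State container and names chosen here for illustration:

```python
from typing import NamedTuple

PASS = "PASS"

class State(NamedTuple):
    v: object   # current representation vector in R^d
    t: int      # task label
    i: int      # current depth (number of router invocations so far)

def step(state, action, function_blocks):
    """Apply one routing decision. Non-PASS actions transform the representation;
    PASS leaves it unchanged. Either way the depth is incremented."""
    if action == PASS:
        return State(state.v, state.t, state.i + 1)
    return State(function_blocks[action](state.v), state.t, state.i + 1)
```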
Regardless of the RL algorithm applied, the router and function blocks are trained jointly. For each instance we route the instance through the network to produce a prediction ŷ. Along the way we record a trace of the states s_i and the actions a_i taken as well as an immediate reward r_i for action a_i. When the last function block is chosen, we record a final reward which depends on the prediction ŷ and the true label y. | 1711.01239#15 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 16 | [Figure 2 layout: for the routing example of Figure 1, the loss L(ŷ, y) with ŷ = f32(f21(f13(v, t))) backpropagates gradients ∂L/∂f13, ∂L/∂f21, ∂L/∂f32 along the selected route, while actions a1, a2, a3 receive rewards r1, r2, r3 and a final reward r_final.]
Figure 2: Training (backward) Example
We train the selected function blocks using SGD/backprop. In the example of Figure 1 this means computing gradients for f32, f21 and f13. We then use the computed trace to train the router using an RL algorithm. The high-level procedure is summarized in Algorithm 2 and illustrated in Figure 2. To keep the presentation uncluttered we assume the RL training algorithm has access to the router function, function blocks, loss function, and any specific hyper-parameters such as discount rate needed for the training and don't include them explicitly in the input.
# 3.1.1 REWARD DESIGN
A routing network uses two kinds of rewards: immediate action rewards r_i given in response to an action a_i and a final reward r_final, given at the end of the routing. The final reward is a function
| 1711.01239#16 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
1711.01239 | 17 |
of the network's performance. For the classification problems focused on in this paper, we set it to +1 if the prediction was correct (ŷ = y), and -1 otherwise. For other domains, such as regression domains, the negative loss (-L(ŷ, y)) could be used.
We experimented with an immediate reward that encourages the router to use fewer function blocks when possible. Since the number of function blocks per-layer needed to maximize performance is not known ahead of time (we just take it to be the same as the number of tasks), we wanted to see whether we could achieve comparable accuracy while reducing the number of function blocks ever chosen by the router, allowing us to reduce the size of the network after training. We experimented with two such rewards, multiplied by a hyper-parameter ρ ∈ [0, 1]: the average number of times that block was chosen by the router historically and the average historical probability of the router choosing that block. We found no significant difference between the two approaches and use the average probability in our experiments. We evaluated the effect of ρ on final performance and report the results in Figure 12 in the appendix. We see there that generally ρ = 0.0 (no collaboration reward) or a small value works best and that there is relatively little sensitivity to the choice in this range. | 1711.01239#17 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning |
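A sketch of the two reward signals described here, assuming the "average historical probability" is tracked as a running mean of the router's per-block probabilities; ρ, the bookkeeping, and all names are illustrative.

```python
import numpy as np

class CollaborationReward:
    """Immediate reward: rho * (average historical probability of choosing the block)."""
    def __init__(self, num_blocks, rho=0.0):
        self.rho = rho
        self.avg_prob = np.zeros(num_blocks)  # running average of the router's probability per block
        self.count = 0

    def immediate(self, action, action_probs):
        # update the running average with the router's current probability vector
        self.count += 1
        self.avg_prob += (action_probs - self.avg_prob) / self.count
        return self.rho * self.avg_prob[action]

def final_reward(y_pred, y_true):
    # classification setting: +1 if the prediction is correct, -1 otherwise
    return 1.0 if y_pred == y_true else -1.0
```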
To train the router we evaluate both single-agent and multi-agent RL strategies. Figure 3 shows three variations which we consider. In Figure 3(a) there is just a single agent which makes the routing decision. It can be trained using either policy gradient (PG) or Q-Learning. Figure 3(b) shows a multi-agent approach. Here there are a fixed number of agents and a hard rule which assigns the input instance to the agent responsible for routing it. In our experiments we create one agent per task and use the input task label as an index to the agent responsible for routing that instance. Figure 3(c) shows a multi-agent approach in which there is an additional agent, denoted αd and called a dispatching agent, which learns to assign the input to an agent instead of using a fixed rule. For both multi-agent scenarios we additionally experiment with a MARL algorithm called the Weighted Policy Learner (WPL).
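A minimal sketch of how one instance is assigned to a routing agent under the two multi-agent variants; the function name select_agent and the dispatcher interface are assumptions for illustration.

```python
def select_agent(agents, task_label=None, dispatcher=None, representation=None):
    """Pick the routing agent for one instance (illustrative sketch).

    Figure 3(b): a hard rule indexes the agent by the task label.
    Figure 3(c): a dispatching agent predicts which agent to use.
    """
    if dispatcher is None:
        # fixed rule: one agent per task, indexed by the task label
        return agents[task_label]
    # learned rule: the dispatcher chooses an agent from the input representation
    agent_index = dispatcher.choose(representation)
    return agents[agent_index]
```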
We experiment with storing the policy both as a table and in the form of an approximator. The tabular representation has the invocation depth as its row dimension and the function block as its column dimension, with the entries containing the probability of choosing a given function block at a given depth. The approximator representation can consist of either one MLP that is passed the depth (represented as a 1-hot vector), or a vector of d MLPs, one for each decision depth.
Both the Q-Learning and policy gradient algorithms are applicable with tabular and function-approximation policy representations. We use REINFORCE (Williams, 1992) to train both the approximation and tabular representations. For Q-Learning the table stores the Q-values in its entries. We use vanilla Q-Learning (Watkins, 1989) to train the tabular representation and train the approximators to minimize the ℓ2 norm of the temporal-difference error.
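As a concrete illustration, the sketch below keeps a (depth × function block) Q-table for a single routing agent and applies the vanilla Q-Learning backup; the hyper-parameter names and the epsilon-greedy selection rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class TabularQRouter:
    """Sketch of a tabular Q-Learning routing agent (rows: depth, cols: function block)."""

    def __init__(self, max_depth, num_blocks, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.q = np.zeros((max_depth, num_blocks))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, depth):
        # epsilon-greedy selection of the next function block at this depth
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[depth]))

    def update(self, depth, block, reward, next_depth, done):
        # vanilla Q-Learning backup on the table entry for (depth, block)
        target = reward if done else reward + self.gamma * np.max(self.q[next_depth])
        self.q[depth, block] += self.alpha * (target - self.q[depth, block])
```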
Implementing the router decision policy using multiple agents turns the routing problem into a stochastic game, which is a multi-agent extension of an MDP. In stochastic games multiple agents interact in the environment, and the expected return for any given policy may change without any action on that agent's part. In this view incompatible agents need to compete for blocks to train, since negative transfer makes collaboration unattractive, while compatible agents can gain by
sharing function blocks. The agents' (locally) optimal policies will correspond to the game's Nash equilibrium (a set of policies, one per agent, such that each agent's expected return would be lower if that agent unilaterally changed its policy).
For routing networks, the environment is non-stationary since the function blocks are being trained as well as the router policy. This makes training considerably more difficult than in the single-agent (MDP) setting. We have experimented with single-agent policy gradient methods such as REINFORCE but find they are less well adapted to the changing environment and to changes in other agents' behavior, which may degrade their performance in this setting.
One MARL algorithm specifically designed to address this problem, and which has also been shown to converge in non-stationary environments, is the Weighted Policy Learner (WPL) algorithm (Abdallah & Lesser, 2006), shown in Algorithm 3. WPL is a PG algorithm designed to dampen oscillation and push the agents to converge more quickly. This is done by scaling the gradient of the expected return for an action a according to the probability of taking that action, π(a), if the gradient is positive, or 1 − π(a) if the gradient is negative. Intuitively, this has the effect of slowing down the learning rate when the policy is moving away from a Nash equilibrium strategy and increasing it when it approaches one. The full WPL algorithm is shown in Algorithm 3. It is assumed that the historical average return R̄_i for each action a_i is initialized to 0 before the start of training. The function simplex-projection projects the updated policy values to make them a valid probability distribution; the projection is defined as clip(π) / Σ clip(π), where clip(x) = max(0, min(1, x)). The states S in the trace are not used by the WPL algorithm.
Algorithm 3: Weighted Policy Learner
input : a trace T = (S, A, R, r_final); n, the maximum depth; R̄, the historical average returns (initialized to 0 at the start of training); γ, the discount factor; and λ_π, the policy learning rate
output: an updated router policy π

for each action a_i ∈ A do
    Compute the return:         R_i ← r_final + Σ_{j=i}^{n} γ^j r_j
    Update the average return:  R̄_i ← (1 − λ_π) R̄_i + λ_π R_i
    Compute the gradient:       Δ(a_i) ← R_i − R̄_i
    Update the policy:
        if Δ(a_i) < 0 then  Δ(a_i) ← Δ(a_i) (1 − π(a_i))
        else                Δ(a_i) ← Δ(a_i) π(a_i)
π ← simplex-projection(π + λ_π Δ)
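For concreteness, a minimal NumPy sketch of the update described in Algorithm 3, assuming a single agent with a tabular policy; the function names, the handling of the reward trace, and the eps guard in the projection are illustrative assumptions.

```python
import numpy as np

def simplex_projection(pi, eps=1e-8):
    # clip(pi) / sum(clip(pi)), with clip(x) = max(0, min(1, x)); eps guards an all-zero vector
    clipped = np.clip(pi, 0.0, 1.0)
    return clipped / max(clipped.sum(), eps)

def wpl_update(pi, avg_return, rewards, final_reward, actions, gamma=0.99, lr=0.05):
    """One WPL policy update for a single routing agent (illustrative sketch).

    pi:          current policy over function blocks, shape (num_actions,)
    avg_return:  historical average returns R-bar, updated in place
    rewards:     immediate rewards r_j collected along the route (length = depth)
    actions:     indices a_i of the actions taken in the trace
    """
    delta = np.zeros_like(pi)
    n = len(actions)
    for i, a in enumerate(actions):
        # discounted return for this decision, terminated by the final reward
        ret = final_reward + sum(gamma ** j * rewards[j] for j in range(i, n))
        avg_return[a] = (1 - lr) * avg_return[a] + lr * ret       # update R-bar
        grad = ret - avg_return[a]                                # gradient estimate
        # negative gradients are scaled by (1 - pi), positive ones by pi, as in the text
        delta[a] = grad * (1 - pi[a]) if grad < 0 else grad * pi[a]
    return simplex_projection(pi + lr * delta)
```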
Details, including convergence proofs and more examples giving the intuition behind the algorithm, can be found in (Abdallah & Lesser, 2006). A longer explanation of the algorithm can be found in Section 7.4 in the appendix. The WPL update is defined only for the tabular setting; adapting it to function approximators is future work.
As we have described it, the training of the router and function blocks is performed independently after computing the loss. We have also experimented with adding the gradients from the router choices Δ(a_i) to those for the function blocks which produce their input. We found no advantage but leave a more thorough investigation for future work.
# 4 QUANTITATIVE RESULTS
We experiment with three datasets: multi-task versions of MNIST (MNIST-MTL) (Lecun et al., 1998), Mini-ImageNet (MIN-MTL) (Vinyals et al., 2016) as introduced by Ravi & Larochelle (2017), and CIFAR-100 (CIFAR-MTL) (Krizhevsky, 2009), where we treat the 20 superclasses as tasks. In the binary MNIST-MTL dataset, the task is to differentiate instances of a given class c from non-instances. We create 10 tasks, and for each we use 1k instances of the positive class c and 1k each of the remaining 9 negative classes, for a total of 10k instances per task during training; we then test on 200 samples per task (2k samples in total). MIN-MTL is a smaller version of ImageNet (Deng et al., 2009) which is easier to train in reasonable time. For mini-ImageNet we randomly choose 50 labels and create tasks from 10 disjoint random subsets of 5 labels each chosen from these. Each label has 800 training instances and 50 testing instances, so 4k training and 250 testing instances per task. For all 10 tasks we have a total of 40k training instances. Finally,
CIFAR-100 has coarse and fine labels for its instances. We follow existing work (Krizhevsky, 2009), creating one task for each of the 20 coarse labels and including 500 instances for each of the corresponding fine labels. There are 20 tasks with a total of 2.5k instances per task; 2.5k for training and 500 for testing. All results are reported on the test set and are averaged over 3 runs. The data are summarized in Table 1.
Each of these datasets has interesting characteristics which challenge the learning in different ways. CIFAR-MTL is a "natural" dataset whose tasks correspond to human categories. MIN-MTL is randomly generated so will have less task coherence. This makes positive transfer more difficult to achieve and negative transfer more of a problem. And MNIST-MTL, while simple, has the difficult property that the same instance can appear with different labels in different tasks, causing interference. For example, in the "0 vs. other digits" task, "0" appears with a positive label, but in the "1 vs. other digits" task it appears with a negative label.
Our experiments are conducted on a convnet architecture (SimpleConvNet) which appeared recently in (Ravi & Larochelle, 2017). This model has 4 convolutional layers, each consisting of a 3x3 convolution with 32 filters, followed by batch normalization and a ReLU. The convolutional layers are followed by 3 fully connected layers with 128 hidden units each. Our routed version of the network routes the 3 fully connected layers, and for each routed layer we supply one randomly initialized function block per task in the dataset. When we use neural net approximators for the router agents they are always 2-layer MLPs with a hidden dimension of 64. A state (v, t, i) is encoded for input to the approximator by concatenating v with a 1-hot representation of t (if used). That is, encoding(s) = concat(v, one_hot(t)).
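A small sketch of this state encoding; the helper name encode_state is illustrative.

```python
import numpy as np

def encode_state(v, task_id, num_tasks):
    """Concatenate the activation vector v with a 1-hot encoding of the task label."""
    one_hot = np.zeros(num_tasks)
    one_hot[task_id] = 1.0
    return np.concatenate([v, one_hot])
```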
Dataset      # Training   # Testing
CIFAR-MTL    50k          10k
MIN-MTL      40k          2.5k
MNIST-MTL    100k         2k

Table 1: Dataset training and testing splits
We did a parameter sweep to find the best learning rate and ρ value for each algorithm on each dataset. We use ρ = 0.0 (no collaboration reward) for CIFAR-MTL and MIN-MTL and ρ = 0.3 for MNIST-MTL. The learning rate is initialized to 10⁻² and annealed by dividing by 10 every 20 epochs. We tried both regular SGD as well as Adam (Kingma & Ba, 2014), but chose SGD as it resulted in marginally better performance. The SimpleConvNet has batch normalization layers but we use no dropout.
For one experiment, we dedicate a special "PASS" action to allow the agents to skip layers during training, which leaves the current state unchanged (routing-all-fc recurrent/+PASS). A detailed description of the PASS action is provided in the Appendix in Section 7.2.
All data are presented in Table 2 in the Appendix.
In the first experiment, shown in Figure 4, we compare different RL training algorithms on CIFAR-MTL. We compare five algorithms: MARL:WPL; a single-agent REINFORCE learner with a separate approximation function per layer; an agent-per-task REINFORCE learner which maintains a separate approximation function for each layer; an agent-per-task Q-learner with a separate approximation function per layer; and an agent-per-task Q-learner with a separate table for each layer. The best performer is the WPL algorithm, which outperforms the nearest competitor, tabular Q-Learning, by about 4%. We can see that (1) the WPL algorithm works better than a similar vanilla PG, which has trouble learning; (2) having multiple agents works better than having a single agent; and (3) the tabular versions, which just use the task and depth to make their predictions, work better here than the approximation versions, which additionally use the representation vector to predict the next action.
The next experiment compares the best performing algorithm, WPL, against other routing approaches, including the already introduced REINFORCE: single agent (for which WPL is not applicable). All of these algorithms route the fully connected layers of the SimpleConvNet using the layering approach we discussed earlier. To make the next comparison clear we rename MARL:WPL to routing-all-fc in Figure 5 to reflect the fact that it routes all the fully connected layers of the SimpleConvNet, and rename REINFORCE: single agent to routing-all-fc single agent. We compare against several other approaches. One approach, routing-all-fc-recurrent/+PASS, has the same setup as routing-all-fc but does not constrain the router to pick only from layer 0 function blocks at depth 0, etc.; it is allowed to choose any function block from two of the layers (since the first two routed layers have identical input and output dimensions; the last is the classification layer).
Figure 4: Influence of the RL algorithm on CIFAR-MTL. Detailed descriptions of the implementation of each approach can be found in the Appendix in Section 7.3.
Figure 5: Comparison of Routing Architectures on CIFAR-MTL. Implementation details of each approach can be found in the Appendix in Section 7.3.
Another approach, soft-mixture-fc, is a soft version of the router architecture. This soft version uses the same function blocks as the routed version, but replaces the hard selection with a trained softmax attention (see the discussion below on cross-stitch networks for the details). We also compare against the single-agent architecture shown in Figure 3(a), called routing-all-fc single agent, and the dispatched architecture shown in Figure 3(c), called routing-all-fc dispatched. Neither of these approached the performance of the per-task agents. The best performer by a large margin is routing-all-fc, the fully routed WPL algorithm.
We next compare routing-all-fc on different domains against the cross-stitch networks of Misra et al. (2016) and two challenging baselines, task-specific-1-fc and task-specific-all-fc, described below.
Cross-stitch networks (Misra et al., 2016) are a kind of linear-combination model for multi-task learning. They maintain one model per task with a shared input layer, and "cross stitch" connection layers, which allow sharing between tasks. Instead of selecting a single function block in the next layer to route to, a cross-stitch network routes to all the function blocks simultaneously, with the input for a function block i in layer l given by a linear combination of the activations computed by all the function blocks of layer l−1. That is: input_{l,i} = Σ_j w^l_{ij} v_{l−1,j}, for learned weights w^l_{ij} and layer l−1 activations v_{l−1,j}. For our experiments, we add a cross-stitch layer to each of the routed layers of SimpleConvNet. We additionally compare to a similar "soft routing" version, soft-mixture-fc, in Figure 5. Soft routing uses a softmax to normalize the weights used to combine the activations of previous layers, and it shares parameters for a given layer so that w^l_{ij} = w^l_{i'j} for all i, i', l.
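The sketch below contrasts the hard routing decision with the cross-stitch / soft-mixture combination described above; the shapes, weight handling, and function names are illustrative assumptions.

```python
import numpy as np

def hard_route(blocks, prev_activation, chosen_block):
    """Routing network: only the selected block is applied (and backpropagated through)."""
    return blocks[chosen_block](prev_activation)

def cross_stitch_layer(blocks, prev_activations, stitch_weights):
    """Cross-stitch: block i receives the learned linear combination sum_j w[i, j] * v[j]
    of all previous-layer activations, so every block runs on every input."""
    mixed_inputs = stitch_weights @ prev_activations             # (num_blocks, dim)
    return np.stack([block(v) for block, v in zip(blocks, mixed_inputs)])

def soft_mixture_weights(raw_weights):
    """soft-mixture-fc variant: combination weights are softmax-normalized and shared across i."""
    exp = np.exp(raw_weights - raw_weights.max())
    return exp / exp.sum()
```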
Figure 6: Results on domain CIFAR-MTL
Figure 7: Results on domain MIN-MTL (mini-ImageNet)
The task-specific-1-fc baseline has a separate last fully connected layer for each task and shares the rest of the layers for all tasks. The task-specific-all-fc baseline has a separate set of all the fully connected layers for each task. These baseline architectures allow considerable sharing of parameters but also grant the network private parameters for each task to avoid interference. However, unlike routing networks, the choice of which parameters are shared for which tasks, and which parameters are task-private, is made statically in the architecture, independent of task.
The results are shown in Figures 6, 7, and 8. In each case the routing net routing-all-fc performs consistently better than the cross-stitch networks and the baselines. On CIFAR-MTL, the routing net beats cross-stitch networks by 7% and the next closest baseline, task-specific-1-fc, by 11%. On MIN-MTL, the routing net beats cross-stitch networks by about 2% and the nearest baseline, task-specific-1-fc, by about 6%. We surmise that the results are better on CIFAR-MTL because the task instances have more in common, whereas the MIN-MTL tasks are randomly constructed, making sharing less profitable.
On MNIST-MTL the random baseline is 90%. We experimented with several learning rates but were unable to get the cross-stitch networks to train well here. Routing nets beat the cross-stitch networks by 9% and the nearest baseline (task-specific-all-fc) by 3%. The soft version also had trouble learning on this dataset.
In all these experiments routing makes a significant difference over both cross-stitch networks and the baselines, and we conclude that a dynamic policy which learns the function blocks to compose on a per-task basis yields better accuracy and sharper convergence than simple static sharing baselines or a soft attention approach.
In addition, router training is much faster. On CIFAR-MTL, for example, training time on a stable compute cluster was reduced from roughly 38 hours to 5.6, an 85% improvement. We have conducted a set of scaling experiments to compare the training computation of routing networks and cross-stitch networks trained with 2, 3, 5, and 10 function blocks. The results are shown in the appendix in Figure 15. Routing networks consistently perform better than cross-stitch networks and the baselines across all these problems. Adding function blocks has no apparent effect on the computation involved in training routing networks on a dataset of a given size. Cross-stitch networks, on the other hand, have a soft routing policy that scales computation linearly with the number of function blocks. Because the soft policy backpropagates through all function blocks while the hard routing policy backpropagates only through the selected block, the hard policy can much more easily scale to many-task learning scenarios that require many diverse types of functional primitives.
To explore why the multi-agent approach seems to do better than the single-agent one, we manually compared their policy dynamics for several CIFAR-MTL examples. For these experiments ρ = 0.0, so there is no collaboration reward which might encourage less diversity in the agent choices. In the cases we examined we found that the single agent often chose just 1 or 2 function blocks at each depth, and then routed all tasks to those. We suspect that there is simply too little signal available to the agent in the early, random stages, and once a bias is established its decisions suffer from a lack of diversity.
The routing network, on the other hand, learns a policy which, unlike the baseline static models, partitions the network quite differently for each task, and also achieves considerable diversity in its choices, as can be seen in Figure 11. This figure shows the routing decisions made over the whole MNIST-MTL dataset. Each task is labeled at the top and the decisions for each of the three routed layers are shown below. We believe that because the routing network has separate policies for each task, it is less sensitive to a bias for one or two function blocks and each agent learns more independently what works for its assigned task.
Figure 8: Results on domain MNIST-MTL
Figure 9: The policies of all agents for the first function-block layer for the first 100 samples of each task of MNIST-MTL

Figure 10: The probabilities of all agents of taking block 7 for the first 100 samples of each task (totalling 1000 samples) of MNIST-MTL
# 5 QUALITATIVE RESULTS
To better understand the agent interaction we have created several views of the policy dynamics. First, in Figure 9, we chart the policy over time for the first decision. Each rectangle labeled Ti on the left represents the evolution of the agent's policy for that task. For each task, the horizontal axis is the number of samples per task and the vertical axis is actions (decisions). Each vertical slice shows the probability distribution over actions after having seen that many samples of its task, with darker shades indicating higher probability. From this picture we can see that, in the beginning, all task agents have high entropy. As more samples are processed, each agent develops several candidate function blocks to use for its task, but eventually all agents converge to close to 100% probability for one particular block. In the language of games, the agents find a pure strategy for routing.
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 39 | In the next view of the dynamics, we pick one partic- ular function block (block 7) and plot the probabil- ity, for each agent, of choosing that block over time. The horizontal axis is time (sample) and the verti- cal axis is the probability of choosing block 7. Each colored curve corresponds to a different task agent. Here we can see that there is considerable oscillation over time until two agents, pink and green, emerge as the âvictorsâ for the use of block 7 and each assign close to 100% probability for choosing it in routing their respective tasks. It is interesting to see that the eventual winners, pink and green, emerge earlier as well as strongly interested in block 7. We have no- ticed this pattern in the analysis of other blocks and speculate that the agents who want to use the block are being pulled away from their early Nash equilibrium as other agents try to train the block away.
# Figure 11: An actual routing map for MNIST-MTL.
Finally, in Figure 11 we show a map of the routing for MNIST-MTL. Here tasks are at the top and each layer below represents one routing decision. Conventional wisdom has it that networks will benefit from sharing early, using the first layers for common representations, diverging later to accommodate differences in the tasks. This is the setup for our baselines. It is interesting to see
10 | 1711.01239#39 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 40 | 10
that this is not what the network learns on its own. Here we see that the agents have converged on a strategy which first uses 7 function blocks, then compresses to just 4, then again expands to use 5. It is not clear if this is an optimal strategy but it does certainly give improvement over the static baselines.
# 6 FUTURE WORK
We have presented a general architecture for routing and a multi-agent router training algorithm which performs significantly better than cross-stitch networks, baselines, and other single-agent approaches. The paradigm can easily be applied to a state-of-the-art network to allow it to learn to dynamically adjust its representations.
As described in the section on Routing Networks, the state space to be learned grows exponentially with the depth of the routing, making it challenging to scale the routing to deeper networks in their entirety. It would be interesting to try hierarchical RL techniques (Barto & Mahadevan (2003)) here.
Our most successful experiments have used the multi-agent architecture with one agent per task, trained with the Weighted Policy Learner algorithm (Algorithm 3). Currently this approach is tabular but we are investigating ways to adapt it to use neural net approximators. | 1711.01239#40 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 41 | We have also tried routing networks in an online setting, training over a sequence of tasks for few shot learning. To handle the iterative addition of new tasks we add a new routing agent for each and overï¬t it on the few shot examples while training the function modules with a very slow learning rate. Our results so far have been mixed, but this is a very useful setting and we plan to return to this problem.
# REFERENCES
Sherief Abdallah and Victor Lesser. Learning the task allocation game. In Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, pp. 850–857. ACM, 2006. URL http://dl.acm.org/citation.cfm?id=1160786.
Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. arXiv preprint arXiv:1611.06194, 2016.
Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architec- tures using reinforcement learning. ICLR, 2017. | 1711.01239#41 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 42 | Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341â379, 2003. URL http://link.springer. com/article/10.1023/A:1025696116075.
Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. CoRR, abs/1511.06297, 2015. URL http://arxiv.org/ abs/1511.06297.
Andrew Brock, Theodore Lim, James M. Ritchie, and Nick Weston. SMASH: one-shot model archi- tecture search through hypernetworks. CoRR, abs/1708.05344, 2017. URL http://arxiv. org/abs/1708.05344.
Timothy J Buschman and Earl K Miller. Shifting the spotlight of attention: evidence for discrete computations in cognition. Frontiers in human neuroscience, 4, 2010.
Rich Caruana. Multitask learning. Machine Learning, 28(1):41â75, Jul 1997. ISSN 1573-0565. doi: 10.1023/A:1007379606734. URL https://doi.org/10.1023/A:1007379606734. | 1711.01239#42 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 43 | Corinna Cortes, Xavi Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri, and Scott Yang. Adanet: Adap- tive structural learning of artiï¬cial neural networks. arXiv preprint arXiv:1607.01097, 2016.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
Ludovic Denoyer and Patrick Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014.
Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A. Rusu, Alexander Pritzel, and Daan Wierstra. Pathnet: Evolution channels gradient descent in super neural networks. CoRR, abs/1701.08734, 2017. URL http://arxiv.org/abs/1701. 08734.
Kevin Gurney, Tony J Prescott, and Peter Redgrave. A computational model of action selection in the basal ganglia. i. a new functional anatomy. Biological cybernetics, 84(6):401â410, 2001. | 1711.01239#43 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 44 | David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
Jessica B Hamrick, Andrew J Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, and Peter W Battaglia. Metacontrol for adaptive imagination-based optimization. ICLR, 2017.
Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural computation, 3(1):79â87, 1991.
Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the em algorithm. Neural computation, 6(2):181â214, 1994.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.
Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
Yann Lecun, Lon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pp. 2278â2324, 1998. | 1711.01239#44 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 45 | Lanlan Liu and Jia Deng. Dynamic deep neural networks: Optimizing accuracy-efï¬ciency trade-offs by selective execution. arXiv preprint arXiv:1701.00299, 2017.
Mason McGill and Pietro Perona. Deciding how to decide: Dynamic routing in artificial neural networks. International Conference on Machine Learning, 2017.
Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Dan Fink, Olivier Francon, Bala Raju, Arshak Navruzyan, Nigel Duffy, and Babak Hodjat. Evolving deep neural networks. arXiv preprint arXiv:1703.00548, 2017.
Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch networks for multi-task learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3994–4003, 2016.
Tsendsuren Munkhdalai and Hong Yu. Meta networks. International Conference on Machine Learn- ing, 2017. | 1711.01239#45 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 46 | Tsendsuren Munkhdalai and Hong Yu. Meta networks. International Conference on Machine Learn- ing, 2017.
Janarthanan Rajendran, P. Prasanna, Balaraman Ravindran, and Mitesh M. Khapra. ADAAPT: attend, adapt, and transfer: Attentative deep architecture for adaptive policy transfer from multiple sources in the same domain. ICLR, abs/1510.02879, 2017. URL http://arxiv.org/abs/ 1510.02879.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. ICLR, 2017.
Matthew Riemer, Aditya Vempaty, Flavio Calmon, Fenno Heath, Richard Hull, and Elham Khabiri. Correcting forecasts with multifactor neural attention. In International Conference on Machine Learning, pp. 3010â3019, 2016.
Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. Sluice networks: Learning what to share between loosely related tasks. arXiv preprint arXiv:1705.08142, 2017. | 1711.01239#46 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 47 | Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. ICLR, 2017.
Andrea Stocco, Christian Lebiere, and John R Anderson. Conditional routing of information to the cortex: A model of the basal ganglia's role in cognitive coordination. Psychological review, 117(2):541, 2010.
Marijn F Stollenga, Jonathan Masci, Faustino Gomez, and Juergen Schmidhuber. Deep networks with internal selective attention through feedback connections. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Pro- cessing Systems 27, pp. 3545â3553. Curran Associates, Inc., 2014.
Oriol Vinyals, Charles Blundell, Timothy P. Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. CoRR, abs/1606.04080, 2016. URL http://arxiv. org/abs/1606.04080. | 1711.01239#47 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 48 | Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, Kingâs College, Cambridge, 1989.
Olga Wichrowska, Niru Maheswaranathan, Matthew W Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Nando de Freitas, and Jascha Sohl-Dickstein. Learned optimizers that scale and generalize. arXiv preprint arXiv:1703.04813, 2017.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256, 1992. ISSN 0885-6125.
Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. ICLR, 2017.
# 7 APPENDIX
7.1 IMPACT OF RHO
[Figure 12 plot: accuracy in % vs. training epoch]
Figure 12: Influence of the "collaboration reward" ρ on CIFAR-MTL. The architecture is routing-all-fc with WPL routing agents. | 1711.01239#48 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 49 | [Figure 13 plot: training time per task vs. number of tasks for routing and cross-stitch networks]
Figure 13: Comparison of per-task training cost for cross-stitch and routing networks. We add a function block per task and normalize the training time per epoch by dividing by the number of tasks to isolate the effect of adding function blocks on computation.
7.2 THE PASS ACTION
In routing networks, some resulting sets of function blocks can be applied repeatedly. While there might be other constraints, the prevalent one is dimensionality: input and output dimensions need to match. Applied to the SimpleConvNet architecture used throughout the paper, this means that of the fc layers - (convolution → 48), (48 → 48), (48 → #classes) - the middle transformation can be applied an arbitrary number of times. In this case, the routing network becomes fully recurrent and the PASS action is applicable. This allows the network to shorten the recursion depth.
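Below is a minimal sketch of this recursion with a PASS action; the router interface, block containers, and names are illustrative assumptions rather than the paper's implementation.

```python
# A minimal sketch of recursive routing with a PASS action, assuming fc blocks
# whose input and output dimensions match (the 48 -> 48 transformations above).
# `router`, `blocks_48_to_48`, and `output_block` are assumed to be provided.
PASS = -1

def route(x, router, blocks_48_to_48, output_block, task_id, max_depth=3):
    for depth in range(max_depth):
        action = router.decide(x, task_id, depth)  # index of a 48 -> 48 block, or PASS
        if action == PASS:                         # PASS shortens the recursion depth
            break
        x = blocks_48_to_48[action](x)             # dimensions preserved, so recursion is valid
    return output_block(x)                         # final 48 -> #classes layer
```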
7.3 OVERVIEW OF IMPLEMENTATIONS
We have tested 9 different implementation variants of the routing architectures. The architectures are summarized in Tables 3 and 4. The columns are: | 1711.01239#49 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 50 | 7.3 OVERVIEW OF IMPLEMENTATIONS
We have tested 9 different implementation variants of the routing architectures. The architectures are summarized in Tables 3 and 4. The columns are:
#Agents refers to how many agents are used to implement the router. In most of the experiments, each router consists of one agent per task. However, as described in 3.1, there are implementations with 1 and #tasks + 1 agents.
14 | 1711.01239#50 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 51 | Epoch                                  1    5    10   20   50
RL (Figure 4)
  REINFORCE: approx                            20   20   20   20   20
  Qlearning: approx                            20   20   20   20   24
  Qlearning: table                             20   36   47   50   55
  MARL-WPL: table                              31   53   57   58
arch (Figure 5)
  routing-all-fc                               31   53   57   58
  routing-all-fc recursive                     31   43   45   48
  routing-all-fc dispatched                    20   23   28   37
  soft mixture-all-fc                          20   24   27   30
  routing-all-fc single agent                  20   23   33   42
CIFAR (Figure 6)
  routing-all-fc                               31   53   57   58
  task specific-all-fc                         21   29   33   36
  task specific-1-fc                           27   34   39   42
  cross stitch-all-fc                          26   37   42   49
MIN (Figure 7)
  routing-all-fc                               34   54   57   55
  task specific-all-fc                         22   30   37   43
  task specific-1fc                            29   38   43   46
  cross-stitch-all-fc                          29   41   48   53
MNIST (Figure 8)
  routing-all-fc                               90   90   98   99
  task specific-all-fc                         90   91   94   95
  task specific-1fc                            90   90   91   92
  soft mixture-all-fc                          90   90   90   90
  cross-stitch-all-fc                          90   90   90   90 | 1711.01239#51 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 53 | Table 2: Numeric results (in % accuracy) for Figures 4 through 8
[Plots (a) and (b): accuracy in % vs. training epoch for routing-all-fc, task specific-all-fc, and cross-stitch-all-fc]
(a) first 2 tasks (b) first 3 tasks
[Plots (c) and (d): accuracy in % vs. training epoch for routing-all-fc, task specific-all-fc, and cross-stitch-all-fc]
(c) first 5 tasks
(d) first 10 tasks
Figure 15: Results on the first n tasks of CIFAR-MTL
15 | 1711.01239#53 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 54 | (d) ï¬rst 10 tasks
Figure 15: Results on the first n tasks of CIFAR-MTL
Name         Num Agents   Policy Representation                        Part of State = (v, t, d) Used
MARL:WPL     Num Tasks    Tabular (num layers x num function blocks)   t, d
REINFORCE    Num Tasks    Vector (num layers) of approx functions      v, t, d
Q-Learning   Num Tasks    Vector (num layers) of approx functions      v, t, d
Q-Learning   Num Tasks    Tabular (num layers x num function blocks)   t, d
Table 3: Implementation details for Figure 4. All approx functions are 2 layer MLPs with a hidden dim of 64.
Name                          Num Agents      Policy Representation                                   Part of State = (v, t, d) Used
routing-all-fc                Num Tasks       Tabular (num layers x num function blocks)              t, d
routing-all-fc non-layered    Num Tasks       Tabular (num layers x num function blocks)              t, d
soft-routing-all-fc           Num Tasks       Vector (num layers) of approx functions                 v, t, d
dispatched-routing-all-fc     Num Tasks + 1   Vector (num layers) of approx functions + dispatcher    v, t, d
single-agent-routing-all-fc   1               Vector (num layers) of approx functions                 v, t, d | 1711.01239#53 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 55 | Part of State = (v, t, d) Used t, d t, d v, t, d v, t, d v, t, d
Table 4: Implementation details for Figure 5. All approx functions are 2 layer MLPs with a hidden dim of 64.
Policy Representation There are two dominant representation variations, as described in 3.1. In the ï¬rst, the policy is stored as a table. Since the table needs to store values for each of the different layers of the routing network, it is of size num layersà num actions. In the second, it is represented either as vector of MLPâs with a hidden layer of dimension 64, one for each layer of the routing network. In this case the input to the MLP is the representation vector v concatenated with a one-hot representation of the task identiï¬er. | 1711.01239#55 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 56 | Policy Input describes which parts of the state are used in the decision of the routing action. For tabular policies, the task is used to index the agent responsible for handling that task. Each agent then uses the depth as a row index into into the table. For approximation-based policies, there are two variations. For the single agent case the depth is used to index an approximation function which takes as input concat(v, one-hot(t)). For the multi-agent (non-dispatched) case the task label is used to index the agent and then the depth is used to index the corresponding approximation function for that depth, which is given concat(v, one-hot(t)) as input. In the dispatched case, the dispatcher is given concat(v, one-hot(t)) and predicts an agent index. That agent uses the depth to ï¬nd the approximation function for that depth which is then given concat(v, one-hot(t)) as input.
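The sketch below illustrates the approximation-based representation and input conventions described in this appendix for the multi-agent (non-dispatched) case: the task id selects the agent, the depth selects that agent's per-layer MLP (hidden dimension 64), and the MLP scores actions from concat(v, one-hot(t)). Class and variable names are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One routing agent per task; each agent holds one small MLP per routing depth.
class ApproxRoutingAgent(nn.Module):
    def __init__(self, state_dim, num_tasks, num_actions, num_layers, hidden=64):
        super().__init__()
        self.num_tasks = num_tasks
        self.per_depth = nn.ModuleList([
            nn.Sequential(nn.Linear(state_dim + num_tasks, hidden),
                          nn.ReLU(),
                          nn.Linear(hidden, num_actions))
            for _ in range(num_layers)
        ])

    def action_scores(self, v, task_id, depth):
        t = F.one_hot(torch.tensor(task_id), self.num_tasks).float()  # one-hot(t)
        return self.per_depth[depth](torch.cat([v, t], dim=-1))       # concat(v, one-hot(t))

# Usage sketch: the task id picks the agent, the depth picks that agent's MLP.
# agents = [ApproxRoutingAgent(state_dim, num_tasks, num_actions, num_layers)
#           for _ in range(num_tasks)]
# scores = agents[task_id].action_scores(v, task_id, depth)
```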
7.4 EXPLANATION OF THE WEIGHTED POLICY LEARNER (WPL) ALGORITHM | 1711.01239#56 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.01239 | 57 | 7.4 EXPLANATION OF THE WEIGHTED POLICY LEARNER (WPL) ALGORITHM
The WPL algorithm is a multi-agent policy gradient algorithm designed to help dampen policy oscillation and encourage convergence. It does this by slowly scaling down the learning rate for an agent after a gradient change in that agent's policy. It determines when there has been a gradient change by using the difference between the immediate reward and historical average reward for the action taken. Depending on the sign of the gradient the algorithm is in one of two scenarios. If the gradient is positive then it is scaled by 1 - π(ai). Over time if the gradient remains positive it will cause π(ai) to increase and so 1 - π(ai) will go to 0, slowing the learning. If the gradient is negative then it is scaled by π(ai). Here again if the gradient remains negative over time it will cause π(ai) to decrease eventually to 0, slowing the learning again. Slowing the learning after gradient changes dampens the policy oscillation and helps drive the policies towards convergence.
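A minimal sketch of this damped update is given below for a single tabular routing agent; the simplex projection, learning rate, and running-average bookkeeping are illustrative assumptions rather than the exact Algorithm 3.

```python
import numpy as np

class WPLAgent:
    """Weighted Policy Learner update for one routing agent (tabular policy sketch)."""
    def __init__(self, num_actions, lr=0.01, eps=1e-3):
        self.pi = np.full(num_actions, 1.0 / num_actions)  # routing policy pi(a)
        self.avg_reward = np.zeros(num_actions)            # historical average reward per action
        self.counts = np.zeros(num_actions)
        self.lr, self.eps = lr, eps

    def update(self, action, reward):
        # Gradient estimate: immediate reward minus historical average for this action.
        grad = reward - self.avg_reward[action]
        # WPL weighting: positive gradients are damped by (1 - pi), negative ones by pi,
        # so learning slows as the policy commits, damping oscillation.
        weight = (1.0 - self.pi[action]) if grad > 0 else self.pi[action]
        self.pi[action] += self.lr * grad * weight
        # Project back onto the probability simplex, keeping a small exploration floor.
        self.pi = np.clip(self.pi, self.eps, None)
        self.pi /= self.pi.sum()
        # Update the running average reward for the chosen action.
        self.counts[action] += 1
        self.avg_reward[action] += (reward - self.avg_reward[action]) / self.counts[action]
```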
16 | 1711.01239#57 | Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in
tasks to improve performance, but often suffers from task interference which
reduces the benefits of transfer. To address this issue we introduce the
routing network paradigm, a novel neural network and training algorithm. A
routing network is a kind of self-organizing neural network consisting of two
components: a router and a set of one or more function blocks. A function block
may be any neural network - for example a fully-connected or a convolutional
layer. Given an input the router makes a routing decision, choosing a function
block to apply and passing the output back to the router recursively,
terminating when a fixed recursion depth is reached. In this way the routing
network dynamically composes different function blocks for each input. We
employ a collaborative multi-agent reinforcement learning (MARL) approach to
jointly train the router and function blocks. We evaluate our model against
cross-stitch networks and shared-layer baselines on multi-task settings of the
MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a
significant improvement in accuracy, with sharper convergence. In addition,
routing networks have nearly constant per-task training cost while cross-stitch
networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we
obtain cross-stitch performance levels with an 85% reduction in training time. | http://arxiv.org/pdf/1711.01239 | Clemens Rosenbaum, Tim Klinger, Matthew Riemer | cs.LG, cs.CV, cs.NE | Under Review at ICLR 2018 | null | cs.LG | 20171103 | 20171231 | [
{
"id": "1701.00299"
},
{
"id": "1609.09106"
},
{
"id": "1703.04813"
},
{
"id": "1607.01097"
},
{
"id": "1611.06194"
},
{
"id": "1703.00548"
},
{
"id": "1705.08142"
}
] |
1711.00740 | 1 | Mahmoud Khademiâ Simon Fraser University Burnaby, BC, Canada [email protected]
# ABSTRACT
Learning tasks on source code (i.e., formal languages) have been considered re- cently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by codeâs known sematics. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures. In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VARNAMING, in which a network attempts to predict the name of a variable given its usage, and VARMISUSE, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VARMISUSE task in many cases. Additionally, our testing showed that VARMISUSE identiï¬es a number of bugs in mature open-source projects.
# INTRODUCTION | 1711.00740#1 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 2 | The advent of large repositories of source code as well as scalable machine learning methods naturally leads to the idea of âbig codeâ, i.e., largely unsupervised methods that support software engineers by generalizing from existing source code (Allamanis et al., 2017). Currently, existing deep learning models of source code capture its shallow, textual structure, e.g. as a sequence of tokens (Hindle et al., 2012; Raychev et al., 2014; Allamanis et al., 2016), as parse trees (Maddison & Tarlow, 2014; Bielik et al., 2016), or as a ï¬at dependency networks of variables (Raychev et al., 2015). Such models miss out on the opportunity to capitalize on the rich and well-deï¬ned semantics of source code. In this work, we take a step to alleviate this by including two additional signal sources in source code: data ï¬ow and type hierarchies. We do this by encoding programs as graphs, in which edges represent syntactic relationships (e.g. âtoken before/afterâ) as well as semantic relationships (âvariable last used/written hereâ, âformal parameter for argument | 1711.00740#2 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
is called stream", etc.). Our key insight is that exposing these semantics explicitly as structured input to a machine learning model reduces the requirements on the amount of training data, model capacity, and training regime, and allows us to solve tasks that are beyond the current state of the art.
*Work done as an intern in Microsoft Research, Cambridge, UK.

var clazz = classTypes["Root"].Single() as JsonCodeGenerator.ClassType;
Assert.NotNull(clazz);
var first = classTypes["RecClass"].Single() as JsonCodeGenerator.ClassType;
Assert.NotNull( clazz );
Assert.Equal("string", first.Properties["Name"].Name);
Assert.False(clazz.Properties["Name"].IsArray);

Figure 1: A snippet of a detected bug in RavenDB, an open-source C# project. The code has been slightly simplified. Our model correctly detects that the variable used in the highlighted slot (the argument of the second Assert.NotNull) is incorrect; first should have been used there instead. We reported this problem, which was fixed in PR 4138.

We explore two tasks to illustrate the advantages of exposing more semantic structure of programs. First, we consider the VARNAMING task (Allamanis et al., 2014; Raychev et al., 2015), in which, given some source code, the "correct" variable name is inferred as a sequence of subtokens. This requires some understanding of how a variable is used, i.e., requires reasoning about lines of code far
apart in the source file. Secondly, we introduce the variable misuse prediction task (VARMISUSE), in which the network aims to infer which variable should be used at a program location. To illustrate the task, Figure 1 shows a slightly simplified snippet of a bug our model detected in a popular open-source project. Specifically, instead of the variable clazz, variable first should have been used in the highlighted slot. Existing static analysis methods cannot detect such issues, even though a software engineer would easily identify this as an error from experience.
To achieve high accuracy on these tasks, we need to learn representations of program semantics. For both tasks, we need to learn the semantic role of a variable (e.g., "is it a counter?", "is it a filename?"). Additionally, for VARMISUSE, learning variable usage semantics (e.g., "a filename is needed here") is required. This "fill the blank element" task is related to methods for learning distributed representations of natural language words, such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). However, we can learn from a much richer structure such as data flow information. This work is a step towards learning program representations, and we expect them to be valuable in a wide range of other tasks, such as code completion ("this is the variable you are looking for") and more advanced bug finding ("you should lock before using this object").
To summarize, our contributions are: (i) We define the VARMISUSE task as a challenge for machine learning modeling of source code that requires learning (some) semantics of programs (cf. section 3). (ii) We present deep learning models for solving the VARNAMING and VARMISUSE tasks by modeling the code's graph structure and learning program representations over those graphs (cf. section 4). (iii) We evaluate our models on a large dataset of 2.9 million lines of real-world source code, showing that our best model achieves 32.9% accuracy on the VARNAMING task and 85.5% accuracy on the VARMISUSE task, beating simpler baselines (cf. section 5). (iv) We document the practical relevance of VARMISUSE by summarizing some bugs that we found in mature open-source software projects (cf. subsection 5.3). Our implementation of graph neural networks (on a simpler task) can be found at https://github.com/Microsoft/gated-graph-neural-network-samples and the dataset can be found at https://aka.ms/iclr18-prog-graphs-dataset.

# 2 RELATED WORK
Our work builds upon the recent field of using machine learning for source code artifacts (Allamanis et al., 2017). For example, Hindle et al. (2012); Bhoopchand et al. (2016) model the code as a sequence of tokens, while Maddison & Tarlow (2014); Raychev et al. (2016) model the syntax tree structure of code. All works on language models of code find that predicting variable and method identifiers is one of the biggest challenges in the task.

Closest to our work is the work of Allamanis et al. (2015), who learn distributed representations of variables using all their usages to predict their names. However, they do not use data flow information and we are not aware of any model that does so. Raychev et al. (2015) and Bichsel et al. (2016) use conditional random fields to model a variety of relationships between variables, AST elements and types to predict variable names and types (resp. to deobfuscate Android apps), but without considering the flow of data explicitly. In these works, all variable usages are deterministically known beforehand (as the code is complete and remains unmodified), as in Allamanis et al. (2014; 2015).
Our work is remotely related to work on program synthesis using sketches (Solar-Lezama, 2008) and automated code transplantation (Barr et al., 2015). However, these approaches require a set of specifications (e.g. input-output examples, test suites) to complete the gaps, rather than statistics learned from big code. These approaches can be thought of as complementary to ours, since we learn to statistically complete the gaps without any need for specifications, by learning common variable usage patterns from code.

Neural networks on graphs (Gori et al., 2005; Li et al., 2015; Defferrard et al., 2016; Kipf & Welling, 2016; Gilmer et al., 2017) adapt a variety of deep learning methods to graph-structured input. They have been used in a series of applications, such as link prediction and classification (Grover & Leskovec, 2016) and semantic role labeling in NLP (Marcheggiani & Titov, 2017). Somewhat related to source code is the work of Wang et al. (2017), who learn graph-based representations of mathematical formulas for premise selection in theorem proving.

# 3 THE VARMISUSE TASK
Detecting variable misuses in code is a task that requires understanding and reasoning about program semantics. To successfully tackle the task, one needs to infer the role and function of the program elements and understand how they relate. For example, given a program such as Fig. 1, the task is to automatically detect that the marked use of clazz is a mistake and that first should be used instead. While this task resembles standard code completion, it differs significantly in its scope and purpose, by considering only variable identifiers and a mostly complete program.
Task Description We view a source code file as a sequence of tokens t0 . . . tN = T, in which some tokens tλ0, tλ1, . . . are variables. Furthermore, let Vt ⊂ V refer to the set of all type-correct variables in scope at the location of t, i.e., those variables that can be used at t without raising a compiler error. We call a token tλ where we want to predict the correct variable usage a slot. We define a separate task for each slot tλ: Given t0 . . . tλ−1 and tλ+1, . . . , tN, correctly select tλ from Vtλ. For training and evaluation purposes, a correct solution is one that simply matches the ground truth, but note that in practice, several possible assignments could be considered correct (i.e., when several variables refer to the same value in memory).
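As a concrete illustration of this setup, the sketch below encodes a single VARMISUSE slot together with its type-correct candidate set and checks a prediction against the ground truth. The class and field names are hypothetical and not part of the paper's implementation.

```python
# A minimal, illustrative encoding of a VarMisuse sample: the token sequence
# with the slot position masked out, the type-correct candidates in scope,
# and the ground-truth variable. All names here are made up for the example.
from dataclasses import dataclass
from typing import List

@dataclass
class VarMisuseSample:
    tokens: List[str]      # t_0 ... t_N with tokens[slot_index] == "<SLOT>"
    slot_index: int        # position of the slot t_lambda
    candidates: List[str]  # type-correct variables in scope at the slot (V_t)
    ground_truth: str      # the variable that actually appears in the code

def accuracy(samples: List[VarMisuseSample], predictions: List[str]) -> float:
    """Fraction of slots where the predicted variable matches the ground truth."""
    correct = sum(p == s.ground_truth for s, p in zip(samples, predictions))
    return correct / max(len(samples), 1)

example = VarMisuseSample(
    tokens=["Assert", ".", "NotNull", "(", "<SLOT>", ")", ";"],
    slot_index=4,
    candidates=["clazz", "first", "classTypes"],
    ground_truth="first",
)
print(accuracy([example], ["first"]))  # 1.0
```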
# 4 MODEL: PROGRAMS AS GRAPHS
In this section, we discuss how to transform program source code into program graphs and learn representations over them. These program graphs not only encode the program text but also the semantic information that can be obtained using standard compiler tools.
Gated Graph Neural Networks Our work builds on Gated Graph Neural Networks (Li et al., 2015) (GGNN) and we summarize them here. A graph G = (V, E, X) is composed of a set of nodes V, node features X, and a list of directed edge sets E = (E1, . . . , EK), where K is the number of edge types. We annotate each v ∈ V with a real-valued vector x(v) ∈ R^D representing the features of the node (e.g., the embedding of a string label of that node).
We associate every node v with a state vector h^(v), initialized from the node label x^(v). The sizes of the state vector and feature vector are typically the same, but we can use larger state vectors through padding of node features. To propagate information throughout the graph, "messages" of type k are sent from each v to its neighbors, where each message is computed from its current state vector as m_k^(v) = f_k(h^(v)). Here, f_k can be an arbitrary function; we choose a linear layer in our case. By computing messages for all graph edges at the same time, all states can be updated at the same time. In particular, a new state for a node v is computed by aggregating all incoming messages as m~^(v) = g({m_k^(u) | there is an edge of type k from u to v}). Here g is an aggregation function, which we implement as elementwise summation. Given the aggregated message m~^(v) and the current state vector h^(v) of node v, the state of the next time step h'^(v) is computed as h'^(v) = GRU(m~^(v), h^(v)), where GRU is the recurrent cell function of gated recurrent units (Cho et al., 2014).
(a) Simplified syntax graph for line 2 of Fig. 1, where blue rounded boxes are syntax nodes, black rectangular boxes syntax tokens, blue edges Child edges and double black edges NextToken edges.

(b) Data flow edges for (x1, y2) = Foo(); while (x3 > 0) x4 = x5 + y6 (indices added for clarity), with red dotted LastUse edges, green dashed LastWrite edges and dash-dotted purple ComputedFrom edges.

Figure 2: Examples of graph edges used in program representation.
The dynamics defined by the above equations are repeated for a fixed number of time steps. Then, we use the state vectors from the last time step as the node representations.¹
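To make the propagation dynamics concrete, the following NumPy sketch implements one GGNN unrolling step under the definitions above: per-edge-type linear messages, elementwise-sum aggregation, and a GRU state update. The weights, sizes, and edge lists are random toy values, not the paper's trained TensorFlow model.

```python
# Minimal NumPy sketch of GGNN message passing: linear messages per edge type,
# sum aggregation of incoming messages, and a hand-written GRU update.
import numpy as np

rng = np.random.default_rng(0)
D, num_nodes, num_edge_types = 4, 3, 2
h = rng.normal(size=(num_nodes, D))                    # node states h^(v)
W_msg = rng.normal(size=(num_edge_types, D, D)) * 0.1  # f_k as a linear layer
edges = {0: [(0, 1), (1, 2)], 1: [(2, 0)]}             # edge type -> (source, target) pairs

# GRU parameters (update gate z, reset gate r, candidate state)
W_z, U_z = rng.normal(size=(D, D)) * 0.1, rng.normal(size=(D, D)) * 0.1
W_r, U_r = rng.normal(size=(D, D)) * 0.1, rng.normal(size=(D, D)) * 0.1
W_h, U_h = rng.normal(size=(D, D)) * 0.1, rng.normal(size=(D, D)) * 0.1
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def propagate_once(h):
    m = np.zeros_like(h)                               # aggregated messages m~^(v)
    for k, pairs in edges.items():
        for u, v in pairs:
            m[v] += h[u] @ W_msg[k]                    # message of type k from u to v
    z = sigmoid(m @ W_z + h @ U_z)                     # GRU update gate
    r = sigmoid(m @ W_r + h @ U_r)                     # GRU reset gate
    h_tilde = np.tanh(m @ W_h + (r * h) @ U_h)         # candidate state
    return (1 - z) * h + z * h_tilde                   # h'^(v) = GRU(m~^(v), h^(v))

for _ in range(8):                                     # fixed number of time steps
    h = propagate_once(h)
print(h.shape)  # (3, 4)
```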
Program Graphs We represent program source code as graphs and use different edge types to model syntactic and semantic relationships between different tokens. The backbone of a program graph is the program's abstract syntax tree (AST), consisting of syntax nodes (corresponding to nonterminals in the programming language's grammar) and syntax tokens (corresponding to terminals). We label syntax nodes with the name of the nonterminal from the program's grammar, whereas syntax tokens are labeled with the string that they represent. We use Child edges to connect nodes according to the AST. As this does not induce an order on children of a syntax node, we additionally add NextToken edges connecting each syntax token to its successor. An example of this is shown in Fig. 2a.
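The following toy sketch shows one way to materialize this syntactic backbone as per-edge-type lists of (source, target) node pairs; the AST shape and node ids are invented for illustration and are not taken from the paper.

```python
# Illustrative construction of the syntactic backbone: Child edges from a toy
# AST and NextToken edges chaining the terminal tokens.
# Nodes 0-2 are syntax nodes; nodes 3-5 are the syntax tokens "x", "=", "y".
ast_children = {0: [1, 3], 1: [4, 2], 2: [5]}   # syntax node -> ordered children
token_nodes = [3, 4, 5]                          # terminals in source order

edges = {"Child": [], "NextToken": []}
for parent, children in ast_children.items():
    for child in children:
        edges["Child"].append((parent, child))
for left, right in zip(token_nodes, token_nodes[1:]):
    edges["NextToken"].append((left, right))

print(edges)
# {'Child': [(0, 1), (0, 3), (1, 4), (1, 2), (2, 5)],
#  'NextToken': [(3, 4), (4, 5)]}
```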
To capture the flow of control and data through a program, we add additional edges connecting different uses and updates of syntax tokens corresponding to variables. For such a token v, let DR(v) be the set of syntax tokens at which the variable could have been used last. This set may contain several nodes (for example, when using a variable after a conditional in which it was used in both branches), and even syntax tokens that follow in the program code (in the case of loops). Similarly, let DW(v) be the set of syntax tokens at which the variable was last written to. Using these, we add LastRead (resp. LastWrite) edges connecting v to all elements of DR(v) (resp. DW(v)). Additionally, whenever we observe an assignment v = expr, we connect v to all variable tokens occurring in expr using ComputedFrom edges. An example of such semantic edges is shown in Fig. 2b.
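A full implementation needs the compiler's control-flow information, since DR(v) and DW(v) are sets in the presence of branches and loops. The sketch below handles only straight-line code, where each occurrence has at most one last use and one last write; it illustrates the edge construction but is not the paper's analysis.

```python
# Simplified data-flow edges for straight-line code: LastUse points to the
# previous occurrence of the same variable, LastWrite to its previous write.
# Each event is (node_id, variable, kind) in source order.
events = [
    (0, "x", "write"),   # x = ...
    (1, "y", "write"),   # y = ...
    (2, "x", "read"),    # ... x ...
    (3, "x", "write"),   # x = ...
    (4, "y", "read"),    # ... y ...
]

last_use, last_write = {}, {}
edges = {"LastUse": [], "LastWrite": []}
for node, var, kind in events:
    if var in last_use:
        edges["LastUse"].append((node, last_use[var]))
    if var in last_write:
        edges["LastWrite"].append((node, last_write[var]))
    last_use[var] = node           # any occurrence counts as a use here
    if kind == "write":
        last_write[var] = node

print(edges)
# {'LastUse': [(2, 0), (3, 2), (4, 1)], 'LastWrite': [(2, 0), (3, 0), (4, 1)]}
```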
We extend the graph to chain all uses of the same variable using LastLexicalUse edges (independent of data flow, i.e., in if (...) { ... v ...} else { ... v ...}, we link the two occurrences of v). We also connect return tokens to the method declaration using ReturnsTo edges (this creates a "shortcut" to its name and type). Inspired by Rice et al. (2017), we connect arguments in method calls to the formal parameters that they are matched to with FormalArgName edges, i.e., if we observe a call Foo(bar) and a method declaration Foo(InputStream stream), we connect the bar token to the stream token. Finally, we connect every token corresponding to a variable to enclosing guard expressions that use the variable with GuardedBy and GuardedByNegation edges. For example, in if (x > y) { ... x ...} else { ... y ...}, we add a GuardedBy edge from x (resp. a GuardedByNegation edge from y) to the AST node corresponding to x > y.

Finally, for all types of edges we introduce their respective backwards edges (transposing the adjacency matrix), doubling the number of edges and edge types. Backwards edges help with propagating information faster across the GGNN and make the model more expressive.
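Adding the backward edge types amounts to transposing each edge list; a minimal, illustrative helper (names are assumptions, not from the paper's code) is shown below.

```python
# Add a reversed counterpart for every edge type by swapping source and target.
def add_backward_edges(edges):
    out = dict(edges)
    for name, pairs in edges.items():
        out["reverse" + name] = [(v, u) for (u, v) in pairs]
    return out

print(add_backward_edges({"Child": [(0, 1)], "NextToken": [(3, 4)]}))
# {'Child': [(0, 1)], 'NextToken': [(3, 4)],
#  'reverseChild': [(1, 0)], 'reverseNextToken': [(4, 3)]}
```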
¹Graph Convolutional Networks (GCN) (Kipf & Welling, 2016; Schlichtkrull et al., 2017) would be a simpler replacement for GGNNs. They correspond to the special case of GGNNs in which no gated recurrent units are used for state updates and the number of propagation steps per GGNN layer is fixed to 1. Instead, several layers are used. In our experiments, GCNs generalized less well than GGNNs.
Leveraging Variable Type Information We assume a statically typed language and that the source code can be compiled, and thus each variable has a (known) type τ(v). To use it, we define a learnable embedding function r(τ) for known types and additionally define an "UNKTYPE" for all unknown/unrepresented types. We also leverage the rich type hierarchy that is available in many object-oriented languages. For this, we map a variable's type τ(v) to the set of its supertypes, i.e. τ*(v) = {τ : τ(v) implements type τ} ∪ {τ(v)}. We then compute the type representation r*(v) of a variable v as the element-wise maximum of {r(τ) : τ ∈ τ*(v)}. We chose the maximum here, as it is a natural pooling operation for representing partial ordering relations (such as type lattices). Using all types in τ*(v) allows us to generalize to unseen types that implement common supertypes
or interfaces. For example, List<K> has multiple concrete types (e.g. List<int>, List<string>). Nevertheless, these types implement a common interface (IList) and share common characteristics. During training, we randomly select a non-empty subset of τ*(v) which ensures training of all known types in the lattice. This acts both like a dropout mechanism and allows us to learn a good representation for all types in the type lattice.
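The sketch below illustrates the element-wise max over the type lattice and the random sub-sampling used during training. The embedding table and supertype map are toy stand-ins for what a compiler and a learned embedding layer would provide; the fallback of unseen types to UNKTYPE is a simplification of this example, not the paper's exact rule.

```python
# Illustrative computation of the type representation r*(v) as the elementwise
# maximum over the embeddings of a variable's type and its supertypes.
import random
import numpy as np

random.seed(0)
rng = np.random.default_rng(1)
type_embedding = {t: rng.normal(size=4) for t in
                  ["List<int>", "IList", "IEnumerable", "UNKTYPE"]}
supertypes = {"List<int>": ["IList", "IEnumerable"]}

def type_representation(tau, training=False):
    tau_star = [tau] + supertypes.get(tau, [])
    if training:  # randomly drop part of the lattice, keeping it non-empty
        tau_star = random.sample(tau_star, k=random.randint(1, len(tau_star)))
    vecs = [type_embedding.get(t, type_embedding["UNKTYPE"]) for t in tau_star]
    return np.max(np.stack(vecs), axis=0)   # elementwise max pools the lattice

print(type_representation("List<int>").shape)     # (4,)
print(type_representation("List<string>").shape)  # unseen type -> UNKTYPE, (4,)
```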
Initial Node Representation To compute the initial node state, we combine information from the textual representation of the token and its type. Concretely, we split the name of a node representing a token into subtokens (e.g. classTypes will be split into the two subtokens class and types) on camelCase and pascal_case. We then average the embeddings of all subtokens to retrieve an embedding for the node name. Finally, we concatenate the learned type representation r*(v), computed as discussed earlier, with the node name representation, and pass it through a linear layer to obtain the initial representations for each node in the graph.
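A minimal version of this initial-state computation might look as follows; the subtoken splitter, embedding table, and linear layer are illustrative assumptions rather than the paper's exact implementation.

```python
# Illustrative initial node representation: split the token name into
# subtokens, average their embeddings, concatenate with the type
# representation, and apply a linear layer.
import re
import numpy as np

rng = np.random.default_rng(2)
D = 4
subtoken_embedding = {"class": rng.normal(size=D), "types": rng.normal(size=D)}
W_init = rng.normal(size=(2 * D, D)) * 0.1       # linear layer after the concat

def split_subtokens(name):
    parts = re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", name.replace("_", " "))
    return [p.lower() for p in parts]

def initial_node_state(name, type_repr):
    subs = [subtoken_embedding.get(s, np.zeros(D)) for s in split_subtokens(name)]
    name_repr = np.mean(subs, axis=0) if subs else np.zeros(D)
    return np.concatenate([name_repr, type_repr]) @ W_init

print(split_subtokens("classTypes"))                         # ['class', 'types']
print(initial_node_state("classTypes", np.zeros(D)).shape)   # (4,)
```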
Program Graphs for VARNAMING Given a program and an existing variable v, we build a program graph as discussed above and then replace the variable name in all corresponding variable tokens by a special <SLOT> token. To predict a name, we use the initial node labels computed as the concatenation of learnable token embeddings and type embeddings as discussed above, run GGNN propagation for 8 time steps² and then compute a variable usage representation by averaging the representations for all <SLOT> tokens. This representation is then used as the initial state of a one-layer GRU, which predicts the target name as a sequence of subtokens (e.g., the name inputStreamBuffer is treated as the sequence [input, stream, buffer]). We train this graph2seq architecture using a maximum likelihood objective. In section 5, we report the accuracy for predicting the exact name and the F1 score for predicting its subtokens.
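The readout can be sketched as follows: average the final <SLOT> node states and greedily decode subtokens with a small recurrent update. The vocabulary, weights, and the simplified recurrence are toy stand-ins for the trained graph2seq decoder, not its actual parameters.

```python
# Compact, illustrative VarNaming readout: mean-pool the <SLOT> node states and
# greedily emit subtokens until an end marker or a length limit is reached.
import numpy as np

rng = np.random.default_rng(3)
D, vocab = 4, ["input", "stream", "buffer", "<END>"]
W_out = rng.normal(size=(D, len(vocab)))
W_state, W_embed = rng.normal(size=(D, D)) * 0.1, rng.normal(size=(len(vocab), D))

def decode_name(slot_states, max_len=3):
    state = np.mean(slot_states, axis=0)        # variable usage representation
    name = []
    for _ in range(max_len):
        token = vocab[int(np.argmax(state @ W_out))]
        if token == "<END>":
            break
        name.append(token)
        # simple recurrent update from previous state and emitted subtoken
        state = np.tanh(state @ W_state + W_embed[vocab.index(token)])
    return name

slot_states = rng.normal(size=(3, D))           # states of the 3 <SLOT> nodes
print(decode_name(slot_states))                 # some subtokens from the toy vocabulary
```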
Program Graphs for VARMISUSE To model VARMISUSE with program graphs, we need to modify the graph. First, to compute a context representation c(t) for a slot t where we want to predict the used variable, we insert a new node v<SLOT> at the position of t, corresponding to a "hole" at this point, and connect it to the remaining graph using all applicable edges that do not depend on the chosen variable at the slot (i.e., everything but LastUse, LastWrite, LastLexicalUse, and GuardedBy edges). Then, to compute the usage representation u(t, v) of each candidate variable v at the target slot, we insert a "candidate" node vt,v for all v in Vt, and connect it to the graph by inserting the LastUse, LastWrite and LastLexicalUse edges that would be used if the variable were to be used at this slot. Each of these candidate nodes represents the speculative placement of the variable within the scope.
Using the initial node representations, concatenated with an extra bit that is set to one for the candidate nodes vt,v, we run GGNN propagation for 8 time steps.² The context and usage representation are then the final node states of the nodes, i.e., c(t) = h(v<SLOT>) and u(t, v) = h(vt,v). Finally, the correct variable usage at the location is computed as arg max_v W[c(t), u(t, v)], where W is a linear layer that uses the concatenation of c(t) and u(t, v). We train using a max-margin objective.
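The candidate scoring and a max-margin training objective can be illustrated as below, with random vectors standing in for the GGNN outputs c(t) and u(t, v); the margin value and tensor sizes are assumptions made for the example.

```python
# Illustrative VarMisuse scoring: concatenate c(t) with each u(t, v), apply a
# linear layer W, pick the argmax, and compute a max-margin loss.
import numpy as np

rng = np.random.default_rng(4)
D, num_candidates, margin = 4, 3, 1.0
c = rng.normal(size=D)                       # c(t): final state of the slot node
U = rng.normal(size=(num_candidates, D))     # u(t, v): candidate node states
W = rng.normal(size=2 * D)                   # linear scoring layer

scores = np.array([np.concatenate([c, u]) @ W for u in U])
prediction = int(np.argmax(scores))

correct = 1                                  # index of the ground-truth variable
wrong = np.delete(scores, correct)
loss = np.maximum(0.0, margin + wrong.max() - scores[correct])  # max-margin objective
print(prediction, float(loss))
```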
4.1 IMPLEMENTATION
Using GGNNs for sets of large, diverse graphs requires some engineering effort, as efï¬cient batching is hard in the presence of diverse shapes. An important observation is that large graphs are normally very sparse, and thus a representation of edges as an adjacency list would usually be advantageous to reduce memory consumption. In our case, this can be easily implemented using a sparse tensor
2 We found fewer steps to be insufficient for good results and more propagation steps to not help substantially.
Published as a conference paper at ICLR 2018 | 1711.00740#24 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 25 | 5
representation, allowing large batch sizes that exploit the parallelism of modern GPUs efï¬ciently. A second key insight is to represent a batch of graphs as one large graph with many disconnected components. This just requires appropriate pre-processing to make node identities unique. As this makes batch construction somewhat CPU-intensive, we found it useful to prepare minibatches on a separate thread. Our TensorFlow (Abadi et al., 2016) implementation scales to 55 graphs per second during training and 219 graphs per second during test-time using a single NVidia GeForce GTX Titan X with graphs having on average 2,228 (median 936) nodes and 8,350 (median 3,274) edges and 8 GGNN unrolling iterations, all 20 edge types (forward and backward edges for 10 original edge types) and the size of the hidden layer set to 64. The number of types of edges in the GGNN contributes proportionally to the running time. For example, a GGNN run for our ablation study using only the two most common edge types (NextToken, Child) achieves 105 graphs/second during training and 419 graphs/second at test time with the same hyperparameters. Our (generic) implementation of GGNNs is available at https://github.com/Microsoft/ gated-graph-neural-network-samples, using a simpler demonstration task. | 1711.00740#25 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
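The batching scheme described in the row above (one large graph with disconnected components, edges kept in sparse per-edge-type lists, node ids offset to stay unique) can be sketched like this; the toy graphs and edge-type names are illustrative only:

```python
import numpy as np

def batch_graphs(graphs):
    """Merge a list of graphs into one large, disconnected graph.

    Each input graph is (num_nodes, {edge_type: [(src, dst), ...]}).  Node ids
    are made unique by offsetting them, and edges stay in sparse per-type lists
    (adjacency lists), which keeps memory low for sparse program graphs.
    """
    offset = 0
    edges_by_type = {}
    node_to_graph = []          # lets us un-batch per-node results later
    for graph_idx, (num_nodes, edges) in enumerate(graphs):
        for etype, pairs in edges.items():
            edges_by_type.setdefault(etype, []).extend(
                (src + offset, dst + offset) for src, dst in pairs)
        node_to_graph.extend([graph_idx] * num_nodes)
        offset += num_nodes
    sparse = {t: np.asarray(p, dtype=np.int64) for t, p in edges_by_type.items()}
    return offset, sparse, np.asarray(node_to_graph)

g1 = (3, {"Child": [(0, 1), (0, 2)], "NextToken": [(1, 2)]})
g2 = (2, {"Child": [(0, 1)]})
total_nodes, sparse_edges, node_to_graph = batch_graphs([g1, g2])
print(total_nodes,
      {t: e.tolist() for t, e in sparse_edges.items()},
      node_to_graph.tolist())
```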
1711.00740 | 26 | 5 EVALUATION
Dataset We collected a dataset for the VARMISUSE task from open source C# projects on GitHub. To select projects, we picked the top-starred (non-fork) projects in GitHub. We then filtered out projects that we could not (easily) compile in full using Roslyn3, as we require a compilation to extract precise type information for the code (including those types present in external libraries). Our final dataset contains 29 projects from a diverse set of domains (compilers, databases, ...) with about 2.9 million non-empty lines of code. A full table is shown in Appendix D.
For the task of detecting variable misuses, we collect data from all projects by selecting all variable usage locations, filtering out variable declarations, where at least one other type-compatible replacement variable is in scope. The task is then to infer the correct variable that originally existed in that location. Thus, by construction there is at least one type-correct replacement variable, i.e. picking it would not raise an error during type checking. In our test datasets, at each slot there are on average 3.8 type-correct alternative variables (median 3, σ = 2.6). | 1711.00740#26 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
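A rough sketch of the slot-extraction rule described above: keep variable usages that are not declarations and that have at least one other type-compatible variable in scope. Real type compatibility would consult the compiler's type lattice; the string-equality check and the Usage record below are simplifications for illustration:

```python
from dataclasses import dataclass

@dataclass
class Usage:
    variable: str
    var_type: str
    is_declaration: bool
    in_scope: dict            # name -> type of every variable visible at this location

def varmisuse_slots(usages):
    """Keep usages that form valid VarMisuse slots: not a declaration, and with at
    least one other type-compatible variable in scope (so a 'bug' would still
    type-check)."""
    slots = []
    for u in usages:
        if u.is_declaration:
            continue
        alternatives = [name for name, t in u.in_scope.items()
                        if name != u.variable and t == u.var_type]
        if alternatives:
            slots.append((u, alternatives))
    return slots

example = [Usage("path", "string", False,
                 {"path": "string", "fullPath": "string", "count": "int"})]
print(varmisuse_slots(example))
```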
1711.00740 | 27 | From our dataset, we selected two projects as our development set. From the rest of the projects, we selected three projects for UNSEENPROJTEST to allow testing on projects with completely unknown structure and types. We split the remaining 23 projects into train/validation/test sets in the proportion 60-10-30, splitting along ï¬les (i.e., all examples from one source ï¬le are in the same set). We call the test set obtained like this SEENPROJTEST. | 1711.00740#27 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
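The 60-10-30 split along files described above could be implemented along these lines; the file names and the seed are placeholders:

```python
import random

def split_by_file(files, seed=0, fractions=(0.6, 0.1, 0.3)):
    """Split source files into train/validation/test so that all examples from
    one file land in the same partition (proportions as in the chunk above)."""
    files = list(files)
    random.Random(seed).shuffle(files)
    n = len(files)
    n_train = int(fractions[0] * n)
    n_valid = int(fractions[1] * n)
    return (files[:n_train],
            files[n_train:n_train + n_valid],
            files[n_train + n_valid:])

train, valid, test = split_by_file([f"proj/src/File{i}.cs" for i in range(20)])
print(len(train), len(valid), len(test))
```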
1711.00740 | 28 | Baselines For VARMISUSE, we consider two bidirectional RNN-based baselines. The local model (LOC) is a simple two-layer bidirectional GRU run over the tokens before and after the target location. For this baseline, c(t) is set to the slot representation computed by the RNN, and the usage context of each variable u(t, v) is the embedding of the name and type of the variable, computed in the same way as the initial node labels in the GGNN. This baseline allows us to evaluate how important the usage context information is for this task. The ï¬at dataï¬ow model (AVGBIRNN) is an extension to LOC, where the usage representation u(t, v) is computed using another two-layer bidirectional RNN run over the tokens before/after each usage, and then averaging over the computed representations at the variable token v. The local context, c(t), is identical to LOC. AVGBIRNN is a signiï¬cantly stronger baseline that already takes some structural information into account, as the averaging over all variables usages helps with long-range dependencies. Both models pick the variable that maximizes c(t)T u(t, v). | 1711.00740#28 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
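A sketch of how the AVGBIRNN baseline described above forms its usage representation and how both RNN baselines pick a variable: average the per-usage states taken at the variable token, then choose the candidate maximising c(t)^T u(t, v). The random vectors below stand in for the output of the two-layer bidirectional GRU:

```python
import numpy as np

rng = np.random.default_rng(1)
H = 64

def usage_representation(per_usage_states):
    """AvgBiRNN-style u(t, v): average the RNN state at the variable token over
    all usage sites of v (random stand-ins here, not real GRU outputs)."""
    return np.mean(per_usage_states, axis=0)

def pick_variable(c_t, usage_reprs):
    """Both RNN baselines choose the variable maximising c(t)^T u(t, v)."""
    names = list(usage_reprs)
    scores = [c_t @ usage_reprs[n] for n in names]
    return names[int(np.argmax(scores))]

c_t = rng.normal(size=H)                     # slot representation from the local RNN
u = {"path": usage_representation(rng.normal(size=(5, H))),
     "fullPath": usage_representation(rng.normal(size=(2, H)))}
print(pick_variable(c_t, u))
```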
1711.00740 | 29 | For VARNAMING, we replace LOC by AVGLBL, which uses a log-bilinear model for 4 left and 4 right context tokens of each variable usage, and then averages over these context representations (this corresponds to the model in Allamanis et al. (2015)). We also test AVGBIRNN on VARNAMING, which essentially replaces the log-bilinear context model by a bidirectional RNN.
Table 1: Evaluation of models. SEENPROJTEST refers to the test set containing projects that have files in the training set, UNSEENPROJTEST refers to projects that have no files in the training data. Results averaged over two runs.
                         SEENPROJTEST                         UNSEENPROJTEST
                         LOC     AVGLBL  AVGBIRNN  GGNN       LOC     AVGLBL  AVGBIRNN
VARMISUSE Accuracy (%)   50.0    -       73.7      85.5       28.9    -       60.2
VARMISUSE PR AUC         0.788   -       0.941     0.980      0.611   -       0.895
VARNAMING Accuracy (%)   -       36.1    42.9      53.6       -       22.7    23.4
VARNAMING F1 (%)         -       44.0    50.1      65.8       -       30.6    32.0
Table 2: Ablation study for the GGNN model on SEENPROJTEST for the two tasks. | 1711.00740#29 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 30 | Table 2: Ablation study for the GGNN model on SEENPROJTEST for the two tasks.
Ablation Description                               Accuracy (%)
                                                   VARMISUSE   VARNAMING
Standard Model (reported in Table 1)               85.5        53.6
Only NextToken, Child, LastUse, LastWrite edges    80.6        31.2
Only semantic edges (all but NextToken, Child)     78.4        52.9
Only syntax edges (NextToken, Child)               55.3        34.3
Node Labels: Tokens instead of subtokens           85.6        34.5
Node Labels: Disabled                              84.3        31.8
5.1 QUANTITATIVE EVALUATION
Table 1 shows the evaluation results of the models for both tasks.4 As LOC captures very little information, it performs relatively badly. AVGLBL and AVGBIRNN, which capture information from many variable usage sites, but do not explicitly encode the rich structure of the problem, still lag behind the GGNN by a wide margin. The performance difference is larger for VARMISUSE, since the structure and the semantics of code are far more important within this setting. | 1711.00740#30 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 31 | Generalization to new projects Generalizing across a diverse set of source code projects with different domains is an important challenge in machine learning. We repeat the evaluation using the UNSEENPROJTEST set stemming from projects that have no ï¬les in the training set. The right side of Table 1 shows that our models still achieve good performance, although it is slightly lower compared to SEENPROJTEST. This is expected since the type lattice is mostly unknown in UNSEENPROJTEST.
We believe that the dominant problem in applying a trained model to an unknown project (i.e., domain) is the fact that its type hierarchy is unknown and the used vocabulary (e.g. in variables, method and class names, etc.) can differ substantially. | 1711.00740#31 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 32 | Ablation Study To study the effect of some of the design choices for our models, we have run some additional experiments and show their results in Table 2. First, we varied the edges used in the program graph. We ï¬nd that restricting the model to syntactic information has a large impact on performance on both tasks, whereas restricting it to semantic edges seems to mostly impact performance on VARMISUSE. Similarly, the ComputedFrom, FormalArgName and ReturnsTo edges give a small boost on VARMISUSE, but greatly improve performance on VARNAMING. As evidenced by the experiments with the node label representation, syntax node and token names seem to matter little for VARMISUSE, but naturally have a great impact on VARNAMING.
5.2 QUALITATIVE EVALUATION
Figure 3 illustrates the predictions that GGNN makes on a sample test snippet. The snippet recursively searches for the global directives file by gradually descending into the root folder. Reasoning about the correct variable usages is hard, even for humans, but the GGNN correctly predicts the variable
3 http://roslyn.io    4 Sect. A additionally shows ROC and precision-recall curves for the GGNN model on the VARMISUSE task.
Published as a conference paper at ICLR 2018 | 1711.00740#32 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 33 | 7
bool TryFindGlobalDirectivesFile(string baseDirectory, string fullPath, out string path){ baseDirectory1 = baseDirectory2.TrimEnd(Path.DirectorySeparatorChar); var directivesDirectory = Path.GetDirectoryName(fullPath3) .TrimEnd(Path.DirectorySeparatorChar); while(directivesDirectory4 != null && directivesDirectory5.Length >= baseDirectory6.Length){ path7 = Path.Combine(directivesDirectory8, GlobalDirectivesFileName9); if (File.Exists(path10)) return true; directivesDirectory11=Path.GetDirectoryName(directivesDirectory12) .TrimEnd(Path.DirectorySeparatorChar); } path13 = null; return false; } | 1711.00740#33 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 34 | 1: path:59%, baseDirectory:35%, fullPath:6%, GlobalDirectivesFileName:1% 2: baseDirectory:92%, fullPath:5%, GlobalDirectivesFileName:2%, path:0.4% 3: fullPath:88%, baseDirectory:9%, GlobalDirectivesFileName:2%, path:1% 4: directivesDirectory:86%, path:8%, baseDirectory:2%, GlobalDirectivesFileName:1%, fullPath:0.1% 5: directivesDirectory:46%, path:24%, baseDirectory:16%, GlobalDirectivesFileName:10%, fullPath:3% 6: baseDirectory:64%, path:26%, directivesDirectory:5%, fullPath:2%, GlobalDirectivesFileName:2% 7: path:99%, directivesDirectory:1%, GlobalDirectivesFileName:0.5%, baseDirectory:7e-5, fullPath:4e-7 8: fullPath:60%, directivesDirectory:21%, baseDirectory:18%, path:1%, GlobalDirectivesFileName:4e-4 9: GlobalDirectivesFileName:61%, baseDirectory:26%, fullPath:8%, path:4%, directivesDirectory:0.5% 10: path:70%, directivesDirectory:17%, baseDirectory:10%, GlobalDirectivesFileName:1%, fullPath:0.6% | 1711.00740#34 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 35 | directivesDirectory:0.5% 10: path:70%, directivesDirectory:17%, baseDirectory:10%, GlobalDirectivesFileName:1%, fullPath:0.6% 11: directivesDirectory:93%, path:5%, GlobalDirectivesFileName:1%, baseDirectory:0.1%, fullPath:4e-5% 12: directivesDirectory:65%, path:16%, baseDirectory:12%, fullPath:5%, GlobalDirectivesFileName:3% 13: path:97%, baseDirectory:2%, directivesDirectory:0.4%, fullPath:0.3%, GlobalDirectivesFileName:4e-4 | 1711.00740#35 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 36 | Figure 3: VARMISUSE predictions on slots within a snippet of the SEENPROJTEST set for the ServiceStack project. Additional visualizations are available in Appendix B. The underlined tokens are the correct tokens. The model has to select among a number of string variables at each slot, where all of them represent some kind of path. The GGNN accurately predicts the correct variable usage in 11 out of the 13 slots reasoning about the complex ways the variables interact among them.
public ArraySegment<byte> ReadBytes(int length){
    int size = Math.Min(length, _len - _pos);
    var buffer = EnsureTempBuffer( length );
    var used = Read(buffer, 0, size);
Figure 4: A bug found (yellow) in the RavenDB open-source project. The code unnecessarily ensures that the buffer is of size length rather than size (which our model predicts as the correct variable here).
usages at all locations except two (slots 1 and 8). As a software engineer is writing the code, it is imaginable that she may make a mistake misusing one variable in the place of another. Since all variables are string variables, no type errors will be raised. As the probabilities in Fig. 3 suggest, most potential variable misuses can be flagged by the model, yielding valuable warnings to software engineers. Additional samples with comments can be found in Appendix B. | 1711.00740#36 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
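The chunk above suggests turning the per-slot probabilities of Fig. 3 into warnings. A minimal sketch of that flagging logic follows; the 0.8 confidence threshold and the example slot are illustrative, not values from the paper:

```python
def flag_possible_misuses(slots, threshold=0.8):
    """Emit a warning when the model prefers a different variable than the one
    written and is confident about it.  `slots` maps slot id -> (written
    variable, {candidate: probability}); the threshold is an arbitrary choice."""
    warnings = []
    for slot_id, (written, probs) in slots.items():
        best, p = max(probs.items(), key=lambda kv: kv[1])
        if best != written and p >= threshold:
            warnings.append((slot_id, written, best, p))
    return warnings

# Hypothetical slot where the model strongly prefers another in-scope variable.
example = {"slot_1": ("fullPath", {"path": 0.95, "fullPath": 0.03, "baseDirectory": 0.02})}
print(flag_possible_misuses(example))
```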
1711.00740 | 37 | Furthermore, Appendix C shows samples of pairs of code snippets that share similar representations as computed by the cosine similarity of the usage representation u(t, v) of GGNN. The reader can notice that the network learns to group variable usages that share semantic similarities together. For example, checking for null before the use of a variable yields similar distributed representations across code segments (Sample 1 in Appendix C).
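The cosine-similarity comparison of usage representations mentioned above can be sketched as a simple nearest-neighbour lookup over stored u(t, v) vectors; the random vectors and snippet names are stand-ins for real model outputs:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar_usages(query, usage_reprs, k=3):
    """Rank stored usage representations u(t, v) by cosine similarity to a query
    representation, as in the Appendix C comparison described above."""
    ranked = sorted(usage_reprs.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return ranked[:k]

rng = np.random.default_rng(2)
reprs = {f"snippet_{i}": rng.normal(size=64) for i in range(10)}
query = rng.normal(size=64)
print([name for name, _ in most_similar_usages(query, reprs)])
```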
5.3 DISCOVERED VARIABLE MISUSE BUGS
We have used our VARMISUSE model to identify likely locations of bugs in RavenDB (a document database) and Roslyn (Microsoft's C# compiler framework). For this, we manually reviewed a sample of the top 500 locations in both projects where our model was most confident about choosing a variable differing from the ground truth, and found three bugs in each of the projects.
Figs. 1, 4, 5 show the issues discovered in RavenDB. The bug in Fig. 1 was possibly caused by copy-pasting, and cannot be easily caught by traditional methods. A compiler will not warn about
if (IsValidBackup(backupFilename) == false) { output("Error:"+ backupLocation +" doesn't look like a valid backup"); throw new InvalidOperationException( backupLocation + " doesn't look like a valid backup"); | 1711.00740#37 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 38 | Figure 5: A bug found (yellow) in the RavenDB open-source project. Although backupFilename is found to be invalid by IsValidBackup, the user is notiï¬ed that backupLocation is invalid instead.
unused variables (since first is used) and virtually nobody would write a test testing another test. Fig. 4 shows an issue that, although not critical, can lead to increased memory consumption. Fig. 5 shows another issue arising from a non-informative error message. We privately reported three additional bugs to the Roslyn developers, who have fixed the issues in the meantime (cf. https://github.com/dotnet/roslyn/pull/23437). One of the reported bugs could cause a crash in Visual Studio when using certain Roslyn features.
Finding these issues in widely released and tested code suggests that our model can be useful during the software development process, complementing classic program analysis tools. For example, one usage scenario would be to guide the code reviewing process to locations a VARMISUSE model has identified as unusual, or use it as a prior to focus testing or expensive code analysis efforts.
# 6 DISCUSSION & CONCLUSIONS | 1711.00740#38 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 39 | # 6 DISCUSSION & CONCLUSIONS
Although source code is well understood and studied within other disciplines such as programming language research, it is a relatively new domain for deep learning. It presents novel opportunities compared to textual or perceptual data, as its (local) semantics are well-defined and rich additional information can be extracted using well-known, efficient program analyses. On the other hand, integrating this wealth of structured information poses an interesting challenge. Our VARMISUSE task exposes these opportunities, going beyond simpler tasks such as code completion. We consider it as a first proxy for the core challenge of learning the meaning of source code, as it requires probabilistically refining standard information included in type systems.
# REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Miltiadis Allamanis, Earl T Barr, Christian Bird, and Charles Sutton. Learning natural coding conventions. In Foundations of Software Engineering (FSE), 2014. | 1711.00740#39 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 40 | Miltiadis Allamanis, Earl T Barr, Christian Bird, and Charles Sutton. Learning natural coding conventions. In Foundations of Software Engineering (FSE), 2014.
Miltiadis Allamanis, Earl T Barr, Christian Bird, and Charles Sutton. Suggesting accurate method and class names. In Foundations of Software Engineering (FSE), 2015.
Miltiadis Allamanis, Hao Peng, and Charles Sutton. A convolutional attention network for extreme summarization of source code. In International Conference on Machine Learning (ICML), pp. 2091â2100, 2016.
Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. A survey of machine learning for big code and naturalness. arXiv preprint arXiv:1709.06182, 2017.
Earl T Barr, Mark Harman, Yue Jia, Alexandru Marginean, and Justyna Petke. Automated software transplantation. In International Symposium on Software Testing and Analysis (ISSTA), 2015. | 1711.00740#40 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 41 | Al Bessey, Ken Block, Ben Chelf, Andy Chou, Bryan Fulton, Seth Hallem, Charles Henri-Gros, Asya Kamsky, Scott McPeak, and Dawson Engler. A few billion lines of code later: using static analysis to ï¬nd bugs in the real world. Communications of the ACM, 53(2):66â75, 2010.
Avishkar Bhoopchand, Tim Rocktäschel, Earl Barr, and Sebastian Riedel. Learning Python code suggestion with a sparse pointer network. arXiv preprint arXiv:1611.08307, 2016.
Benjamin Bichsel, Veselin Raychev, Petar Tsankov, and Martin Vechev. Statistical deobfuscation of android applications. In Conference on Computer and Communications Security (CCS), 2016.
Pavol Bielik, Veselin Raychev, and Martin Vechev. PHOG: probabilistic model for code. In International Conference on Machine Learning (ICML), 2016.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. Syntax, Semantics and Structure in Statistical Translation, 2014. | 1711.00740#41 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |
1711.00740 | 42 | Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral ï¬ltering. In Neural Information Processing Systems (NIPS), pp. 3844â3852, 2016.
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In IEEE International Joint Conference on Neural Networks (IJCNN). IEEE, 2005.
Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In International Conference on Knowledge Discovery and Data Mining (SIGKDD), pp. 855â864. ACM, 2016.
Abram Hindle, Earl T Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. On the naturalness of software. In International Conference on Software Engineering (ICSE), 2012.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. | 1711.00740#42 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 | [
{
"id": "1611.08307"
},
{
"id": "1703.06103"
},
{
"id": "1709.06182"
},
{
"id": "1704.01212"
},
{
"id": "1603.04467"
},
{
"id": "1609.02907"
}
] |