doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1611.01603 | 9 | The concatenation of the character and word embedding vectors is passed to a two-layer Highway Network (Srivastava et al., 2015). The outputs of the Highway Network are two sequences of d-dimensional vectors, or more conveniently, two matrices: X ∈ R^{d×T} for the context and Q ∈ R^{d×J} for the query.
3. Contextual Embedding Layer. We use a Long Short-Term Memory Network (LSTM) (Hochreiter & Schmidhuber, 1997) on top of the embeddings provided by the previous layers to model the temporal interactions between words. We place an LSTM in both directions, and concatenate the outputs of the two LSTMs. Hence we obtain H ∈ R^{2d×T} from the context word vectors X, and U ∈ R^{2d×J} from query word vectors Q. Note that each column vector of H and U is 2d-dimensional because of the concatenation of the outputs of the forward and backward LSTMs, each with d-dimensional output.
It is worth noting that the first three layers of the model are computing features from the query and context at different levels of granularity, akin to the multi-stage feature computation of convolutional neural networks in the computer vision field. | 1611.01603#9 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
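The 1611.01603 chunk in the row above describes layers 1-3 of BiDAF: concatenated character/word embeddings pass through a two-layer highway network, and a bidirectional LSTM then produces H ∈ R^{2d×T} and U ∈ R^{2d×J}. Below is a minimal PyTorch sketch of the contextual embedding step, assuming the highway outputs are already available as (batch, length, d) tensors; the variable names are illustrative and not the authors' released code.

```python
import torch
import torch.nn as nn

d, T, J = 100, 30, 10                 # hidden size, context length, query length
X = torch.randn(1, T, d)              # highway-network output for the context
Q = torch.randn(1, J, d)              # highway-network output for the query

# One bidirectional LSTM shared by context and query, as in the paper's layer 3.
bilstm = nn.LSTM(input_size=d, hidden_size=d, batch_first=True, bidirectional=True)

H, _ = bilstm(X)                      # (1, T, 2d): forward/backward outputs concatenated
U, _ = bilstm(Q)                      # (1, J, 2d)
print(H.shape, U.shape)               # torch.Size([1, 30, 200]) torch.Size([1, 10, 200])
```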
1611.01626 | 9 | # 2 REINFORCEMENT LEARNING
We consider the infinite horizon, discounted, finite state and action space Markov decision process, with state space S, action space A and rewards at each time period denoted by r_t ∈ R. A policy π : S × A → R_+ is a mapping from a state-action pair to the probability of taking that action at that state, so it must satisfy Σ_{a∈A} π(s,a) = 1 for all states s ∈ S. Any policy π induces a probability distribution over visited states, d^π : S → R_+ (which may depend on the initial state), so the probability of seeing state-action pair (s,a) ∈ S × A is d^π(s)π(s,a). | 1611.01626#9 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
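The 1611.01626 chunk in the row above requires Σ_a π(s,a) = 1 for every state and defines the induced state-action distribution d^π(s)π(s,a). A small NumPy illustration of both constraints on a toy tabular policy; all numbers are made up.

```python
import numpy as np

n_states, n_actions = 4, 3
rng = np.random.default_rng(0)

# Parameterize the policy with arbitrary preferences and normalize per state.
prefs = rng.normal(size=(n_states, n_actions))
pi = np.exp(prefs) / np.exp(prefs).sum(axis=1, keepdims=True)
assert np.allclose(pi.sum(axis=1), 1.0)        # sum_a pi(s, a) = 1 for every s

# Given some state-visitation distribution d_pi, the chance of seeing (s, a).
d_pi = rng.dirichlet(np.ones(n_states))
joint = d_pi[:, None] * pi                     # d_pi(s) * pi(s, a)
assert np.isclose(joint.sum(), 1.0)
```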
1611.01673 | 9 | In practice, max_{D_i∈D} V(D_i, G) is not performed to convergence (or global optimality), so the above problem is oversimplified. Furthermore, introducing N discriminators affects the dynamics of the game, which affects the trajectories of the discriminators. This prevents us from claiming max{V_1(t), ..., V_N(t)} > max{V_1'(t)} ∀t even if we initialize D_i(0) = D_i'(0), as it is unlikely that D_i(t) = D_i'(t) at some time t after the start of the game.
3.2 BOOSTING
We can also consider taking the max over N discriminators as a form of boosting for the discriminator's online classification problem (online because G can produce an infinite data stream). The boosted discriminator is given a sample x_t and must predict whether it came from the generator or the dataset. The booster then makes its prediction using the predictions of the N weaker D_i. | 1611.01673#9 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 10 | Regularization An important extension to the stacked QRNN is a robust regularization scheme inspired by recent work in regularizing LSTMs.
The need for an effective regularization method for LSTMs, and dropout's relative lack of efficacy when applied to recurrent connections, led to the development of recurrent dropout schemes, including variational inference-based dropout (Gal & Ghahramani, 2016) and zoneout (Krueger et al., 2016). These schemes extend dropout to the recurrent setting by taking advantage of the repeating structure of recurrent networks, providing more powerful and less destructive regularization.
Variational inference-based dropout locks the dropout mask used for the recurrent connections across timesteps, so a single RNN pass uses a single stochastic subset of the recurrent weights. Zoneout stochastically chooses a new subset of channels to 'zone out' at each timestep; for these channels the network copies states from one timestep to the next without modification. | 1611.01576#10 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
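The 1611.01576 chunk in the row above contrasts variational inference-based dropout, which locks one dropout mask across all timesteps, with zoneout. A minimal sketch of such a locked mask for a (batch, time, channels) activation, assuming PyTorch; this illustrates the idea only and is not the QRNN authors' implementation.

```python
import torch

def locked_dropout(x: torch.Tensor, p: float = 0.3) -> torch.Tensor:
    """Apply the same dropout mask at every timestep (variational dropout)."""
    if p == 0.0:
        return x
    # Sample one mask per (batch, channel) and broadcast it over time.
    mask = x.new_empty(x.size(0), 1, x.size(2)).bernoulli_(1 - p) / (1 - p)
    return x * mask

h = torch.randn(2, 5, 4)          # (batch, time, channels)
out = locked_dropout(h)
# Every timestep of a given channel is either kept (and rescaled) or zeroed together.
```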
1611.01578 | 10 | # 3.2 TRAINING WITH REINFORCE
The list of tokens that the controller predicts can be viewed as a list of actions a_{1:T} to design an architecture for a child network. At convergence, this child network will achieve an accuracy R on a held-out dataset. We can use this accuracy R as the reward signal and use reinforcement learning to train the controller. More concretely, to find the optimal architecture, we ask our controller to maximize its expected reward, represented by J(θ_c):
J(θ_c) = E_{P(a_{1:T}; θ_c)}[R]
Since the reward signal R is non-differentiable, we need to use a policy gradient method to iteratively update θc. In this work, we use the REINFORCE rule from Williams (1992):
∇_{θ_c} J(θ_c) = Σ_{t=1}^{T} E_{P(a_{1:T}; θ_c)}[∇_{θ_c} log P(a_t | a_{(t−1):1}; θ_c) R]
An empirical approximation of the above quantity is:
(1/m) Σ_{k=1}^{m} Σ_{t=1}^{T} ∇_{θ_c} log P(a_t | a_{(t−1):1}; θ_c) R_k
Where m is the number of different architectures that the controller samples in one batch and T is the number of hyperparameters our controller has to predict to design a neural network architecture.
| 1611.01578#10 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
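The 1611.01578 chunk in the row above estimates ∇_{θ_c} J(θ_c) by averaging Σ_t ∇ log P(a_t | a_{<t}; θ_c) · R_k over m sampled architectures. A toy NumPy sketch in which the controller is replaced by independent per-token softmaxes and the reward is faked; only the gradient bookkeeping mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(0)
T, V, m = 4, 3, 8                            # tokens per architecture, vocab size, sampled batch
theta = rng.normal(scale=0.1, size=(T, V))   # one softmax per decision (stand-in for the RNN)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

grad = np.zeros_like(theta)
for _ in range(m):
    R = 0.0
    g = np.zeros_like(theta)
    for t in range(T):
        p = softmax(theta[t])
        a = rng.choice(V, p=p)
        g[t] += np.eye(V)[a] - p             # d/dtheta log softmax(theta)[a]
        R += float(a == 0)                   # fake "accuracy" reward, illustration only
    grad += g * R                            # sum_t grad log P(a_t) * R_k
grad /= m                                    # 1/m sum over the m sampled architectures
```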
1611.01600 | 10 | # 3 LOSS-AWARE BINARIZATION
As can be seen, existing weight binarization methods (Courbariaux et al., 2015; Rastegari et al., 2016) simply find the closest binary approximation of w, and ignore its effect on the loss. In this paper, we consider the loss directly during binarization. As in (Rastegari et al., 2016), we also binarize the weight w_l in each layer as ŵ_l = α_l b_l, where α_l > 0 and b_l is binary.
In the following, we make two assumptions on ℓ: (A1) ℓ is continuously differentiable with Lipschitz-continuous gradient, i.e., there exists β > 0 such that ‖∇ℓ(u) − ∇ℓ(v)‖_2 ≤ β‖u − v‖_2 for any u, v; (A2) ℓ is bounded from below.
3.1 BINARIZATION USING PROXIMAL NEWTON ALGORITHM
We formulate weight binarization as the following optimization problem: | 1611.01600#10 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
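The 1611.01600 chunk in the row above notes that prior schemes pick the closest binary approximation ŵ_l = α_l b_l to the weights, ignoring the loss. A NumPy sketch of that loss-agnostic baseline (for the l2 objective the minimizer is b = sign(w), α = mean|w|); the loss-aware alternative is developed later in the paper.

```python
import numpy as np

def closest_binary(w: np.ndarray):
    """Binarize w as alpha * b minimizing ||w - alpha*b||_2 (loss-agnostic baseline)."""
    b = np.sign(w)
    b[b == 0] = 1.0                  # break ties so b is strictly in {-1, +1}
    alpha = np.abs(w).mean()         # optimal scale for the l2 objective
    return alpha, b

w = np.array([0.7, -0.2, 0.05, -1.3])
alpha, b = closest_binary(w)
w_hat = alpha * b                    # [0.5625, -0.5625, 0.5625, -0.5625]
```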
1611.01603 | 10 | 4. Attention Flow Layer. The attention flow layer is responsible for linking and fusing information from the context and the query words. Unlike previously popular attention mechanisms (Weston et al., 2015; Hill et al., 2016; Sordoni et al., 2016; Shen et al., 2016), the attention flow layer is not used to summarize the query and context into single feature vectors. Instead, the attention vector at each time step, along with the embeddings from previous layers, is allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization.
The inputs to the layer are contextual vector representations of the context H and the query U. The outputs of the layer are the query-aware vector representations of the context words, G, along with the contextual embeddings from the previous layer.
In this layer, we compute attentions in two directions: from context to query as well as from query to context. Both of these attentions, which will be discussed below, are derived from a shared similarity matrix, S ∈ R^{T×J}, between the contextual embeddings of the context (H) and the query (U), where S_{tj} indicates the similarity between the t-th context word and the j-th query word. The similarity matrix is computed by | 1611.01603#10 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
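The 1611.01603 chunk in the row above derives both attention directions from a shared similarity matrix S ∈ R^{T×J} with S_{tj} = α(H_{:t}, U_{:j}) and α(h, u) = w^T[h; u; h∘u]. A NumPy sketch of that computation; sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
two_d, T, J = 8, 5, 3
H = rng.normal(size=(two_d, T))          # contextual context embeddings
U = rng.normal(size=(two_d, J))          # contextual query embeddings
w_s = rng.normal(size=3 * two_d)         # trainable weight vector w_(S)

S = np.empty((T, J))
for t in range(T):
    for j in range(J):
        h, u = H[:, t], U[:, j]
        S[t, j] = w_s @ np.concatenate([h, u, h * u])   # alpha(h, u) = w^T [h; u; h*u]
```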
In reinforcement learning an 'agent' interacts with an environment over a number of time steps. At each time step t the agent receives a state s_t and a reward r_t, and selects an action a_t from the policy π_t, at which point the agent moves to the next state s_{t+1} ∼ P(·, s_t, a_t), where P(s', s, a) is the probability of transitioning from state s to state s' after taking action a. This continues until the agent encounters a terminal state (after which the process is typically restarted). The goal of the agent is to find a policy π that maximizes the expected total discounted return J(π) = E(Σ_{t=0}^∞ γ^t r_t | π), where the expectation is with respect to the initial state distribution, the state-transition probabilities, and the policy, and where γ ∈ (0, 1) is the discount factor that, loosely speaking, controls how much the agent prioritizes long-term versus short-term rewards. Since the agent starts with no knowledge of the environment it must continually explore the state space and so will typically use a stochastic policy. | 1611.01626#10 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
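The 1611.01626 chunk in the row above defines the objective J(π) = E(Σ_t γ^t r_t | π) with discount γ ∈ (0, 1). A two-line Python example computing the discounted return of one sampled reward sequence.

```python
gamma = 0.9
rewards = [1.0, 0.0, 0.5, 2.0]                        # r_0, r_1, r_2, r_3 from one episode
G = sum(gamma**t * r for t, r in enumerate(rewards))  # 1 + 0 + 0.405 + 1.458 = 2.863
```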
1611.01673 | 10 | There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1, our booster is limited to selecting a single weak discriminator (i.e. a pure strategy), while in case 2, many boosting algorithms more generally use linear combinations of the discriminators. Moreover, in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assume access to the loss function at prediction time, which allows us to compute the max.
It is possible to train the weak discriminators using boosting and then ignore the booster's prediction by instead presenting max{V_i}. We explore both variants in our experiments, using the adaptive algorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promising results on the image generation tasks. It is possible that boosting produces too strong an adversary for learning, which motivates the next section. Boosting results appear in Appendix A.7.
# 4 A FORGIVING TEACHER
The previous perspectives focus on improving the discriminator with the goal of presenting a better approximation of max_D V(D, G) to the generator. Our next perspective asks the question, "Is max V(D, G) too harsh a critic?"
4.1 Soft-DISCRIMINATOR | 1611.01673#10 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
As QRNNs lack recurrent weights, the variational inference approach does not apply. Thus we extended zoneout to the QRNN architecture by modifying the pooling function to keep the previous pooling state for a stochastic subset of channels. Conveniently, this is equivalent to stochastically setting a subset of the QRNN's f gate channels to 1, or applying dropout on 1 − f:
F = 1 − dropout(1 − σ(W_f ∗ X))   (6)
Thus the pooling function itself need not be modified at all. We note that when using an off-the-shelf dropout layer in this context, it is important to remove automatic rescaling functionality from the implementation if it is present. In many experiments, we also apply ordinary dropout between layers, including between word embeddings and the first QRNN layer. | 1611.01576#11 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
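Equation (6) in the 1611.01576 chunk in the row above implements zoneout by applying dropout to 1 − f, so a dropped channel gets f = 1 and copies its previous pooling state. A short PyTorch sketch of that gate modification; the convolution output W_f ∗ X is replaced by a random tensor, and the multiplication by (1 − p) undoes the automatic rescaling the text warns about.

```python
import torch
import torch.nn.functional as nnf

torch.manual_seed(0)
zoneout_p = 0.25
f_pre = torch.randn(2, 6, 10)                 # stand-in for W_f * X: (batch, channels, time)
f = torch.sigmoid(f_pre)

# F = 1 - dropout(1 - sigmoid(W_f * X)); nnf.dropout rescales kept values by 1/(1-p),
# so multiply by (1 - p) to remove the rescaling, as the text recommends.
f_zoneout = 1 - nnf.dropout(1 - f, p=zoneout_p, training=True) * (1 - zoneout_p)
# Channels whose (1 - f) entry was dropped now have f = 1, i.e. they copy the previous state.
```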
1611.01578 | 11 | The validation accuracy that the k-th neural network architecture achieves after being trained on a training dataset is R_k.
The above update is an unbiased estimate for our gradient, but has a very high variance. In order to reduce the variance of this estimate we employ a baseline function:
(1/m) Σ_{k=1}^{m} Σ_{t=1}^{T} ∇_{θ_c} log P(a_t | a_{(t−1):1}; θ_c)(R_k − b)
As long as the baseline function b does not depend on the current action, this is still an unbiased gradient estimate. In this work, our baseline b is an exponential moving average of the previous architecture accuracies. | 1611.01578#11 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
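The 1611.01578 chunk in the row above subtracts a baseline b, an exponential moving average of previous architecture accuracies, from the reward to reduce variance. A tiny sketch of that baseline update; the decay constant is an arbitrary illustrative choice.

```python
decay = 0.95
baseline = None

def update_baseline(accuracy: float) -> float:
    """Exponential moving average of child-architecture accuracies."""
    global baseline
    baseline = accuracy if baseline is None else decay * baseline + (1 - decay) * accuracy
    return baseline

for R_k in [0.62, 0.64, 0.61, 0.67]:          # rewards from successive child networks
    b = update_baseline(R_k)
    advantage = R_k - b                       # (R_k - b) scales the log-prob gradients
```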
1611.01600 | 11 | 3.1 BINARIZATION USING PROXIMAL NEWTON ALGORITHM
We formulate weight binarization as the following optimization problem:
min_{ŵ} ℓ(ŵ)   (3)
s.t. ŵ_l = α_l b_l, α_l > 0, b_l ∈ {±1}^{n_l}, l = 1, ..., L,   (4)
where ℓ is the loss. Let C be the feasible region in (4), and define its indicator function: I_C(ŵ) = 0 if ŵ ∈ C, and ∞ otherwise. Problem (3) can then be rewritten as
min_{ŵ} ℓ(ŵ) + I_C(ŵ).   (5)
We solve (5) using the proximal Newton method (Section 2.2). At iteration t, the smooth term ℓ(ŵ) is replaced by the second-order expansion
ℓ(ŵ^{t−1}) + ∇ℓ(ŵ^{t−1})^T (ŵ^t − ŵ^{t−1}) + (1/2)(ŵ^t − ŵ^{t−1})^T H^{t−1} (ŵ^t − ŵ^{t−1}), | 1611.01600#11 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
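The 1611.01600 chunk in the row above binarizes by minimizing a second-order expansion of the loss over the constraint set, with a diagonal curvature approximation. The sketch below is a simplified, illustrative rendering of one such proximal-Newton-style step (a diagonal Newton target, then a curvature-weighted scale and sign); it is not a faithful reproduction of the paper's full algorithm.

```python
import numpy as np

def loss_aware_binarize(w_prev, grad, d):
    """One simplified proximal-Newton-style binarization step (illustrative only).

    w_prev : current weights of one layer
    grad   : gradient of the loss at w_prev
    d      : positive diagonal Hessian approximation (e.g. from Adam's second moment)
    """
    w_target = w_prev - grad / d                    # diagonal Newton step
    b = np.where(w_target >= 0, 1.0, -1.0)
    alpha = np.abs(d * w_target).sum() / d.sum()    # curvature-weighted scale
    return alpha * b

w = np.array([0.8, -0.1, 0.3])
g = np.array([0.05, -0.02, 0.1])
d = np.array([1.0, 0.2, 0.5])                       # larger d = higher-curvature directions
w_bin = loss_aware_binarize(w, g, d)
```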
1611.01603 | 11 | S_{tj} = α(H_{:t}, U_{:j}) ∈ R   (1)
where α is a trainable scalar function that encodes the similarity between its two input vectors, H_{:t} is the t-th column vector of H, and U_{:j} is the j-th column vector of U. We choose α(h, u) = w_{(S)}^T [h; u; h ∘ u], where w_{(S)} ∈ R^{6d} is a trainable weight vector, ∘ is elementwise multiplication, [;] is vector concatenation across row, and implicit multiplication is matrix multiplication. Now we use S to obtain the attentions and the attended vectors in both directions.
Context-to-query Attention. Context-to-query (C2Q) attention signifies which query words are most relevant to each context word. Let a_t ∈ R^J represent the attention weights on the query words by the t-th context word, with Σ_j a_{tj} = 1 for all t. The attention weight is computed by a_t = softmax(S_{t:}) ∈ R^J, and subsequently each attended query vector is Ũ_{:t} = Σ_j a_{tj} U_{:j}. Hence Ũ is a 2d-by-T matrix containing the attended query vectors for the entire context. | 1611.01603#11 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
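The 1611.01603 chunk in the row above computes context-to-query attention row-wise: a_t = softmax(S_{t:}) and Ũ_{:t} = Σ_j a_{tj} U_{:j}. A compact NumPy version using the same shapes; names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
two_d, T, J = 8, 5, 3
S = rng.normal(size=(T, J))              # similarity matrix from the previous step
U = rng.normal(size=(two_d, J))          # contextual query embeddings

A = np.exp(S - S.max(axis=1, keepdims=True))
A = A / A.sum(axis=1, keepdims=True)     # a_t = softmax over query words, rows sum to 1
U_tilde = U @ A.T                        # (2d, T): attended query vector per context word
```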
1611.01626 | 11 | Action-values. The action-value, or Q-value, of a particular state-action pair under policy π is the expected total discounted return from taking that action at that state and following π thereafter, i.e., Q^π(s,a) = E(Σ_{t=0}^∞ γ^t r_t | s_0 = s, a_0 = a, π). The value of state s under policy π is denoted by V^π(s) = E(Σ_{t=0}^∞ γ^t r_t | s_0 = s, π), which is the expected total discounted return of policy π from state s. The optimal action-value function is denoted Q* and satisfies Q*(s,a) = max_π Q^π(s,a) for each (s,a). The policy that achieves the maximum is the optimal policy π*, with value function V*. The advantage function is the difference between the action-value and the value function, i.e., A^π(s,a) = Q^π(s,a) − V^π(s), and represents the additional expected reward of taking action a over the average performance of the policy from state s. Since V^π(s) = Σ_a π(s,a)Q^π(s,a), we have the identity Σ_a π(s,a)A^π(s,a) = 0, which simply states that the policy π has no advantage over itself. | 1611.01626#11 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
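The 1611.01626 chunk in the row above defines A^π(s,a) = Q^π(s,a) − V^π(s) and states the identity Σ_a π(s,a) A^π(s,a) = 0. A short NumPy check of that identity on arbitrary numbers.

```python
import numpy as np

pi = np.array([0.2, 0.5, 0.3])               # pi(s, .) for one state
Q = np.array([1.0, -0.4, 2.2])               # Q_pi(s, .)
V = pi @ Q                                   # V_pi(s) = sum_a pi(s,a) Q_pi(s,a)
A = Q - V
assert np.isclose(pi @ A, 0.0)               # sum_a pi(s,a) A_pi(s,a) = 0
```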
1611.01673 | 11 | 4.1 SOFT-DISCRIMINATOR
In practice, training against a far superior discriminator can impede the generator's learning. This is because the generator is unlikely to generate any samples considered 'realistic' by the discriminator's standards, and so the generator will receive uniformly negative feedback. This is problematic because the information contained in the gradient derived from negative feedback only dictates where to drive down p_G(x), not specifically where to increase p_G(x). Furthermore, driving down p_G(x) necessarily increases p_G(x) in other regions of X (to maintain ∫_x p_G(x) = 1), which may or may not contain samples from the true dataset (whack-a-mole dilemma). In contrast, a generator is more likely to see positive feedback against a more lenient discriminator, which may better guide a generator towards amassing p_G(x) in approximately correct regions of X.
For this reason, we explore a variety of functions that allow us to soften the max operator. We choose to focus on soft versions of the three classical Pythagorean means parameterized by λ, where λ = 0 corresponds to the mean and the max is recovered as λ → ∞:
AM_soft(V, λ) = Σ_i w_i V_i   (3) | 1611.01673#11 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
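Equations (3)-(5) in the 1611.01673 chunk in the row above soften the max over discriminator values with softmax weights w_i. A NumPy sketch of the three soft Pythagorean means, assuming w_i ∝ e^{λV_i} and V_i < 0 as in the text; λ = 0 recovers the plain means and large λ approaches the max.

```python
import numpy as np

def soft_means(V: np.ndarray, lam: float):
    """Softened arithmetic, geometric, and harmonic means of values V_i < 0."""
    w = np.exp(lam * V)
    w = w / w.sum()                               # w_i = exp(lam*V_i) / sum_j exp(lam*V_j)
    am = np.sum(w * V)                            # eq. (3)
    gm = -np.exp(np.sum(w * np.log(-V)))          # eq. (4), defined for V_i < 0
    hm = 1.0 / np.sum(w / V)                      # eq. (5): equals -(sum_i w_i (-V_i)^-1)^-1
    return am, gm, hm

V = np.array([-0.2, -1.5, -0.7])                  # V(D_i, G) values, all negative
print(soft_means(V, lam=0.0))                     # lam = 0 gives the plain means
print(soft_means(V, lam=10.0))                    # large lam approaches max(V) = -0.2
```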
1611.01576 | 12 | Densely-Connected Layers We can also extend the QRNN architecture using techniques introduced for convolutional networks. For sequence classification tasks, we found it helpful to use skip-connections between every QRNN layer, a technique termed 'dense convolution' by Huang et al. (2016). Where traditional feed-forward or convolutional networks have connections only between subsequent layers, a 'DenseNet' with L layers has feed-forward or convolutional connections between every pair of layers, for a total of L(L−1). This can improve gradient flow and convergence properties, especially in deeper networks, although it requires a parameter count that is quadratic in the number of layers.
When applying this technique to the QRNN, we include connections between the input embeddings and every QRNN layer and between every pair of QRNN layers. This is equivalent to concatenating
Figure 2: The QRNN encoder-decoder architecture used for machine translation experiments. | 1611.01576#12 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
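The 1611.01576 chunk in the row above adds DenseNet-style connections by concatenating each layer's input to its output along the channel dimension. A structural PyTorch sketch with plain linear layers standing in for QRNN layers, intended only to show how widths grow and where the concatenation happens.

```python
import torch
import torch.nn as nn

class DenselyConnectedStack(nn.Module):
    """Each layer sees the channel-wise concatenation of all previous layers' outputs."""
    def __init__(self, in_channels: int, hidden: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        width = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Linear(width, hidden))   # stand-in for a QRNN layer
            width += hidden                                 # inputs grow by `hidden` per layer

    def forward(self, x):                                   # x: (batch, time, in_channels)
        for layer in self.layers:
            x = torch.cat([x, torch.tanh(layer(x))], dim=-1)
        return x[..., -self.layers[-1].out_features:]       # keep only the last layer's output

stack = DenselyConnectedStack(in_channels=16, hidden=32, num_layers=3)
out = stack(torch.randn(4, 7, 16))                          # (4, 7, 32)
```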
1611.01578 | 12 | Accelerate Training with Parallelism and Asynchronous Updates: In Neural Architecture Search, each gradient update to the controller parameters θ_c corresponds to training one child network to convergence. As training a child network can take hours, we use distributed training and asynchronous parameter updates in order to speed up the learning process of the controller (Dean et al., 2012). We use a parameter-server scheme where we have a parameter server of S shards that stores the shared parameters for K controller replicas. Each controller replica samples m different child architectures that are trained in parallel. The controller then collects gradients according to the results of that minibatch of m architectures at convergence and sends them to the parameter server in order to update the weights across all controller replicas. In our implementation, convergence of each child network is reached when its training exceeds a certain number of epochs. This scheme of parallelism is summarized in Figure 3.
[Figure 3 diagram: S parameter servers send parameters to K controller replicas; each controller replica spawns m child replicas and reports their accuracies R.] | 1611.01578#12 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01603 | 12 | Query-to-context Attention. Query-to-context (Q2C) attention signifies which context words have the closest similarity to one of the query words and are hence critical for answering the query.
We obtain the attention weights on the context words by b = softmax(max_col(S)) ∈ R^T, where the maximum function (max_col) is performed across the column. Then the attended context vector is h̃ = Σ_t b_t H_{:t} ∈ R^{2d}. This vector indicates the weighted sum of the most important words in the context with respect to the query. h̃ is tiled T times across the column, thus giving H̃ ∈ R^{2d×T}.
Finally, the contextual embeddings and the attention vectors are combined together to yield G, where each column vector can be considered as the query-aware representation of each context word. We define G by | 1611.01603#12 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
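The 1611.01603 chunk in the row above forms query-to-context attention from the column-wise maximum of S: b = softmax(max_col(S)) and h̃ = Σ_t b_t H_{:t}, then tiles h̃ across T columns. A short NumPy version with illustrative shapes.

```python
import numpy as np

rng = np.random.default_rng(0)
two_d, T, J = 8, 5, 3
S = rng.normal(size=(T, J))
H = rng.normal(size=(two_d, T))

m = S.max(axis=1)                             # max over query words for each context word, (T,)
b = np.exp(m - m.max()); b /= b.sum()         # b = softmax(max_col(S)) in R^T
h_tilde = H @ b                               # (2d,): weighted sum of context columns
H_tilde = np.tile(h_tilde[:, None], (1, T))   # tiled T times -> (2d, T)
```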
1611.01626 | 12 | Bellman equation. The Bellman operator T^π (Bellman, 1957) for policy π is defined as
T^π Q(s,a) = E(r(s,a) + γ Q(s', b))
where the expectation is over the next state s' ∼ P(·, s, a), the reward r(s,a), and the action b drawn from the policy π at s'. The Q-value function for policy π is the fixed point of the Bellman operator for π, i.e., T^π Q^π = Q^π. The optimal Bellman operator T* is defined as
T* Q(s,a) = E(r(s,a) + γ max_b Q(s', b)), | 1611.01626#12 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
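The 1611.01626 chunk in the row above defines the Bellman operator T^π and the optimal operator T*. A toy NumPy example applying both on a random two-state, two-action MDP; repeated application of T* is exactly value iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 2, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))     # P[s, a, s'] transition probabilities
r = rng.normal(size=(nS, nA))                     # expected reward r(s, a)
pi = np.full((nS, nA), 0.5)                       # a fixed stochastic policy

def T_pi(Q):
    # (T^pi Q)(s,a) = E[ r(s,a) + gamma * Q(s', b) ],  b ~ pi(s', .)
    return r + gamma * np.einsum('sap,pb,pb->sa', P, pi, Q)

def T_star(Q):
    # (T* Q)(s,a) = E[ r(s,a) + gamma * max_b Q(s', b) ]
    return r + gamma * P @ Q.max(axis=1)

Q = np.zeros((nS, nA))
for _ in range(200):                              # value iteration: (T*)^k Q converges to Q*
    Q = T_star(Q)
```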
1611.01673 | 12 | AM_soft(V, λ) = Σ_i w_i V_i   (3)
GM_soft(V, λ) = −exp(Σ_i w_i log(−V_i))   (4)
HM_soft(V, λ) = −(Σ_i w_i (−V_i)^{−1})^{−1}   (5)
where w_i = e^{λV_i} / Σ_j e^{λV_j} with λ ≥ 0, V_i < 0. Using a softmax also has the well-known advantage of being differentiable (as opposed to subdifferentiable for max). Note that we only require continuity to guarantee that computing the softmax is actually equivalent to computing V(D̄, G) where D̄ is some convex combination of D_i (see Appendix A.5).
4.2 USING THE ORIGINAL MINIMAX OBJECTIVE
To illustrate the effect the softmax has on training, observe that the component of AM_soft(V, 0) relevant to generator training can be rewritten as
(1/N) Σ_i E_{x∼p_G}[log(1 − D_i(x))] = E_{x∼p_G}[log(z)].   (6) | 1611.01673#12 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 13 | Figure 2: The QRNN encoder-decoder architecture used for machine translation experiments.
each QRNN layer's input to its output along the channel dimension before feeding the state into the next layer. The output of the last layer alone is then used as the overall encoding result.
Encoder-Decoder Models To demonstrate the generality of QRNNs, we extend the model architecture to sequence-to-sequence tasks, such as machine translation, by using a QRNN as encoder and a modified QRNN, enhanced with attention, as decoder. The motivation for modifying the decoder is that simply feeding the last encoder hidden state (the output of the encoder's pooling layer) into the decoder's recurrent pooling layer, analogously to conventional recurrent encoder-decoder architectures, would not allow the encoder state to affect the gate or update values that are provided to the decoder's pooling layer. This would substantially limit the representational power of the decoder. | 1611.01576#13 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 13 | Figure 3: Distributed training for Neural Architecture Search. We use a set of S parameter servers to store and send parameters to K controller replicas. Each controller replica then samples m architectures and runs the multiple child models in parallel. The accuracy of each child model is recorded to compute the gradients with respect to θ_c, which are then sent back to the parameter servers.
INCREASE ARCHITECTURE COMPLEXITY WITH SKIP CONNECTIONS AND OTHER LAYER TYPES
In Section 3.1, the search space does not have skip connections, or branching layers used in modern architectures such as GoogleNet (Szegedy et al., 2015), and Residual Net (He et al., 2016a). In this section we introduce a method that allows our controller to propose skip connections or branching layers, thereby widening the search space.
To enable the controller to predict such connections, we use a set-selection type attention (Neelakantan et al., 2015) which was built upon the attention mechanism (Bahdanau et al., 2015; Vinyals et al., 2015). At layer N, we add an anchor point which has N − 1 content-based sigmoids to indicate the previous layers that need to be connected. Each sigmoid is a function of the current hidden state of the controller and the previous hidden states of the previous N − 1 anchor points: | 1611.01578#13 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
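The 1611.01578 chunk in the row above adds an anchor point at layer N whose N − 1 content-based sigmoids select earlier layers to connect. The chunk stops before the exact formula, so the sketch below assumes the commonly cited parameterization sigmoid(v^T tanh(W_prev h_j + W_curr h_N)) and samples each skip connection independently; treat that exact form as an assumption here.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 16
W_prev = rng.normal(scale=0.1, size=(hidden, hidden))
W_curr = rng.normal(scale=0.1, size=(hidden, hidden))
v = rng.normal(scale=0.1, size=hidden)

def connection_probs(prev_anchors, h_curr):
    """Sigmoid score for connecting each previous layer j to the current layer."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return np.array([sigmoid(v @ np.tanh(W_prev @ h_j + W_curr @ h_curr))
                     for h_j in prev_anchors])

anchors = [rng.normal(size=hidden) for _ in range(3)]   # hidden states of anchor points 1..N-1
h_N = rng.normal(size=hidden)                           # controller state at layer N's anchor
p = connection_probs(anchors, h_N)
skip = rng.random(len(p)) < p                           # sample which skip connections to use
```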
1611.01600 | 13 | For neural networks, the exact Hessian is rarely positive semi-definite. This can be problematic as the nonconvex objective leads to indefinite quadratic optimization. Moreover, computing the exact Hessian is both time- and space-inefficient on large networks. To alleviate these problems, a popular approach is to approximate the Hessian by a diagonal positive definite matrix D. One popular choice is the efficient Jacobi preconditioner. Though an efficient approximation of the Hessian under certain conditions, it is not competitive for indefinite matrices (Dauphin et al., 2015a). More recently, it is shown that equilibration provides a more robust preconditioner in the presence of saddle points (Dauphin et al., 2015a). This is also adopted by popular stochastic optimization algorithms such as RMSprop (Tieleman & Hinton, 2012) and Adam (Kingma & Ba, 2015). Specifically, the second moment v in these algorithms is an estimator of diag(H^2) (Dauphin et al., 2015b). Here, we use the square root of this v, which is readily available in Adam, to construct D = | 1611.01600#13 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 13 | G_{:t} = β(H_{:t}, Ũ_{:t}, H̃_{:t}) ∈ R^{d_G}   (2) where G_{:t} is the t-th column vector (corresponding to the t-th context word), β is a trainable vector function that fuses its (three) input vectors, and d_G is the output dimension of the β function. While the β function can be an arbitrary trainable neural network, such as a multi-layer perceptron, a simple concatenation as follows still shows good performance in our experiments: β(h, ũ, h̃) = [h; ũ; h ∘ ũ; h ∘ h̃] ∈ R^{8d×T} (i.e., d_G = 8d). | 1611.01603#13 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
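The chunk above (1611.01603#13) defines the query-aware fusion β(h, ũ, h̃) = [h; ũ; h ∘ ũ; h ∘ h̃]. Below is a minimal numpy sketch of that concatenation applied column-wise; the dimensions and variable names are illustrative.

```python
import numpy as np

def beta_fusion(H, U_tilde, H_tilde):
    """Fuse contextual embeddings with attended vectors: [h; u~; h*u~; h*h~].

    H, U_tilde, H_tilde: arrays of shape (2d, T); returns G of shape (8d, T).
    """
    return np.concatenate([H, U_tilde, H * U_tilde, H * H_tilde], axis=0)

d, T = 3, 5
rng = np.random.default_rng(1)
H, U_tilde, H_tilde = (rng.normal(size=(2 * d, T)) for _ in range(3))
G = beta_fusion(H, U_tilde, H_tilde)
print(G.shape)  # (24, 5) == (8d, T)
```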
1611.01626 | 13 | T*Q(s,a) = E[r(s,a) + γ max_b Q(s',b)],
where the expectation is over the next state s' ~ P(·, s, a), and the reward r(s, a). The optimal Q-value function is the fixed point of the optimal Bellman equation, i.e., T*Q* = Q*. Both the π-Bellman operator and the optimal Bellman operator are γ-contraction mappings in the sup-norm, i.e., ||TQ_1 − TQ_2||_∞ ≤ γ||Q_1 − Q_2||_∞, for any Q_1, Q_2 ∈ R^{S×A}. From this fact one can show that the fixed point of each operator is unique, and that value iteration converges, i.e., (T^π)^k Q → Q^π and (T*)^k Q → Q* from any initial Q (Bertsekas, 2005). A tabular sketch of these operators follows this record.
2.1 ACTION-VALUE LEARNING | 1611.01626#13 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
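The chunk above (1611.01626#13) defines the π-Bellman and optimal Bellman operators and notes that repeated application converges to Q^π and Q*. A minimal tabular numpy sketch of the optimal operator and value iteration is shown below; the toy MDP (transition tensor P and reward R) is invented for illustration.

```python
import numpy as np

def optimal_bellman(Q, P, R, gamma=0.9):
    """T*Q(s,a) = E[r(s,a) + gamma * max_b Q(s',b)] for a tabular MDP.

    P: (S, A, S) transition probabilities, R: (S, A) expected rewards.
    """
    return R + gamma * P @ Q.max(axis=1)

# tiny random MDP, purely illustrative
rng = np.random.default_rng(0)
S, A = 4, 2
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((S, A))

Q = np.zeros((S, A))
for _ in range(200):          # value iteration: (T*)^k Q -> Q*
    Q = optimal_bellman(Q, P, R)
print(Q.round(3))
```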
1611.01673 | 13 | E_{x∼p_G(x)}[ Σ_i log(1 − D_i(x)) ] = E_{x∼p_G(x)}[ log(z) ], (6)
where z = Π_i(1 − D_i(x)). Note that the generator gradient, ∂ log(z)/∂z = 1/z, is minimized at z = 1 over z ∈ (0, 1]. From this form, it is clear that z = 1 if and only if D_i = 0 ∀i, so G only receives a vanishing gradient if all D_i agree that the sample is fake; this is especially unlikely for large N. In other words, G only needs to fool a single D_i to receive constructive feedback. This result allows the generator to successfully minimize the original generator objective, log(1 − D). This is in contrast to the more popular − log(D) introduced to artificially enhance gradients at the start of training.
At the beginning of training, when max_{D_i} V(D_i, G) is likely too harsh a critic for the generator, we can set λ closer to zero to use the mean, increasing the odds of providing constructive feedback to the generator. In addition, the discriminators have the added benefit of functioning as an ensemble, reducing the variance of the feedback presented to the generator, which is especially important when the discriminators are far from optimal and are still learning a reasonable decision boundary. As training progresses and the discriminators improve, we can increase λ to become more critical of the generator for more refined training. | 1611.01673#13 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 14 | Instead, the output of each decoder QRNN layer's convolution functions is supplemented at every timestep with the final encoder hidden state. This is accomplished by adding the result of the convolution for layer ℓ (e.g., W^ℓ_z ∗ X^ℓ, in R^{T×m}) with broadcasting to a linearly projected copy of layer ℓ's last encoder state (e.g., V^ℓ_z h̃^ℓ_T, in R^m):
Z^ℓ = tanh(W^ℓ_z ∗ X^ℓ + V^ℓ_z h̃^ℓ_T)   F^ℓ = σ(W^ℓ_f ∗ X^ℓ + V^ℓ_f h̃^ℓ_T)   (7)   O^ℓ = σ(W^ℓ_o ∗ X^ℓ + V^ℓ_o h̃^ℓ_T), | 1611.01576#14 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
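The chunk above (1611.01576#14) supplements each decoder gate's convolution output with a linear projection of the encoder's final hidden state, broadcast over timesteps. A minimal numpy sketch of that broadcast addition is given below; shapes and names are illustrative, and a plain linear map stands in for the real masked convolution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decoder_gates(X, h_enc_last, Wz, Vz, Wf, Vf, Wo, Vo):
    """Z = tanh(W_z X + V_z h~), F = sigmoid(W_f X + V_f h~), O likewise.

    X: (T, n) decoder inputs; h_enc_last: (m,) final encoder state.
    The V* h~ term is broadcast across the T timesteps.
    """
    enc_z, enc_f, enc_o = Vz @ h_enc_last, Vf @ h_enc_last, Vo @ h_enc_last
    Z = np.tanh(X @ Wz.T + enc_z)      # (T, m) + (m,) broadcasts over T
    F = sigmoid(X @ Wf.T + enc_f)
    O = sigmoid(X @ Wo.T + enc_o)
    return Z, F, O

T, n, m = 4, 3, 5
rng = np.random.default_rng(0)
X = rng.normal(size=(T, n))
h_enc_last = rng.normal(size=m)
mats = [rng.normal(size=(m, n)) if i % 2 == 0 else rng.normal(size=(m, m)) for i in range(6)]
Z, F, O = decoder_gates(X, h_enc_last, *mats)
print(Z.shape, F.shape, O.shape)
```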
1611.01578 | 14 | P(Layer j is an input to layer i) = sigmoid(v^T tanh(W_prev ∗ h_j + W_curr ∗ h_i)), where h_j represents the hidden state of the controller at the anchor point for the j-th layer, where j ranges from 0 to N − 1. We then sample from these sigmoids to decide which previous layers are used as inputs to the current layer. The matrices W_prev, W_curr and v are trainable parameters. As
these connections are also defined by probability distributions, the REINFORCE method still applies without any significant modifications. Figure 4 shows how the controller uses skip connections to decide what layers it wants as inputs to the current layer.
Figure 4: The controller uses anchor points, and set-selection attention to form skip connections. | 1611.01578#14 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
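The chunk above (1611.01578#14) samples skip connections with P(j is an input to i) = sigmoid(v^T tanh(W_prev h_j + W_curr h_i)). A minimal numpy sketch of that set-selection step is below; dimensions, names, and the random seed are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_skip_connections(anchors, h_curr, W_prev, W_curr, v, rng):
    """Sample which previous layers feed the current layer.

    anchors: list of controller hidden states h_j for layers 0..i-1.
    Returns a boolean mask over previous layers and the connection probabilities.
    """
    probs = np.array([sigmoid(v @ np.tanh(W_prev @ h_j + W_curr @ h_curr))
                      for h_j in anchors])
    return rng.random(len(anchors)) < probs, probs

rng = np.random.default_rng(0)
dim = 8
anchors = [rng.normal(size=dim) for _ in range(3)]   # anchor points of earlier layers
h_curr = rng.normal(size=dim)
W_prev, W_curr, v = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim)), rng.normal(size=dim)
mask, probs = sample_skip_connections(anchors, h_curr, W_prev, W_curr, v, rng)
print(mask, probs.round(2))
```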
1611.01603 | 14 | 5. Modeling Layer. The input to the modeling layer is G, which encodes the query-aware representations of context words. The output of the modeling layer captures the interaction among the context words conditioned on the query. This is different from the contextual embedding layer, which captures the interaction among context words independent of the query. We use two layers of bi-directional LSTM, with the output size of d for each direction. Hence we obtain a matrix M ∈ R^{2d×T}, which is passed onto the output layer to predict the answer. Each column vector of M is expected to contain contextual information about the word with respect to the entire context paragraph and the query.
6. Output Layer. The output layer is application-specific. The modular nature of BIDAF allows us to easily swap out the output layer based on the task, with the rest of the architecture remaining exactly the same. Here, we describe the output layer for the QA task. In section 5, we use a slight modification of this output layer for cloze-style comprehension.
The QA task requires the model to find a sub-phrase of the paragraph to answer the query. The phrase is derived by predicting the start and the end indices of the phrase in the paragraph. We obtain the probability distribution of the start index over the entire paragraph by | 1611.01603#14 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
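The chunk above (1611.01603#14) predicts the answer-span start index with a softmax over w^T [G; M] across context positions. A minimal numpy sketch of that scoring is below; the weight vector and feature sizes are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def start_index_distribution(G, M, w):
    """p1 = softmax(w^T [G; M]) over the T context positions.

    G: (8d, T) query-aware features, M: (2d, T) modeling-layer output, w: (10d,).
    """
    GM = np.concatenate([G, M], axis=0)     # (10d, T)
    return softmax(w @ GM)                  # (T,)

d, T = 4, 6
rng = np.random.default_rng(0)
G, M, w = rng.normal(size=(8 * d, T)), rng.normal(size=(2 * d, T)), rng.normal(size=10 * d)
p1 = start_index_distribution(G, M, w)
print(p1.round(3), p1.sum())
```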
1611.01626 | 14 | In value based reinforcement learning we approximate the Q-values using a function approximator. We then update the parameters so that the Q-values are as close to the fixed point of a Bellman equation as possible. If we denote by Q(s, a; θ) the approximate Q-values parameterized by θ, then Q-learning updates the Q-values along direction E_{s,a}(T*Q(s, a; θ) − Q(s, a; θ))∇_θQ(s, a; θ) and SARSA updates the Q-values along direction E_{s,a}(T^π Q(s, a; θ) − Q(s, a; θ))∇_θQ(s, a; θ). In the online setting the Bellman operator is approximated by sampling and bootstrapping, whereby the Q-values at any state are updated using the Q-values from the next visited state. Exploration is achieved by not always taking the action with the highest Q-value at each time step. One common technique called 'epsilon greedy' is to sample a random action with probability ε > 0, where ε starts high and decreases over time. Another popular technique is 'Boltzmann exploration', where the policy is given by the softmax over | 1611.01626#14 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
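The chunk above (1611.01626#14) writes the Q-learning update direction as (T*Q − Q)∇_θQ evaluated at sampled transitions. Below is a minimal tabular-online sketch of that update together with epsilon-greedy exploration; the toy transition and hyperparameters are illustrative, not from the paper.

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Online Q-learning: move Q(s,a) toward the bootstrapped target r + gamma*max_b Q(s',b)."""
    td_error = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td_error   # tabular case: the gradient of Q(s,a) w.r.t. its own entry is 1
    return Q

def epsilon_greedy(Q, s, eps, rng):
    """Pick a random action with probability eps, otherwise the greedy one."""
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(Q[s].argmax())

# toy usage on a fake transition
rng = np.random.default_rng(0)
Q = np.zeros((5, 3))
a = epsilon_greedy(Q, s=0, eps=0.5, rng=rng)
Q = q_learning_step(Q, s=0, a=a, r=1.0, s_next=2)
print(Q[0])
```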
1611.01673 | 14 | 4.3. MAINTAINING MULTIPLE HYPOTHESES
We argue for this ensemble approach on a more fundamental level as well. Here, we draw on the density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proof assumes we have access to p_data(x), if only implicitly. In most cases of interest, the discriminator only has access to a finite dataset sampled from p_data(x); therefore, when computing expectations of V(D, G), we only draw samples from our finite dataset. This is equivalent to training a GAN with p_data(x) = p̂_data, which is a distribution consisting of point masses on all the data points in the dataset. For the sake of argument, let's assume we are training a discriminator and generator, each
'VeV= -y; Be oP: Thi âD;)= -t OP r for D, = 1, Dzx = 0. Our argument ignores OP e .
with infinite capacity. In this case, the global optimum (p_G(x) = p̂_data(x)) fails to capture any of the interesting structure from p_data(x), the true distribution we are trying to learn. Therefore, it is actually critical that we avoid this global optimum.
| 1611.01673#14 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 15 | where the tilde denotes that h̃ is an encoder variable. Encoder-decoder models which operate on long sequences are made significantly more powerful with the addition of soft attention (Bahdanau et al., 2015), which removes the need for the entire input representation to fit into a fixed-length encoding vector. In our experiments, we computed an attentional sum of the encoder's last layer's hidden states. We used the dot products of these encoder hidden states with the decoder's last layer's un-gated hidden states, applying a softmax along the encoder timesteps, to weight the encoder states into an attentional sum kt for each decoder timestep. This context, and the decoder state, are then fed into a linear layer followed by the output gate:
α_{st} = softmax_s(c^L_t · h̃^L_s)   k_t = Σ_s α_{st} h̃^L_s   (8)   h^L_t = o_t ⊙ (W_k k_t + W_c c^L_t),
where L is the last layer.
While the first step of this attention procedure is quadratic in the sequence length, in practice it takes significantly less computation time than the model's linear and convolutional layers due to the simple and highly parallel dot-product scoring function. | 1611.01576#15 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
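The chunk above (1611.01576#15) computes an attentional sum of encoder states via dot-product scores softmaxed over encoder timesteps, then gates a linear combination with the decoder state. A minimal numpy sketch is below; shapes and weight names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_readout(C, H_enc, O, Wk, Wc):
    """k_t = sum_s softmax_s(c_t . h~_s) h~_s; h_t = o_t * (Wk k_t + Wc c_t).

    C: (T_dec, m) decoder un-gated states, H_enc: (T_enc, m) encoder states,
    O: (T_dec, m) output gates.
    """
    scores = C @ H_enc.T                    # (T_dec, T_enc) dot products
    alpha = softmax(scores, axis=1)         # softmax along encoder timesteps
    K = alpha @ H_enc                       # (T_dec, m) attentional sums
    return O * (K @ Wk.T + C @ Wc.T)

rng = np.random.default_rng(0)
T_dec, T_enc, m = 3, 5, 4
C, H_enc, O = rng.normal(size=(T_dec, m)), rng.normal(size=(T_enc, m)), rng.random((T_dec, m))
Wk, Wc = rng.normal(size=(m, m)), rng.normal(size=(m, m))
print(attention_readout(C, H_enc, O, Wk, Wc).shape)
```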
1611.01578 | 15 | Figure 4: The controller uses anchor points, and set-selection attention to form skip connections.
In our framework, if one layer has many input layers then all input layers are concatenated in the depth dimension. Skip connections can cause "compilation failures" where one layer is not compatible with another layer, or one layer may not have any input or output. To circumvent these issues, we employ three simple techniques. First, if a layer is not connected to any input layer then the image is used as the input layer. Second, at the final layer we take all layer outputs that have not been connected and concatenate them before sending this final hidden state to the classifier. Lastly, if input layers to be concatenated have different sizes, we pad the small layers with zeros so that the concatenated layers have the same sizes. | 1611.01578#15 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
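The chunk above (1611.01578#15) concatenates all selected input layers along the depth dimension, zero-padding spatial sizes when they differ. A minimal numpy sketch of that merge is below; the (H, W, C) tensor layout is an assumption for illustration.

```python
import numpy as np

def concat_inputs(layers):
    """Zero-pad (H, W, C) feature maps to a common spatial size, then concat on depth."""
    H = max(x.shape[0] for x in layers)
    W = max(x.shape[1] for x in layers)
    padded = [np.pad(x, ((0, H - x.shape[0]), (0, W - x.shape[1]), (0, 0)))
              for x in layers]
    return np.concatenate(padded, axis=2)

a = np.ones((8, 8, 16))
b = np.ones((4, 4, 32))
print(concat_inputs([a, b]).shape)   # (8, 8, 48)
```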
1611.01600 | 15 | At the tth iteration of the proximal Newton algorithm, the following subproblem is solved:
min_{ŵ^t} ∇ℓ(ŵ^{t−1})^T (ŵ^t − ŵ^{t−1}) + (1/2)(ŵ^t − ŵ^{t−1})^T D^{t−1} (ŵ^t − ŵ^{t−1}) (6) s.t. ŵ^t_l = α^t_l b^t_l, α^t_l > 0, b^t_l ∈ {±1}^{n_l}, l = 1, . . . , L.
Proposition 3.1 Let d^{t−1}_l ≡ diag(D^{t−1}_l), and
w^t_l ≡ ŵ^{t−1}_l − ∇_l ℓ(ŵ^{t−1}) ⊘ d^{t−1}_l. (7)
The optimal solution of (6) can be obtained in closed-form as
α^t_l = ||d^{t−1}_l ⊙ w^t_l||_1 / ||d^{t−1}_l||_1,   b^t_l = sign(w^t_l). (8)
Theorem 3.1 Assume that [d^t_l]_k > β ∀ l, k, t; then the objective of (5) produced by the proximal Newton algorithm (with closed-form update of ŵ^t in Proposition 3.1) converges. A numerical sketch of this closed-form step follows this record. | 1611.01600#15 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
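The chunk above (1611.01600#15) gives the closed-form solution of the per-layer proximal step: α_l = ||d_l ⊙ w_l||_1 / ||d_l||_1 and b_l = sign(w_l). A minimal numpy sketch of that step is below; the toy vectors are illustrative.

```python
import numpy as np

def proximal_binarize(w, d):
    """Closed-form proximal step: scale alpha = ||d*w||_1 / ||d||_1, signs b = sign(w)."""
    alpha = np.abs(d * w).sum() / np.abs(d).sum()
    b = np.sign(w)
    b[b == 0] = 1.0            # break ties so entries stay in {+1, -1}
    return alpha, b

w = np.array([0.3, -0.7, 0.1, -0.2])
d = np.array([1.0, 2.0, 0.5, 1.5])   # diagonal of the approximate Hessian
alpha, b = proximal_binarize(w, d)
print(alpha, b, alpha * b)
```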
1611.01603 | 15 | p^1 = softmax(w^T_(p1) [G; M]), (3)
where w_(p1) ∈ R^{10d} is a trainable weight vector. For the end index of the answer phrase, we pass M to another bidirectional LSTM layer and obtain M^2 ∈ R^{2d×T}. Then we use M^2 to obtain the probability distribution of the end index in a similar manner:
p^2 = softmax(w^T_(p2) [G; M^2]) (4)
Training. We define the training loss (to be minimized) as the sum of the negative log probabilities of the true start and end indices by the predicted distributions, averaged over all examples:
L(θ) = −(1/N) Σ_{i=1}^{N} [ log(p^1_{y^1_i}) + log(p^2_{y^2_i}) ] (5)
where θ is the set of all trainable weights in the model (the weights and biases of CNN filters and LSTM cells, w_(S), w_(p1) and w_(p2)), N is the number of examples in the dataset, y^1_i and y^2_i are the true start and end indices of the i-th example, respectively, and p_k indicates the k-th value of the vector p. Test. The answer span (k, l) where k ≤ l with the maximum value of p^1_k p^2_l is chosen, which can be computed in linear time with dynamic programming. A sketch of this span search follows this record.
3 RELATED WORK | 1611.01603#15 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
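The chunk above (1611.01603#15) picks the answer span (k, l), k ≤ l, maximizing p^1_k p^2_l, which can be done in linear time by tracking the best start probability seen so far. A minimal numpy sketch of that search is below; the toy distributions are illustrative.

```python
import numpy as np

def best_span(p1, p2):
    """Return (k, l) with k <= l maximizing p1[k] * p2[l] in a single pass."""
    best, best_score = (0, 0), -1.0
    best_start, best_start_p = 0, p1[0]
    for l in range(len(p2)):
        if p1[l] > best_start_p:              # best start index among 0..l
            best_start, best_start_p = l, p1[l]
        score = best_start_p * p2[l]
        if score > best_score:
            best_score, best = score, (best_start, l)
    return best, best_score

p1 = np.array([0.1, 0.6, 0.1, 0.2])
p2 = np.array([0.2, 0.1, 0.5, 0.2])
print(best_span(p1, p2))    # expect span (1, 2)
```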
1611.01673 | 15 | Figure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corresponding probability mass function is given in light gray. After training GMAN, three discriminators converge to distinct local optima which implicitly define distributions over the data (red, blue, yellow). Each discriminator may specialize in discriminating a region of the data space (placing more diffuse mass in other regions). Averaging over the three discriminators results in the distribution in black, which we expect has higher likelihood under reasonable assumptions on the structure of the true distribution.
In practice, this degenerate result is avoided by employing learners with limited capacity and corrupting data samples with noise (i.e., dropout), but we might better accomplish this by simultaneously training a variety of limited capacity discriminators. With this approach, we might obtain a diverse set of seemingly tenable hypotheses for the true p_data(x). Averaging over these multiple locally optimal discriminators increases the entropy of p̂_data(x) by diffusing the probability mass over the data space (see Figure 2 for an example).
4.4. AUTOMATING REGULATION | 1611.01673#15 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 16 | Model / Time per Epoch (s) / Test Acc (%):
NBSVM-bi (Wang & Manning, 2012): n/a / 91.2
2 layer sequential BoW CNN (Johnson & Zhang, 2014): n/a / 92.3
Ensemble of RNNs and NB-SVM (Mesnil et al., 2014): n/a / 92.6
2-layer LSTM (Longpre et al., 2016): n/a / 87.6
Residual 2-layer bi-LSTM (Longpre et al., 2016): n/a / 90.1
Our models:
Densely-connected 4-layer LSTM (cuDNN optimized): 480 / 90.9
Densely-connected 4-layer QRNN: 150 / 91.4
Densely-connected 4-layer QRNN with k = 4: 160 / 91.1
Table 1: Accuracy comparison on the IMDb binary sentiment classification task. All of our models use 256 units per layer; all layers other than the first layer, whose filter width may vary, use filter width k = 2. Train times are reported on a single NVIDIA K40 GPU. We exclude semi-supervised models that conduct additional training on the unlabeled portion of the dataset.
# 3 EXPERIMENTS | 1611.01576#16 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 16 | Finally, in Section 3.1, we do not predict the learning rate and we also assume that the architectures consist of only convolutional layers, which is also quite restrictive. It is possible to add the learning rate as one of the predictions. Additionally, it is also possible to predict pooling, local contrast normalization (Jarrett et al., 2009; Krizhevsky et al., 2012), and batchnorm (Ioffe & Szegedy, 2015) in the architectures. To be able to add more types of layers, we need to add an additional step in the controller RNN to predict the layer type, then other hyperparameters associated with it.
3.4 GENERATE RECURRENT CELL ARCHITECTURES
In this section, we will modify the above method to generate recurrent cells. At every time step t, the controller needs to find a functional form for h_t that takes x_t and h_{t−1} as inputs. The simplest way is to have h_t = tanh(W_1 ∗ x_t + W_2 ∗ h_{t−1}), which is the formulation of a basic recurrent cell. A more complicated formulation is the widely-used LSTM recurrent cell (Hochreiter & Schmidhuber, 1997). | 1611.01578#16 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 16 | Note that both the loss ℓ and the indicator function I_C(·) in (5) are not convex. Hence, convergence analysis of the proximal Newton algorithm in (Lee et al., 2014), which is only for convex problems, cannot be applied. Recently, Rakotomamonjy et al. (2016) proposed a nonconvex proximal Newton extension. However, it assumes a difference-of-convex decomposition which does not hold here.
Remark 3.1 When D^{t−1}_l = λI, i.e., the curvature is the same for all dimensions in the l-th layer, (8) then reduces to the BWN solution in (2). In other words, BWN corresponds to using the proximal gradient algorithm, while the proposed method corresponds to the proximal Newton algorithm with diagonal Hessian. In composite optimization, it is known that the proximal Newton method is more efficient than the proximal gradient algorithm (Lee et al., 2014; Rakotomamonjy et al., 2016).
Remark 3.2 When α^t_l = 1, (8) reduces to sign(w^t_l), which is the BinaryConnect solution in (1). | 1611.01600#16 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 16 | 3 RELATED WORK
Machine comprehension. A significant contributor to the advancement of MC models has been the availability of large datasets. Early datasets such as MCTest (Richardson et al., 2013) were too
small to train end-to-end neural models. Massive cloze test datasets (CNN/DailyMail by Hermann et al. (2015) and Childrens Book Test by Hill et al. (2016)), enabled the application of deep neural architectures to this task. More recently, Rajpurkar et al. (2016) released the Stanford Question Answering (SQuAD) dataset with over 100,000 questions. We evaluate the performance of our comprehension system on both SQuAD and CNN/DailyMail datasets. | 1611.01603#16 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 16 | 2.2 POLICY GRADIENT
Alternatively, we can parameterize the policy directly and attempt to improve it via gradient ascent on the performance J. The policy gradient theorem (Sutton et al., 1999) states that the gradient of J with respect to the parameters of the policy is given by
∇_θ J(π) = E_{s,a} [ Q^π(s, a) ∇_θ log π(s, a) ], (1)
where the expectation is over (s, a) with probability d^π(s)π(s, a). In the original derivation of the policy gradient theorem the expectation is over the discounted distribution of states, i.e., over d^{π,s_0}(s) = Σ_{t=0}^{∞} γ^t Pr{s_t = s | s_0, π}. However, the gradient update in that case will assign a low γ
weight to states that take a long time to reach and can therefore have poor empirical performance. In practice the non-discounted distribution of states is frequently used instead. In certain cases this is equivalent to maximizing the average (i.e., non-discounted) policy performance, even when Q^π uses a discount factor (Thomas, 2014). Throughout this paper we will use the non-discounted distribution of states. | 1611.01626#16 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
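The chunk above (1611.01626#16) states the policy gradient theorem: ∇_θ J = E_{s,a}[Q^π(s,a) ∇_θ log π(s,a)]. Below is a minimal numpy sketch that estimates this gradient for a tabular softmax policy from sampled (state, action, return) triples; the toy data and parameterization are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def policy_gradient_estimate(theta, samples):
    """Average Q(s,a) * grad log pi(s,a) over sampled (s, a, q_estimate) triples.

    theta: (S, A) action preferences of a tabular softmax policy.
    """
    grad = np.zeros_like(theta)
    for s, a, q in samples:
        pi_s = softmax(theta[s])
        glog = -pi_s            # d log pi(s,a) / d theta[s,b] = 1{a=b} - pi(s,b)
        glog[a] += 1.0
        grad[s] += q * glog
    return grad / len(samples)

theta = np.zeros((3, 2))
samples = [(0, 1, 2.0), (1, 0, 0.5), (0, 1, 1.5)]   # (state, action, sampled return)
print(policy_gradient_estimate(theta, samples))
```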
1611.01673 | 16 | 4.4. AUTOMATING REGULATION
The problem of keeping the discriminator and generator in balance has been widely recognized in previous work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator collapse are not uncommon. In addition, the discriminator is often times able to achieve a high degree of classification accuracy (producing a single scalar) before the generator has made sufficient progress on the arguably more difficult generative task (producing a high dimensional sample). Salimans et al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relatively superior discriminator. Here, we explore an approach that enables the generator to automatically temper the performance of the discriminator when necessary, but still encourages the generator to challenge itself against more accurate adversaries. Specifically, we augment the generator objective:
min_{G,λ} F_λ(V_i) − f(λ)
where f(λ) is monotonically increasing in λ, which appears in the softmax equations, (3)-(5). In experiments, we simply set f(λ) = cλ with c a constant (e.g., 0.001). The generator is incentivized to increase λ to reduce its objective at the expense of competing against the best available adversary D* (see Appendix A.6).
# 5 EVALUATION | 1611.01673#16 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 17 | # 3 EXPERIMENTS
We evaluate the performance of the QRNN on three different natural language tasks: document-level sentiment classification, language modeling, and character-based neural machine translation. Our QRNN models outperform LSTM-based models of equal hidden size on all three tasks while dramatically improving computation speed. Experiments were implemented in Chainer (Tokui et al.).
3.1 SENTIMENT CLASSIFICATION
We evaluate the QRNN architecture on a popular document-level sentiment classification benchmark, the IMDb movie review dataset (Maas et al., 2011). The dataset consists of a balanced sample of 25,000 positive and 25,000 negative reviews, divided into equal-size train and test sets, with an average document length of 231 words (Wang & Manning, 2012). We compare only to other results that do not make use of additional unlabeled data (thus excluding e.g., Miyato et al. (2016)). | 1611.01576#17 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 17 | The computations for basic RNN and LSTM cells can be generalized as a tree of steps that take x_t and h_{t−1} as inputs and produce h_t as final output. The controller RNN needs to label each node in the tree with a combination method (addition, elementwise multiplication, etc.) and an activation function (tanh, sigmoid, etc.) to merge two inputs and produce one output. Two outputs are then fed as inputs to the next node in the tree. To allow the controller RNN to select these methods and functions, we index the nodes in the tree in an order so that the controller RNN can visit each node one by one and label the needed hyperparameters.
Inspired by the construction of the LSTM cell (Hochreiter & Schmidhuber, 1997), we also need cell variables c_{t−1} and c_t to represent the memory states. To incorporate these variables, we need the controller RNN to predict what nodes in the tree to connect these two variables to. These predictions can be done in the last two blocks of the controller RNN. | 1611.01578#17 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 17 | Remark 3.2 When α^t_l = 1, (8) reduces to sign(w^t_l), which is the BinaryConnect solution in (1).
From (7) and (8), each iteration first performs gradient descent along ∇ℓ(ŵ^{t−1}) with an adaptive learning rate 1 ⊘ d^{t−1}_l, and then projects it to a binary solution. As discussed in (Courbariaux et al., 2015), it is important to keep a full-precision weight during training. Hence, we replace (7) by w^t_l ← w^{t−1}_l − ∇_l ℓ(ŵ^{t−1}) ⊘ d^{t−1}_l. The whole procedure, which will be called Loss-Aware Binarization (LAB), is shown in Algorithm 1. In steps 5 and 6, following (Li & Liu, 2016), we first rescale the input x_{l−1} to the l-th layer with α_l, so that multiplications in dot products and convolutions become additions.
While binarizing weights changes most multiplications to additions, binarizing both weights and activations saves even more computations as additions are further changed to XNOR bit operations (Hubara et al., 2016). Our Algorithm 1 can also be easily extended by binarizing the activations with the simple sign function.
3.2 EXTENSION TO RECURRENT NEURAL NETWORKS | 1611.01600#17 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
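The chunk above (1611.01600#17) describes one LAB iteration: a gradient step on the full-precision weights scaled elementwise by the diagonal curvature estimate d, followed by the closed-form binarization of (8). A minimal numpy sketch of that per-layer step is below; the learning-rate handling and variable names are illustrative assumptions.

```python
import numpy as np

def lab_step(w_full, grad, d):
    """One loss-aware binarization step for a single layer.

    w_full: full-precision weights kept across iterations.
    grad:   gradient of the loss w.r.t. the binarized weights.
    d:      diagonal Hessian estimate (e.g., sqrt of Adam's second moment).
    """
    w_full = w_full - grad / d                 # curvature-scaled descent step
    alpha = np.abs(d * w_full).sum() / np.abs(d).sum()
    b = np.where(w_full >= 0, 1.0, -1.0)       # binary directions
    return w_full, alpha * b                   # keep full precision; use alpha*b in the forward pass

rng = np.random.default_rng(0)
w = rng.normal(size=4)
w, w_bin = lab_step(w, grad=rng.normal(size=4), d=np.abs(rng.normal(size=4)) + 0.1)
print(w_bin)
```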
1611.01603 | 17 | Previous works in end-to-end machine comprehension use attention mechanisms in three distinct ways. The first group (largely inspired by Bahdanau et al. (2015)) uses a dynamic attention mechanism, in which the attention weights are updated dynamically given the query and the context as well as the previous attention. Hermann et al. (2015) argue that the dynamic attention model performs better than using a single fixed query vector to attend on context words on CNN & DailyMail datasets. Chen et al. (2016) show that simply using a bilinear term for computing the attention weights in the same model drastically improves the accuracy. Wang & Jiang (2016) reverse the direction of the attention (attending on query words as the context RNN progresses) for SQuAD. In contrast to these models, BIDAF uses a memory-less attention mechanism. | 1611.01603#17 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 17 | In the online case it is common to add an entropy regularizer to the gradient in order to prevent the policy becoming deterministic. This ensures that the agent will explore continually. In that case the (batch) update becomes
Δθ ∝ E_{s,a} [ Q^π(s, a) ∇_θ log π(s, a) ] + α E_s [ ∇_θ H^π(s) ], (2)
where H^π(s) = −Σ_a π(s, a) log π(s, a) denotes the entropy of policy π, and α > 0 is the regularization penalty parameter. Throughout this paper we will make use of entropy regularization, however many of the results are true for other choices of regularizers with only minor modification, e.g., KL-divergence. Note that equation (2) requires exact knowledge of the Q-values. In practice they can be estimated, e.g., by the sum of discounted rewards along an observed trajectory (Williams, 1992), and the policy gradient will still perform well (Konda & Tsitsiklis, 2003).
# 3 REGULARIZED POLICY GRADIENT ALGORITHM | 1611.01626#17 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
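A minimal NumPy sketch of the entropy-regularized policy gradient update in equation (2) of the 1611.01626 chunk above, for a tabular softmax policy. The Q-value table, state distribution, step size, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Entropy-regularized policy gradient update (Eq. 2) for a tabular softmax
# policy pi(s, a) = softmax(theta[s, :]). Q and d are assumed stand-ins.
n_states, n_actions, alpha, lr = 3, 4, 0.1, 0.5
rng = np.random.default_rng(0)
theta = np.zeros((n_states, n_actions))
Q = rng.normal(size=(n_states, n_actions))      # assumed Q^pi estimates
d = np.ones(n_states) / n_states                # assumed state visitation measure

def policy(theta):
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

for _ in range(200):
    pi = policy(theta)
    # E_a[Q grad log pi] for a softmax: pi * (Q - V), with V = sum_a pi * Q
    pg = pi * (Q - (pi * Q).sum(axis=1, keepdims=True))
    logp = np.log(pi)
    # grad of the entropy H^pi(s) with respect to the softmax logits
    ent_grad = pi * (-(logp + 1) + (pi * (logp + 1)).sum(axis=1, keepdims=True))
    theta += lr * d[:, None] * (pg + alpha * ent_grad)

# The policy concentrates on high-Q actions but stays stochastic due to alpha.
print(np.round(policy(theta), 3))
```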
1611.01673 | 17 | # 5 EVALUATION
Evaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) report log likelihood estimates from Gaussian Parzen windows, which, they admit, has high variance and is known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzen windows and argue that generative models should be evaluated with respect to their intended application. Salimans et al. (2016) suggest an Inception score, however, it assumes labels exist for the dataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for making pairwise comparisons between independently trained GAN models. The core idea behind their approach is that, given two generator, discriminator pairs (G1, D1) and (G2, D2), we should be able to learn their relative performance by judging each generator under the opponent's discriminator.
5.1 METRIC
In GMAN, the opponent may have multiple discriminators, which makes it unclear how to perform the swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric (GMAM), that is amenable to training with multiple discriminators, | 1611.01673#17 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 18 | Our best performance on a held-out development set was achieved using a four-layer densely-connected QRNN with 256 units per layer and word vectors initialized using 300-dimensional cased GloVe embeddings (Pennington et al., 2014). Dropout of 0.3 was applied between layers, and we used L2 regularization of 4 × 10^-6. Optimization was performed on minibatches of 24 examples using RMSprop (Tieleman & Hinton, 2012) with learning rate of 0.001, α = 0.9, and ε = 10^-8. | 1611.01576#18 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 18 | To make this process more clear, we show an example in Figure 5, for a tree structure that has two leaf nodes and one internal node. The leaf nodes are indexed by 0 and 1, and the internal node is indexed by 2. The controller RNN needs to first predict 3 blocks, each block specifying a combination method and an activation function for each tree index. After that it needs to predict the last 2 blocks that specify how to connect c_t and c_{t-1} to temporary variables inside the tree. Specifically,
Figure 5: An example of a recurrent cell constructed from a tree that has two leaf nodes (base 2) and one internal node. Left: the tree that defines the computation steps to be predicted by controller. Center: an example set of predictions made by the controller for each computation step in the tree. Right: the computation graph of the recurrent cell constructed from example predictions of the controller. | 1611.01578#18 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 18 | 3.2 EXTENSION TO RECURRENT NEURAL NETWORKS
The proposed method can be easily extended to recurrent neural networks. Let x_l and h_l be the input and hidden states, respectively, at time step (or depth) l. A typical recurrent neural network has a recurrence of the form h_l = W_x x_l + W_h σ(h_{l-1}) + b (equivalent to the more widely known h_l = σ(W_x x_l + W_h h_{l-1} + b) (Pascanu et al., 2013)). We binarize both the input-to-hidden weight W_x and hidden-to-hidden weight W_h. Since weights are shared across time in a recurrent network, we only need to binarize W_x and W_h once in each forward propagation. Besides weights, one can also binarize the activations (of the inputs and hidden states) as in the previous section. | 1611.01600#18 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
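A small sketch of the binarized recurrence described in the 1611.01600 chunk above: W_x and W_h are binarized once per forward pass and reused across time steps. The mean-absolute-value scaling and tanh nonlinearity used here are simplifying assumptions; LAB itself chooses the scaling in a loss-aware way.

```python
import numpy as np

# One forward pass of h_l = W_x x_l + W_h sigma(h_{l-1}) + b with binarized
# weight matrices. Shapes, scaling rule, and nonlinearity are illustrative.
rng = np.random.default_rng(0)
d_in, d_h = 8, 16
Wx = rng.normal(size=(d_h, d_in))
Wh = rng.normal(size=(d_h, d_h))
b = np.zeros(d_h)

def binarize(W):
    alpha = np.abs(W).mean()          # simple scalar scaling (assumption)
    return alpha, np.sign(W)

# Weights are shared across time, so binarize once per forward propagation.
ax, Bx = binarize(Wx)
ah, Bh = binarize(Wh)

def step(x, h_prev):
    sigma = np.tanh                    # assumed nonlinearity sigma
    return ax * (Bx @ x) + ah * (Bh @ sigma(h_prev)) + b

h = np.zeros(d_h)
for t in range(5):                     # unroll a few time steps
    h = step(rng.normal(size=d_in), h)
print(h[:4])
```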
1611.01603 | 18 | The second group computes the attention weights once, which are then fed into an output layer for final prediction (e.g., Kadlec et al. (2016)). Attention-over-attention model (Cui et al., 2016) uses a 2D similarity matrix between the query and context words (similar to Equation 1) to compute the weighted average of query-to-context attention. In contrast to these models, BIDAF does not summarize the two modalities in the attention layer and instead lets the attention vectors flow into the modeling (RNN) layer.
The third group (considered as variants of Memory Network (Weston et al., 2015)) repeats computing an attention vector between the query and the context through multiple layers, typically referred to as multi-hop (Sordoni et al., 2016; Dhingra et al., 2016). Shen et al. (2016) combine Memory Networks with Reinforcement Learning in order to dynamically control the number of hops. One can also extend our BIDAF model to incorporate multiple hops. | 1611.01603#18 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 18 | # 3 REGULARIZED POLICY GRADIENT ALGORITHM
In this section we derive a relationship between the policy and the Q-values when using a regularized policy gradient algorithm. This allows us to transform a policy into an estimate of the Q-values. We then show that for small regularization the Q-values induced by the policy at the fixed point of the algorithm have a small Bellman error in the tabular case.
3.1 TABULAR CASE
Consider the fixed points of the entropy regularized policy gradient update (2). Let us define f(θ) = E_{s,a} Q^π(s,a) ∇_θ log π(s,a) + α E_s ∇_θ H^π(s), and g_s(π) = ∑_a π(s,a) for each s. A fixed point is one where we can no longer update θ in the direction of f(θ) without violating one of the constraints g_s(π) = 1, i.e., where f(θ) is in the span of the vectors {∇_θ g_s(π)}. In other words, any fixed point must satisfy f(θ) = ∑_s λ_s ∇_θ g_s(π), where for each s the Lagrange multiplier λ_s ∈ R ensures that g_s(π) = 1. Substituting in terms to this equation we obtain | 1611.01626#18 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 18 | FE, V") FeV) FEA)! FB, (V2) GMAM = log ( (8)
where a and b refer to the two GMAN variants (see Section 3 for notation F^a_b(V_i)). The idea here is similar. If G2 performs better than G1 with respect to both D1 and D2, then GMAM > 0 (remember V < 0 always). If G1 performs better in both cases, GMAM < 0; otherwise, the result is indeterminate.
5.2 EXPERIMENTS
We evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST (LeCun et al. (1998)), CIFAR-10 (Krizhevsky (2009)) and CelebA (Liu et al. (2015)). We focus on rates of convergence to steady state along with quality of the steady state generator according to the GMAM metric. To summarize, loosely in order of increasing discriminator leniency, we compare
• F-boost: A single AdaBoost.OL-boosted discriminator (see Appendix A.7).
• P-boost: D_i is trained according to AdaBoost.OL. A max over the weak learner losses is presented to the generator instead of the boosted prediction (see Appendix A.7).
• GMAN-max: max{V_i} is presented to the generator. | 1611.01673#18 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
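A sketch of a GMAM-style pairwise comparison, assuming the log ratio-of-ratios form of equation (8) as reconstructed above. F_ab denotes variant a's generator evaluated under variant b's aggregated discriminators; the numeric values are made-up stand-ins, not results from the paper.

```python
import math

def gmam(F_ab, F_aa, F_ba, F_bb):
    """GMAM under the assumed ratio-of-ratios form: compares how each
    generator fares under the opponent's discriminators vs. its own."""
    return math.log((F_ab / F_aa) / (F_ba / F_bb))

# Remember V <= 0 for the minimax objective, so aggregated values are negative.
print(round(gmam(F_ab=-0.9, F_aa=-1.1, F_ba=-1.4, F_bb=-1.2), 3))
```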
1611.01576 | 19 | Small batch sizes and long sequence lengths provide an ideal situation for demonstrating the QRNN's performance advantages over traditional recurrent architectures. We observed a speedup of 3.2x on IMDb train time per epoch compared to the optimized LSTM implementation provided in NVIDIA's cuDNN library. For specific batch sizes and sequence lengths, a 16x speed gain is possible. Figure 4 provides extensive speed comparisons. In Figure 3, we visualize the hidden state vectors c^L_t of the final QRNN layer on part of an example from the IMDb dataset. Even without any post-processing, changes in the hidden state are visible and interpretable in regards to the input. This is a consequence of the elementwise nature of the recurrent pooling function, which delays direct interaction between different channels of the hidden state until the computation of the next QRNN layer.
3.2 LANGUAGE MODELING
We replicate the language modeling experiment of Zaremba et al. (2014) and Gal & Ghahramani (2016) to benchmark the QRNN architecture for natural language sequence prediction. The experiment uses a standard preprocessed version of the Penn Treebank (PTB) by Mikolov et al. (2010). | 1611.01576#19 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 19 | according to the predictions of the controller RNN in this example, the following computation steps will occur:
⢠The controller predicts Add and T anh for tree index 0, this means we need to compute a0 = tanh(W1 â xt + W2 â htâ1).
e The controller predicts ElemMult and ReLU for tree index 1, this means we need to compute a, = ReLU((W3 * 21) © (Wa hyâ1)).
⢠The controller predicts 0 for the second element of the âCell Indexâ, Add and ReLU for 0 = ReLU(a0 + ctâ1). elements in âCell Injectâ, which means we need to compute anew Notice that we donât have any learnable parameters for the internal nodes of the tree.
e The controller predicts ElemMult and Sigmoid for tree index 2, this means we need to compute az = sigmoid(aj*ââ © a1). Since the maximum index in the tree is 2, hy is set to a2.
e The controller RNN predicts 1 for the first element of the âCell Indexâ, this means that we should set c; to the output of the tree at index 1 before the activation, i.e., c, = (W3 * 21) © (W4 * hy-1). | 1611.01578#19 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
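The bullet list in the 1611.01578 chunk above fully specifies the example cell, so the computation can be written out directly. A NumPy sketch follows; the dimensionality and random weight values are illustrative assumptions.

```python
import numpy as np

# The example recurrent cell spelled out in the bullet list above.
rng = np.random.default_rng(0)
d = 8
W1, W2, W3, W4 = (rng.normal(scale=0.1, size=(d, d)) for _ in range(4))
x_t = rng.normal(size=d)
h_prev = rng.normal(size=d)
c_prev = rng.normal(size=d)

relu = lambda v: np.maximum(v, 0.0)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

a0 = np.tanh(W1 @ x_t + W2 @ h_prev)       # tree index 0: Add, Tanh
a1 = relu((W3 @ x_t) * (W4 @ h_prev))      # tree index 1: ElemMult, ReLU
a0_new = relu(a0 + c_prev)                 # cell inject: Add, ReLU into index 0
a2 = sigmoid(a0_new * a1)                  # tree index 2: ElemMult, Sigmoid
h_t = a2                                   # highest tree index becomes h_t
c_t = (W3 @ x_t) * (W4 @ h_prev)           # cell index 1, taken before activation

print(h_t[:4], c_t[:4])
```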
1611.01600 | 19 | In deep networks, the backpropagated gradient takes the form of a product of Jacobian matrices (Pascanu et al., 2013). In a vanilla recurrent neural network² for activations h_p and h_q at depths p and q, respectively (where p > q), ∂h_p/∂h_q = ∏_{q<l≤p} ∂h_l/∂h_{l-1} = ∏_{q<l≤p} W_h^T diag(σ′(h_{l-1})). The necessary condition for exploding gradients is that the largest singular value λ_1(W_h) of W_h is larger than some given constant (Pascanu et al., 2013). The following Proposition shows that for any binary W_h, its largest singular value is lower-bounded by the square root of its dimension.
Proposition 3.2 For any W ∈ {−1, +1}^{m×n} (m ≤ n), λ_1(W) ≥ √n.
² Here, we consider the vanilla recurrent neural network for simplicity. It can be shown that a similar behavior holds for the more commonly used LSTM.
| 1611.01600#19 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
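A quick numerical check of Proposition 3.2 from the 1611.01600 chunk above: the largest singular value of a random ±1 matrix with m ≤ n is compared against √n.

```python
import numpy as np

# For any W in {-1,+1}^{m x n} with m <= n, lambda_1(W) >= sqrt(n).
rng = np.random.default_rng(0)
for m, n in [(4, 8), (16, 32), (64, 64)]:
    W = rng.choice([-1.0, 1.0], size=(m, n))
    sv_max = np.linalg.svd(W, compute_uv=False)[0]   # largest singular value
    assert sv_max >= np.sqrt(n) - 1e-9
    print(f"m={m}, n={n}: lambda_1={sv_max:.2f} >= sqrt(n)={np.sqrt(n):.2f}")
```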
1611.01603 | 19 | Visual question answering. The task of question answering has also gained a lot of interest in the computer vision community. Early works on visual question answering (VQA) involved encoding the question using an RNN, encoding the image using a CNN and combining them to answer the question (Antol et al., 2015; Malinowski et al., 2015). Attention mechanisms have also been successfully employed for the VQA task and can be broadly clustered based on the granularity of their attention and the approach to construct the attention matrix. At the coarse level of granularity, the question attends to different patches in the image (Zhu et al., 2016; Xiong et al., 2016a). At a finer level, each question word attends to each image patch and the highest attention value for each spatial location (Xu & Saenko, 2016) is adopted. A hybrid approach is to combine question representations at multiple levels of granularity (unigrams, bigrams, trigrams) (Yang et al., 2015). Several approaches to constructing the attention matrix have been used including element-wise product, element-wise sum, concatenation and Multimodal Compact Bilinear Pooling (Fukui et al., 2016).
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 19 | E_{s,a} (Q^π(s,a) − α log π(s,a) − c_s) ∇_θ log π(s,a) = 0,  (3)
where we have absorbed all constants into c ∈ R^{|S|}. Any solution π to this equation is strictly positive element-wise since it must lie in the domain of the entropy function. In the tabular case π is represented by a single number for each state and action pair and the gradient of the policy with respect to the parameters is the indicator function, i.e., ∇_{θ(t,b)} π(s,a) = 1_{(t,b)=(s,a)}. From this we obtain Q^π(s,a) − α log π(s,a) − c_s = 0 for each s (assuming that the measure d^π(s) > 0). Multiplying by π(s,a) and summing over a ∈ A we get c_s = αH^π(s) + V^π(s). Substituting c into equation (3) we have the following formulation for the policy:
π(s,a) = exp(A^π(s,a)/α − H^π(s)),  (4) | 1611.01626#19 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
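A sketch of equations (4)–(5) from the 1611.01626 chunk above: recovering a Q-value estimate from a softmax policy and checking that the softmax of the implied advantages reproduces the policy. The policy, α, and V(s) values are illustrative assumptions.

```python
import numpy as np

# Q_tilde(s, .) = alpha * (log pi(s, .) + H_pi(s)) + V(s)   (Eq. 5)
alpha = 0.1
pi_s = np.array([0.7, 0.2, 0.1])          # assumed policy at one state s
V_s = 1.5                                  # assumed value estimate V(s)

H_s = -(pi_s * np.log(pi_s)).sum()         # entropy H_pi(s)
Q_tilde = alpha * (np.log(pi_s) + H_s) + V_s
A_tilde = Q_tilde - V_s                    # implied advantage estimate

# Consistency check (Eq. 4): pi is recovered as exp(A/alpha - H).
recovered = np.exp(A_tilde / alpha - H_s)
print(np.round(Q_tilde, 4), np.allclose(recovered, pi_s))
```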
1611.01673 | 19 | • GMAN-max: max{V_i} is presented to the generator.
• GAN: Standard GAN with a single discriminator (see Appendix A.2).
• mod-GAN: GAN with modified objective (generator minimizes −log(D(G(z)))).
• GMAN-λ: GMAN with F := arithmetic softmax with parameter λ.
• GMAN*: The arithmetic softmax is controlled by the generator through λ.
All generator and discriminator models are deep (de)convolutional networks (Radford et al. (2015)), and aside from the boosted variants, all are trained with Adam (Kingma & Ba (2014)) and batch normalization (Ioffe & Szegedy (2015)). Discriminators convert the real-valued outputs of their networks to probabilities with squashed sigmoids to prevent saturating logarithms in the minimax objective. See Appendix A.8 for further details. We test GMAN systems with N = {2, 5} discriminators. We maintain discriminator diversity by varying dropout and network depth.
# 5.2.1 MNIST | 1611.01673#19 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
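A sketch of the arithmetic-softmax aggregation F referenced by the GMAN-λ variant above. The exact weighting is defined in Section 3 of the paper and is not reproduced in this chunk, so the form used here (weights proportional to exp(λ·V_i), so that λ = 0 gives the mean and large λ approaches GMAN-max) is an assumption for illustration.

```python
import numpy as np

def F_softmax(V, lam):
    """Assumed arithmetic softmax over discriminator values V_i with
    temperature-like parameter lambda."""
    V = np.asarray(V, dtype=float)
    w = np.exp(lam * (V - V.max()))     # numerically stable softmax weights
    w /= w.sum()
    return float((w * V).sum())

V = [-1.3, -0.7, -2.1]                  # per-discriminator minimax values (V <= 0)
for lam in (0.0, 1.0, 10.0):
    print(lam, round(F_softmax(V, lam), 3))
# lam = 0 -> mean(V); large lam -> max(V), i.e., the strongest discriminator
# dominates the signal presented to the generator.
```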
1611.01576 | 20 | We implemented a gated QRNN model with medium hidden size: 2 layers with 640 units in each layer. Both QRNN layers use a convolutional filter width k of two timesteps. While the "medium" models used in other work (Zaremba et al., 2014; Gal & Ghahramani, 2016) consist of 650 units in
[Figure 3: heatmap of hidden state activations; axes: timesteps (words) × hidden units.] | 1611.01576#20 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 20 | In the above example, the tree has two leaf nodes, thus it is called a "base 2" architecture. In our experiments, we use a base number of 8 to make sure that the cell is expressive.
# 4 EXPERIMENTS AND RESULTS
We apply our method to an image classification task with CIFAR-10 and a language modeling task with Penn Treebank, two of the most benchmarked datasets in deep learning. On CIFAR-10, our goal is to find a good convolutional architecture whereas on Penn Treebank our goal is to find a good recurrent cell. On each dataset, we have a separate held-out validation dataset to compute the reward signal. The reported performance on the test set is computed only once for the network that achieves the best result on the held-out validation dataset. More details about our experimental procedures and results are as follows.
4.1 LEARNING CONVOLUTIONAL ARCHITECTURES FOR CIFAR-10 | 1611.01578#20 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 20 | Algorithm 1 Loss-Aware Binarization (LAB) for training a feedforward neural network.
Input: Minibatch {(x_t, y_t)}, current full-precision weights {w_l^t}, first moment {m_l^{t-1}}, second moment {v_l^{t-1}}, and learning rate η^t.
Forward Propagation
for l = 1 to L do
  α_l^t = ||d_l^{t-1} ⊙ w_l^t||_1 / ||d_l^{t-1}||_1; b_l^t = sign(w_l^t);
  rescale the layer-l input: x̃_{l-1} = α_l^t x_{l-1};
  compute z_l with input x̃_{l-1} and binary weight b_l^t;
  apply batch-normalization and nonlinear activation to z_l to obtain x_l;
end for
9: compute the loss ℓ using x_L and y_t;
Backward Propagation
initialize the output layer's activation gradient ∂ℓ/∂x_L;
12: for l = L to 2 do
13:   compute ∂ℓ/∂x_{l-1} using ∂ℓ/∂x_l, α_l^t and b_l^t;
14: end for
15: Update parameters using Adam
16: for l = 1 to L do
17:   compute gradients ∇_l ℓ(w^t) using ∂ℓ/∂x_l and x_{l-1};
18:   update first moment m_l^t = β_1 m_l^{t-1} + (1 − β_1) ∇_l ℓ(w^t);
19:   update second moment v_l^t = β_2 v_l^{t-1} + (1 − | 1611.01600#20 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
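A compact sketch of one LAB-style update for a single weight vector, following the Adam-style moment bookkeeping of Algorithm 1 as reconstructed above. The quadratic toy loss, dimensions, and hyper-parameter values are illustrative assumptions.

```python
import numpy as np

# One weight vector trained with curvature-weighted binarization plus
# Adam-style moments; the toy loss 0.5*||b - target||^2 is a stand-in.
rng = np.random.default_rng(0)
w = rng.normal(size=10)                      # full-precision weights
m = np.zeros_like(w); v = np.zeros_like(w)   # first / second moments
beta1, beta2, eta, eps = 0.9, 0.999, 0.1, 1e-8
target = rng.normal(size=10)

d = np.full_like(w, 1.0 / eta)               # initial curvature estimate
for t in range(1, 201):
    # Closed-form proximal step: scaling weighted by the curvature d.
    alpha = np.abs(d * w).sum() / d.sum()
    b = alpha * np.sign(w)
    grad = b - target                        # gradient of the toy loss w.r.t. b
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    d = (np.sqrt(v_hat) + eps) / eta         # current diagonal curvature
    w = w - m_hat / d                        # update full-precision weights

print(np.sign(w).astype(int), round(alpha, 3))
```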
1611.01603 | 20 | Lu et al. (2016) have recently shown that in addition to attending from the question to image patches, attending from the image back to the question words provides an improvement on the VQA task. This finding in the visual domain is consistent with our finding in the language domain, where our bi-directional attention between the query and context provides improved results. Their model, however, uses the attention weights directly in the output layer and does not take advantage of the attention flow to the modeling layer.
# 4 QUESTION ANSWERING EXPERIMENTS
In this section, we evaluate our model on the task of question answering using the recently released SQuAD (Rajpurkar et al., 2016), which has gained considerable attention over the past few months. In the next section, we evaluate our model on the task of cloze-style reading comprehension.
Dataset. SQuAD is a machine comprehension dataset on a large set of Wikipedia articles, with more than 100,000 questions. The answer to each question is always a span in the context. The model is given credit if its answer matches one of the human-written answers. Two metrics are used to evaluate models: Exact Match (EM) and a softer metric, F1 score, which measures the weighted average of the precision and recall rate at the character level. The dataset consists of 90k/10k
| 1611.01603#20 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
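A sketch of the two evaluation metrics described in the 1611.01603 chunk above. The official SQuAD script additionally lower-cases and strips punctuation and articles before comparison, and this sketch computes F1 over whitespace tokens for simplicity.

```python
from collections import Counter

def exact_match(prediction: str, ground_truth: str) -> bool:
    # Full-string match (answer normalization omitted for brevity).
    return prediction.strip() == ground_truth.strip()

def f1_score(prediction: str, ground_truth: str) -> float:
    # Harmonic mean of precision and recall over overlapping tokens.
    pred, gold = prediction.split(), ground_truth.split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Denver Broncos", "Denver Broncos"))              # True
print(round(f1_score("the Denver Broncos", "Denver Broncos"), 3))   # partial credit
```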
1611.01626 | 20 | π(s,a) = exp(A^π(s,a)/α − H^π(s)),  (4)
for all s ∈ S and a ∈ A. In other words, the policy at the fixed point is a softmax over the advantage function induced by that policy, where the regularization parameter α can be interpreted as the temperature. Therefore, we can use the policy to derive an estimate of the Q-values,
Q̃^π(s,a) = Ã^π(s,a) + V^π(s) = α(log π(s,a) + H^π(s)) + V^π(s).  (5)
With this we can rewrite the gradient update (2) as
∆θ ∝ E_{s,a} (Q^π(s,a) − Q̃^π(s,a)) ∇_θ log π(s,a),  (6) | 1611.01626#20 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 20 | # 5.2.1 MNIST
Figure 3 reveals that increasing the number of discriminators reduces the number of iterations to steady-state by 2x on MNIST; increasing N (the size of the discriminator ensemble) also has the added benefit of reducing the variance of the minimax objective over runs. Figure 4 displays the variance of the same objective over a sliding time window, reaffirming GMAN's acceleration to steady-state. Figure 5 corroborates this conclusion with recognizable digits appearing approximately an epoch before the single discriminator run; digits at steady-state appear slightly sharper as well.
Our GMAM metric (see Table 1) agrees with the relative quality of images in Figure 5 with GMAN* achieving the best overall performance. Figure 6 reveals GMAN*'s attempt to regulate the difficulty | 1611.01673#20 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 21 | Figure 3: Visualization of the final QRNN layer's hidden state vectors c^L_t in the IMDb task, with timesteps along the vertical axis. Colors denote neuron activations. After an initial positive statement "This movie is simply gorgeous" (off graph at timestep 9), timestep 117 triggers a reset of most hidden states due to the phrase "not exactly a bad story" (soon after "main weakness is its story"). Only at timestep 158, after "I recommend this movie to everyone, even if you've never played the game", do the hidden units recover.
each layer, it was more computationally convenient to use a multiple of 32. As the Penn Treebank is a relatively small dataset, preventing overfitting is of considerable importance and a major focus of recent research. It is not obvious in advance which of the many RNN regularization schemes would perform well when applied to the QRNN. Our tests showed encouraging results from zoneout applied to the QRNN's recurrent pooling layer, implemented as described in Section 2.1. | 1611.01576#21 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 21 | 4.1 LEARNING CONVOLUTIONAL ARCHITECTURES FOR CIFAR-10
Dataset: In these experiments we use the CIFAR-10 dataset with data preprocessing and augmentation procedures that are in line with other previous results. We first preprocess the data by whitening all the images. Additionally, we upsample each image then choose a random 32x32 crop of this upsampled image. Finally, we use random horizontal flips on this 32x32 cropped image.
Search space: Our search space consists of convolutional architectures, with rectified linear units as non-linearities (Nair & Hinton, 2010), batch normalization (Ioffe & Szegedy, 2015) and skip connections between layers (Section 3.3). For every convolutional layer, the controller RNN has to select a filter height in [1, 3, 5, 7], a filter width in [1, 3, 5, 7], and a number of filters in [24, 36, 48,
64]. For strides, we perform two sets of experiments, one where we fix the strides to be 1, and one where we allow the controller to predict the strides in [1, 2, 3]. | 1611.01578#21 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
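A sketch of the per-layer search space listed in the 1611.01578 chunk above. A uniform random sampler stands in for the controller RNN, which in the paper predicts each choice with its own softmax.

```python
import random

# Per-layer choices from the search space described above.
FILTER_HEIGHTS = [1, 3, 5, 7]
FILTER_WIDTHS = [1, 3, 5, 7]
NUM_FILTERS = [24, 36, 48, 64]
STRIDES = [1, 2, 3]          # only searched in the second set of experiments

def sample_layer(search_strides: bool = False) -> dict:
    return {
        "filter_height": random.choice(FILTER_HEIGHTS),
        "filter_width": random.choice(FILTER_WIDTHS),
        "num_filters": random.choice(NUM_FILTERS),
        "stride": random.choice(STRIDES) if search_strides else 1,
    }

random.seed(0)
for layer in (sample_layer(search_strides=False) for _ in range(3)):
    print(layer)
```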
1611.01600 | 21 | update first moment m_l^t = β_1 m_l^{t-1} + (1 − β_1) ∇_l ℓ(w^t); 19: update second moment v_l^t = β_2 v_l^{t-1} + (1 − β_2)(∇_l ℓ(w^t) ⊙ ∇_l ℓ(w^t)); 20: compute unbiased first moment m̂_l^t = m_l^t / (1 − β_1^t); 21: compute unbiased second moment v̂_l^t = v_l^t / (1 − β_2^t); 22: compute current curvature matrix d_l^t = (1/η^t)(ε + √v̂_l^t); 23: update full-precision weights w_l^{t+1} = w_l^t − m̂_l^t ⊘ d_l^t; 24: update learning rate η^{t+1} = UpdateRule(η^t, t + 1); 25: end for | 1611.01600#21 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 21 |
Single Model (EM / F1) and Ensemble (EM / F1):
Logistic Regression Baseline^a: 40.4 / 51.0; – / –
Dynamic Chunk Reader^b: 62.5 / 71.0; – / –
Fine-Grained Gating^c: 62.5 / 73.3; – / –
Match-LSTM^d: 64.7 / 73.7; 67.9 / 77.0
Multi-Perspective Matching^e: 65.5 / 75.1; 68.2 / 77.2
Dynamic Coattention Networks^f: 66.2 / 75.9; 71.6 / 80.4
R-Net^g: 68.4 / 77.5; 72.1 / 79.7
BIDAF (Ours): 68.0 / 77.3; 73.3 / 81.1
EM / F1:
No char embedding: 65.0 / 75.4
No word embedding: 55.5 / 66.8
No C2Q attention: 57.2 / 67.7
No Q2C attention: 63.6 / 73.7
Dynamic attention: 63.5 / 73.6
BIDAF (single): 67.7 / 77.3
BIDAF (ensemble): 72.6 / 80.7
(a) Results on the SQuAD test set | 1611.01603#21 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 21 | since the update is unchanged by per-state constant offsets. When the policy is parameterized as a softmax, i.e., π(s,a) = exp(W(s,a)) / ∑_b exp(W(s,b)), the quantity W is sometimes referred to as the action-preferences of the policy (Sutton & Barto, Chapter 6.6). Equation (7) states that the action preferences are equal to the Q-values scaled by 1/α, up to an additive per-state constant.
3.2 GENERAL CASE
Consider the following optimization problem:
minimize E_{s,a} (q(s,a) − α log π(s,a))²  subject to ∑_a π(s,a) = 1, s ∈ S, over variable θ which parameterizes π, where we consider both the measure in the expectation and the values q(s,a) to be independent of θ. The optimality condition for this problem is
E_{s,a} (q(s,a) − α log π(s,a) + c_s) ∇_θ log π(s,a) = 0, | 1611.01626#21 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
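A minimal numerical sketch of the fixed-point relation described in the 1611.01626 chunk above (action preferences equal to Q-values scaled by 1/α, up to an additive per-state constant), assuming a tabular softmax policy and made-up Q-values; this is an illustration, not the paper's code:

```python
# For pi(s, .) = softmax(Q(s, .) / alpha), the quantity Q(s, a) - alpha * log pi(s, a)
# is identical for every action a within a given state s (a per-state constant).
import numpy as np

alpha = 0.1
Q = np.array([[1.0, 0.5, -0.2],      # assumed Q-values: 2 states, 3 actions
              [0.3, 0.3, 2.0]])

logits = Q / alpha
pi = np.exp(logits - logits.max(axis=1, keepdims=True))
pi /= pi.sum(axis=1, keepdims=True)

residual = Q - alpha * np.log(pi)    # should be constant across each row
print(residual)
assert np.allclose(residual, residual[:, :1])
```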
1611.01673 | 21 | Score | Variant | GMAN* | GMAN-0 | GMAN-max | mod-GAN
+0.127 | GMAN* | - | -0.020 ± 0.009 | -0.028 ± 0.019 | -0.089 ± 0.036
+0.007 | GMAN-0 | 0.020 ± 0.009 | - | -0.013 ± 0.015 | -0.018 ± 0.027
-0.034 | GMAN-max | 0.028 ± 0.019 | 0.013 ± 0.015 | - | -0.011 ± 0.024
-0.122 | mod-GAN | 0.089 ± 0.036 | 0.018 ± 0.027 | 0.011 ± 0.024 | -
Table 1: Pairwise GMAM metric means with stdev for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each variant's column. | 1611.01673#21 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 22 | The experimental settings largely followed the "medium" setup of Zaremba et al. (2014). Optimization was performed by stochastic gradient descent (SGD) without momentum. The learning rate was set at 1 for six epochs, then decayed by 0.95 for each subsequent epoch, for a total of 72 epochs. We additionally used L2 regularization of 2 × 10^-4 and rescaled gradients with norm above 10. Zoneout was applied by performing dropout with ratio 0.1 on the forget gates of the QRNN, without rescaling the output of the dropout function. Batches consist of 20 examples, each 105 timesteps.
Comparing our results on the gated QRNN with zoneout to the results of LSTMs with both ordinary and variational dropout in Table 2, we see that the QRNN is highly competitive. The QRNN without zoneout strongly outperforms both our medium LSTM and the medium LSTM of Zaremba et al. (2014) which do not use recurrent dropout and is even competitive with variational LSTMs. This may be due to the limited computational capacity that the QRNNâs pooling layer has relative to the LSTMâs recurrent weights, providing structural regularization over the recurrence. | 1611.01576#22 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
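A small sketch of two details from the PTB setup described in the 1611.01576 chunk above, under the assumption of a generic gated RNN: (i) zoneout implemented as dropout (p = 0.1) on the forget gate without rescaling, and (ii) the SGD learning-rate schedule (1.0 for six epochs, then decayed by 0.95 per epoch). Function and variable names are illustrative only:

```python
import numpy as np

def zoneout_forget_gate(f, p=0.1, rng=np.random.default_rng(0)):
    """Randomly force a subset of forget-gate channels to 1 (keep previous state).

    Equivalent to f <- 1 - dropout(1 - f) with drop probability p and no rescaling.
    """
    keep = (rng.random(f.shape) >= p).astype(f.dtype)   # 0 => channel is "zoned out"
    return 1.0 - keep * (1.0 - f)

def learning_rate(epoch):
    """LR = 1 for the first six epochs, then multiplied by 0.95 each subsequent epoch."""
    return 1.0 if epoch < 6 else 0.95 ** (epoch - 5)

f = np.random.default_rng(1).uniform(0.2, 0.8, size=(4,))   # toy forget-gate activations
print(zoneout_forget_gate(f))
print([round(learning_rate(e), 3) for e in range(10)])
```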
1611.01578 | 22 | Training details: The controller RNN is a two-layer LSTM with 35 hidden units on each layer. It is trained with the ADAM optimizer (Kingma & Ba, 2015) with a learning rate of 0.0006. The weights of the controller are initialized uniformly between -0.08 and 0.08. For the distributed train- ing, we set the number of parameter server shards S to 20, the number of controller replicas K to 100 and the number of child replicas m to 8, which means there are 800 networks being trained on 800 GPUs concurrently at any time.
Once the controller RNN samples an architecture, a child model is constructed and trained for 50 epochs. The reward used for updating the controller is the maximum validation accuracy of the last 5 epochs cubed. The validation set has 5,000 examples randomly sampled from the training set, the remaining 45,000 examples are used for training. The settings for training the CIFAR-10 child models are the same with those used in Huang et al. (2016a). We use the Momentum Optimizer with a learning rate of 0.1, weight decay of 1e-4, momentum of 0.9 and used Nesterov Momentum (Sutskever et al., 2013). | 1611.01578#22 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
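A sketch of the controller's reward signal as described in the 1611.01578 chunk above: the reward for a sampled child architecture is the maximum validation accuracy of its last 5 epochs, cubed, and the controller is updated with a REINFORCE-style gradient. The baseline value and accuracy numbers below are placeholders, not the actual implementation:

```python
def controller_reward(val_acc_per_epoch):
    # maximum validation accuracy of the last 5 epochs, cubed
    return max(val_acc_per_epoch[-5:]) ** 3

def reinforce_scale(reward, baseline):
    # the gradient estimate is (reward - baseline) * grad(log prob of sampled architecture)
    return reward - baseline

val_accs = [0.71, 0.78, 0.81, 0.83, 0.82, 0.84, 0.835]   # toy numbers
R = controller_reward(val_accs)
print(R, reinforce_scale(R, baseline=0.55))
```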
1611.01600 | 22 | # }, second
Thus, with weight binarization as in BinaryConnect, the exploding gradient problem becomes more severe as the weight matrices are often large. On the other hand, recall that λ1(c Ŵh) = c λ1(Ŵh) for any non-negative c. The proposed method alleviates this exploding gradient problem by adaptively learning the scaling parameter αh.
# 4 EXPERIMENTS
In this section, we perform experiments on the proposed binarization scheme with both feedforward networks (Sections 4.1 and 4.2) and recurrent neural networks (Sections 4.3 and 4.4).
4.1 FEEDFORWARD NEURAL NETWORKS | 1611.01600#22 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
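A small numpy illustration of the argument in the 1611.01600 chunk above: binarizing a weight matrix to {-1, +1} (as in BinaryConnect) can greatly inflate its largest singular value, while a learned scaling (approximated here by the mean absolute weight, as a stand-in for the learned αh) keeps the binarized matrix at a comparable scale. Toy sizes and magnitudes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(256, 256))        # typical small-magnitude weights

spec = lambda M: np.linalg.norm(M, 2)              # largest singular value
alpha = np.abs(W).mean()

print("full precision :", round(spec(W), 2))
print("sign(W)        :", round(spec(np.sign(W)), 2))          # much larger
print("alpha * sign(W):", round(spec(alpha * np.sign(W)), 2))  # back near original scale
```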
1611.01603 | 22 | (a) Results on the SQuAD test set
Table 1: (1a) The performance of our model BIDAF and competing approaches by Rajpurkar et al. (2016)a, Yu et al. (2016)b, Yang et al. (2016)c, Wang & Jiang (2016)d, IBM Watsone (unpublished), Xiong et al. (2016b)f , and Microsoft Research Asiag (unpublished) on the SQuAD test set. A concurrent work by Lee et al. (2016) does not report the test scores. All results shown here reï¬ect the SQuAD leaderboard (stanford-qa.com) as of 6 Dec 2016, 12pm PST. (1b) The performance of our model and its ablations on the SQuAD dev set. Ablation results are presented only for single runs.
train/dev question-context tuples with a large hidden test set. It is one of the largest available MC datasets with human-written questions and serves as a great test bed for our model. | 1611.01603#22 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 22 | E_{s,a} [ (q(s,a) − α log π(s,a) + c_s) ∇_θ log π(s,a) ] = 0,
where c ∈ R^{|S|} is the Lagrange multiplier associated with the constraint that the policy sum to one at each state. Comparing this to equation (3), we see that if q = Q^π and the measure in the expectation is the same then they describe the same set of fixed points. This suggests an interpretation of the fixed points of the regularized policy gradient as a regression of the log-policy onto the Q-values. In the general case of using an approximation architecture we can interpret equation (3) as indicating that the error between Q^π and Q̂^π is orthogonal to ∇_{θ_i} log π for each i, and so cannot be reduced further by changing the parameters, at least locally. In this case equation (4) is unlikely to hold at a solution to (3), however with a good approximation architecture it may hold approximately, so that we can derive an estimate of the Q-values from the policy using equation (5). We will use this estimate of the Q-values in the next section.
3.3 CONNECTION TO ACTION-VALUE METHODS | 1611.01626#22 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 22 | Figure 3: Generator objective, F, averaged over 5 training runs on MNIST. Increasing the number of discriminators accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 4 provides alternative evidence of GMAN*'s accelerated convergence. Figure 4: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN* with N = 5 achieves steady-state at 2x speed of GAN (N = 1). Note Figure 3's filled shadows reveal stdev of F over runs, while this plot shows stdev over time. Figure 5: Comparison of image quality across epochs for N = {1, 2, 5} using GMAN-0 on MNIST. of the game to | 1611.01673#22 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 23 | Without zoneout, early stopping based upon validation loss was required as the QRNN would begin overfitting. By applying a small amount of zoneout (p = 0.1), no early stopping is required and the QRNN achieves competitive levels of perplexity to the variational LSTM of Gal & Ghahra-
Model | Parameters | Validation | Test
LSTM (medium) (Zaremba et al., 2014) | 20M | 86.2 | 82.7
Variational LSTM (medium, MC) (Gal & Ghahramani, 2016) | 20M | 81.9 | 79.7
LSTM with CharCNN embeddings (Kim et al., 2016) | 19M | - | 78.9
Zoneout + Variational LSTM (medium) (Merity et al., 2016) | 20M | 84.4 | 80.6
Our models:
LSTM (medium) | 20M | 85.7 | 82.0
QRNN (medium) | 18M | 82.9 | 79.9
QRNN + zoneout (p = 0.1) (medium) | 18M | 82.1 | 78.3 | 1611.01576#23 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 23 | During the training of the controller, we use a schedule of increasing number of layers in the child networks as training progresses. On CIFAR-10, we ask the controller to increase the depth by 2 for the child models every 1,600 samples, starting at 6 layers.
Results: After the controller trains 12,800 architectures, we ï¬nd the architecture that achieves the best validation accuracy. We then run a small grid search over learning rate, weight decay, batchnorm epsilon and what epoch to decay the learning rate. The best model from this grid search is then run until convergence and we then compute the test accuracy of such model and summarize the results in Table 1. As can be seen from the table, Neural Architecture Search can design several promising architectures that perform as well as some of the best models on this dataset. | 1611.01578#23 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
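A tiny sketch of the depth schedule described in the 1611.01578 chunk above: child-network depth starts at 6 layers and increases by 2 every 1,600 architectures sampled by the controller (illustrative helper, not the authors' code):

```python
def child_depth(num_sampled, start=6, step=2, every=1600):
    return start + step * (num_sampled // every)

print([child_depth(n) for n in (0, 1599, 1600, 4800, 12799)])   # 6, 6, 8, 12, 20
```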
1611.01600 | 23 | 4.1 FEEDFORWARD NEURAL NETWORKS
We compare the original full-precision network (without binarization) with the following weight-binarized networks: (i) BinaryConnect; (ii) Binary-Weight-Network (BWN); and (iii) the proposed Loss-Aware Binarized network (LAB). We also compare with networks having both weights and activations binarized:3 (i) BinaryNeuralNetwork (BNN) (Hubara et al., 2016), the weight-and-activation binarized counterpart of BinaryConnect; (ii) XNOR-Network (XNOR) (Rastegari et al., 2016), the counterpart of BWN; (iii) LAB2, the counterpart of the proposed method, which binarizes weights using the proximal Newton method and binarizes activations using a simple sign function.
The setup is similar to that in Courbariaux et al. (2015). We do not perform data augmentation or unsupervised pretraining. Experiments are performed on three commonly used data sets:
[3] We use the straight-through estimator (Hubara et al., 2016) to compute the gradient involving the sign function.
Published as a conference paper at ICLR 2017 | 1611.01600#23 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
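A sketch of the straight-through estimator mentioned in the footnote of the 1611.01600 chunk above: the forward pass uses the (non-differentiable) sign function, while the backward pass passes the gradient straight through, here with the common clipping to |x| <= 1. This is a generic illustration, not the paper's exact implementation:

```python
import numpy as np

def sign_forward(x):
    return np.where(x >= 0, 1.0, -1.0)

def sign_backward_ste(x, grad_output):
    # d sign(x)/dx is treated as 1 inside [-1, 1] and 0 outside
    return grad_output * (np.abs(x) <= 1.0)

x = np.array([-2.0, -0.3, 0.1, 1.7])
g = np.ones_like(x)
print(sign_forward(x))            # [-1. -1.  1.  1.]
print(sign_backward_ste(x, g))    # [ 0.  1.  1.  0.]
```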
1611.01603 | 23 | Model Details. The model architecture used for this task is depicted in Figure 1. Each paragraph and question are tokenized by a regular-expression-based word tokenizer (PTB Tokenizer) and fed into the model. We use 100 1D ï¬lters for CNN char embedding, each with a width of 5. The hidden state size (d) of the model is 100. The model has about 2.6 million parameters. We use the AdaDelta (Zeiler, 2012) optimizer, with a minibatch size of 60 and an initial learning rate of 0.5, for 12 epochs. A dropout (Srivastava et al., 2014) rate of 0.2 is used for the CNN, all LSTM layers, and the linear transformation before the softmax for the answers. During training, the moving averages of all weights of the model are maintained with the exponential decay rate of 0.999. At test time, the moving averages instead of the raw weights are used. The training process takes roughly 20 hours on a single Titan X GPU. We also train an ensemble model consisting of 12 training runs with the identical architecture and hyper-parameters. At test time, we choose the answer with the highest sum of conï¬dence scores amongst the 12 runs for each question. | 1611.01603#23 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
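A sketch of the weight averaging described in the 1611.01603 chunk above: an exponential moving average of every parameter is maintained during training (decay 0.999) and the averaged weights are used instead of the raw weights at test time. Class and parameter names here are illustrative only:

```python
import numpy as np

class EMA:
    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.shadow = {k: v.copy() for k, v in params.items()}

    def update(self, params):
        d = self.decay
        for k, v in params.items():
            self.shadow[k] = d * self.shadow[k] + (1.0 - d) * v

    def averaged(self):
        """Return a copy of the moving-average weights, to be used at test time."""
        return {k: v.copy() for k, v in self.shadow.items()}

params = {"w": np.zeros(3)}
ema = EMA(params)
for step in range(1, 4):                  # pretend training updates the raw weights
    params["w"] = params["w"] + 1.0
    ema.update(params)
print(params["w"], ema.averaged()["w"])
```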
1611.01626 | 23 | 3.3 CONNECTION TO ACTION-VALUE METHODS
The previous section made a connection between regularized policy gradient and a regression onto the Q-values at the fixed point. In this section we go one step further, showing that actor-critic methods can be interpreted as action-value fitting methods, where the exact method depends on the choice of critic.
Actor-critic methods. Consider an agent using an actor-critic method to learn both a policy π and a value function V. At any iteration k, the value function V^k has parameters w^k, and the policy is of the form
π^k(s,a) = exp(W^k(s,a)/α) / Σ_b exp(W^k(s,b)/α),   (8)
where W^k is parameterized by θ^k and α > 0 is the entropy regularization penalty. In this case ∇_θ log π^k(s,a) = (1/α)(∇_θ W^k(s,a) − Σ_b π^k(s,b) ∇_θ W^k(s,b)). Using the regularized policy gradient update, the parameters are updated as
Δθ ∝ E_{s,a} δ_ac (∇_θ W^k(s,a) − Σ_b π^k(s,b) ∇_θ W^k(s,b)),   Δw ∝ E_{s,a} δ_ac ∇_w V^k(s)   (9) | 1611.01626#23 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
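A numerical sketch of the gradient expression in the 1611.01626 chunk above for a Boltzmann policy over action preferences, in the tabular case where the parameters are the preferences W themselves: the gradient of log π(s,a) with respect to W(s,b) is (1/α)(1[a = b] − π(s,b)). All values below are made up for illustration:

```python
import numpy as np

alpha = 0.5
W = np.array([0.2, -1.0, 0.7])                      # action preferences for one state
a = 2                                               # action whose log-prob we probe

def log_pi(W):
    z = W / alpha
    return z[a] - np.log(np.sum(np.exp(z)))

# finite-difference gradient of log pi(s, a) w.r.t. each W(s, b)
eps = 1e-6
fd = np.array([(log_pi(W + eps * np.eye(3)[b]) - log_pi(W - eps * np.eye(3)[b])) / (2 * eps)
               for b in range(3)])

pi = np.exp(W / alpha) / np.sum(np.exp(W / alpha))
analytic = (np.eye(3)[a] - pi) / alpha
print(fd, analytic)
assert np.allclose(fd, analytic, atol=1e-5)
```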
1611.01673 | 23 | Figure 5: Comparison of image quality across epochs for N = {1, 2, 5} using GMAN-0 on MNIST. of the game to accelerate learning. Figure 7 displays the GMAM scores comparing the variable λ controlled by GMAN* to fixed λ. Figure 6: GMAN* regulates difficulty of the game by adjusting λ. Initially, G reduces λ to ease learning and then gradually increases λ for a more challenging learning environment. Figure 7: Pairwise GMAM (mean ± stdev) for GMAN-λ with fixed λ ∈ {0, 1} and GMAN* (λ*) over 5 runs on MNIST. | 1611.01673#23 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 24 | Table 2: Single model perplexity on validation and test sets for the Penn Treebank language modeling task. Lower is better. "Medium" refers to a two-layer network with 640 or 650 hidden units per layer. All QRNN models include dropout of 0.5 on embeddings and between layers. MC refers to Monte Carlo dropout averaging at test time.
Speedup of the QRNN over the cuDNN LSTM (rows: batch size, columns: sequence length):
Batch size \ Sequence length | 32 | 64 | 128 | 256 | 512
8 | 5.5x | 8.8x | 11.0x | 12.4x | 16.9x
16 | 5.5x | 6.7x | 7.8x | 8.3x | 10.8x
32 | 4.2x | 4.5x | 4.9x | 4.9x | 6.4x
64 | 3.0x | 3.0x | 3.0x | 3.0x | 3.7x
128 | 2.1x | 1.9x | 2.0x | 2.0x | 2.4x
256 | 1.4x | 1.4x | 1.3x | 1.3x | 1.3x
= | 1611.01576#24 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 24 | Model | Depth | Parameters | Error rate (%)
Network in Network (Lin et al., 2013) | - | - | 8.81
All-CNN (Springenberg et al., 2014) | - | - | 7.25
Deeply Supervised Net (Lee et al., 2015) | - | - | 7.97
Highway Network (Srivastava et al., 2015) | - | - | 7.72
Scalable Bayesian Optimization (Snoek et al., 2015) | - | - | 6.37
FractalNet (Larsson et al., 2016) | 21 | 38.6M | 5.22
FractalNet with Dropout/Drop-path | 21 | 38.6M | 4.60
ResNet (He et al., 2016a) | 110 | 1.7M | 6.61
ResNet (reported by Huang et al. (2016c)) | 110 | 1.7M | 6.41
ResNet with Stochastic Depth (Huang et al., 2016c) | 110 | 1.7M | 5.23
ResNet with Stochastic Depth (Huang et al., 2016c) | 1202 | 10.2M | 4.91
Wide ResNet (Zagoruyko & Komodakis, 2016) | 16 | 11.0M | 4.81
Wide ResNet (Zagoruyko & Komodakis, 2016) | 28 | 36.5M | 4.17
ResNet (pre-activation) (He et al., 2016b) DenseNet (L = 40, k = 12) Huang et al. (2016a) DenseNet (L = 100, k = 12) Huang et al. (2016a) DenseNet (L | 1611.01578#24 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 24 | 3We use the straight-through-estimator (Hubara et al., 2016) to compute the gradient involving the sign function.
1. MNIST: This contains 28 × 28 gray images from ten digit classes. We use 50000 images for training, another 10000 for validation, and the remaining 10000 for testing. We use the 4-layer model:
784FC - 2048FC - 2048FC - 2048FC - 10SVM,
where FC is a fully-connected layer, and SVM is an L2-SVM output layer using the square hinge loss. Batch normalization, with a minibatch size of 100, is used to accelerate learning. The maximum number of epochs is 50. The learning rate for the weight-binarized (resp. weight-and-activation-binarized) network starts at 0.01 (resp. 0.005), and decays by a factor of 0.1 at epochs 15 and 25.
2. CIFAR-10: This contains 32 Ã 32 color images from ten object classes. We use 45000 images for training, another 5000 for validation, and the remaining 10000 for testing. The images are preprocessed with global contrast normalization and ZCA whitening. We use the VGG-like architecture: | 1611.01600#24 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
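A sketch (assuming PyTorch) of the 784FC-2048FC-2048FC-2048FC-10 model described in the 1611.01600 chunk above, with batch normalization and an L2-SVM-style squared hinge loss on the outputs. This is an illustrative stand-in, not the authors' code, and the binarization step itself is omitted:

```python
import torch
import torch.nn as nn

def block(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, n_out), nn.BatchNorm1d(n_out), nn.ReLU())

model = nn.Sequential(block(784, 2048), block(2048, 2048), block(2048, 2048),
                      nn.Linear(2048, 10))

def squared_hinge_loss(scores, targets):
    # one-vs-all L2-SVM loss: targets mapped to {-1, +1}, mean of max(0, 1 - y * score)^2
    y = torch.full_like(scores, -1.0)
    y.scatter_(1, targets.unsqueeze(1), 1.0)
    return torch.clamp(1.0 - y * scores, min=0.0).pow(2).mean()

x = torch.randn(100, 784)                 # minibatch of 100, as in the setup above
t = torch.randint(0, 10, (100,))
loss = squared_hinge_loss(model(x), t)
loss.backward()
print(float(loss))
```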
1611.01626 | 24 | where δac is the critic minus baseline term, which depends on the variant of actor-critic being used (see the remark below).
Action-value methods. Compare this to the case where an agent is learning Q-values with a dueling architecture (Wang et al., 2016), which at iteration k is given by
Q^k(s,a) = Y^k(s,a) − Σ_b µ(s,b) Y^k(s,b) + V^k(s),
where µ is a probability distribution, Y^k is parameterized by θ^k, V^k is parameterized by w^k, and the exploration policy is Boltzmann with temperature α, i.e.,
π^k(s,a) = exp(Y^k(s,a)/α) / Σ_b exp(Y^k(s,b)/α).   (10)
In action-value fitting methods at each iteration the parameters are updated to reduce some error, where the update is given by
Δθ ∝ E_{s,a} δ_av (∇_θ Y^k(s,a) − Σ_b µ(s,b) ∇_θ Y^k(s,b)),   Δw ∝ E_{s,a} δ_av ∇_w V^k(s)   (11)
where δav is the action-value error term and depends on which algorithm is being used (see the remark below).
Published as a conference paper at ICLR 2017 | 1611.01626#24 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
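A numerical sketch of the dueling parameterization in the 1611.01626 chunk above: Q(s,·) = Y(s,·) − Σ_b µ(b) Y(s,b) + V(s), with a Boltzmann exploration policy of temperature α over the Y stream as in equation (10). All numbers are made up for illustration:

```python
import numpy as np

alpha = 0.2
Y = np.array([0.4, -0.1, 0.9])           # per-action stream for one state
V = 1.5                                   # state-value stream
mu = np.array([0.5, 0.25, 0.25])          # fixed probability distribution over actions

Q = Y - np.dot(mu, Y) + V                 # dueling combination
pi = np.exp(Y / alpha)                    # Boltzmann policy over Y, as in eq. (10)
pi /= pi.sum()

print("Q:", Q)
print("Boltzmann policy:", pi)            # identical to softmax(Q/alpha), since Q and Y
                                          # differ only by a per-state constant
print(np.dot(mu, Q), V)                   # mu-weighted Q recovers V
```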
1611.01673 | 24 | 5.2.2 CELEBA & CIFAR-10
We see similar accelerated convergence behavior for the CelebA dataset in Figure 8.
Figure 8: Image quality improvement across number of generators at same number of iterations for GMAN-0 on CelebA.
Figure 9 displays images generated by GMAN-0 on CIFAR-10. See Appendix A.3 for more results.
Figure 9: Images generated by GMAN-0 on the CIFAR-10 dataset.
We also found that GMAN is robust to mode collapse. We believe this is because the generator must appease a diverse set of discriminators in each minibatch. Emitting a single sample will score well for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size. GMAN, | 1611.01673#24 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 25 | =
Figure 4: Left: Training speed for two-layer 640-unit PTB LM on a batch of 20 examples of 105 timesteps. "RNN" and "softmax" include the forward and backward times, while "optimization overhead" includes gradient clipping, L2 regularization, and SGD computations. Right: Inference speed advantage of a 320-unit QRNN layer alone over an equal-sized cuDNN LSTM layer for data with the given batch size and sequence length. Training results are similar.
mani (2016), which had variational inference based dropout of 0.2 applied recurrently. Their best performing variation also used Monte Carlo (MC) dropout averaging at test time of 1000 different masks, making it computationally more expensive to run. | 1611.01576#25 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
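A sketch of Monte Carlo dropout averaging as referenced in the 1611.01576 chunk above: at test time the model is run many times with different dropout masks and the predictions are averaged. The "model" below is a toy linear scorer; 1000 masks mirrors the setting quoted in the text, and all other values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))           # toy weights: 8 features -> 5 classes
x = rng.normal(size=(8,))
p_drop = 0.2

def stochastic_forward(x):
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)   # inverted dropout
    return (x * mask) @ W

mc_samples = np.stack([stochastic_forward(x) for _ in range(1000)])
print("MC-averaged prediction:", mc_samples.mean(axis=0))
print("single deterministic pass:", x @ W)
```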
1611.01578 | 25 | ResNet (pre-activation) (He et al., 2016b) | 164 | 1.7M | 5.46
ResNet (pre-activation) (He et al., 2016b) | 1001 | 10.2M | 4.62
DenseNet (L = 40, k = 12) (Huang et al., 2016a) | 40 | 1.0M | 5.24
DenseNet (L = 100, k = 12) (Huang et al., 2016a) | 100 | 7.0M | 4.10
DenseNet (L = 100, k = 24) (Huang et al., 2016a) | 100 | 27.2M | 3.74
DenseNet-BC (L = 100, k = 40) (Huang et al., 2016b) | 190 | 25.6M | 3.46
Neural Architecture Search v1 no stride or pooling | 15 | 4.2M | 5.50
Neural Architecture Search v2 predicting strides | 20 | 2.5M | 6.01
Neural Architecture Search v3 max pooling | 39 | 7.1M | 4.47
Neural Architecture Search v3 max pooling + more filters | 39 | 37.4M | 3.65 | 1611.01578#25 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 25 | (2×128C3) - MP2 - (2×256C3) - MP2 - (2×512C3) - MP2 - (2×1024FC) - 10SVM,
where C3 is a 3 × 3 ReLU convolution layer, and MP2 is a 2 × 2 max-pooling layer. Batch normalization, with a minibatch size of 50, is used. The maximum number of epochs is 200. The learning rate for the weight-binarized (resp. weight-and-activation-binarized) network starts at 0.03 (resp. 0.02), and decays by a factor of 0.5 after every 15 epochs. 3. SVHN: This contains 32 × 32 color images from ten digit classes. We use 598388 images for training, another 6000 for validation, and the remaining 26032 for testing. The images are preprocessed with global and local contrast normalization. The model used is:
(2×64C3) - MP2 - (2×128C3) - MP2 - (2×256C3) - MP2 - (2×1024FC) - 10SVM. | 1611.01600#25 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 25 | Ablations. Table 1b shows the performance of our model and its ablations on the SQuAD dev set. Both char-level and word-level embeddings contribute towards the modelâs performance. We conjecture that word-level embedding is better at representing the semantics of each word as a whole, while char-level embedding can better handle out-of-vocab (OOV) or rare words. To evaluate bi- directional attention, we remove C2Q and Q2C attentions. For ablating C2Q attention, we replace the attended question vector ËU with the average of the output vectors of the questionâs contextual embedding layer (LSTM). C2Q attention proves to be critical with a drop of more than 10 points on both metrics. For ablating Q2C attention, the output of the attention layer, G, does not include terms that have the attended Q2C vectors, ËH. To evaluate the attention ï¬ow, we study a dynamic attention model, where the attention is dynamically computed within the modeling layerâs LSTM, following previous work (Bahdanau et al., 2015; Wang & Jiang, 2016). This is in contrast with our approach, where the attention | 1611.01603#25 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 25 | Equivalence. The two policies (8) and (10) are identical if W* = Y* for all k. Since X° and Y° can be initialized and parameterized in the same way, and assuming the two value function estimates are initialized and parameterized in the same way, all that remains is to show that the updates in equations and (9p are identical. Comparing the two, and assuming that dac = day (see remark), we see that the only difference is that the measure is not fixed in (9). but is equal to the current policy and therefore changes after each update. Replacing ju in (11) with 7* makes the updates identical, in which case W* = Y* at all iterations and the two policies and are always the same. In other words, the slightly modified action-value method is equivalent to an actor-critic policy gradient method, and vice-versa (modulo using the non-discounted distribu- tion of states, as discussed in 2.2). In particular, regularized policy gradient methods can be inter- preted as advantage function learning techniques Cretan, since at the optimum the quantity W(s,a) â do, 7(s,b)W(s,b) = a(log 7(s, a) + Hâ¢(s)) will be equal to the advantage function values in the tabular case. | 1611.01626#25 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
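A numerical sketch of the tabular claim in the 1611.01626 chunk above: if π(s,·) = softmax(Q(s,·)/α), then α(log π(s,a) + H^π(s)) equals the advantage Q(s,a) − Σ_b π(s,b) Q(s,b). Toy values only:

```python
import numpy as np

alpha = 0.3
Q = np.array([0.2, -0.5, 1.1])

z = Q / alpha
pi = np.exp(z - z.max())
pi /= pi.sum()

entropy = -np.sum(pi * np.log(pi))
lhs = alpha * (np.log(pi) + entropy)
advantage = Q - np.dot(pi, Q)
print(lhs, advantage)
assert np.allclose(lhs, advantage)
```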
1611.01673 | 25 | discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size. GMAN, however, is linear in batch size. 6 CONCLUSION We introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the original GAN objective possible by increasing the odds of the generator receiving constructive feedback. In future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step, however, we expect difficulties to arise from more complex game dynamics. For this reason, game theory and game design will likely be important. ACKNOWLEDGMENTS We acknowledge helpful conversations | 1611.01673#25 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
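The GMAN record above describes a generator that receives feedback from several discriminators, with roles ranging from harsh adversary to forgiving teacher. The sketch below shows one way such feedback could be aggregated with a softmax weighting, loosely in the spirit of GMAN*; the per-discriminator losses are random stand-ins and the weighting scheme is an assumption, not the paper's exact objective.

# Illustrative sketch (not the paper's code): aggregating generator feedback from
# several discriminators with a softmax weighting. The scores below are random
# stand-ins; in a real GAN they would come from trained discriminators D_i(G(z)).
import numpy as np

rng = np.random.default_rng(1)
num_discriminators = 5
# Hypothetical per-discriminator generator losses, e.g. -log D_i(G(z)) on a batch.
losses = rng.uniform(0.5, 2.0, size=num_discriminators)

def aggregate(losses, lam):
    # lam = 0 recovers the mean over discriminators; large lam approaches the max,
    # i.e. the generator listens mostly to the harshest discriminator.
    w = np.exp(lam * losses - (lam * losses).max())
    w /= w.sum()
    return float(np.dot(w, losses))

for lam in [0.0, 1.0, 10.0]:
    print(f"lambda={lam:>4}: aggregated generator loss = {aggregate(losses, lam):.3f}")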
1611.01576 | 26 | When training on the PTB dataset with an NVIDIA K40 GPU, we found that the QRNN is substantially faster than a standard LSTM, even when comparing against the optimized cuDNN LSTM. In Figure 4 we provide a breakdown of the time taken for Chainer's default LSTM, the cuDNN LSTM, and QRNN to perform a full forward and backward pass on a single batch during training of the RNN LM on PTB. For both LSTM implementations, running time was dominated by the RNN computations, even with the highly optimized cuDNN implementation. For the QRNN implementation, however, the "RNN" layers are no longer the bottleneck. Indeed, there are diminishing returns from further optimization of the QRNN itself as the softmax and optimization overhead take equal or greater time. Note that the softmax, over a vocabulary size of only 10,000 words, is relatively small; for tasks with larger vocabularies, the softmax would likely dominate computation time.
It is also important to note that the cuDNN library's RNN primitives do not natively support any form of recurrent dropout. That is, running an LSTM that uses a state-of-the-art regularization scheme at cuDNN-like speeds would likely require an entirely custom kernel. | 1611.01576#26 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
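The QRNN record above attributes the speedup to replacing the LSTM's sequential matrix multiplications with convolutions that run in parallel across timesteps, followed by an elementwise pooling recurrence that runs in parallel across channels. A minimal NumPy sketch of these two stages follows; the filter width, dimensions and random weights are assumptions for illustration, not the authors' implementation.

# Illustrative sketch: the QRNN's two stages on a toy input -- a causal convolution
# that is parallel across timesteps, then the elementwise "fo-pooling" recurrence
# that is parallel across channels. Shapes and weights are made-up assumptions.
import numpy as np

rng = np.random.default_rng(2)
T, d_in, d_hid, k = 8, 4, 6, 2          # timesteps, input dim, hidden dim, filter width
X = rng.normal(size=(T, d_in))

# One weight tensor per gate: candidate z, forget gate f, output gate o.
Wz, Wf, Wo = (rng.normal(scale=0.5, size=(k, d_in, d_hid)) for _ in range(3))

def causal_conv(X, W):
    # y_t depends only on x_{t-k+1..t}; each timestep is computed independently.
    Xp = np.vstack([np.zeros((W.shape[0] - 1, X.shape[1])), X])
    return np.stack([np.einsum('kd,kdh->h', Xp[t:t + W.shape[0]], W)
                     for t in range(X.shape[0])])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

Z = np.tanh(causal_conv(X, Wz))
F = sigmoid(causal_conv(X, Wf))
O = sigmoid(causal_conv(X, Wo))

# fo-pooling: the only sequential part, and it is elementwise (no matrix multiply).
c = np.zeros(d_hid)
H = np.zeros((T, d_hid))
for t in range(T):
    c = F[t] * c + (1.0 - F[t]) * Z[t]
    H[t] = O[t] * c
print("hidden states shape:", H.shape)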
1611.01578 | 26 | Table 1: Performance of Neural Architecture Search and other state-of-the-art models on CIFAR-10.
First, if we ask the controller to not predict stride or pooling, it can design a 15-layer architecture that achieves 5.50% error rate on the test set. This architecture has a good balance between accuracy and depth. In fact, it is the shallowest and perhaps the most inexpensive architecture among the top performing networks in this table. This architecture is shown in Appendix A, Figure 7. A notable feature of this architecture is that it has many rectangular filters and it prefers larger filters at the top layers. Like residual networks (He et al., 2016a), the architecture also has many one-step skip connections. This architecture is a local optimum in the sense that if we perturb it, its performance becomes worse. For example, if we densely connect all layers with skip connections, its performance becomes slightly worse: 5.56%. If we remove all skip connections, its performance drops to 7.97%. | 1611.01578#26 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
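The NAS record above describes an RNN controller trained with reinforcement learning to propose architectures that maximize validation accuracy. The sketch below shows a REINFORCE-with-baseline update on a toy controller; the real controller is an RNN and the reward is a trained child network's validation accuracy, both replaced here by made-up stand-ins (independent softmax decisions and a hypothetical fake_reward) so the example runs instantly.

# Illustrative sketch (not the paper's system): training a toy architecture
# "controller" with REINFORCE to maximize a stand-in reward.
import numpy as np

rng = np.random.default_rng(3)
choices = [3, 5, 7]                           # hypothetical filter-size options per layer
n_layers, lr = 2, 0.5
logits = np.zeros((n_layers, len(choices)))   # one softmax per architectural decision

def fake_reward(arch):
    # Stand-in for child-network validation accuracy: pretend [5, 3] is the best design.
    target = [5, 3]
    return 1.0 - 0.2 * sum(abs(a - t) for a, t in zip(arch, target)) / len(arch)

baseline = 0.0
for step in range(200):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    actions = [rng.choice(len(choices), p=probs[l]) for l in range(n_layers)]
    R = fake_reward([choices[a] for a in actions])
    baseline = 0.95 * baseline + 0.05 * R          # moving-average baseline
    for l, a in enumerate(actions):                # REINFORCE: grad log pi * (R - b)
        grad = -probs[l]
        grad[a] += 1.0
        logits[l] += lr * (R - baseline) * grad
print("most likely architecture:",
      [choices[int(np.argmax(row))] for row in logits])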
1611.01600 | 26 | Batch normalization, with a minibatch size of 50, is used. The maximum number of epochs is 50. The learning rate for the weight-binarized (resp. weight-and-activation-binarized) network starts at 0.001 (resp. 0.0005), and decays by a factor of 0.1 at epochs 15 and 25.
Since binarization is a form of regularization (Courbariaux et al., 2015), we do not use other regularization methods (like Dropout). All the weights are initialized as in (Glorot & Bengio, 2010). Adam (Kingma & Ba, 2015) is used as the optimization solver.
Table 1 shows the test classification error rates, and Figure 1 shows the convergence of LAB. As can be seen, the proposed LAB achieves the lowest error on MNIST and SVHN. It even outperforms the full-precision network on MNIST, as weight binarization serves as a regularizer. With the use of curvature information, LAB outperforms BinaryConnect and BWN. On CIFAR-10, LAB is slightly outperformed by BinaryConnect, but is still better than the full-precision network. Among the schemes that binarize both weights and activations, LAB2 also outperforms BNN and the XNOR-Network. | 1611.01600#26 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
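The LAB record above credits the gain over BinaryConnect/BWN to using curvature information: the proximal step chooses the binarization scaling under a diagonal Hessian approximation taken from Adam's second moments. A hedged NumPy sketch of such a curvature-weighted scaling follows; the layer size and moment values are made-up, and this illustrates the idea rather than reproducing the paper's exact algorithm.

# Illustrative sketch (following the abstract's description, not released code):
# binarize a layer as alpha * sign(w), with alpha chosen using a diagonal curvature
# estimate d derived from Adam-style second moments. All values are assumptions.
import numpy as np

rng = np.random.default_rng(4)
w = rng.normal(scale=0.1, size=100)          # full-precision weights of one layer
v = rng.uniform(1e-4, 1e-2, size=100)        # Adam second-moment estimates (assumed)
d = np.sqrt(v) + 1e-8                        # diagonal curvature proxy

# Plain magnitude-based scaling (BinaryConnect/BWN style) ignores curvature:
alpha_plain = np.abs(w).mean()

# Curvature-weighted scaling: weights with larger curvature influence alpha more.
alpha_lab = np.abs(d * w).sum() / d.sum()

w_bin = alpha_lab * np.sign(w)
print("plain alpha:", round(float(alpha_plain), 5))
print("curvature-aware alpha:", round(float(alpha_lab), 5))
print("unique binarized values:", np.unique(np.round(w_bin, 5)))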
1611.01603 | 26 | LSTM, following previous work (Bahdanau et al., 2015; Wang & Jiang, 2016). This is in contrast with our approach, where the attention is pre-computed before flowing to the modeling layer. Despite being a simpler attention mechanism, our proposed static attention outperforms the dynamically computed attention by more than 3 points. We conjecture that separating out the attention layer results in a richer set of features computed in the first 4 layers which are then incorporated by the modeling layer. We also show the performance of BIDAF with several different definitions of α and β functions (Equation 1 and 2) in Appendix B. | 1611.01603#26 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
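The BiDAF record above contrasts static, pre-computed attention with dynamically computed attention. The sketch below builds the kind of similarity matrix and bidirectional (context-to-query and query-to-context) attention described in the abstract, using one common choice of α(h, u) = w·[h; u; h∘u]; the dimensions and random vectors are assumptions standing in for the model's contextual embeddings H and U.

# Illustrative sketch (not the authors' code) of a bidirectional attention step:
# a similarity matrix between context and query vectors, then attention both ways.
import numpy as np

rng = np.random.default_rng(5)
two_d, T, J = 6, 7, 4                      # 2d, context length, query length (assumed)
H = rng.normal(size=(two_d, T))            # context encodings
U = rng.normal(size=(two_d, J))            # query encodings
w = rng.normal(size=3 * two_d)             # weights of alpha(h, u) = w . [h; u; h*u]

def softmax(x, axis):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Similarity matrix S[t, j] = alpha(H[:, t], U[:, j]); computed once (static attention).
S = np.array([[w @ np.concatenate([H[:, t], U[:, j], H[:, t] * U[:, j]])
               for j in range(J)] for t in range(T)])

# Context-to-query: each context word attends over the query words.
a = softmax(S, axis=1)                     # (T, J)
U_tilde = U @ a.T                          # (2d, T) attended query vectors

# Query-to-context: attend over the context words most relevant to any query word.
b = softmax(S.max(axis=1), axis=0)         # (T,)
h_tilde = H @ b                            # (2d,)
print("attended query vectors:", U_tilde.shape, " attended context vector:", h_tilde.shape)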
1611.01626 | 26 | Remark. In SARSA (Rummery & Niranjan, 1994) we set δ_{s,a} = r(s, a) + γ Q(s', b) − Q(s, a), where b is the action selected at state s', which would be equivalent to using a bootstrap critic in equation (6) where Q^c(s, a) = r(s, a) + γ Q(s', b). In expected-SARSA (Sutton & Barto, 1998, Exercise 6.10; Van Seijen et al., 2009) we take the expectation over the Q-values at the next state, so δ_{s,a} = r(s, a) + γ V(s') − Q(s, a). This is equivalent to TD-actor-critic (Konda & Tsitsiklis, 2003), where we use the value function to provide the critic, which is given by Q^c = r(s, a) + γ V(s'). In Q-learning δ_{s,a} = r(s, a) + γ max_b Q(s', b) − Q(s, a), which would be equivalent to using an optimizing critic that bootstraps using the max Q-value at the next state, i.e., | 1611.01626#26 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
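The remark in the record above distinguishes SARSA, expected-SARSA and Q-learning by the bootstrap target in their temporal-difference errors. The short sketch below evaluates all three on a single made-up transition; the Q-table, policy and transition values are illustrative assumptions.

# Illustrative sketch of the three temporal-difference errors on one made-up
# transition (s, a, r, s') with a small tabular Q and a policy at the next state.
import numpy as np

gamma = 0.9
Q = np.array([[0.5, 1.0, 0.2],            # Q(s, .) for two states, three actions
              [0.3, 0.8, 1.5]])
pi = np.array([0.2, 0.5, 0.3])            # policy at the next state s'
s, a, r, s_next = 0, 1, 1.0, 1
b = 2                                      # action actually taken at s' (for SARSA)

delta_sarsa  = r + gamma * Q[s_next, b]    - Q[s, a]   # bootstrap on the taken action
delta_exp    = r + gamma * pi @ Q[s_next]  - Q[s, a]   # expected-SARSA, i.e. V(s')
delta_qlearn = r + gamma * Q[s_next].max() - Q[s, a]   # bootstrap on the max Q-value
print(delta_sarsa, delta_exp, delta_qlearn)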