doi (string, len 10) | chunk-id (int64, 0-936) | chunk (string, 401-2.02k) | id (string, 12-14) | title (string, 8-162) | summary (string, 228-1.92k) | source (string, len 31) | authors (string, 7-6.97k) | categories (string, 5-107) | comment (string, 4-398, nullable) | journal_ref (string, 8-194, nullable) | primary_category (string, 5-17) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1608.03983 | 54 | Figure 8: Top-5 test errors obtained by SGD with momentum with the default learning rate schedule and SGDR with T0 = 1, Tmult = 2 on WRN-28-10 trained on a version of ImageNet, with all images from all 1000 classes downsampled to 32 × 32 pixels. The same baseline data augmentation as for the CIFAR datasets is used. Three settings of the initial learning rate are considered: 0.050, 0.015 and 0.005. In contrast to the experiments described in the main paper, here, the dataset is permuted only within 10 subgroups each formed from 100 classes which makes good generalization much harder to achieve for both algorithms. An interpretation of SGDR results given here might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selection of the latter by scanning / annealing from the initial learning rate to 0.
| 1608.03983#54 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.01413 | 0 | arXiv:1608.01413v2 [cs.CL] 20 Aug 2016
# Solving General Arithmetic Word Problems
Subhro Roy and Dan Roth, University of Illinois, Urbana-Champaign
{sroy9, danr}@illinois.edu
# Abstract
This paper presents a novel approach to automatically solving arithmetic word problems. This is the first algorithmic approach that can handle arithmetic problems with multiple steps and operations, without depending on additional annotations or predefined templates. We develop a theory for expression trees that can be used to represent and evaluate the target arithmetic expressions; we use it to uniquely decompose the target arithmetic problem to multiple classification problems; we then compose an expression tree, combining these with world knowledge through a constrained inference framework. Our classifiers gain from the use of quantity schemas that supports better extraction of features. Experimental results show that our method outperforms existing systems, achieving state of the art performance on benchmark datasets of arithmetic word problems.
# Introduction | 1608.01413#0 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 1 | # Introduction
multi-step arithmetic problems involving all four basic operations. The template based method of (Kushman et al., 2014), on the other hand, can deal with all types of problems, but implicitly assumes that the solution is generated from a set of predefined equation templates. In this paper, we present a novel approach which can solve a general class of arithmetic problems without predefined equation templates. In particular, it can handle multiple step arithmetic problems as shown in Example 1.
Example 1 Gwen was organizing her book case making sure each of the shelves had exactly 9 books on it. She has 2 types of books - mystery books and picture books. If she had 3 shelves of mystery books and 5 shelves of picture books, how many books did she have in total?
The solution involves understanding that the number of shelves needs to be summed up, and that the total number of shelves needs to be multiplied by the number of books each shelf can hold. In addition, one has to understand that the number "2" is not a direct part of the solution of the problem. | 1608.01413#1 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 2 | In recent years there is growing interest in understanding natural language text for the purpose of answering science related questions from text as well as quantitative problems of various kinds. In this context, understanding and solving arithmetic word problems is of specific interest. Word problems arise naturally when reading the financial section of a newspaper, following election coverage, or when studying elementary school arithmetic word problems. These problems pose an interesting challenge to the NLP community, due to their concise and relatively straightforward text, and seemingly simple semantics. Arithmetic word problems are usually directed towards elementary school students, and can be solved by combining the numbers mentioned in text with basic operations (addition, subtraction, multiplication, division). They are simpler than algebra word problems which require students to identify variables, and form equations with these variables to solve the problem.
While a solution to these problems eventually requires composing multi-step numeric expressions from text, we believe that directly predicting this complex expression from text is not feasible. | 1608.01413#2 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 3 | While a solution to these problems eventually requires composing multi-step numeric expressions from text, we believe that directly predicting this complex expression from text is not feasible.
At the heart of our technical approach is the novel notion of an Expression Tree. We show that the arithmetic expressions we are interested in can always be represented using an Expression Tree that has some unique decomposition properties. This allows us to decompose the problem of mapping the text to the arithmetic expression to a collection of simple prediction problems, each determining the lowest common ancestor operation between a pair of quantities mentioned in the problem. We then formulate the decision problem of composing the final expression tree as a joint inference problem, via an objective function that consists of all these decomposed prediction problems, along with legitimacy and background knowledge constraints.
Initial methods to address arithmetic word problems have mostly focussed on subsets of problems, restricting the number or the type of operations used (Roy et al., 2015; Hosseini et al., 2014) but could not deal with
Learning to generate the simpler decomposed expressions allows us to support generalization across problem types. In particular, our system could solve Example 1 even though it has never seen a problem that requires both addition and multiplication operations. | 1608.01413#3 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 4 | We also introduce a second concept, that of quantity schema, that allows us to focus on the information relevant to each quantity mentioned in the text. We show that features extracted from quantity schemas help reasoning effectively about the solution. Moreover, quantity schemas help identify unnecessary text snippets in the problem text. For instance, in Example 2, the information that "Tom washed cars over the weekend" is irrelevant; he could have performed any activity to earn money. In order to solve the problem, we only need to know that he had $74 last week, and now he has $86.
Example 2 Last week Tom had $74. He washed cars over the weekend and now has $86. How much money did he make from the job?
We combine the classifiers' decisions using a constrained inference framework that allows for incorporating world knowledge as constraints. For example, we deliberatively incorporate the information that, if the problem asks about an "amount", the answer must be positive, and if the question starts with "how many", the answer will most likely be an integer. | 1608.01413#4 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 5 | Our system is evaluated on two existing datasets of arithmetic word problems, achieving state of the art performance on both. We also create a new dataset of multistep arithmetic problems, and show that our system achieves competitive performance in this challenging evaluation setting.
The next section describes the related work in the area of automated math word problem solving. We then present the theory of expression trees and our decomposition strategy that is based on it. Sec. 4 presents the overall computational approach, including the way we use quantity schemas to learn the mapping from text to expression tree components. Finally, we discuss our experimental study and conclude.
# 2 Related Work | 1608.01413#5 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 6 | # 2 Related Work
Previous work in automated arithmetic problem solvers has focussed on a restricted subset of problems. The system described in (Hosseini et al., 2014) handles only addition and subtraction problems, and requires additional annotated data for verb categories. In contrast, our system does not require any additional annotations and can handle a more general category of problems. The approach in (Roy et al., 2015) supports all four basic operations, and uses a pipeline of classifiers to predict different properties of the problem. However, it makes assumptions on the number of quantities mentioned in the problem text, as well as the number of arithmetic steps required to solve the problem. In contrast, our system does not have any such restrictions, effectively handling problems mentioning multiple quantities and requiring multiple steps. Kushman's approach to automatically solving algebra word problems (Kushman et al., 2014) might be the most related to ours. It tries to map numbers from the problem text to predefined equation templates. However, they implicitly assume that similar equation forms have been seen in the training data. In contrast, our system can perform competitively, even when it has never seen similar expressions in training. | 1608.01413#6 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 7 | There is a recent interest in understanding text for the purpose of solving scientific and quantitative problems of various kinds. Our approach is related to work in understanding and solving elementary school standardized tests (Clark, 2015). The system described in (Berant et al., 2014) attempts to automatically answer biology questions, by extracting the structure of biological processes from text. There have also been efforts to solve geometry questions by jointly understanding diagrams and associated text (Seo et al., 2014). A recent work (Sadeghi et al., 2015) tries to answer science questions by visually verifying relations from images.
Our constrained inference module falls under the general framework of Constrained Conditional Models (CCM) (Chang et al., 2012). In particular, we use the L + I scheme of CCMs, which predicts structured output by independently learning several simple components, combining them at inference time. This has been successfully used to incorporate world knowledge at inference time, as well as getting around the need for large amounts of jointly annotated data for structured prediction (Roth and Yih, 2005; Punyakanok et al., 2005; Punyakanok et al., 2008; Clarke and Lapata, 2006; Barzilay and Lapata, 2006; Roy et al., 2015).
# 3 Expression Tree and Problem Decomposition | 1608.01413#7 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 8 | # 3 Expression Tree and Problem Decomposition
We address the problem of automatically solving arithmetic word problems. The input to our system is the problem text P, which mentions n quantities q1, q2, . . . , qn. Our goal is to map this problem to a read-once arithmetic expression E that, when evaluated, provides the problem's solution. We define a read-once arithmetic expression as one that makes use of each quantity at most once. We say that E is a valid expression, if it is such a Read-Once arithmetic expression, and we only consider in this work problems that can be solved using valid expressions (it's possible that they can be solved also with invalid expressions).
Definition An expression tree T for a valid expression E is a binary tree whose leaves represent quantities, and each internal node represents one of the four basic operations. For a non-leaf node n, we represent the operation associated with it as ⊙(n), and its left and right children as lc(n) and rc(n) respectively. The numeric value of the quantity associated with a leaf node n is denoted as Q(n). Each node n also has a value associated with it, represented as VAL(n), which can be computed in a
recursive way as follows: | 1608.01413#8 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 9 | recursive way as follows:
VAL(n) = Q(n) if n is a leaf, and VAL(n) = VAL(lc(n)) ⊙(n) VAL(rc(n)) otherwise.   (1)
For any expression tree T for expression E with root node nroot, the value of VAL(nroot) is exactly equal to the numeric value of the expression E. Therefore, this gives a natural representation of numeric expressions, providing a natural parenthesization of the numeric expression. Fig 1 shows an example of an arithmetic problem with solution expression and an expression tree for the solution expression.
Problem: Gwen was organizing her book case making sure each of the shelves had exactly 9 books on it. She has 2 types of books - mystery books and picture books. If she had 3 shelves of mystery books and 5 shelves of picture books, how many books did she have total? Solution: (3 + 5) × 9 = 72
Figure 1: An arithmetic word problem, solution expression and the corresponding expression tree
Definition An expression tree for a valid expression E is called monotonic if it satisfies the following conditions:
1. If an addition node is connected to a subtraction node, then the subtraction node is the parent.
2. If a multiplication node is connected to a division node, then the division node is the parent. | 1608.01413#9 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
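The row above walks through the expression-tree representation and the recursive VAL(·) evaluation of equation (1). Here is a minimal sketch of that idea, checked against the Fig. 1 tree (3 + 5) × 9 = 72; the Node class and val function are illustrative names and assumptions, not the authors' implementation.

```python
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.truediv}

class Node:
    """Expression-tree node: a leaf holds a quantity, an internal node an operation."""
    def __init__(self, op=None, left=None, right=None, quantity=None):
        self.op, self.left, self.right, self.quantity = op, left, right, quantity

    def is_leaf(self):
        return self.quantity is not None

def val(node):
    """Equation (1): VAL(n) = Q(n) for a leaf, else VAL(lc(n)) ⊙(n) VAL(rc(n))."""
    if node.is_leaf():
        return node.quantity
    return OPS[node.op](val(node.left), val(node.right))

# Fig. 1: (3 + 5) * 9 = 72
tree = Node('*', Node('+', Node(quantity=3), Node(quantity=5)), Node(quantity=9))
assert val(tree) == 72
```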
1608.01413 | 10 | 2. If a multiplication node is connected to a division node, then the division node is the parent.
3. Two subtraction nodes cannot be connected to each other.
4. Two division nodes cannot be connected to each other.
Fig 2 shows two different expression trees for the same expression. Fig 2b is monotonic whereas Fig 2a is not.
Our decomposition relies on the idea of monotonic expression trees. We try to predict for each pair of quantities qi, qj, the operation at the lowest common ancestor (LCA) node of the monotonic expression tree for the solution expression. We also predict for each quantity, whether it is relevant to the solution. Finally, an inference module combines all these predictions.
In the rest of the section, we show that for any pair of quantities qi, qj in the solution expression, any monotonic tree for the solution expression has the same LCA operation. Therefore, predicting the LCA operation becomes a multiclass classification problem.
Figure 2: Two different expression trees for the numeric expression (3 × 5) + 7 − 8 − 9. The right one (b) is monotonic, whereas the left one (a) is not. | 1608.01413#10 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 11 | operation. Therefore, predicting the LCA operation becomes a multiclass classification problem.
The reason that we consider the monotonic representation of the expression tree is that different trees could otherwise give different LCA operations for a given pair of quantities. For example, in Fig 2, the LCA operation for quantities 5 and 8 can be + or −, depending on which tree is considered.
Definition We define an addition-subtraction chain of an expression tree to be the maximal connected set of nodes labeled with addition or subtraction.
The nodes of an addition-subtraction (AS) chain C represent a set of terms being added or subtracted. These terms are sub-expressions created by subtrees rooted at neighboring nodes of the chain. We call these terms the chain terms of C, and the whole expression, after node operations have been applied to the chain terms, the chain expression of C. For example, in Fig 2, the shaded nodes form an addition-subtraction chain. The chain expression is (3 × 5) + 7 − 8 − 9, and the chain terms are 3 × 5, 7, 8 and 9. We define a multiplication-division (MD) chain in a similar way.
Theorem 3.1. Every valid expression can be represented by a monotonic expression tree. | 1608.01413#11 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 12 | Proof. The proof is procedural, that is, we provide a method to convert any expression tree to a monotonic expression tree for the same expression. Consider a non-monotonic expression tree E, and without loss of generality, assume that the first condition for monotonicity is not valid. Therefore, there exists an addition node ni and a subtraction node nj, and ni is the parent of nj. Consider an addition-subtraction chain C which includes ni, nj. We now replace the nodes of C and its subtrees in the following way. We add a single subtraction node n′. The left subtree of n′ has all the addition chain terms connected by addition nodes, and the right subtree of n′ has all the subtraction chain terms connected by addition nodes. Both subtrees of n′ only require addition nodes, hence the monotonicity condition is satisfied. We can construct the monotonic tree in Fig 2b from the non-monotonic tree of Fig 2a using this procedure. The addition chain terms are 3 × 5 and 7, and the subtraction chain terms are 8 and 9. As was described above, we introduce the root subtraction node in Fig 2b and attach the addition chain terms | 1608.01413#12 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 13 | to the left and the subtraction chain terms to the right. The same line of reasoning can be used to handle the second condition with multiplication and division replacing addition and subtraction, respectively.
Theorem 3.2. Consider two valid expression trees T1 and T2 for the same expression E. Let C1 and C2 be the chains containing the root nodes of T1 and T2 respectively. The chain type (addition-subtraction or multiplication-division) as well as the set of chain terms of C1 and C2 are identical.
Proof. We first prove that the chains containing the roots are both AS or both MD, and then show that the chain terms are also identical. | 1608.01413#13 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
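The constructive proof of Theorem 3.1 in the rows above is essentially a single tree rewrite: collect the addition and subtraction chain terms of the root chain, then rebuild the chain as one subtraction node whose two subtrees use only addition. A hedged sketch of that rewrite for the +/− case, reusing the illustrative Node class and val function from the earlier snippet (the ×/÷ case is symmetric and omitted):

```python
def chain_terms(node, sign=1, terms=None):
    """Collect (sign, subtree) chain terms of the +/- chain rooted at `node`."""
    if terms is None:
        terms = []
    if node.is_leaf() or node.op not in ('+', '-'):
        terms.append((sign, node))            # a subtree hanging off the chain
    else:
        chain_terms(node.left, sign, terms)
        chain_terms(node.right, -sign if node.op == '-' else sign, terms)
    return terms

def make_monotonic(root):
    """Rebuild the root +/- chain as in the proof of Theorem 3.1."""
    if root.is_leaf() or root.op not in ('+', '-'):
        return root
    terms = chain_terms(root)
    plus = [t for s, t in terms if s > 0]
    minus = [t for s, t in terms if s < 0]

    def sum_tree(nodes):
        acc = nodes[0]
        for n in nodes[1:]:
            acc = Node('+', acc, n)
        return acc

    return sum_tree(plus) if not minus else Node('-', sum_tree(plus), sum_tree(minus))

# Fig. 2a: ((3 * 5) + 7) - 8 - 9  ->  Fig. 2b: ((3 * 5) + 7) - (8 + 9)
fig2a = Node('-', Node('-', Node('+', Node('*', Node(quantity=3), Node(quantity=5)),
                                 Node(quantity=7)), Node(quantity=8)), Node(quantity=9))
assert val(make_monotonic(fig2a)) == val(fig2a) == 5
```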
1608.01413 | 14 | Proof. We first prove that the chains containing the roots are both AS or both MD, and then show that the chain terms are also identical.
We prove by contradiction that the chain type is the same. Let C1's type be "addition-subtraction" and C2's type be "multiplication-division" (without loss of generality). Since both C1 and C2 generate the same expression E, we have that E can be represented as a sum (or difference) of two expressions as well as a product (or division) of two expressions. Transforming a sum (or difference) of expressions to a product (or division) requires taking common terms from the expressions, which implies that the sum (or difference) had duplicate quantities. The opposite transformation adds the same term to various expressions, leading to multiple uses of the same quantity. Therefore, this will force at least one of C1 and C2 to use the same quantity more than once, violating validity. | 1608.01413#14 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 15 | We now need to show that the individual chain terms are also identical. Without loss of generality, let us assume that both C1 and C2 are "addition-subtraction" chains. Suppose the chain terms of C1 and C2 are not identical. The chain expression for both chains will be the same (since they are root chains, the chain expressions have to be the same as E). Let the chain expression for C1 be ∑i ti − ∑j t′j, where the ti's are the addition chain terms and the t′j's are the subtraction chain terms. Similarly, let the chain expression for C2 be ∑i si − ∑j s′j. We know that ∑i ti − ∑j t′j = ∑i si − ∑j s′j, but the set of ti's and t′j's is not the same as the set of si's and s′j's. However, it should be possible to transform one form to the other using mathematical manipulations. This transformation will involve taking common terms, or multiplying two terms, or both. Following the previous explanation, this will force one of the expressions to have duplicate quantities, violating validity. Hence, the chain terms of C1 and C2 are identical. | 1608.01413#15 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 16 | for a valid expression E. For a distinct pair of quantities qi, qj participating in expression E, we denote by ni, nj the leaves of the expression tree T representing qi, qj, respectively. Let nLCA(qi, qj; T) be the lowest common ancestor node of ni and nj. We also define order(qi, qj; T) to be true if ni appears in the left subtree of nLCA(qi, qj; T) and nj appears in the right subtree of nLCA(qi, qj; T), and set order(qi, qj; T) to false otherwise. Finally, we define ⊙LCA(qi, qj; T) for a pair of quantities qi, qj as follows: | 1608.01413#16 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 17 | ⊙LCA(qi, qj; T) =
  + if ⊙(nLCA(qi, qj; T)) = +
  × if ⊙(nLCA(qi, qj; T)) = ×
  − if ⊙(nLCA(qi, qj; T)) = − and order(qi, qj; T) = true
  − reverse if ⊙(nLCA(qi, qj; T)) = − and order(qi, qj; T) = false
  ÷ if ⊙(nLCA(qi, qj; T)) = ÷ and order(qi, qj; T) = true
  ÷ reverse if ⊙(nLCA(qi, qj; T)) = ÷ and order(qi, qj; T) = false   (2)
Definition Given two expression trees T1 and T2 for the same expression E, T1 is LCA-equivalent to T2 if for every pair of quantities qi, qj in the expression E, we have ⊙LCA(qi, qj; T1) = ⊙LCA(qi, qj; T2).
Theorem 3.3. All monotonic expression trees for an expression are LCA-equivalent to each other.
Proof. We prove by induction on the number of quantities used in an expression. For all expressions E with 2 quantities, there exists only one monotonic expression tree, and hence, the statement is trivially true. This satisfies our base case. | 1608.01413#17 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
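A hedged sketch of the ⊙LCA labelling defined in equation (2) above, again using the illustrative Node class from the earlier snippets: it locates the LCA of two leaves and maps its operation to one of the six labels, adding a "reverse" marker when the first quantity falls in the right subtree. This is one plausible way such labels could be generated for training the multiclass classifier; it is not the authors' code.

```python
def find_path(node, leaf):
    """Root-to-leaf path as a list of nodes, or None if `leaf` is not in this subtree."""
    if node is leaf:
        return [node]
    if node.is_leaf():
        return None
    for child in (node.left, node.right):
        path = find_path(child, leaf)
        if path is not None:
            return [node] + path
    return None

def lca_label(root, leaf_i, leaf_j):
    """⊙LCA(qi, qj; T): the LCA operation, marked 'reverse' when qi sits on the right."""
    path_i, path_j = find_path(root, leaf_i), find_path(root, leaf_j)
    lca = None
    for a, b in zip(path_i, path_j):
        if a is b:
            lca = a
        else:
            break
    order = find_path(lca.left, leaf_i) is not None     # order(qi, qj; T)
    if lca.op in ('+', '*'):
        return lca.op                                   # symmetric operations
    return lca.op if order else lca.op + '_reverse'     # '-' or '/' with orientation

# On the monotonic tree of Fig. 2b, the LCA of the leaves 5 and 8 is the root '-'
five, eight = Node(quantity=5), Node(quantity=8)
fig2b = Node('-', Node('+', Node('*', Node(quantity=3), five), Node(quantity=7)),
                  Node('+', eight, Node(quantity=9)))
assert lca_label(fig2b, five, eight) == '-'
assert lca_label(fig2b, eight, five) == '-_reverse'
```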
1608.01413 | 18 | For the inductive case, we assume that for all expressions with k < n quantities, the theorem is true. Now, we need to prove that any expression with n nodes will also satisfy the property.
Consider a valid (as in all cases) expression E, with monotonic expression trees T1 and T2. From Theorem 3.2, we know that the chains containing the roots of T1 and T2 have identical type and terms. Given two quantities qi, qj of E, their lowest common ancestor nodes in T1 and T2 will either both belong to the chain containing the root, or both belong to one of the chain terms. If the LCA node is part of the chain for both T1 and T2, the monotonic property ensures that the LCA operation will be identical. If the LCA node is part of a chain term (which is an expression tree of size less than n), the property is satisfied by the induction hypothesis.
The theory just presented suggests that it is possible to uniquely decompose the overall problem to simpler steps and this will be exploited in the next section.
# 4 Mapping Problems to Expression Trees
Given the uniqueness properties proved in Sec. 3, it is sufficient to identify the operation between any two relevant quantities in the text, in order to determine the unique valid expression. In fact, identifying the operation between any pair of quantities provides much | 1608.01413#18 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 19 | needed redundancy given the uncertainty in identifying the operation from text, and we exploit it in our final joint inference.
Consequently, our overall method proceeds as follows: given the problem text P, we detect quantities q1, q2, . . . , qn. We then use two classifiers, one for relevance and the other to predict the LCA operations for a monotonic expression tree of the solution. Our training makes use of the notion of quantity schemas, which we describe in Section 4.2. The distributional output of these classifiers is then used in a joint inference procedure that determines the final expression tree.
Our training data consists of problem text paired with a monotonic expression tree for the solution expression and alignment of quantities in the expression to quantity mentions in the problem text. Both the relevance and LCA operation classifiers are trained on gold annotations.
# 4.1 Global Inference for Expression Trees
In this subsection, we define the scoring functions corresponding to the decomposed problems, and show how we combine these scores to perform global inference. For a problem P with quantities q1, q2, . . . , qn, we define the following scoring functions: | 1608.01413#19 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 20 | 1. PAIR(qi, qj, op): Scores the likelihood of ⊙LCA(qi, qj; T) = op, where T is a monotonic expression tree of the solution expression of P. A multiclass classifier trained to predict LCA operations (Section 4.4) can provide these scores.
2. IRR(q): Scores the likelihood of quantity q being an irrelevant quantity in P, that is, q is not used in creating the solution. A binary classifier trained to predict whether a quantity q is relevant or not (Section 4.3) can provide these scores.
Let I(E) be the set of all quantities in P which are not used in expression E, and let T be a monotonic expression tree for E. We define Score(E) of an expression E in terms of the above scoring functions and a scaling parameter wIRR as follows:
Score(E) = wIRR · ∑_{q ∈ I(E)} IRR(q) + ∑_{qi, qj ∉ I(E)} PAIR(qi, qj, ⊙LCA(qi, qj, T))   (3) | 1608.01413#20 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
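Equation (3) above scores a candidate expression by adding the scaled irrelevance scores of the unused quantities and the pairwise LCA-operation scores of the used ones. A minimal sketch, assuming `irr_score` and `pair_score` wrap the trained relevance and LCA-operation classifiers (not reproduced here) and reusing the illustrative `lca_label` from the earlier snippet:

```python
def score(expr_tree, used, all_quantities, pair_score, irr_score, w_irr):
    """Equation (3): w_IRR * sum of IRR over unused quantities plus the sum of
    PAIR(qi, qj, ⊙LCA(qi, qj, T)) over all pairs of quantities used in E."""
    unused = [q for q in all_quantities if q not in used]
    total = w_irr * sum(irr_score(q) for q in unused)
    for i, qi in enumerate(used):
        for qj in used[i + 1:]:
            total += pair_score(qi, qj, lca_label(expr_tree, qi, qj))
    return total
```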
1608.01413 | 21 | Our final expression tree is an outcome of a constrained optimization process, following (Roth and Yih, 2004; Chang et al., 2012). Our objective function makes use of the scores returned by IRR(·) and PAIR(·) to determine the expression tree, and is constrained by legitimacy and background knowledge constraints, detailed below.
1. Positive Answer: Most arithmetic problems asking for amounts or number of objects usually have a positive number as an answer. Therefore, while
searching for the best scoring expression, we reject expressions generating a negative answer.
2. Integral Answer: Problems with questions such as "how many" usually expect integral solutions. We only consider integral solutions as legitimate outputs for such problems.
Let C be the set of valid expressions that can be formed using the quantities in a problem P, and which satisfy the above constraints. The inference algorithm now becomes the following:
arg max_{E ∈ C} Score(E)   (4)
The space of possible expressions is large, and we employ a beam search strategy to find the highest scoring constraint-satisfying expression (Chang et al., 2012). We construct an expression tree using a bottom-up approach, first enumerating all possible sets of irrelevant quantities, and next over all possible expressions, keeping the top k at each step. We give details below. | 1608.01413#21 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
1608.01413 | 22 | 1. Enumerating Irrelevant Quantities: We generate a state for all possible sets of irrelevant quantities, ensuring that there are at least two relevant quantities in each state. We refer to each of the relevant quantities in each state as a term. Therefore, each state can be represented as a set of terms.
2. Enumerating Expressions: For generating a next state S′ from S, we choose a pair of terms ti and tj in S and one of the four basic operations, and form a new term by combining terms ti and tj with the operation. Since we do not know which of the possible next states will lead to the optimal goal state, we enumerate all possible next states (that is, enumerate all possible pairs of terms and all possible operations); we prune the beam to keep only the top k candidates. We terminate when all the states in the beam have exactly one term.
Once we have a top k list of candidate expression trees, we choose the highest scoring tree which satisfies the constraints. However, there might not be any tree in the beam which satisfies the constraints, in which case, we choose the top candidate in the beam. We use k = 200 in our experiments. | 1608.01413#22 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
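A compact sketch of the bottom-up beam search described in the row above, again on the illustrative Node class: states are tuples of term trees, expansion combines any two terms with one of the four operations (in both orders, since subtraction and division are not commutative), and only the k best states survive each step. The `state_score` argument is a stand-in for scoring partial states with the classifier-based objective; k defaults to 200 as in the paper.

```python
from itertools import combinations

OPERATIONS = ('+', '-', '*', '/')

def beam_search(quantities, state_score, k=200):
    """Bottom-up beam search over expression trees (Section 4.1)."""
    # 1. Enumerate sets of relevant quantities (at least two per state).
    beam = []
    for r in range(2, len(quantities) + 1):
        for subset in combinations(quantities, r):
            beam.append(tuple(Node(quantity=q) for q in subset))
    beam = sorted(beam, key=state_score, reverse=True)[:k]
    # 2. Repeatedly combine pairs of terms until every state is a single tree.
    while any(len(state) > 1 for state in beam):
        candidates = []
        for state in beam:
            if len(state) == 1:
                candidates.append(state)
                continue
            for i, j in combinations(range(len(state)), 2):
                rest = tuple(t for idx, t in enumerate(state) if idx not in (i, j))
                for op in OPERATIONS:
                    candidates.append(rest + (Node(op, state[i], state[j]),))
                    candidates.append(rest + (Node(op, state[j], state[i]),))
        beam = sorted(candidates, key=state_score, reverse=True)[:k]   # prune to top k
    return [state[0] for state in beam]   # candidate expression trees, best first
```

The returned candidates would then be filtered by the positive-answer and integral-answer constraints, falling back to the best unfiltered candidate when none satisfies them, as described above.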
1608.01413 | 23 | In order to choose the value for wIRR, we search over a fixed set of candidate values and choose the parameter setting which gives the highest accuracy on the training data.
# 4.2 Quantity Schema
In order to generalize across problem types as well as over simple manipulations of the text, it is necessary to train our system only with relevant information from the problem text. E.g., for the problem in Example 2, we do not want to take decisions based on how Tom earned money. Therefore, there is a need to extract the relevant information from the problem text.
To this end, we introduce the concept of a quantity schema which we extract for each quantity in the problem's text. Along with the question asked, the quantity schemas provide all the information needed to solve most arithmetic problems.
A quantity schema for a quantity q in problem P consists of the following components.
1. Associated Verb For each quantity q, we detect the verb associated with it. We traverse up the dependency tree starting from the quantity mention, and choose the first verb we reach. We used the easy-first dependency parser (Goldberg and Elhadad, 2010). | 1608.01413#23 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
2. Subject of Associated Verb We detect the noun phrase, which acts as subject of the associated verb (if one exists). | 1608.01413#23 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
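The associated-verb extraction in item 1 above is just a walk up the dependency tree from the quantity token to the first verb. A rough sketch using spaCy as a stand-in for the easy-first parser the authors used; the model name, helper name, and example sentence are illustrative assumptions.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed locally installed model

def associated_verb(doc, quantity_token):
    """Walk up the dependency tree from the quantity and return the first verb."""
    token = quantity_token
    while token.head is not token:           # stop at the root
        token = token.head
        if token.pos_ == "VERB":
            return token
    return token if token.pos_ == "VERB" else None

doc = nlp("Last week Tom had 74 dollars.")
num = next(t for t in doc if t.like_num)      # the quantity mention "74"
print(associated_verb(doc, num))              # -> had
```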
1608.01413 | 24 | 2. Subject of Associated Verb We detect the noun phrase which acts as the subject of the associated verb (if one exists).
3. Unit We use a shallow parser to detect the phrase p in which the quantity q is mentioned. All tokens of the phrase (other than the number itself) are considered as unit tokens. Also, if p is followed by the prepositional phrase "of" and a noun phrase (according to the shallow parser annotations), we also consider tokens from this second noun phrase as unit tokens. Finally, if no unit token can be extracted, we assign the unit of the neighboring quantities as the unit of q (following previous work (Hosseini et al., 2014)).
4. Related Noun Phrases We consider all noun phrases which are connected to the phrase p containing quantity q, with NP-PP-NP attachment. If only one quantity is mentioned in a sentence, we consider all noun phrases in it as related. | 1608.01413#24 | Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems. | http://arxiv.org/pdf/1608.01413 | Subhro Roy, Dan Roth | cs.CL | EMNLP 2015 | null | cs.CL | 20160804 | 20160820 | [] |
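A similarly rough sketch of the unit extraction in item 3 (Unit) of the quantity schema above, using spaCy noun chunks in place of a shallow parser and reusing the `nlp` pipeline loaded in the previous snippet. The real system also extends the unit with an attached "of"-phrase and falls back to neighbouring quantities' units; both are omitted here, and the helper name is illustrative.

```python
def unit_tokens(doc, quantity_token):
    """Tokens of the noun chunk containing the quantity, minus the number itself."""
    for chunk in doc.noun_chunks:
        if chunk.start <= quantity_token.i < chunk.end:
            return [t.text for t in chunk if t.i != quantity_token.i]
    return []

doc = nlp("She had 3 shelves of mystery books.")
num = next(t for t in doc if t.like_num)
print(unit_tokens(doc, num))   # -> ['shelves'] (the 'of mystery books' extension is omitted)
```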
1608.01413 | 25 | 5. Rate We determine whether quantity q refers to a rate in the text, as well as extract two unit components defining the rate. For example, "7 kilometers per hour" has two components "kilometers" and "hour". Similarly, for sentences describing unit cost like "Each egg costs 2 dollars", "2" is a rate, with units "dollars" and "egg".
In addition to extracting the quantity schemas for each quantity, we extract the surface form text which poses the question. For example, in the question sentence, "How much will John have to pay if he wants to buy 7 oranges?", our extractor outputs "How much will John have to pay" as the question.
# 4.3 Relevance Classifier
We train a binary SVM classifier to determine, given problem text P and a quantity q in it, whether q is needed in the numeric expression generating the solution. We train on gold annotations and use the score of the classifier as the scoring function IRR(·).
4.3.1 Features
The features are extracted from the quantity schemas and can be broadly categorized into three groups:
1. Unit features: Most questions specifically mention the object whose amount needs to be computed, and hence questions provide a valuable clue as to which quantities can be irrelevant. We add a feature for whether the unit of quantity q is present in the question tokens. Also, we add a feature based on whether the units of other quantities have better matches with question tokens (based on the number of tokens matched), and one based on the number of quantities which have the maximum number of matches with the question tokens.
2. Related NP features: Often units are not enough to differentiate between relevant and irrelevant quantities. Consider the following:
Example 3. Problem: There are 8 apples in a pile on the desk. Each apple comes in a package of 11. 5 apples are added to the pile. How many apples are there in the pile? Solution: (8 + 5) = 13
The relevance decision depends on the noun phrase "the pile", which is absent in the second sentence. We add a feature indicating whether a related noun phrase is present in the question. Also, we add a feature based on whether the related noun phrases of other quantities have a better match with the question. Extraction of related noun phrases is described in Section 4.2.

3. Miscellaneous Features: When a problem mentions only two quantities, both of them are usually relevant. Hence, we also add a feature based on the number of quantities mentioned in text.
We include pairwise conjunction of the above features.
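A minimal sketch of how the unit, related-NP, and miscellaneous features above could be computed, including the pairwise conjunctions, is given below. It reuses the illustrative QuantitySchema from the earlier sketch; the function and feature names are assumptions rather than the paper's actual feature set.

```python
from typing import Dict, List

def token_overlap(tokens: List[str], question_tokens: List[str]) -> int:
    """Count tokens shared with the question (case-insensitive)."""
    question = {t.lower() for t in question_tokens}
    return sum(1 for t in tokens if t.lower() in question)

def relevance_features(schema, all_schemas, question_tokens: List[str]) -> Dict[str, float]:
    """Sketch of relevance features for one QuantitySchema."""
    feats: Dict[str, float] = {}
    # Unit features: does the unit of this quantity appear in the question,
    # and do other quantities match the question better?
    own = token_overlap(schema.unit_tokens, question_tokens)
    others = [token_overlap(s.unit_tokens, question_tokens) for s in all_schemas if s is not schema]
    best = max(others + [own])
    feats["unit_in_question"] = float(own > 0)
    feats["other_unit_matches_better"] = float(any(o > own for o in others))
    feats["num_quantities_with_max_match"] = float((others + [own]).count(best))
    # Related NP features: is a related noun phrase present in the question?
    np_overlap = max([token_overlap(phrase.split(), question_tokens) for phrase in schema.related_nps] or [0])
    feats["related_np_in_question"] = float(np_overlap > 0)
    # Miscellaneous: number of quantities mentioned in the problem text
    feats["num_quantities"] = float(len(all_schemas))
    # Pairwise conjunctions of the features above
    keys = list(feats)
    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            feats[a + "&" + b] = feats[a] * feats[b]
    return feats
```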
# 4.4 LCA Operation Classifier
In order to predict LCA operations, we train a multi-class SVM classifier. Given problem text P and a pair of quantities pi and pj, the classifier predicts one of the six labels described in Eq. 2. We consider the confidence scores for each label supplied by the classifier as the scoring function PAIR(·).
4.4.1 Features
We use the following categories of features:
1. Individual Quantity features: Dependent verbs have been shown to play a significant role in solving addition and subtraction problems (Hosseini et al., 2014). Hence, we add the dependent verb of the quantity as a feature. Multiplication and division problems are largely dependent on rates described in text. To capture that, we add a feature based on whether the quantity is a rate, and whether any component of the rate unit is present in the question. In addition to these quantity schema features, we add selected tokens from the neighborhood of the quantity mention. Neighborhoods of quantities are often highly informative of LCA operations; for example, in "He got 80 more marbles", the term "more" usually indicates addition. We add as features adverbs and comparative adjectives mentioned in a window of size 5 around the quantity mention.
2. Quantity Pair features: For a pair (qi, qj) we add features to indicate whether they have the same dependent verbs, to indicate whether both dependent verbs refer to the same verb mention, whether the units of qi and qj are the same and, if one of them is a rate, which component of the unit matches with the other quantity's unit. Finally, we add a feature indicating whether the value of qi is greater than the value of qj.

3. Question Features: Finally, we add a few features based on the question asked. In particular, for arithmetic problems where only one operation is needed, the question contains signals for the required operation. Specifically, we add indicator features based on whether the question mentions comparison-related tokens (e.g., "more", "less" or "than"), or whether the question asks for a rate (indicated by tokens such as "each" or "one").
We include pairwise conjunction of the above features. For both classifiers, we use the Illinois-SL package 1 under default settings.
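For concreteness, a sketch of the pair and question features described above is given below; the names are illustrative and this is not the authors' implementation (which uses the Illinois-SL package).

```python
from typing import Dict, List

def lca_pair_features(qi, qj, question_tokens: List[str]) -> Dict[str, float]:
    """Sketch of features for predicting the operation at the LCA of quantities qi and qj
    (two QuantitySchema objects from the earlier sketch)."""
    feats: Dict[str, float] = {}
    # Individual quantity signals
    feats["qi_is_rate"] = float(qi.is_rate)
    feats["qj_is_rate"] = float(qj.is_rate)
    # Pair signals
    feats["same_verb"] = float(qi.verb is not None and qi.verb == qj.verb)
    feats["same_unit"] = float(set(qi.unit_tokens) == set(qj.unit_tokens))
    if qi.is_rate and qi.rate_units is not None:
        feats["qi_rate_component_matches_qj_unit"] = float(any(u in qj.unit_tokens for u in qi.rate_units))
    feats["qi_greater_than_qj"] = float(qi.value > qj.value)
    # Question signals
    question = {t.lower() for t in question_tokens}
    feats["question_mentions_comparison"] = float(bool(question & {"more", "less", "than"}))
    feats["question_asks_for_rate"] = float(bool(question & {"each", "one"}))
    return feats
```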
# 5 Experimental Results
In this section, we evaluate the proposed method on publicly available datasets of arithmetic word problems. We evaluate separately the relevance and LCA operation classifiers, and show the contribution of various features. Lastly, we evaluate the performance of the full system, and quantify the gains achieved by the constraints.
# 5.1 Datasets
We evaluate our system on three datasets, each of which comprises a different category of arithmetic word problems.

1. AI2 Dataset: This is a collection of 395 addition and subtraction problems, released by (Hosseini et al., 2014). They performed a 3-fold cross validation, with every fold containing problems from different sources. This helped them evaluate robustness to domain diversity. We follow the same evaluation setting.

1 http://cogcomp.cs.illinois.edu/page/software_view/Illinois-SL
2. IL Dataset: This is a collection of arithmetic problems released by (Roy et al., 2015). Each of these problems can be solved by performing one operation. However, there are multiple problems having the same template. To counter this, we perform a few modifications to the dataset. First, for each problem, we replace the numbers and nouns with the part of speech tags, and then we cluster the problems based on unigrams and bigrams from this modified problem text. In particular, we cluster problems together whose unigram-bigram similarity is over 90%. We next prune each cluster to keep at most 5 problems in each cluster. Finally, we create the folds ensuring all problems in a cluster are assigned to the same fold, and each fold has a similar distribution of all operations. We have a final set of 562 problems, and we use a 5-fold cross validation to evaluate on this dataset (a sketch of this clustering step appears after this list).
3. Commoncore Dataset: In order to test our system's ability to handle multi-step problems, we create a new dataset of multi-step arithmetic problems. The problems were extracted from www.commoncoresheets.com. In total, there were 600 problems, 100 for each of the following types:
(a) Addition followed by Subtraction
(b) Subtraction followed by Addition
(c) Addition and Multiplication
(d) Addition and Division
(e) Subtraction and Multiplication
(f) Subtraction and Division
This dataset had no irrelevant quantities. Therefore, we did not use the relevance classifier in our evaluations.
In order to test our system's ability to generalize across problem types, we perform a 6-fold cross validation, with each fold containing all the problems from one of the aforementioned categories. This is a more challenging setting relative to the individual data sets mentioned above, since we are evaluating on multi-step problems, without ever looking at problems which require the same set of operations.
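The de-duplication of the IL dataset described above (POS-abstracted problems, unigram-bigram similarity over 90%, clusters pruned to at most 5 problems) can be sketched as follows. The greedy clustering strategy is an assumption, since the paper does not specify the exact clustering algorithm.

```python
from typing import List, Set

def ngram_profile(pos_abstracted_tokens: List[str]) -> Set[str]:
    """Unigrams and bigrams of a problem whose numbers and nouns were replaced by POS tags."""
    unigrams = set(pos_abstracted_tokens)
    bigrams = {a + " " + b for a, b in zip(pos_abstracted_tokens, pos_abstracted_tokens[1:])}
    return unigrams | bigrams

def similarity(a: Set[str], b: Set[str]) -> float:
    return len(a & b) / max(1, len(a | b))

def cluster_problems(problems: List[List[str]], threshold: float = 0.9, cap: int = 5) -> List[List[int]]:
    """Greedily assign each problem to the first cluster whose representative is at least
    `threshold`-similar, then prune every cluster to at most `cap` problems."""
    profiles = [ngram_profile(p) for p in problems]
    clusters: List[List[int]] = []
    for i, profile in enumerate(profiles):
        for cluster in clusters:
            if similarity(profile, profiles[cluster[0]]) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return [cluster[:cap] for cluster in clusters]
```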
# 5.2 Relevance Classifier
Table 2 evaluates the performance of the relevance classifier on the AI2 and IL datasets. We report two accuracy values: Relax - fraction of quantities which the classifier got correct, and Strict - fraction of math problems for which all quantities were correctly classified. We report accuracy using all features and then removing each feature group, one at a time.
| | AI2 Relax | AI2 Strict | IL Relax | IL Strict | CC Relax | CC Strict |
|---|---|---|---|---|---|---|
| All features | 88.7 | 85.1 | 75.7 | 75.7 | 60.0 | 25.8 |
| No Individual Quantity features | 73.6 | 67.6 | 52.0 | 52.0 | 29.2 | 0.0 |
| No Quantity Pair features | 83.2 | 79.8 | 63.6 | 63.6 | 49.3 | 16.5 |
| No Question features | 86.8 | 83.9 | 73.3 | 73.3 | 60.5 | 28.3 |

Table 1: Performance of LCA Operation classifier on the datasets AI2, IL and CC.
| | AI2 Relax | AI2 Strict | IL Relax | IL Strict |
|---|---|---|---|---|
| All features | 94.7 | 89.1 | 95.4 | 93.2 |
| No Unit features | 88.9 | 71.5 | 92.8 | 91.0 |
| No NP features | 94.9 | 89.6 | 95.0 | 91.2 |
| No Misc. features | 92.0 | 85.9 | 93.7 | 89.8 |

Table 2: Performance of Relevance classifier on the datasets AI2 and IL.
| | AI2 | IL | CC |
|---|---|---|---|
| All constraints | 72.0 | 73.9 | 45.2 |
| Positive constraint | 78.0 | 72.5 | 36.5 |
| Integral constraint | 71.8 | 73.4 | 39.0 |
| No constraint | 77.7 | 71.9 | 29.6 |
| (Hosseini et al., 2014) | 77.7 | - | - |
| (Roy et al., 2015) | - | 52.7 | - |
| (Kushman et al., 2014) | 64.0 | 73.7 | 2.3 |

Table 3: Accuracy in correctly solving arithmetic problems. First four rows represent various configurations of our system. We achieve state of the art results in both AI2 and IL datasets.

We see that features related to units of quantities play the most significant role in determining relevance of quantities. Also, the related NP features are not helpful for the AI2 dataset.
# 5.3 LCA Operation Classifier
Table 1 evaluates the performance of the LCA Operation classifier on the AI2, IL and CC datasets. As before, we report two accuracies: Relax - fraction of quantity pairs for which the classifier correctly predicted the LCA operation, and Strict - fraction of math problems for which all quantity pairs were correctly classified. We report accuracy using all features and then removing each feature group, one at a time.
The strict and relaxed accuracies for the IL dataset are identical, since each problem in the IL dataset only requires one operation. The features related to individual quantities are most significant; in particular, the accuracy goes to 0.0 in the CC dataset without using individual quantity features. The question features are not helpful for classification in the CC dataset. This can be attributed to the fact that all problems in the CC dataset require multiple operations, and questions in multi-step problems usually do not contain information for each of the required operations.
# 5.4 Global Inference Module

The previously known best result on the AI2 dataset is reported in (Hosseini et al., 2014). Since we follow the exact same evaluation settings, our results are directly comparable. We achieve state of the art results, without having access to any additional annotated data, unlike (Hosseini et al., 2014), who use labeled data for verb categorization. For the IL dataset, we acquired the system of (Roy et al., 2015) from the authors, and ran it with the same fold information. We outperform their system by an absolute gain of over 20%. We believe that the improvement was mainly due to the dependence of the system of (Roy et al., 2015) on lexical and quantity-neighborhood features. In contrast, features from quantity schemas help us generalize across problem types. Finally, we also compare against the template based system of (Kushman et al., 2014). (Hosseini et al., 2014) mentions the result of running the system of (Kushman et al., 2014) on the AI2 dataset, and we report their result here. For the IL and CC datasets, we used the system released by (Kushman et al., 2014).
The integral constraint is particularly helpful when division is involved, since it can lead to fractional answers. It does not help in the case of the AI2 dataset, which involves only addition and subtraction problems. The role of the constraints becomes more significant in the case of multi-step problems and, in particular, they contribute an absolute improvement of over 15% over the system without constraints on the CC dataset. The template based system of (Kushman et al., 2014) performs on par with our system on the IL dataset. We believe that this is due to the small number of equation templates in the IL dataset. It performs poorly on the CC dataset, since we evaluate on unseen problem types, which do not ensure that equation templates in the test data will be seen in the training data.
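A minimal sketch of how the positive and integral constraints can act as filters when choosing among candidate expressions is given below; candidate generation and scoring are outside this snippet, and the function names and the way integrality requirements are decided are assumptions for illustration.

```python
from typing import Iterable, Optional, Tuple

def satisfies_constraints(value: float, integral_required: bool,
                          use_positive: bool = True, use_integral: bool = True) -> bool:
    """Positive constraint: the answer must be positive.
    Integral constraint: if an integral answer is required, reject fractional values."""
    if use_positive and value <= 0:
        return False
    if use_integral and integral_required and abs(value - round(value)) > 1e-9:
        return False
    return True

def best_expression(candidates: Iterable[Tuple[str, float, float]],
                    integral_required: bool) -> Optional[str]:
    """Return the highest-scoring (expression, value, score) candidate that survives the filters."""
    best_expr, best_score = None, float("-inf")
    for expr, value, score in candidates:
        if satisfies_constraints(value, integral_required) and score > best_score:
            best_expr, best_score = expr, score
    return best_expr
```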
# 5.5 Discussion
The leading sources of errors for the classifiers are erroneous quantity schema extraction and a lack of understanding of unknown or rare verbs. For the relevance classifier on the AI2 dataset, 25% of the errors were due to mistakes in extracting the quantity schemas and 20% could be attributed to rare verbs. For the LCA operation classifier on the same dataset, 16% of the errors were due to unknown verbs and 15% were due to mistakes in extracting the schemas. The erroneous extraction of accurate quantity schemas is very significant for the IL dataset, contributing 57% of the errors for the relevance classifier and 39% of the errors for the LCA operation classifier. For the operation classifier on the CC dataset, 8% of the errors were due to verbs and 16% were due to faulty quantity schema extraction. Quantity schema extraction is challenging due to parsing issues as well as some non-standard rate patterns, and it will be one of the targets of future work. For example, in the sentence "How many 4-dollar toys can he buy?", we fail to extract the rate component of the quantity 4.
# 6 Conclusion
This paper presents a novel method for understanding and solving a general class of arithmetic word problems. Our approach can solve all problems whose solution can be expressed by a read-once arithmetic expression, where each quantity from the problem text appears at most once in the expression. We develop a novel theoretical framework, centered around the notion of monotone expression trees, and show how this representation can be used to obtain a unique decomposition of the problem. This theory naturally leads to a computational solution that we have shown to uniquely determine the solution: it suffices to determine the arithmetic operation between any two quantities identified in the text. This theory underlies our algorithmic solution: we develop classifiers and a constrained inference approach that exploits redundancy in the information, and show that this yields strong performance on several benchmark collections. In particular, our approach achieves state of the art performance on two publicly available arithmetic problem datasets and can support natural generalizations. Specifically, our approach performs competitively on multi-step problems, even when it has never observed the particular problem type before.

Although we develop and use the notion of expression trees in the context of numerical expressions, the concept is more general. In particular, if we allow leaves of expression trees to represent variables, we can express algebraic expressions and equations in this framework. Hence a similar approach can be targeted towards algebra word problems, a direction we wish to investigate in the future.
The datasets used in the paper are available for download at http://cogcomp.cs.illinois.edu/page/resource_view/98.
# Acknowledgments
This research was sponsored by DARPA (under agreement number FA8750-13-2-0008), and a grant from AI2. Any opinions, findings, conclusions or recommendations are those of the authors and do not necessarily reflect the view of the agencies.
# References
[Barzilay and Lapata2006] R. Barzilay and M. Lapata. 2006. Aggregation via Set Partitioning for Natural Language Generation. In Human Language Technologies - North American Chapter of the Association for Computational Linguistics, June.

[Berant et al.2014] J. Berant, V. Srikumar, P. Chen, A. V. Linden, B. Harding, B. Huang, P. Clark, and C. D. Manning. 2014. Modeling biological processes for reading comprehension. In Proceedings of EMNLP.
[Chang et al.2012] M. Chang, L. Ratinov, and D. Roth. 2012. Structured learning with constrained conditional models. Machine Learning, 88(3):399-431, 6.
[Clark2015] P. Clark. 2015. Elementary School Science and Math Tests as a Driver for AI: Take the Aristo Challenge! In Proceedings of IAAI.
[Clarke and Lapata2006] J. Clarke and M. Lapata. 2006. Constraint-based sentence compression: An integer programming approach. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 144-151, Sydney, Australia, July. ACL.

[Goldberg and Elhadad2010] Y. Goldberg and M. Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 742-750, Los Angeles, California, June.
[Hosseini et al.2014] M. J. Hosseini, H. Hajishirzi, O. Etzioni, and N. Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, pages 523-533.
[Kushman et al.2014] N. Kushman, L. Zettlemoyer, R. Barzilay, and Y. Artzi. 2014. Learning to automatically solve algebra word problems. In ACL, pages 271-281.

[Punyakanok et al.2005] V. Punyakanok, D. Roth, and W. Yih. 2005. The necessity of syntactic parsing for semantic role labeling. In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI), pages 1117-1123.
[Punyakanok et al.2008] V. Punyakanok, D. Roth, and W. Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2).
[Roth and Yih2004] D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Hwee Tou Ng and Ellen Riloff, editors, Proc. of the Conference on Computational Natural Language Learning (CoNLL), pages 1-8. Association for Computational Linguistics.

[Roth and Yih2005] D. Roth and W. Yih. 2005. Integer linear programming inference for conditional random fields. In Proc. of the International Conference on Machine Learning (ICML), pages 737-744.
[Roy et al.2015] S. Roy, T. Vieira, and D. Roth. 2015. Reasoning about quantities in natural language. Transactions of the Association for Computational Linguistics, 3.

[Sadeghi et al.2015] F. Sadeghi, S. K. Divvala, and A. Farhadi. 2015. VisKE: Visual knowledge extraction and question answering by visual verification of relation phrases. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June.
[Seo et al.2014] M. J. Seo, H. Hajishirzi, A. Farhadi, and O. Etzioni. 2014. Diagram understanding in geometry questions. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27-31, 2014, Québec City, Québec, Canada, pages 2831-2838.

# An Actor-Critic Algorithm for Sequence Prediction

# Aaron Courville∗ Université de Montréal
Yoshua Bengio† Université de Montréal

∗CIFAR Senior Fellow †CIFAR Fellow
# ABSTRACT
We present an approach to training neural networks to generate sequences using actor-critic methods from reinforcement learning (RL). Current log-likelihood training methods are limited by the discrepancy between their training and testing modes, as models must generate tokens conditioned on their previous guesses rather than the ground-truth tokens. We address this problem by introducing a critic network that is trained to predict the value of an output token, given the policy of an actor network. This results in a training procedure that is much closer to the test phase, and allows us to directly optimize for a task-specific score such as BLEU. Crucially, since we leverage these techniques in the supervised learning setting rather than the traditional RL setting, we condition the critic network on the ground-truth output. We show that our method leads to improved performance on both a synthetic task, and for German-English machine translation. Our analysis paves the way for such methods to be applied in natural language generation tasks, such as machine translation, caption generation, and dialogue modelling.
# INTRODUCTION
In many important applications of machine learning, the task is to develop a system that produces a sequence of discrete tokens given an input. Recent work has shown that recurrent neural networks (RNNs) can deliver excellent performance in many such tasks when trained to predict the next output token given the input and previous tokens. This approach has been applied successfully in machine translation (Sutskever et al., 2014; Bahdanau et al., 2015), caption generation (Kiros et al., 2014; Donahue et al., 2015; Vinyals et al., 2015; Xu et al., 2015; Karpathy & Fei-Fei, 2015), and speech recognition (Chorowski et al., 2015; Chan et al., 2015).
The standard way to train RNNs to generate sequences is to maximize the log-likelihood of the "correct" token given a history of the previous "correct" ones, an approach often called teacher forcing. At evaluation time, the output sequence is often produced by an approximate search for the most likely candidate according to the learned distribution. During this search, the model is conditioned on its own guesses, which may be incorrect and thus lead to a compounding of errors (Bengio et al., 2015). This can become especially problematic for longer sequences. Due to this discrepancy between training and testing conditions, it has been shown that maximum likelihood training can be suboptimal (Bengio et al., 2015; Ranzato et al., 2015). In these works, the authors argue that the network should be trained to continue generating correctly given the outputs already produced by the model, rather than the ground-truth reference outputs from the data. This gives rise to the challenging problem of determining the target for the next network output. Bengio et al. (2015) use the token k from the ground-truth answer as the target for the network at step k, whereas Ranzato et al. (2015) rely on the REINFORCE algorithm (Williams, 1992) to decide whether or not the tokens
from a sampled prediction lead to a high task-specific score, such as BLEU (Papineni et al., 2002) or ROUGE (Lin & Hovy, 2003).
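For reference, a generic REINFORCE-style loss for a sampled output sequence looks roughly as follows; this is a single-sample sketch of the general technique with an optional baseline, not Ranzato et al.'s exact procedure.

```python
import torch

def reinforce_loss(log_probs: torch.Tensor, reward: float, baseline: float = 0.0) -> torch.Tensor:
    """log_probs: shape (T,), log-probabilities of the sampled tokens under the model.
    Minimizing this follows the REINFORCE gradient -(R - b) * d/dtheta sum_t log p(y_t | ...)."""
    return -(reward - baseline) * log_probs.sum()
```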
In this work, we propose and study an alternative procedure for training sequence prediction networks that aims to directly improve their test time metrics (which are typically not the log-likelihood). In particular, we train an additional network called the critic to output the value of each token, which we define as the expected task-specific score that the network will receive if it outputs the token and continues to sample outputs according to its probability distribution. Furthermore, we show how the predicted values can be used to train the main sequence prediction network, which we refer to as the actor. The theoretical foundation of our method is that, under the assumption that the critic computes exact values, the expression that we use to train the actor is an unbiased estimate of the gradient of the expected task-specific score.
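A minimal sketch of how critic-predicted token values could drive the actor update under this view is given below; the tensor shapes and the exact form of the objective are assumptions for illustration, not the paper's precise training expression.

```python
import torch

def actor_loss(token_probs: torch.Tensor, critic_values: torch.Tensor) -> torch.Tensor:
    """token_probs:   (T, V) actor probabilities over the alphabet at each step.
    critic_values: (T, V) critic estimates of the score eventually obtained when emitting
                   each candidate token and then continuing to sample from the actor.
    The gradient of this loss moves probability mass toward tokens with high predicted value."""
    return -(token_probs * critic_values.detach()).sum()
```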
Our approach draws inspiration and borrows the terminology from the field of reinforcement learning (RL) (Sutton & Barto, 1998), in particular from the actor-critic approach (Sutton, 1984; Sutton et al., 1999; Barto et al., 1983). RL studies the problem of acting efficiently based only on weak supervision in the form of a reward given for some of the agent's actions. In our case, the reward is analogous to the task-specific score associated with a prediction. However, the tasks we consider are those of supervised learning, and we make use of this crucial difference by allowing the critic to use the ground-truth answer as an input. In other words, the critic has access to a sequence of expert actions that are known to lead to high (or even optimal) returns. To train the critic, we adapt the temporal difference methods from the RL literature (Sutton, 1988) to our setup. While RL methods with non-linear function approximators are not new (Tesauro, 1994; Miller et al., 1995), they have recently surged in popularity, giving rise to the field of "deep RL" (Mnih et al., 2015). We show that some of the techniques recently developed in deep RL, such as having a target network, may also be beneficial for sequence prediction.
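As an illustration of the temporal-difference idea with a target network, a one-step value target might be formed as follows; the exact targets used in the paper may differ, and all names here are assumptions.

```python
import torch

def td_target(score_increment: torch.Tensor, next_probs: torch.Tensor,
              next_values_target: torch.Tensor) -> torch.Tensor:
    """One-step target for the value of the token chosen at step t: the immediate task-score
    increment plus the expected value of the next step under the actor, where the next-step
    values come from a slowly updated target critic."""
    return score_increment + (next_probs * next_values_target).sum(dim=-1)

def soft_update(target_params, params, tau: float = 0.001) -> None:
    """Move the target critic slowly toward the trained critic, a common deep-RL stabilizer."""
    with torch.no_grad():
        for pt, p in zip(target_params, params):
            pt.mul_(1.0 - tau).add_(tau * p)
```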
The contributions of the paper can be summarized as follows: 1) we describe how RL methodology like the actor-critic approach can be applied to supervised learning problems with structured outputs; and 2) we investigate the performance and behavior of the new method on both a synthetic task and a real-world task of machine translation, demonstrating the improvements over maximum-likelihood and REINFORCE brought by the actor-critic training.
# 2 BACKGROUND
We consider the problem of learning to produce an output sequence Y = (y1, . . . , yT ), yt ∈ A, given an input X, where A is the alphabet of output tokens. We will often use the notation Yf...l to refer to subsequences of the form (yf , . . . , yl). Two sets of input-output pairs (X, Y ) are assumed to be available, for both training and testing. The trained predictor h is evaluated by computing the average task-specific score R(Ŷ , Y ) on the test set, where Ŷ = h(X) is the prediction. To simplify the formulas we always use T to denote the length of an output sequence, ignoring the fact that the output sequences may have different lengths.
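The evaluation protocol is simply the average task-specific score over the test pairs; a small sketch, with the scoring function left as a placeholder:

```python
from typing import Callable, Iterable, Sequence, Tuple

def average_score(predict: Callable[[object], Sequence[str]],
                  test_pairs: Iterable[Tuple[object, Sequence[str]]],
                  score: Callable[[Sequence[str], Sequence[str]], float]) -> float:
    """Average task-specific score R(Y_hat, Y) of a predictor h over a test set."""
    total, count = 0.0, 0
    for x, y in test_pairs:
        total += score(predict(x), y)
        count += 1
    return total / max(1, count)
```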
Recurrent neural networks. A recurrent neural network (RNN) produces a sequence of state vectors (s1, . . . , sT ) given a sequence of input vectors (e1, . . . , eT ) by starting from an initial state s0 and applying T times the transition function f : st = f (st−1, et). Popular choices for the mapping f are the Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) and the Gated Recurrent Units (Cho et al., 2014), the latter of which we use for our models.
To build a probabilistic model for sequence generation with an RNN, one adds a stochastic output layer g (typically a softmax for discrete outputs) that generates outputs yt ∈ A and can feed these outputs back by replacing them with their embedding e(yt):
yt ∼ g(st−1) (1),  st = f(st−1, e(yt)) (2). | 1607.07086#7 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 8 | yt ∼ g(st−1) (1),  st = f(st−1, e(yt)) (2).
Thus, the RNN defines a probability distribution p(yt|y1, . . . , yt−1) of the next output token yt given the previous tokens (y1, . . . , yt−1). Upon adding a special end-of-sequence token ∅ to the alphabet A, the RNN can define the distribution p(Y ) over all possible sequences as p(Y ) = p(y1)p(y2|y1) . . . p(yT |y1, . . . , yT −1)p(∅|y1, . . . , yT ).
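To make this factorization concrete, here is a minimal, self-contained numpy sketch (not the paper's code): a toy transition f and softmax output layer g with random placeholder weights, used to sample a sequence token by token while accumulating log p(Y ). The vocabulary, dimensions and weights are illustrative assumptions only.

```python
# Minimal sketch of autoregressive sampling: y_t ~ g(s_{t-1}), s_t = f(s_{t-1}, e(y_t)).
# All weights are random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["a", "b", "c", "<eos>"]            # alphabet A plus the end-of-sequence token
hidden, V = 8, len(vocab)
W_h = rng.normal(size=(hidden, hidden))
W_e = rng.normal(size=(hidden, V))
W_o = rng.normal(size=(V, hidden))
E = np.eye(V)                               # one-hot embeddings e(y_t)

def f(s_prev, emb):                         # state transition s_t = f(s_{t-1}, e(y_t))
    return np.tanh(W_h @ s_prev + W_e @ emb)

def g(s_prev):                              # stochastic output layer: softmax over A
    logits = W_o @ s_prev
    p = np.exp(logits - logits.max())
    return p / p.sum()

s, tokens, log_p = np.zeros(hidden), [], 0.0
for _ in range(20):                         # sample a token, then update the state
    p = g(s)
    y = rng.choice(V, p=p)
    log_p += np.log(p[y])
    if vocab[y] == "<eos>":
        break
    tokens.append(vocab[y])
    s = f(s, E[y])

print("sampled Y:", tokens, " log p(Y):", round(float(log_p), 3))
```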
| 1607.07086#8 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 9 |
RNNs for sequence prediction To use RNNs for sequence prediction, they must be augmented to generate Y conditioned on an input X. The simplest way to do this is to start with an initial state s0 = s0(X) (Sutskever et al., 2014; Cho et al., 2014). Alternatively, one can encode X as a variable-length sequence of vectors (h1, . . . , hL) and condition the RNN on this sequence using an attention mechanism. In our models, the sequence of vectors is produced by either a bidirectional RNN (Schuster & Paliwal, 1997) or a convolutional encoder (Rush et al., 2015).
We use a soft attention mechanism (Bahdanau et al., 2015) that computes a weighted sum of a sequence of vectors. The attention weights determine the relative importance of each vector. More formally, we consider the following equations for RNNs with attention:
yt ∼ g(st−1, ct−1) (3)
st = f(st−1, ct−1, e(yt)) (4)
αt = β(st, (h1, . . . , hL)) (5) | 1607.07086#9 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 10 | yt ∼ g(st−1, ct−1) (3)
st = f(st−1, ct−1, e(yt)) (4)
αt = β(st, (h1, . . . , hL)) (5)
ct = Σ_{j=1}^{L} αt,j hj (6)
where β is the attention mechanism that produces the attention weights αt and ct is the context vector, or âglimpseâ, for time step t. The attention weights are computed by an MLP that takes as input the current RNN state and each individual vector to focus on. The weights are typically (as in our work) constrained to be positive and sum to 1 by using the softmax function.
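As an illustration of equations (5)-(6), the following numpy sketch computes attention weights with an MLP scorer followed by a softmax and forms the context vector as the weighted sum of the encoder vectors. The dimensions and random weight matrices are placeholders, not the trained model from the paper.

```python
# Minimal soft-attention sketch: scores from an MLP, weights from a softmax,
# context as the weighted sum of encoder vectors. Weights are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
L, enc_dim, dec_dim, att_dim = 5, 6, 4, 3
H = rng.normal(size=(L, enc_dim))           # encoder vectors h_1 ... h_L
s_t = rng.normal(size=dec_dim)              # current decoder state
W_s = rng.normal(size=(att_dim, dec_dim))
W_h = rng.normal(size=(att_dim, enc_dim))
v = rng.normal(size=att_dim)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([v @ np.tanh(W_s @ s_t + W_h @ h_j) for h_j in H])  # beta(s_t, h_j)
alpha_t = softmax(scores)                   # positive weights that sum to 1
c_t = alpha_t @ H                           # context "glimpse": weighted sum of h_j

print("attention weights:", np.round(alpha_t, 3))
print("context vector:", np.round(c_t, 3))
```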
A conditioned RNN can be trained for sequence prediction by gradient ascent on the log-likelihood log p(Y |X) for the input-output pairs (X, Y ) from the training set. To produce a prediction Ŷ for a test input sequence X, an approximate beam search for the maximum of p(·|X) is usually conducted. During this search the probabilities p(·|ŷ1, . . . , ŷt−1) are considered, where the previous tokens ŷ1, . . . , ŷt−1 comprise a candidate beginning of the prediction Ŷ . | 1607.07086#10 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 11 | Value functions We view the conditioned RNN as a stochastic policy that generates actions and receives the task score (e.g., BLEU score) as the return. We furthermore consider the case when the return R is partially received at the intermediate steps in the form of rewards rt: R(Ŷ , Y ) = Σ_{t=1}^{T} rt(ŷt; Ŷ1...t−1, Y ). This is more general than the case of receiving the full return at the end of the sequence, as we can simply define all rewards other than rT to be zero. Receiving intermediate rewards may ease the learning for the critic, and we use reward shaping as explained in Section 3. Given the policy, possible actions and reward function, the value represents the expected future return as a function of the current state of the system, which in our case is uniquely defined by the sequence of actions taken so far, Ŷ1...t. We define the value of an unfinished prediction Ŷ1...t as follows:
V (Ŷ1...t; X, Y ) = E_{Ŷt+1...T ∼ p(·|Ŷ1...t, X)} [ Σ_{τ=t+1}^{T} rτ(ŷτ; Ŷ1...τ−1, Y ) ]
We define the value of a candidate next token a for an unfinished prediction Ŷ1...t−1 as the expected future return after generating token a: | 1607.07086#11 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 12 | Q(a; Ŷ1...t−1, X, Y ) = E_{Ŷt+1...T ∼ p(·|Ŷ1...t−1 a, X)} [ rt(a; Ŷ1...t−1, Y ) + Σ_{τ=t+1}^{T} rτ(ŷτ; Ŷ1...t−1 a Ŷt+1...τ−1, Y ) ]
We will refer to the candidate next tokens as actions. For notational simplicity, we henceforth drop X and Y from the signature of p, V , Q, R and rt, assuming it is clear from the context which of X and Y is meant. We will also use V without arguments for the expected reward of a random prediction.
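A toy Monte-Carlo illustration of these definitions (not the paper's code): a hand-coded uniform policy over a three-token alphabet, a per-step reward of 1 for matching the ground truth, and sampled continuations used to estimate V and Q. All task details here are invented for illustration.

```python
# Toy Monte-Carlo estimates of V(prefix) and Q(a; prefix) under a uniform policy.
import numpy as np

rng = np.random.default_rng(2)
A = [0, 1, 2]                  # alphabet
T = 4                          # fixed output length for simplicity
Y = [0, 1, 2, 1]               # ground truth

def reward(t, y_t):            # r_t: 1 if the t-th token matches the ground truth
    return 1.0 if y_t == Y[t] else 0.0

def rollout(prefix):           # sample a continuation of the prefix, return its summed reward
    ret = 0.0
    for t in range(len(prefix), T):
        y_t = rng.choice(A)
        ret += reward(t, y_t)
    return ret

def V(prefix, n=2000):         # expected future return after the prefix
    return np.mean([rollout(prefix) for _ in range(n)])

def Q(a, prefix, n=2000):      # expected future return after appending action a
    return reward(len(prefix), a) + np.mean([rollout(prefix + [a]) for _ in range(n)])

prefix = [0, 1]
print("V(prefix) ~", round(V(prefix), 2))              # about 2/3: two remaining steps, 1/3 each
print({a: round(Q(a, prefix), 2) for a in A})          # the correct next token (2) has the highest Q
```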
Algorithm 1 Actor-Critic Training for Sequence Prediction
Require: A critic Q̂(a; Ŷ1...t, Y ) and an actor p(a|Ŷ1...t, X) with weights φ and θ respectively.
1: Initialize delayed actor p′ and target critic Q̂′ with the same weights: θ′ = θ, φ′ = φ.
2: while Not Converged do
3: Receive a random example (X, Y ).
4: Generate a sequence of actions Ŷ from the delayed actor p′.
5: Compute targets for the critic | 1607.07086#12 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 13 | 3: Receive a random example (X, Y ). 4: Generate a sequence of actions Ŷ from the delayed actor p′. 5: Compute targets for the critic
qt = rt(ŷt; Ŷ1...t−1, Y ) + Σ_{a∈A} p(a|Ŷ1...t, X) Q̂′(a; Ŷ1...t, Y )
6: Update the critic weights φ using the gradient
d/dφ [ Σ_{t=1}^{T} ( (Q̂(ŷt; Ŷ1...t−1, Y ) − qt)^2 + λ Ct ) ],  where Ct = Σ_a ( Q̂(a; Ŷ1...t−1, Y ) − (1/|A|) Σ_b Q̂(b; Ŷ1...t−1, Y ) )^2
7: Update actor weights θ using the following gradient estimate
Σ_{t=1}^{T} Σ_{a∈A} (dp(a|Ŷ1...t−1, X)/dθ) Q̂(a; Ŷ1...t−1, Y ) + λLL Σ_{t=1}^{T} d log p(yt|Y1...t−1, X)/dθ
8: Update delayed actor and target critic, with constants γθ ≪ 1, γφ ≪ 1: θ′ = γθ θ + (1 − γθ) θ′, φ′ = γφ φ + (1 − γφ) φ′
# 9: end while
Algorithm 2 Complete Actor-Critic Algorithm for Sequence Prediction 1: Initialize critic Q̂(a; Ŷ1...t, Y ) and actor p(a|Ŷ1...t, X) with random weights φ and θ respectively. | 1607.07086#13 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 14 | 2: Pre-train the actor to predict yt+1 given Y1...t by maximizing log p(yt+1|Y1...t, X). 3: Pre-train the critic to estimate Q by running Algorithm 1 with fixed actor. 4: Run Algorithm 1.
# 3 ACTOR-CRITIC FOR SEQUENCE PREDICTION
Let θ be the parameters of the conditioned RNN, which we will also refer to as the actor. Our training algorithm is based on the following way of rewriting the gradient of the expected return dV/dθ:
dV/dθ = E_{Ŷ ∼ p(Ŷ |X)} Σ_{t=1}^{T} Σ_{a∈A} (dp(a|Ŷ1...t−1, X)/dθ) Q(a; Ŷ1...t−1)   (7)
This equality is known in RL under the names policy gradient theorem (Sutton et al., 1999) and stochastic actor-critic (Sutton, 1984). 1 Note that we use the probability rather than the log probability in this formula (which is more typical in RL applications) as we are summing over actions rather than taking an expectation. Intuitively, this equality corresponds to increasing the probability of actions that give high values, and decreasing the probability of actions that give low values. Since this gradient expression is an expectation, it is trivial to build an unbiased estimate for it:
| 1607.07086#14 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 15 |
dV/dθ ≈ (1/M) Σ_{k=1}^{M} Σ_{t=1}^{T} Σ_{a∈A} (dp(a|Ŷ^k_{1...t−1}, X)/dθ) Q(a; Ŷ^k_{1...t−1})   (8)
where Ŷ^k are M random samples from p(Ŷ ). By replacing Q with a parametric estimate Q̂ one can obtain a biased estimate with relatively low variance. The parametric estimate Q̂ is called the critic. The above formula is similar in spirit to the REINFORCE learning rule that Ranzato et al. (2015) use in the same context:
dV/dθ ≈ Σ_{t=1}^{T} (d log p(ŷt|Ŷ1...t−1, X)/dθ) ( Σ_{τ=t}^{T} rτ(ŷτ; Ŷ1...τ−1) − bt(X) )   (9)
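To make the contrast concrete, the following toy numpy check looks at a single decoding step of the actor-critic term in Equation (8) for a softmax policy: summing Q̂(a) dp(a)/dz over all actions has a simple closed form, which is verified against finite differences. The logits and critic values are made-up numbers.

```python
# Single-step sketch of the actor-critic gradient term for a softmax policy p = softmax(z):
# sum_a Q_hat(a) dp(a)/dz_i = p(i) * (Q_hat(i) - sum_b p(b) Q_hat(b)), critic held fixed.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([0.2, -0.5, 1.0, 0.0])          # actor logits for one decoding step (made up)
Q_hat = np.array([0.1, 0.0, 0.9, 0.4])       # critic estimates for each candidate token (made up)
p = softmax(z)

grad_z = p * (Q_hat - p @ Q_hat)             # closed-form gradient of sum_a p(a) Q_hat(a)

# finite-difference check of the same quantity
eps, fd = 1e-6, np.zeros_like(z)
for i in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[i] += eps
    zm[i] -= eps
    fd[i] = (softmax(zp) @ Q_hat - softmax(zm) @ Q_hat) / (2 * eps)

print(np.allclose(grad_z, fd, atol=1e-6))    # True: tokens with high critic values get pushed up
```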
| 1607.07086#15 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 16 | dV/dθ ≈ Σ_{t=1}^{T} (d log p(ŷt|Ŷ1...t−1, X)/dθ) ( Σ_{τ=t}^{T} rτ(ŷτ; Ŷ1...τ−1) − bt(X) )   (9)
where the scalar bt(X) is called baseline or control variate. The difference is that in REINFORCE the inner sum over all actions is replaced by its 1-sample estimate, namely (d log p(ŷt|Ŷ1...t−1, X)/dθ) Q(ŷt; Ŷ1...t−1), where the log probability derivative d log p(ŷt|·)/dθ = (1/p(ŷt|·)) dp(ŷt|·)/dθ is introduced to correct for the sampling of ŷt. Furthermore, instead of the value Q(ŷt; Ŷ1...t−1), REINFORCE uses the cumulative reward Σ_{τ=t}^{T} rτ(ŷτ; Ŷ1...τ−1) following the action ŷt, which again can be seen as a 1-sample estimate of Q. Due to these simplifications and the potential high variance in the cumulative reward, the REINFORCE gradient estimator has very high variance. In order to improve upon it, we consider the actor-critic estimate from Equation (8), which has a lower variance at the cost of significant bias, since the critic is not perfect and trained simultaneously with the actor. The success depends on our ability to control the bias by designing the critic network and using an appropriate training criterion for it. | 1607.07086#16 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 17 | To implement the critic, we propose to use a separate RNN parameterized by φ. The critic RNN is run in parallel with the actor, consumes the tokens ŷt that the actor outputs and produces the estimates Q̂(a; Ŷ1...t) for all a ∈ A. A key difference between the critic and the actor is that the correct answer Y is given to the critic as an input, similarly to how the actor is conditioned on X. Indeed, the return R(Ŷ , Y ) is a deterministic function of Y , and we argue that using Y to compute Q̂ should be of great help. We can do this because the values are only required during training and we do not use the critic at test time. We also experimented with providing the actor states st as additional inputs to the critic. See Figure 1 for a visual representation of our actor-critic architecture. | 1607.07086#17 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 18 | Temporal-difference learning A crucial component of our approach is policy evaluation, that is the training of the critic to produce useful estimates of Q. With a naive Monte-Carlo method, one could use the future return Σ_{τ=t}^{T} rτ(ŷτ; Ŷ1...τ−1) as a target for Q̂(ŷt; Ŷ1...t−1), and use the critic parameters φ to minimize the square error between these two values. However, like with REINFORCE, using such a target yields very high variance which quickly grows with the number of steps T. We use a temporal difference (TD) method for policy evaluation (Sutton, 1988). Namely, we use the right-hand side qt = rt(ŷt; Ŷ1...t−1) + Σ_{a∈A} p(a|Ŷ1...t) Q̂(a; Ŷ1...t) of the Bellman equation as the target for the left-hand side Q̂(ŷt; Ŷ1...t−1).
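A minimal numpy sketch of these TD targets (not the paper's code): for each step, the shaped reward plus the expectation of the critic's values under the actor's next-token distribution. The rewards, actor probabilities and critic outputs below are made-up arrays, and the last step is assumed to have no continuation.

```python
# Sketch of TD targets q_t = r_t + sum_a p(a | Y_hat_{1..t}, X) * Q(a; Y_hat_{1..t}, Y)
# for a sampled sequence of T=3 steps over a 4-token alphabet. All arrays are made up.
import numpy as np

T, V = 3, 4
rng = np.random.default_rng(3)
rewards = np.array([0.2, 0.0, 0.5])                     # shaped rewards r_t for the sampled tokens
actor_probs = rng.dirichlet(np.ones(V), size=T)         # p(a | Y_hat_{1..t}, X), rows sum to 1
Q_values = rng.normal(size=(T, V))                      # critic outputs Q(a; Y_hat_{1..t}, Y)

# expected value of the next step under the actor, bootstrapped from the critic;
# the final step is assumed to have no continuation, so only its reward remains
bootstrap = (actor_probs * Q_values).sum(axis=1)
bootstrap[-1] = 0.0
td_targets = rewards + bootstrap

print(np.round(td_targets, 3))
```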
1We also provide a simple self-contained proof of Equation (7) in Supplementary Material.
[Figure 1 diagram: the actor and critic encoder-decoder networks; the critic's decoder consumes the actor's predictions ŷ1, . . . , ŷT (and optionally the actor states) and outputs the values Q1, Q2, . . . , QT .] | 1607.07086#18 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 19 | Figure 1: Both the actor and the critic are encoder-decoder networks. The actor receives an input sequence X and produces samples Ŷ which are evaluated by the critic. The critic takes in the ground-truth sequence Y as input to the encoder, and takes the input summary (calculated using an attention mechanism) and the actor's prediction ŷt as input at time step t of the decoder. The values Q1, Q2, · · · , QT computed by the critic are used to approximate the gradient of the expected returns with respect to the parameters of the actor. This gradient is used to train the actor to optimize these expected task-specific returns (e.g., BLEU score). The critic may also receive the hidden state activations of the actor as input. | 1607.07086#19 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 20 | Applying deep RL techniques It has been shown in the RL literature that if Q is non-linear (like in our case), the TD policy evaluation might diverge (Tsitsiklis & Van Roy, 1997). Previous work has shown that this problem can be alleviated by using an additional target network Q̂′ to compute qt, which is updated less often and/or more slowly than Q̂. Similarly to (Lillicrap et al., 2015), we update the parameters φ′ of the target critic by linearly interpolating them with the parameters of the trained one. Attempts to remove the target network by propagating the gradient through qt resulted in a lower square error (Q̂(ŷt; Ŷ1...t−1) − qt)^2, but the resulting Q values proved very unreliable as training signals for the actor.
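A minimal sketch of this linear interpolation ("soft" update) of the target parameters; the interpolation constant and parameter shapes are illustrative assumptions, not the paper's settings.

```python
# Soft (Polyak-style) updates: target <- gamma * trained + (1 - gamma) * target.
import numpy as np

def soft_update(trained, target, gamma):
    """Interpolate each parameter of the target network towards the trained network."""
    return {k: gamma * trained[k] + (1.0 - gamma) * target[k] for k in trained}

rng = np.random.default_rng(4)
phi = {"W": rng.normal(size=(3, 3)), "b": rng.normal(size=3)}       # trained critic weights
phi_target = {k: v.copy() for k, v in phi.items()}                  # target critic weights

for _ in range(100):                  # pretend each training step changes the critic a little
    phi = {k: v + 0.01 * rng.normal(size=v.shape) for k, v in phi.items()}
    phi_target = soft_update(phi, phi_target, gamma=0.001)          # target lags slowly behind

print(float(np.abs(phi["W"] - phi_target["W"]).mean()))             # small but non-zero lag
```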
The fact that both actor and critic use outputs of each other for training creates a potentially dangerous feedback loop. To address this, we sample predictions from a delayed actor (Lillicrap et al., 2015), whose weights are slowly updated to follow the actor that is actually trained. | 1607.07086#20 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 21 | Dealing with large action spaces One of the challenges of our work is that the action space is very large (as is typically the case in NLP tasks with large vocabularies). This can be alleviated by putting constraints on the critic values for actions that are rarely sampled. We found experimentally that shrinking the values of these rare actions is necessary for the algorithm to converge. Specifically, we add a term Ct for every step t to the critic's optimization objective which drives all value predictions of the critic closer to their mean:
Ct = Σ_a ( Q̂(a; Ŷ1...t) − (1/|A|) Σ_b Q̂(b; Ŷ1...t) )^2   (10)
This corresponds to penalizing the variance of the outputs of the critic. Without this penalty the values of rare actions can be severely overestimated, which biases the gradient estimates and can cause divergence. A similar trick was used in the context of learning simple algorithms with Q-learning (Zaremba et al., 2015). | 1607.07086#21 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 22 | Reward shaping While we are ultimately interested in the maximization of the score of a complete prediction, simply awarding this score at the last step provides a very sparse training signal for the critic. For this reason we use potential-based reward shaping with potentials Φ(Ŷ1...t) = R(Ŷ1...t) for incomplete sequences and Φ(Ŷ ) = 0 for complete ones (Ng et al., 1999). Namely, for a predicted sequence Ŷ we compute score values for all prefixes to obtain the sequence of scores (R(Ŷ1...1), R(Ŷ1...2), . . . , R(Ŷ1...T )). The difference between the consecutive pairs of scores is then used as the reward at each step: rt(ŷt; Ŷ1...t−1) = R(Ŷ1...t) − R(Ŷ1...t−1). Using the shaped reward rt instead of awarding the whole score R at the last step does not change the optimal policy (Ng et al., 1999).
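A small sketch of this shaping step (not the paper's code): given made-up prefix scores R(Ŷ1...t), the per-step rewards are the consecutive differences, and they sum back to the full-sequence score.

```python
# Potential-based reward shaping: r_t = R(Y_hat_{1..t}) - R(Y_hat_{1..t-1}).
import numpy as np

def shaped_rewards(prefix_scores):
    """prefix_scores[t-1] = R(Y_hat_{1..t}); returns the per-step shaped rewards."""
    scores = np.asarray(prefix_scores, dtype=float)
    return np.diff(np.concatenate(([0.0], scores)))

prefix_scores = [0.10, 0.10, 0.45, 0.40, 0.70]   # made-up scores of the growing prediction prefix
r = shaped_rewards(prefix_scores)

print(np.round(r, 2))                                  # [ 0.1  0.   0.35 -0.05  0.3 ]
print(np.isclose(r.sum(), prefix_scores[-1]))          # True: shaped rewards sum to the full score
```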
Putting it all together Algorithm 1 describes the proposed method in detail. We consider adding the weighted log-likelihood gradient to the actor's gradient estimate. This is in line with the prior work
| 1607.07086#22 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 23 |
by (Ranzato et al., 2015) and (Shen et al., 2015). It is also motivated by our preliminary experiments that showed that using the actor-critic estimate alone can lead to an early determinization of the policy and vanishing gradients (also discussed in Section 6). Starting training with a randomly initialized actor and critic would be problematic, because neither the actor nor the critic would provide adequate training signals for one another. The actor would sample completely random predictions that receive very little reward, thus providing a very weak training signal for the critic. A random critic would be similarly useless for training the actor. Motivated by these considerations, we pre-train the actor using standard log-likelihood training. Furthermore, we pre-train the critic by feeding it samples from the pre-trained actor, while the actorâs parameters are frozen. The complete training procedure including pre-training is described by Algorithm 2.
# 4 RELATED WORK | 1607.07086#23 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 24 | # 4 RELATED WORK
In other recent RL-inspired work on sequence prediction, Ranzato et al. (2015) trained a translation model by gradually transitioning from maximum likelihood learning into optimizing BLEU or ROUGE scores using the REINFORCE algorithm. However, REINFORCE is known to have very high variance and does not exploit the availability of the ground-truth like the critic network does. The approach also relies on a curriculum learning scheme. Standard value-based RL algorithms like SARSA and OLPOMDP have also been applied to structured prediction (Maes et al., 2009). Again, these systems do not use the ground-truth for value prediction. | 1607.07086#24 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 25 | Imitation learning has also been applied to structured prediction (Vlachos, 2012). Methods of this type include the SEARN (Daumé III et al., 2009) and DAGGER (Ross et al., 2010) algorithms. These methods rely on an expert policy to provide action sequences that the policy learns to imitate. Unfortunately, it's not always easy or even possible to construct an expert policy for a task-specific score. In our approach, the critic plays a role that is similar to the expert policy, but is learned without requiring prior knowledge about the task-specific score. The recently proposed "scheduled sampling" (Bengio et al., 2015) can also be seen as imitation learning. In this method, ground-truth tokens are occasionally replaced by samples from the model itself during training. A limitation is that the token k for the ground-truth answer is used as the target at step k, which might not always be the optimal strategy. | 1607.07086#25 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 26 | There are also approaches that aim to approximate the gradient of the expected score. One such approach is "Direct Loss Minimization" (Hazan et al., 2010) in which the inference procedure is adapted to take both the model likelihood and task-specific score into account. Another popular approach is to replace the domain over which the task score expectation is defined with a small subset of it, as is done in Minimum (Bayes) Risk Training (Goel & Byrne, 2000; Shen et al., 2015; Och, 2003). This small subset is typically an n-best list or a sample (like in REINFORCE) that may or may not include the ground-truth as well. None of these methods provide intermediate targets for the actor during training, and Shen et al. (2015) report that as many as 100 samples were required for the best results.
Another recently proposed method is to optimize a global sequence cost with respect to the selection and pruning behavior of the beam search procedure itself (Wiseman & Rush, 2016). This method follows the more general strategy called "learning as search optimization" (Daumé III & Marcu, 2005). This is an interesting alternative to our approach; however, it is designed specifically for the precise inference procedure involved.
# 5 EXPERIMENTS | 1607.07086#26 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 27 | # 5 EXPERIMENTS
To validate our approach, we performed two sets of experiments.2 First, we trained the proposed model to recover strings of natural text from their corrupted versions. Specifically, we consider each character in a natural language corpus and with some probability replace it with a random character. We call this synthetic task spelling correction. A desirable property of this synthetic task is that data is essentially infinite and overfitting is no concern. Our second series of experiments is done on the task of automatic machine translation using different models and datasets.
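A small sketch of this corruption process (not the paper's data pipeline): each character is independently replaced by a random character with some probability; the alphabet and noise probability below are illustrative choices.

```python
# Synthetic "spelling correction" data: corrupt a clean string character by character.
import random
import string

def corrupt(text, p_noise=0.3, alphabet=string.ascii_lowercase + " ", seed=0):
    rng = random.Random(seed)
    return "".join(rng.choice(alphabet) if rng.random() < p_noise else ch for ch in text)

clean = "the quick brown fox"
noisy = corrupt(clean)
print(noisy)   # e.g. a string with a few characters replaced; the model maps noisy back to clean
```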
2 The source code is available at https://github.com/rizar/actor-critic-public
| 1607.07086#27 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 28 | 2 The source code is available at https://github.com/rizar/actor-critic-public
In addition to maximum likelihood and actor-critic training we implemented two versions of the REINFORCE gradient estimator. In the first version, we use a linear baseline network that takes the actor states as input, exactly as in (Ranzato et al., 2015). We also propose a novel extension of REINFORCE that leverages the extra information available in the ground-truth output Y . Specifically, we use the Q̂ estimates produced by the critic network as the baseline for the REINFORCE algorithm. The motivation behind this approach is that using the ground-truth output should produce a better baseline that lowers the variance of REINFORCE, resulting in higher task-specific scores. We refer to this method as REINFORCE-critic.
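A toy single-sequence sketch of the REINFORCE-critic idea (not the paper's implementation): the return-to-go at each step minus the critic's value estimate, used as the scaling factor for the log-likelihood gradient. All numbers below are made up.

```python
# REINFORCE with a critic baseline: scale d log p(y_t | ...) / d theta by (return-to-go - baseline).
import numpy as np

rewards   = np.array([0.0, 0.3, 0.0, 0.4])      # shaped rewards r_t along one sampled sequence
returns   = np.cumsum(rewards[::-1])[::-1]      # return-to-go: sum of rewards from step t onwards
critic_Q  = np.array([0.6, 0.5, 0.35, 0.45])    # critic estimates used as the baseline b_t
advantage = returns - critic_Q                  # the factor multiplying the log-likelihood gradient

print(np.round(returns, 2))     # [0.7 0.7 0.4 0.4]
print(np.round(advantage, 2))   # [ 0.1  0.2  0.05 -0.05]
```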
5.1 SPELLING CORRECTION | 1607.07086#28 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 29 | 5.1 SPELLING CORRECTION
We use text from the One Billion Word dataset for the spelling correction task (Chelba et al., 2013), which has pre-defined training and testing sets. The training data was abundant, and we never used any example twice. We evaluate trained models on a section of the test data that comprises 6075 sentences. To speed up experiments, we clipped all sentences to the first 10 or 30 characters.
For the spelling correction actor network, we use an RNN with 100 Gated Recurrent Units (GRU) and a bidirectional GRU network for the encoder. We use the same attention mechanism as proposed in (Bahdanau et al., 2015), which effectively makes our actor network a smaller version of the model used in that work. For the critic network, we employed a model with the same architecture as the actor.
We use character error rate (CER) to measure performance on the spelling task, which we define as the ratio between the sum of Levenshtein distances between predictions and ground-truth outputs and the total length of the ground-truth outputs. This is a corpus-level metric for which a lower value is better. We use it as the return by negating per-sentence ratios. At evaluation time, greedy search is used to extract predictions from the model. | 1607.07086#29 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
We use the ADAM optimizer (Kingma & Ba, 2015) to train all the networks with the parameters recommended in the original paper, with the exception of the scale parameter α. The latter is first set to 10^-3 and then annealed to 10^-4 for log-likelihood training. For the pre-training stage of the actor-critic, we use α = 10^-3 and decrease it to 10^-4 for the joint actor-critic training. We pretrain the actor until its score on the development set stops improving. We pretrain the critic until its TD error stabilizes3. We used M = 1 sample for both actor-critic and REINFORCE. For exact hyperparameter settings we refer the reader to Appendix A.
We start REINFORCE training from a pretrained actor, but we do not use the curriculum learning employed in MIXER. The critic is trained in the same way for both REINFORCE and actor-critic, including the pretraining stage. We report results obtained with the reward shaping described in Section 3, as we found that it slightly improves REINFORCE performance. | 1607.07086#30 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
Table 1 presents our results on the spelling correction task. We observe an improvement in CER over log-likelihood training for all four settings considered. Without simultaneous log-likelihood training, actor-critic training results in a better CER than REINFORCE-critic in three
Figure 2: Progress of log-likelihood (LL), REINFORCE (RF) and actor-critic (AC) training in terms of BLEU score on the training (train) and validation (valid) datasets. LL* stands for the annealing phase of log-likelihood training. The curves start from the epoch of log-likelihood pretraining from which the parameters were initialized.
3 A typical behaviour for TD error was to grow at first and then start decreasing slowly. We found that stopping pretraining shortly after TD error stops growing leads to good results.
Published as a conference paper at ICLR 2017 | 1607.07086#31 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 32 | 8
Table 1: Character error rate of different methods on the spelling correction task. In the table, L is the length of input strings and η is the probability of replacing a character with a random one. LL stands for log-likelihood training, AC and RF-C for the actor-critic and the REINFORCE-critic respectively, and AC+LL and RF-C+LL for the combinations of AC and RF-C with LL.
Character Error Rate

                  LL      AC      RF-C    AC+LL   RF-C+LL
L = 10, η = 0.3   17.81   17.24   17.82   16.65   16.97
L = 30, η = 0.3   18.4    17.31   18.16   17.1    17.47
L = 10, η = 0.5   38.12   35.89   35.84   34.6    35
L = 30, η = 0.5   40.87   37.0    37.6    36.36   36.6 | 1607.07086#32 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
Table 2: Our IWSLT 2014 machine translation results with a convolutional encoder compared to the previous work by Ranzato et al. Please see Table 1 for an explanation of abbreviations. The asterisk identifies results from (Ranzato et al., 2015). The numbers reported with ≤ were approximately read from Figure 6 of (Ranzato et al., 2015).
Decoding method   LL*     MIXER*   RF      RF-C    AC
greedy search     17.74   ≤ 20.3   20.92   22.24   21.66
beam search       20.73   ≤ 21.9   21.35   22.58   22.45
out of four settings. In the fourth case, actor-critic and REINFORCE-critic have similar performance. Adding the log-likelihood gradient with a coefficient λ_LL = 0.1 helps both methods, but actor-critic still retains a margin of improvement over REINFORCE-critic.
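The "+LL" variants in Table 1 simply mix the two gradients; a one-line sketch (our own notation) of that combination:

```python
import numpy as np

def combined_update(rl_grad, log_likelihood_grad, lambda_ll=0.1):
    """Task-level (actor-critic or REINFORCE-critic) gradient plus a scaled
    log-likelihood gradient, as in the AC+LL and RF-C+LL settings."""
    return rl_grad + lambda_ll * log_likelihood_grad

print(combined_update(np.array([0.2, -0.1, 0.0, 0.3]),
                      np.array([1.0, 1.0, -1.0, 0.5])))
```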
5.2 MACHINE TRANSLATION | 1607.07086#33 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 34 | 5.2 MACHINE TRANSLATION
For our first translation experiment, we use data from the German-English machine translation track of the IWSLT 2014 evaluation campaign (Cettolo et al., 2014), as used in Ranzato et al. (2015), and closely follow the pre-processing described in that work. The training data comprises about 153,000 German-English sentence pairs. In addition, we considered the larger WMT14 English-French dataset (Cho et al., 2014) with more than 12 million examples. For further information about the data we refer the reader to Appendix B.
The return is defined as a smoothed and rescaled version of the BLEU score. Specifically, we start all n-gram counts from 1 instead of 0, and multiply the resulting score by the length of the ground-truth translation. Smoothing is a common practice when sentence-level BLEU score is considered, and it has been used to apply REINFORCE in similar settings (Ranzato et al., 2015). | 1607.07086#34 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
IWSLT 2014 with a convolutional encoder In our first experiment we use a convolutional encoder in the actor to make our results more comparable with Ranzato et al. (2015). For the same reason, we use 256 hidden units in the networks. For the critic, we replaced the convolutional network with a bidirectional GRU network. For training this model we mostly used the same hyperparameter values as in the spelling correction experiments, with a few differences highlighted in Appendix A. For decoding we used greedy search and beam search with a beam size of 10. We found that penalizing candidate sentences that are too short was required to obtain the best results. Similarly to (Hannun et al., 2014), we subtracted ρT from the negative log-likelihood of each candidate sentence, where T is the candidate's length, and ρ is a hyperparameter tuned on the validation set. | 1607.07086#35 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
The results are summarized in Table 2. We report a significant improvement of 2.3 BLEU points over the log-likelihood baseline when greedy search is used for decoding. Surprisingly, the best performing method is REINFORCE with critic, with an additional 0.6 BLEU point advantage over the actor-critic. When beam-search is used, the ranking of the compared approaches is the same, but the margin between the proposed methods and log-likelihood training becomes smaller. The final performances of the actor-critic and the REINFORCE-critic with greedy search are also 0.7 and 1.3 BLEU points respectively better than what Ranzato et al. (2015) report for their MIXER approach. This comparison should be treated with caution, because our log-likelihood baseline is 1.6 BLEU
Table 3: Our IWSLT 2014 machine translation results with a bidirectional recurrent encoder compared to the previous work. Please see Table 1 for an explanation of abbreviations. The asterisk identifies results from (Wiseman & Rush, 2016).
# Model | 1607.07086#36 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
Model           LL*     BSO*    LL      RF-C    RF-C+LL   AC      AC+LL
greedy search   22.53   23.83   25.82   27.42   27.7      27.27   27.49
beam search     23.87   25.48   27.56   27.75   28.3      27.75   28.53
Table 4: Our WMT 14 machine translation results compared to the previous work. Please see Table 1 for an explanation of abbreviations. The apostrophe and the asterisk identify results from (Bahdanau et al., 2015) and (Shen et al., 2015) respectively.
Decoding method   LL'     LL*     MRT*    LL      AC+LL   RF-C+LL
greedy search     n/a     n/a     n/a     29.33   30.85   29.83
beam search       28.45   29.88   31.3    30.71   31.13   30.37
points stronger than its equivalent from (Ranzato et al., 2015). The performance of REINFORCE with a simple baseline matches the score reported for MIXER in Ranzato et al. (2015). | 1607.07086#37 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
To better understand the IWSLT 2014 results we provide the learning curves for the considered approaches in Figure 2. We can clearly see that the training methods that use generated predictions have a strong regularization effect; that is, better progress on the validation set in exchange for slower or negative progress on the training set. The effect is stronger for both REINFORCE varieties, especially for the one without a critic. The actor-critic training does a much better job of fitting the training set than REINFORCE and is the only method except log-likelihood that shows clear overfitting, which is a healthy behaviour for such a small dataset.
In addition, we performed an ablation study. We found that using a target network was crucial; while the joint actor-critic training was still progressing with γθ = 0.1, with γθ = 1.0 it did not work at all. Similarly important was the value penalty described in Equation (10). We found that good values of the λ coefficient were in the range [10^-3, 10^-6]. Other techniques, such as reward shaping and a delayed actor, brought moderate performance gains. We refer the reader to Appendix A for more details. | 1607.07086#38 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
IWSLT 2014 with a bidirectional GRU encoder In order to compare our results with those reported by Wiseman & Rush (2016) we repeated our IWSLT 2014 investigation with a different encoder, a bidirectional RNN with 256 GRU units. In this round of experiments we also tried to use combined training objectives in the same way as in our spelling correction experiments. The results are summarized in Table 3. One can see that the actor-critic training, especially its AC+LL version, yields significant improvements (1.7 with greedy search and 1.0 with beam search) upon the pure log-likelihood training, which are comparable to those brought by Beam Search Optimization (BSO), even though our log-likelihood baseline is much stronger. In this round of experiments actor-critic and REINFORCE-critic performed on par. | 1607.07086#39 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
WMT 14 Finally we report our results on a very popular large WMT14 English-French dataset (Cho et al., 2014) in Table 4. Our model closely follows the architecture from (Bahdanau et al., 2015), however we achieved a higher baseline performance by annealing the learning rate α and penalizing output sequences that were too short during beam search. The actor-critic training brings a significant 1.5 BLEU improvement with greedy search and a noticeable 0.4 BLEU improvement with beam search. In previous work Shen et al. (2015) report a higher improvement of 1.4 BLEU with beam search, however they use 100 samples for each training example, whereas we use just one. We note that in this experiment, which is perhaps the most realistic setting, the actor-critic enjoys a significant advantage over the REINFORCE-critic.
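The short-sequence penalty mentioned here is the same rescoring trick described for the IWSLT experiments above; a minimal sketch follows (ρ below is an illustrative value, not the one used in the experiments):

```python
def rescore(candidates, rho=0.3):
    """Pick the beam candidate with the lowest penalized score, where the
    score is the negative log-likelihood minus rho times the candidate
    length; subtracting rho*T counteracts the bias towards short outputs."""
    return min(candidates, key=lambda c: c[1] - rho * len(c[0]))

beam = [(["a", "short", "guess"], 4.0),
        (["a", "slightly", "longer", "translation", "candidate"], 4.3)]
print(rescore(beam))  # with rho=0.3 the longer candidate wins
```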
# 6 DISCUSSION | 1607.07086#40 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 41 | 10
# 6 DISCUSSION
We proposed an actor-critic approach to sequence prediction. Our method takes the task objective into account during training and uses the ground-truth output to aid the critic in its prediction of intermediate targets for the actor. We showed that our method leads to significant improvements over maximum likelihood training on both a synthetic task and a machine translation benchmark. Compared to REINFORCE training on machine translation, actor-critic fits the training data much faster, although in some of our experiments we were able to significantly reduce the gap in the training speed and achieve a better test error using our critic network as the baseline for REINFORCE.
One interesting observation we made from the machine translation results is that the training methods that use generated predictions have a strong regularization effect. Our understanding is that conditioning on the sampled outputs effectively increases the diversity of training data. This phenomenon makes it harder to judge whether the actor-critic training meets our expectations, because a noisier gradient estimate yielded a better test set performance. We argue that the spelling correction results obtained on a virtually infinite dataset in conjunction with better machine translation performance on the large WMT 14 dataset provide convincing evidence that the actor-critic training can be effective. In future work we will consider larger machine translation datasets. | 1607.07086#41 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
We ran into several optimization issues. The critic would sometimes assign very high values to actions with a very low probability according to the actor. We were able to resolve this by penalizing the critic's variance. Additionally, the actor would sometimes have trouble adapting to the demands of the critic. We noticed that the action distribution tends to saturate and become deterministic, causing the gradient to vanish. We found that combining an RL training objective with log-likelihood can help, but in general we think this issue deserves further investigation. For example, one can look for suitable training criteria that have a well-behaved gradient even when the policy has little or no stochasticity. | 1607.07086#42 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
In concurrent work, Wu et al. (2016) show that a version of REINFORCE with the baseline computed using multiple samples can improve the performance of a very strong machine translation system. This result, and our REINFORCE-critic experiments, suggest that often the variance of REINFORCE can be reduced enough to make its application practical. That said, we would like to emphasize that this paper attacks the problem of gradient estimation from a very different angle, as it aims for low-variance but potentially high-bias estimates. The idea of using the ground-truth output that we proposed is an absolutely necessary first step in this direction. Future work could focus on further reducing the bias of the actor-critic estimate, for example, by using a multi-sample training criterion for the critic.
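One common way to build such a multi-sample baseline is a leave-one-out average of the rewards of the other samples drawn for the same input; the sketch below is our own illustration and not necessarily the exact variant used by Wu et al. (2016).

```python
import numpy as np

def leave_one_out_weights(sample_rewards):
    """For M sampled outputs of the same input, weight each sample's
    log-likelihood gradient by its reward minus the mean reward of the
    remaining M-1 samples (a variance-reducing baseline)."""
    r = np.asarray(sample_rewards, dtype=float)
    loo_mean = (r.sum() - r) / (len(r) - 1)
    return r - loo_mean

print(leave_one_out_weights([0.2, 0.5, 0.8]))  # [-0.45  0.    0.45]
```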
# ACKNOWLEDGMENTS
We thank the developers of Theano (Theano Development Team, 2016) and Blocks (van Merriënboer et al., 2015) for their great work. We thank NSERC, Compute Canada, Calcul Québec, Canada Research Chairs, CIFAR, CHISTERA project M2CR (PCIN-2015-226) and Samsung Institute of Advanced Technology for their financial support.
# REFERENCES | 1607.07086#43 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 44 | # REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the ICLR 2015, 2015.
Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. Systems, Man and Cybernetics, IEEE Transactions on, (5):834–846, 1983.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. arXiv preprint arXiv:1506.03099, 2015.
Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. Report on the 11th IWSLT evaluation campaign. In Proc. of IWSLT, 2014.
William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprint arXiv:1508.01211, 2015.
Published as a conference paper at ICLR 2017 | 1607.07086#44 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 45 | 11
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, KyungHyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. CoRR, abs/1506.07503, 2015. URL http://arxiv.org/abs/1506.07503.
Hal Daum´e III and Daniel Marcu. Learning as search optimization: Approximate large margin methods for structured prediction. In Proceedings of the 22nd international conference on Machine learning, pp. 169â176. ACM, 2005. | 1607.07086#45 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
Hal Daumé III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine learning, 75(3):297–325, 2009.
Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625–2634, 2015.
Vaibhava Goel and William J Byrne. Minimum Bayes-risk automatic speech recognition. Computer Speech & Language, 14(2):115–135, 2000.
Awni Y Hannun, Andrew L Maas, Daniel Jurafsky, and Andrew Y Ng. First-pass large vocabulary continuous speech recognition using bi-directional recurrent dnns. arXiv preprint arXiv:1408.2873, 2014.
Tamir Hazan, Joseph Keshet, and David A McAllester. Direct loss minimization for structured prediction. In Advances in Neural Information Processing Systems, pp. 1594–1602, 2010.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735â1780, 1997. | 1607.07086#46 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3128–3137, 2015.
Diederik P Kingma and Jimmy Ba. A method for stochastic optimization. In International Conference on Learning Representation, 2015.
Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. | 1607.07086#47 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
Chin-Yew Lin and Eduard Hovy. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pp. 71–78. Association for Computational Linguistics, 2003.
Francis Maes, Ludovic Denoyer, and Patrick Gallinari. Structured prediction with reinforcement learning. Machine learning, 77(2-3):271–301, 2009.
W Thomas Miller, Paul J Werbos, and Richard S Sutton. Neural networks for control. MIT press, 1995.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278â287, 1999. | 1607.07086#48 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
Franz Josef Och. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pp. 160–167. Association for Computational Linguistics, 2003.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Association for Computational Linguistics, 2002.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
Stéphane Ross, Geoffrey J Gordon, and J Andrew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. arXiv preprint arXiv:1011.0686, 2010.
Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015. | 1607.07086#49 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673–2681, 1997.
Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433, 2015.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pp. 3104–3112, 2014.
Richard S Sutton. Learning to predict by the methods of temporal differences. Machine learning, 3(1):9–44, 1988.
Richard S Sutton and Andrew G Barto. Introduction to reinforcement learning, volume 135. MIT Press Cambridge, 1998.
Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057–1063, 1999.
Richard Stuart Sutton. Temporal credit assignment in reinforcement learning. 1984. | 1607.07086#50 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 51 | Richard Stuart Sutton. Temporal credit assignment in reinforcement learning. 1984.
Gerald Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural computation, 6(2):215–219, 1994.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.
John N Tsitsiklis and Benjamin Van Roy. An analysis of temporal-difference learning with function approximation. Automatic Control, IEEE Transactions on, 42(5):674–690, 1997.
Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. arXiv:1506.00619 [cs, stat], June 2015.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156–3164, 2015.
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
Andreas Vlachos. An investigation of imitation learning algorithms for structured prediction. In EWRL, pp. 143–154. Citeseer, 2012.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
Sam Wiseman and Alexander M Rush. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960, 2016.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 2048–2057, 2015. | 1607.07086#52 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
1607.07086 | 53 | Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.
Table 5: Results of an ablation study. We tried varying the actor update speed γθ, the critic update speed γφ, the value penalty coefficient λ, whether or not reward shaping is used, and whether or not temporal difference (TD) learning is used for the critic. Reported are the best training and validation BLEU scores obtained in the course of the first 10 training epochs. Some of the validation scores would still improve with longer training. Greedy search was used for decoding. | 1607.07086#53 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
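The ablation described in the chunk above (Table 5) toggles reward shaping and temporal-difference (TD) learning for the critic. The sketch below is our own simplified illustration of those two pieces, not the paper's code: `score` is a crude unigram-overlap stand-in for BLEU, and the TD target here bootstraps from a single hypothetical next-state value rather than the critic's full per-token predictions.

```python
def score(prefix, reference):
    """Crude unigram-overlap stand-in for a sequence-level score such as BLEU."""
    if not prefix:
        return 0.0
    return sum(1 for tok in prefix if tok in reference) / len(reference)

def shaped_rewards(hypothesis, reference):
    """Reward shaping: r_t = score(y_1..t) - score(y_1..t-1); the per-token
    rewards telescope, so they sum back to the final sequence-level score."""
    rewards, prev = [], 0.0
    for t in range(1, len(hypothesis) + 1):
        cur = score(hypothesis[:t], reference)
        rewards.append(cur - prev)
        prev = cur
    return rewards

def td_targets(rewards, next_state_values):
    """Bootstrapped critic targets q_t = r_t + V(state after token t)."""
    return [r + v for r, v in zip(rewards, next_state_values)]

ref = "the cat sat on the mat".split()
hyp = "the cat sat on a mat".split()
rs = shaped_rewards(hyp, ref)
next_vals = [0.1] * (len(hyp) - 1) + [0.0]   # toy critic values; zero after the last token
print(rs, sum(rs))                           # the rewards sum to score(hyp, ref)
print(td_targets(rs, next_vals))
```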
1607.07086 | 54 | 0.001 0.001 10â3 baseline yes yes 33.73 23.16 with different Î³Ï 0.001 0.001 0.001 0.01 0.1 1 10â3 10â3 10â3 yes yes yes yes yes yes 33.52 32.63 9.59 23.03 22.80 8.14 with different γθ 1 0.001 10â3 yes yes 32.9 22.88 without reward shaping 0.001 0.001 10â3 no yes 32.74 22.61 without temporal difference learning 0.001 0.001 10â3 yes no 23.2 16.36 with different λ 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 3 â 10â3 10â4 10â6 10â8 0 yes yes yes yes yes yes yes yes yes yes 32.4 34.10 35.00 33.6 27.41 22.48 23.15 23.10 22.72 20.55
# A HYPERPARAMETERS | 1607.07086#54 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | [
{
"id": "1512.02433"
},
{
"id": "1506.00619"
},
{
"id": "1508.01211"
},
{
"id": "1511.06732"
},
{
"id": "1509.02971"
},
{
"id": "1509.00685"
},
{
"id": "1609.08144"
},
{
"id": "1506.03099"
},
{
"id": "1511.07275"
},
{
"id": "1606.02960"
}
] |
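Table 5 in the chunk above sweeps the actor and critic update speeds γθ and γφ. In actor-critic setups of this kind, such a speed typically controls how quickly a delayed (target) copy of the parameters tracks the live ones; the baseline value of 0.001 keeps the delayed copy almost frozen between steps. The snippet below is a generic sketch of that soft update with hypothetical names and values, offered as our reading of the hyperparameter rather than the authors' code.

```python
def soft_update(delayed, current, rate):
    """delayed <- rate * current + (1 - rate) * delayed.
    With a small rate (e.g. 0.001, the baseline in Table 5) the delayed copy
    moves slowly, which keeps the critic's training targets stable."""
    return [rate * c + (1.0 - rate) * d for d, c in zip(delayed, current)]

# Toy usage with plain lists standing in for parameter tensors.
delayed_critic = [0.0, 0.0]
current_critic = [1.0, -2.0]
for _ in range(3):
    delayed_critic = soft_update(delayed_critic, current_critic, rate=0.001)
print(delayed_critic)   # creeps very slowly toward the current parameters
```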