doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1603.08983 | 48 | [2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[3] E. Bengio, P.-L. Bacon, J. Pineau, and D. Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.
[4] D. C. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. arXiv preprint arXiv:1202.2745v1 [cs.CV], 2012.
[5] G. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30–42, Jan. 2012.
[6] L. Denoyer and P. Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014. | 1603.08983#48 | Adaptive Computation Time for Recurrent Neural Networks | This paper introduces Adaptive Computation Time (ACT), an algorithm that
allows recurrent neural networks to learn how many computational steps to take
between receiving an input and emitting an output. ACT requires minimal changes
to the network architecture, is deterministic and differentiable, and does not
add any noise to the parameter gradients. Experimental results are provided for
four synthetic problems: determining the parity of binary vectors, applying
binary logic operations, adding integers, and sorting real numbers. Overall,
performance is dramatically improved by the use of ACT, which successfully
adapts the number of computational steps to the requirements of the problem. We
also present character-level language modelling results on the Hutter prize
Wikipedia dataset. In this case ACT does not yield large gains in performance;
however it does provide intriguing insight into the structure of the data, with
more computation allocated to harder-to-predict transitions, such as spaces
between words and ends of sentences. This suggests that ACT or other adaptive
computation methods could provide a generic method for inferring segment
boundaries in sequence data. | http://arxiv.org/pdf/1603.08983 | Alex Graves | cs.NE | null | null | cs.NE | 20160329 | 20170221 | [
{
"id": "1502.04623"
},
{
"id": "1603.08575"
},
{
"id": "1511.06279"
},
{
"id": "1511.06297"
},
{
"id": "1507.01526"
},
{
"id": "1511.06391"
}
] |
1603.08983 | 49 | [6] L. Denoyer and P. Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014.
[7] S. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575, 2016.
[8] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[9] A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645–6649. IEEE, 2013.
[10] A. Graves, G. Wayne, and I. Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
[11] E. Grefenstette, K. M. Hermann, M. Suleyman, and P. Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1819–1827, 2015. | 1603.08983#49 |
1603.08983 | 50 | [12] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
[13] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
[14] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[15] M. Hutter. Universal artificial intelligence. Springer, 2005.
[16] M. A. Just, P. A. Carpenter, and J. D. Woolley. Paradigms and processes in reading comprehension. Journal of Experimental Psychology: General, 111(2):228, 1982.
[17] N. Kalchbrenner, I. Danihelka, and A. Graves. Grid long short-term memory. arXiv preprint arXiv:1507.01526, 2015. | 1603.08983#50 |
1603.08983 | 51 | [18] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[20] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053, 2014.
[21] M. Li and P. Vitányi. An introduction to Kolmogorov complexity and its applications. Springer Science & Business Media, 2013.
[22] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
[23] B. A. Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996. | 1603.08983#51 |
[24] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693–701, 2011.
[25] S. Reed and N. de Freitas. Neural programmer-interpreters. Technical Report arXiv:1511.06279, 2015.
[26] J. Schmidhuber. Self-delimiting neural networks. arXiv preprint arXiv:1210.0118, 2012.
[27] J. Schmidhuber and S. Hochreiter. Guessing can outperform many long time lag algorithms. Technical report, 1996.
[28] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[29] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In Advances in Neural Information Processing Systems, pages 2368–2376, 2015. | 1603.08983#52 |
1603.08983 | 53 | [30] R. K. Srivastava, B. R. Steunebrink, and J. Schmidhuber. First experiments with powerplay. Neural Networks, 41:130–136, 2013.
[31] S. Sukhbaatar, J. Weston, R. Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439, 2015.
[32] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215, 2014.
[33] O. Vinyals, S. Bengio, and M. Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015.
[34] O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pages 2674–2682, 2015.
[35] A. J. Wiles. Modular elliptic curves and Fermat's Last Theorem. Annals of Mathematics, 141:141, 1995. | 1603.08983#53 |
1603.06744 | 1 | # Abstract
Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks.
# Introduction | 1603.06744#1 | Latent Predictor Networks for Code Generation | Many language generation tasks require the production of text conditioned on
both structured and unstructured inputs. We present a novel neural network
architecture which generates an output sequence conditioned on an arbitrary
number of input functions. Crucially, our approach allows both the choice of
conditioning context and the granularity of generation, for example characters
or tokens, to be marginalised, thus permitting scalable and effective training.
Using this framework, we address the problem of generating programming code
from a mixed natural language and structured specification. We create two new
data sets for this paradigm derived from the collectible trading card games
Magic the Gathering and Hearthstone. On these, and a third preexisting corpus,
we demonstrate that marginalising multiple predictors allows our model to
outperform strong benchmarks. | http://arxiv.org/pdf/1603.06744 | Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin Wang, Phil Blunsom | cs.CL, cs.NE | null | null | cs.CL | 20160322 | 20160608 | [] |
1603.06744 | 2 | # Introduction
The generation of both natural and formal languages often requires models conditioned on diverse predictors (Koehn et al., 2007; Wong and Mooney, 2006). Most models take the restrictive approach of employing a single predictor, such as a word softmax, to predict all tokens of the output sequence. To illustrate its limitation, suppose we wish to generate the answer to the question "Who wrote The Foundation?" as "The Foundation was written by Isaac Asimov". The generation of the words "Isaac Asimov" and "The Foundation" from a word softmax trained on annotated data is unlikely to succeed as these words are sparse. A robust model might, for example, employ one predictor
Figure 1: Example MTG and HS cards. | 1603.06744#2 |
1603.06744 | 3 | Figure 1: Example MTG and HS cards.
to copy "The Foundation" from the input, and another one to find the answer "Isaac Asimov" by searching through a database. However, training multiple predictors is in itself a challenging task, as no annotation exists regarding the predictor used to generate each output token. Furthermore, predictors generate segments of different granularity, as database queries can generate multiple tokens while a word softmax generates a single token. In this work we introduce Latent Predictor Networks (LPNs), a novel neural architecture that fulfills these desiderata: at the core of the architecture is the exact computation of the marginal likelihood over latent predictors and generated segments allowing for scalable training.
We introduce a new corpus for the automatic generation of code for cards in Trading Card Games (TCGs), on which we validate our model.1 TCGs, such as Magic the Gathering (MTG) and Hearthstone (HS), are games played between two players that build decks from an ever expanding pool of cards. Examples of such cards are shown in Figure 1. Each card is identified by its attributes
1 Dataset available at https://deepmind.com/publications.html | 1603.06744#3 |
1603.06744 | 4 | 1 Dataset available at https://deepmind.com/publications.html
(e.g., name and cost) and has an effect that is described in a text box. Digital implementations of these games implement the game logic, which includes the card effects. This is attractive from a data extraction perspective as not only are the data annotations naturally generated, but we can also view the card as a specification communicated from a designer to a software engineer.
This dataset presents additional challenges to prior work in code generation (Wong and Mooney, 2006; Jones et al., 2012; Lei et al., 2013; Artzi et al., 2015; Quirk et al., 2015), including the handling of structured input, i.e., cards are composed of multiple sequences (e.g., name and description) and attributes (e.g., attack and cost), and the length of the generated sequences. Thus, we propose an extension to attention-based neural models (Bahdanau et al., 2014) to attend over structured inputs. Finally, we propose a code compression method to reduce the size of the code without impacting the quality of the predictions.
Experiments performed on our new datasets, and a further pre-existing one, suggest that our extensions outperform strong benchmarks. | 1603.06744#4 |
1603.06744 | 5 | Experiments performed on our new datasets, and a further pre-existing one, suggest that our extensions outperform strong benchmarks.
The paper is structured as follows: We first describe the data collection process (Section 2) and formally define our problem and our baseline method (Section 3). Then, we propose our extensions, namely, the structured attention mechanism (Section 4) and the LPN architecture (Section 5). We follow with the description of our code compression algorithm (Section 6). Our model is validated by comparing with multiple benchmarks (Section 7). Finally, we contextualize our findings with related work (Section 8) and present the conclusions of this work (Section 9).
# 2 Dataset Extraction | 1603.06744#5 |
1603.06744 | 6 | # 2 Dataset Extraction
We obtain data from open source implementations of two different TCGs, MTG in Java2 and HS in Python.3 The statistics of the corpora are illustrated in Table 1. In both corpora, each card is implemented in a separate class file, which we strip of imports and comments. We categorize the content of each card into two different groups: singular fields that contain only one value; and text fields, which contain multiple words representing different units of meaning. In MTG, there are six singular fields (attack, defense, rarity, set, id, and
2 github.com/magefree/mage/  3 github.com/danielyule/hearthbreaker/
| | MTG | HS |
|---|---|---|
| Programming Language | Java | Python |
| Cards | 13,297 | 665 |
| Cards (Train) | 11,969 | 533 |
| Cards (Validation) | 664 | 66 |
| Cards (Test) | 664 | 66 |
| Singular Fields | 6 | 4 |
| Text Fields | 8 | 2 |
| Words In Description (Average) | 21 | 7 |
| Characters In Code (Average) | 1,080 | 352 |
Table 1: Statistics of the two TCG datasets. | 1603.06744#6 |
1603.06744 | 7 | Table 1: Statistics of the two TCG datasets.
health) and four text fields (cost, type, name, and description), whereas HS cards have eight singular fields (attack, health, cost and durability, rarity, type, race and class) and two text fields (name and description). Text fields are tokenized by splitting on whitespace and punctuation, with exceptions accounting for domain specific artifacts (e.g., Green mana is described as "{G}" in MTG). Empty fields are replaced with a "NIL" token.
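To make the field handling concrete, here is a minimal preprocessing sketch. It is not the authors' pipeline: the card dictionary, the field lists, and the regular expression are illustrative assumptions that only mirror the conventions described above (whitespace and punctuation tokenization, a special case for mana symbols such as "{G}", and "NIL" for empty fields).

```python
import re

# Hypothetical HS-style card; the field names follow the description above.
card = {
    "name": "Divine Favor",
    "description": "Draw cards until you have as many in hand as your opponent.",
    "attack": "", "health": "", "cost": "3", "durability": "",
    "rarity": "RARE", "type": "SPELL", "race": "", "class": "PALADIN",
}

SINGULAR_FIELDS = ["attack", "health", "cost", "durability", "rarity", "type", "race", "class"]
TEXT_FIELDS = ["name", "description"]

def tokenize(text):
    # Keep mana symbols such as "{G}" as single tokens (domain-specific exception),
    # otherwise split on words and punctuation.
    return re.findall(r"\{.\}|\w+|[^\w\s]", text)

def preprocess(card):
    fields = {}
    for k in SINGULAR_FIELDS:
        fields[k] = card.get(k) or "NIL"           # empty singular field -> NIL token
    for k in TEXT_FIELDS:
        toks = tokenize(card.get(k, ""))
        fields[k] = toks if toks else ["NIL"]      # empty text field -> NIL token
    return fields

print(preprocess(card))
```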
The code for the HS card in Figure 1 is shown in Figure 2. The effect of "drawing cards until the player has as many cards as the opponent" is implemented by computing the difference between the players' hands and invoking the draw method that number of times. This illustrates that the mapping between the description and the code is non-linear, as no information is given in the text regarding the specifics of the implementation. | 1603.06744#7 |
1603.06744 | 8 | class DivineFavor(SpellCard):
    def __init__(self):
        super().__init__("Divine Favor", 3, CHARACTER_CLASS.PALADIN, CARD_RARITY.RARE)

    def use(self, player, game):
        super().use(player, game)
        difference = len(game.other_player.hand) - len(player.hand)
        for i in range(0, difference):
            player.draw()
Figure 2: Code for the HS card "Divine Favor".
# 3 Problem Definition
Given the description of a card x, our decoding problem is to find the code ŷ so that:
ŷ = argmax_y log P(y | x)   (1)
Here log P(y | x) is estimated by a given model. We define y = y_1..y_{|y|} as the sequence of characters of the code with length |y|. We index each input field with k = 1..|x|, where |x| quantifies the number of input fields. |x_k| denotes the number of tokens in x_k and x_{ki} selects the i-th token.
# 4 Structured Attention | 1603.06744#8 |
1603.06744 | 9 | number of input fields. |x_k| denotes the number of tokens in x_k and x_{ki} selects the i-th token.
# 4 Structured Attention
Background: When |x| = 1, the attention model of Bahdanau et al. (2014) applies. Following the chain rule, log P(y | x_1) = Σ_{t=1..|y|} log P(y_t | y_1..y_{t−1}, x_1), so each token y_t is predicted conditioned on the previously generated sequence y_1..y_{t−1} and the input sequence x_1 = x_{11}..x_{1|x_1|}. Probabilities are estimated with a softmax over the vocabulary Y:
P(y_t | y_1..y_{t−1}, x_1) = softmax_{y_t ∈ Y}(h_t)   (2)
where h_t is the Recurrent Neural Network (RNN) state at time stamp t, which is modeled as g(y_{t−1}, h_{t−1}, z_t). g(·) is a recurrent update function for generating the new state h_t based on the previous token y_{t−1}, the previous state h_{t−1}, and the input text representation z_t. We implement g using Long Short-Term Memory (LSTM) RNNs (Hochreiter and Schmidhuber, 1997). | 1603.06744#9 |
1603.06744 | 10 | The attention mechanism generates the representation of the input sequence x_1 = x_{11}..x_{1|x_1|}, and z_t is computed as the weighted sum z_t = Σ_{i=1..|x_1|} a_i h(x_{1i}), where a_i is the attention coefficient obtained for token x_{1i} and h is a function that maps each x_{1i} to a continuous vector. In general, h is a function that projects x_{1i} by learning a lookup table, and then embedding contextual words by defining an RNN. Coefficients a_i are computed with a softmax over input tokens x_{11}..x_{1|x_1|}:
a_i = softmax_{x_{1i} ∈ x_1}(v(h(x_{1i}), h_{t−1}))   (3)
Function v computes the affinity of each token x_{1i} and the current output context h_{t−1}. A common implementation of v is to apply a linear projection from h(x_{1i}) : h_{t−1} (where : is the concatenation operation) into a fixed size vector, followed by a tanh and another linear projection. | 1603.06744#10 |
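As a worked illustration of Equation 3 and the weighted sum z_t, the numpy sketch below scores each token with a linear projection of the concatenation h(x_{1i}) : h_{t−1}, a tanh, and a second linear projection. The dimensions and random parameters are toy placeholders, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_tok, d_state, d_att, n_tok = 8, 16, 12, 5    # toy sizes, not the paper's

h_x = rng.normal(size=(n_tok, d_tok))          # h(x_1i): embedded input tokens
h_prev = rng.normal(size=d_state)              # h_{t-1}: current decoder state

# v(h(x_1i), h_{t-1}): linear projection of the concatenation, tanh, then linear.
W1 = rng.normal(size=(d_tok + d_state, d_att))
w2 = rng.normal(size=d_att)

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

scores = np.array([np.tanh(np.concatenate([h_i, h_prev]) @ W1) @ w2 for h_i in h_x])
a = softmax(scores)                            # Equation 3: attention coefficients a_i
z_t = (a[:, None] * h_x).sum(axis=0)           # weighted sum giving the input representation z_t
print(a.round(3), z_t.shape)
```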
1603.06744 | 11 | Our Approach: We extend the computation of z_t for cases when x corresponds to multiple fields. Figure 3 illustrates how the MTG card "Serra Angel" is encoded, assuming that there are two singular fields and one text field. We first encode each token x_{ki} using the C2W model described in Ling et al. (2015), which is a replacement for lookup tables where word representations are learned at the
[Figure 3 diagram: the card "Serra Angel" is encoded field by field (Name, Health, Attack) through C2W, Bi-LSTM, Linear, Tanh, Linear, and Softmax rows.]
Figure 3: Illustration of the structured attention mechanism operating on a single time stamp t. | 1603.06744#11 |
1603.06744 | 12 | Figure 3: Illustration of the structured attention mechanism operating on a single time stamp t.
character level (cf. C2W row). A context-aware representation is built for words in the text fields using a bidirectional LSTM (cf. Bi-LSTM row). Computing attention over multiple input fields is problematic as each input field's vectors have different sizes and value ranges. Thus, we learn a linear projection mapping each input token x_{ki} to a vector with a common dimensionality and value range (cf. Linear row). Denoting this process as f(x_{ki}), we extend Equation 3 as:
a_{ki} = softmax_{x_{ki} ∈ x}(v(f(x_{ki}), h_{t−1}))   (4)
Here a scalar coefficient a_{ki} is computed for each input token x_{ki} (cf. "Tanh", "Linear", and "Softmax" rows). Thus, the overall input representation z_t is computed as:
z_t = Σ_{k=1..|x|, i=1..|x_k|} a_{ki} f(x_{ki})   (5)
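The structured variant in Equations 4 and 5 can be sketched the same way: every token of every field is projected into a common space by a per-field map f, scored against h_{t−1}, and normalised with a single softmax over all fields. In this sketch f is a plain linear projection per field, which is a simplification of the C2W and Bi-LSTM encoders in Figure 3, and all sizes and values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
d_state, d_common = 16, 12
h_prev = rng.normal(size=d_state)

# Each field k has its own token vectors (different dimensionalities are allowed).
fields = {"name": rng.normal(size=(2, 6)),     # e.g. tokens "Serra", "Angel"
          "health": rng.normal(size=(1, 3)),
          "attack": rng.normal(size=(1, 3))}

# f: per-field linear projections into a common dimensionality (cf. the Linear row).
F = {k: rng.normal(size=(v.shape[1], d_common)) for k, v in fields.items()}
W1 = rng.normal(size=(d_common + d_state, d_common))
w2 = rng.normal(size=d_common)

proj, scores = [], []
for k, toks in fields.items():
    for x_ki in toks:
        f_ki = x_ki @ F[k]                                                 # f(x_ki)
        proj.append(f_ki)
        scores.append(np.tanh(np.concatenate([f_ki, h_prev]) @ W1) @ w2)   # v(f(x_ki), h_{t-1})

scores = np.array(scores)
a = np.exp(scores - scores.max()); a /= a.sum()    # Equation 4: one softmax over all tokens of all fields
z_t = (a[:, None] * np.array(proj)).sum(axis=0)    # Equation 5: overall input representation
print(a.round(3), z_t.shape)
```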
# 5 Latent Predictor Networks | 1603.06744#12 |
1603.06744 | 13 | z_t = Σ_{k=1..|x|, i=1..|x_k|} a_{ki} f(x_{ki})   (5)
# 5 Latent Predictor Networks
Background: In order to decode from x to y, many words must be copied into the code, such as the name of the card, the attack and the cost values. If we observe the HS card in Figure 1 and the respective code in Figure 2, we observe that the name "Divine Favor" must be copied into the class name and in the constructor, along with the cost of the card "3". As explained earlier, this problem is not specific to our task: for instance, in the dataset of Oda et al. (2015), a model must learn to map from timeout = int ( timeout ) to "convert timeout into an integer.", where the name of the variable "timeout" must be copied into the output sequence. The same issue exists for proper nouns in machine translation | 1603.06744#13 |
1603.06744 | 14 | [Figure 4 diagram: for each character of the output init("Tirion Fordring",8,6,6), a "Select Predictor" distribution weighs Generate Character against Copy From Attack, Copy From Health, Copy From Cost, Copy From Name, and Copy From Description, and the chosen predictor emits the corresponding segment.]
Figure 4: Generation process for the code init("Tirion Fordring",8,6,6) using LPNs. | 1603.06744#14 |
1603.06744 | 15 | Figure 4: Generation process for the code init("Tirion Fordring",8,6,6) using LPNs.
which are typically copied from one language to the other. Pointer networks (Vinyals et al., 2015) address this by defining a probability distribution over a set of units that can be copied c = c_1..c_{|c|}. The probability of copying a unit c_i is modeled as:
row). The same applies to the generation of the attack, health and cost values as each of these predictors is an element in R. Thus, we define our objective function as a marginal log likelihood function over a latent variable π:
P(c_i) = softmax_{c_i ∈ c}(v(h(c_i), q))   (6)
As in the attention model (Equation 3), v is a function that computes the affinity between an embedded copyable unit h(c_i) and an arbitrary vector q. | 1603.06744#15 |
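A compact sketch of the copy distribution in Equation 6 follows; here q stands for the concatenation of the decoder state and the input representation, as it is used later by the Copy Text Field predictor, and the parameters and sizes are random placeholders rather than a trained model.

```python
import numpy as np

rng = np.random.default_rng(2)
d_word, d_q, d_att = 12, 28, 12
words = ["Tirion", "Fordring"]                 # copyable units c_1..c_|c| from the name field
h_c = rng.normal(size=(len(words), d_word))    # h(c_i): representations of the copyable words
q = rng.normal(size=d_q)                       # e.g. the concatenation h_{t-1} : z_t

W1 = rng.normal(size=(d_word + d_q, d_att))
w2 = rng.normal(size=d_att)

scores = np.array([np.tanh(np.concatenate([h, q]) @ W1) @ w2 for h in h_c])
p_copy = np.exp(scores - scores.max()); p_copy /= p_copy.sum()   # Equation 6
for w, p in zip(words, p_copy):
    print(f"P(copy {w!r}) = {p:.3f}")
```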
1603.06744 | 16 | Our Approach: Combining pointer networks with a character-based softmax is in itself difficult as these generate segments of different granularity and there is no ground truth of which predictor to use at each time stamp. We now describe Latent Predictor Networks, which model the conditional probability log P(y | x) over the latent sequence of predictors used to generate y.
log P(y | x) = log Σ_{π ∈ π̄} P(y, π | x)   (7)
Formally, π is a sequence of pairs (r_t, s_t), where r_t ∈ R denotes the predictor that is used at time stamp t and s_t the generated string. We decompose P(y, π | x) as the product of the probabilities of segments s_t and predictors r_t:
P(y, π | x) = Π_{(r_t, s_t) ∈ π} P(s_t, r_t | y_1..y_{t−1}, x) = Π_{(r_t, s_t) ∈ π} P(s_t | y_1..y_{t−1}, x, r_t) P(r_t | y_1..y_{t−1}, x) | 1603.06744#16 |
1603.06744 | 17 | We assume that our model uses multiple predictors r ∈ R, where each r can generate multiple segments s_t = y_t..y_{t+|s_t|−1} with arbitrary length |s_t| at time stamp t. An example is illustrated in Figure 4, where we observe that to generate the code init("Tirion Fordring",8,6,6), a pointer network can be used to generate the sequences y_7..y_13 = Tirion and y_14..y_22 = Fordring (cf. "Copy From Name" row). These sequences can also be generated using a character softmax (cf. "Generate Characters"
where the generation of each segment is performed in two steps: select the predictor r_t with probability P(r_t | y_1..y_{t−1}, x) and then generate s_t conditioned on predictor r_t with probability log P(s_t | y_1..y_{t−1}, x, r_t). The probability of each predictor is computed using a softmax over all predictors in R conditioned on the previous state h_{t−1} and the input representation z_t (cf. "Select Predictor" box). Then, the probability of generating the segment s_t depends on the predictor type. We define three types of predictors: | 1603.06744#17 |
1603.06744 | 18 | Character Generation: Generate a single character from observed characters from the training data. Only one character is generated at each time stamp with probability given by Equation 2.
Copy Singular Field: For singular fields only the field itself can be copied, for instance, the value of the attack and cost attributes or the type of card. The size of the generated segment is the number of characters in the copied field and the segment is generated with probability 1.
Copy Text Field: For text fields, we allow each of the words x_{ki} within the field to be copied. The probability of copying a word is learned with a pointer network (cf. "Copy From Name" box), where h(c_i) is set to the representation of the word f(x_{ki}) and q is the concatenation h_{t−1} : z_t of the state and input vectors. This predictor generates a segment with the size of the copied word.
It is important to note that the state vector h_{t−1} is generated by building an RNN over the sequence of characters up until the time stamp t − 1, i.e. the previous context y_{t−1} is encoded at the character level. This allows the number of possible states to remain tractable at training time.
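The sketch below enumerates the candidate (predictor, segment) pairs available at a single time stamp for a hypothetical card, which is the search space the three predictor types above define; scoring is omitted and the field values are invented.

```python
# Candidate segments each predictor can emit at one time stamp (probabilities omitted).
singular_fields = {"attack": "8", "health": "6", "cost": "6"}
text_fields = {"name": ["Tirion", "Fordring"]}
char_vocab = "abcdefghijklmnopqrstuvwxyz\"(),0123456789"

def candidates():
    cands = []
    for ch in char_vocab:                          # Character Generation: one character per step
        cands.append(("char", ch))
    for field, value in singular_fields.items():   # Copy Singular Field: the whole field value
        cands.append((f"copy_{field}", value))
    for field, words in text_fields.items():       # Copy Text Field: any word of the field
        for w in words:
            cands.append((f"copy_{field}", w))
    return cands

for predictor, segment in candidates()[:8]:
    print(predictor, repr(segment))
print(f"... {len(candidates())} candidates in total")
```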
# 5.1 Inference | 1603.06744#18 |
1603.06744 | 19 | # 5.1 Inference
At training time we use back-propagation to maximize the probability of observed code, according to Equation 7. Gradient computation must be performed with respect to each computed probability P(r_t | y_1..y_{t−1}, x) and P(s_t | y_1..y_{t−1}, x, r_t). The derivative ∂ log P(y | x) / ∂P(r_t | y_1..y_{t−1}, x) yields:
∂ log P(y | x) / ∂P(r_t | y_1..y_{t−1}, x) = ∂(α_t P(r_t | y_1..y_{t−1}, x) β_{t,r_t} + ξ_{r_t}) / (P(y | x) ∂P(r_t | y_1..y_{t−1}, x)) = α_t β_{t,r_t} / α_{|y|+1} | 1603.06744#19 |
1603.06744 | 20 | Here α_t denotes the cumulative probability of all values of π up until time stamp t and α_{|y|+1} yields the marginal probability P(y | x). β_{t,r_t} = P(s_t | y_1..y_{t−1})β_{t+|s_t|−1} denotes the cumulative probability starting from predictor r_t at time stamp t, exclusive. This includes the probability of the generated segment P(s_t | y_1..y_{t−1}, x, r_t) and the probability of all values of π starting from timestamp t + |s_t| − 1, that is, all possible sequences that generate segment y after segment s_t is produced. For completeness, ξ_r denotes the cumulative probabilities of all π that do not include r_t. To illustrate this, we refer to Figure 4 and consider the timestamp t = 14, where the segment s_14 = Fordring is generated. In this case, the cumulative probability | 1603.06744#20 |
1603.06744 | 21 | α_14 is the sum of the path that generates the sequence init("Tirion with characters alone, and the path that generates the word Tirion by copying from the input. β_21 includes the probability of all paths that follow the generation of Fordring, which include 2×3×3 different paths due to the three decision points that follow (e.g. generating 8 using a character softmax vs. copying from the cost). Finally, ξ_r refers to the path that generates Fordring character by character. While the number of possible paths grows exponentially, α and β can be computed efficiently using the forward-backward algorithm for Semi-Markov models (Sarawagi and Cohen, 2005), where we associate P(r_t | y_1..y_{t−1}, x) to edges and P(s_t | y_1..y_{t−1}, x, r_t) to nodes in the Markov chain.
The derivative ∂ log P(y | x) / ∂P(s_t | y_1..y_{t−1}, x, r_t) can be computed using the same logic: | 1603.06744#21 |
The derivative ∂ log P(y | x) / ∂P(st | y1..yt−1, x, rt) can be computed using the same logic:
∂ log P(y | x) / ∂P(st | y1..yt−1, x, rt) = ∂(αt,rt P(st | y1..yt−1, x, rt) βt+|st|−1 + ξrt) / (P(y | x) ∂P(st | y1..yt−1, x, rt)) = αt,rt βt+|st|−1 / α|y|+1
Once again, we denote αt,rt = αt P(rt | y1..yt−1, x) as the cumulative probability of all values of π that lead to st, exclusive.
An intuitive interpretation of the derivatives is that gradient updates will be stronger on probability chains that are more likely to generate the output sequence. For instance, if the model learns a good predictor to copy names, such as Fordring, other predictors that can also generate the same sequences, such as the character softmax, will allocate less capacity to the generation of names and focus on elements that they excel at (e.g. the generation of keywords).
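Both the marginal and its gradient can be read off the same two dynamic-programming tables. The sketch below is a minimal, illustrative implementation of this semi-Markov forward-backward pass; the dictionaries pred_prob and seg_prob are hypothetical stand-ins for the neural predictor and segment probabilities (they are not part of any released code), and indices are 0-based rather than the 1-based convention used above.

```python
# A minimal sketch of the semi-Markov forward-backward pass described above.
# The two dictionaries are hypothetical stand-ins for the neural predictors:
#   pred_prob[(t, r)]    ~ P(r_t | y_1..t-1, x)
#   seg_prob[(t, r, s)]  ~ P(s_t | y_1..t-1, x, r_t) for a segment s proposed at position t
# Positions are 0-based over the output string y; a segment proposed at t is
# only valid if it matches y[t:t+len(s)]. Assumes at least one valid path exists.

def marginal_and_grads(y, pred_prob, seg_prob):
    n = len(y)
    alpha = [0.0] * (n + 1)   # alpha[t]: probability of all latent paths generating y[:t]
    alpha[0] = 1.0
    beta = [0.0] * (n + 1)    # beta[t]: probability of all latent paths generating y[t:]
    beta[n] = 1.0

    valid = [(t, r, s, p) for (t, r, s), p in seg_prob.items() if y[t:t + len(s)] == s]

    for t in range(n):                       # forward pass
        for (t0, r, s, p_seg) in valid:
            if t0 == t:
                alpha[t + len(s)] += alpha[t] * pred_prob[(t, r)] * p_seg

    for t in range(n - 1, -1, -1):           # backward pass
        for (t0, r, s, p_seg) in valid:
            if t0 == t:
                beta[t] += pred_prob[(t, r)] * p_seg * beta[t + len(s)]

    marginal = alpha[n]                      # corresponds to alpha_{|y|+1}: P(y | x)

    # d log P(y|x) / d P(s_t | ., r_t) = alpha_{t,r_t} * beta_{t+|s_t|} / P(y|x)
    grads = {(t, r, s): alpha[t] * pred_prob[(t, r)] * beta[t + len(s)] / marginal
             for (t, r, s, _) in valid}
    return marginal, grads
```

Indexing the candidate segments by their start position makes each pass linear in |y| times the number of candidate (predictor, segment) pairs per position.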
# 5.2 Decoding
Decoding is performed using a stack-based decoder with beam search. Each state S corresponds to a choice of predictor rt and segment st at a given time stamp t. This state is scored as V(S) = log P(st | y1..yt−1, x, rt) + log P(rt | y1..yt−1, x) + V(prev(S)), where prev(S) denotes the predecessor state of S. At each time stamp, the n states with the highest scores V are expanded, where n is the size of the beam. For each predictor rt, each output st generates a new state. Finally, at each time stamp t, all states
which produce the same output up to that point are merged by summing their probabilities.
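A minimal sketch of this decoder is shown below; the expansions callback is a hypothetical stand-in for the neural predictors, returning candidate segments together with their combined predictor and segment log-probabilities, and the merging of states that produce the same output is done with a log-sum-exp.

```python
import math
from collections import defaultdict

# A minimal sketch of the stack-based beam-search decoder described above.
# `expansions(prefix)` is a hypothetical callback: it yields (segment, score)
# pairs with score = log P(r_t | y_1..t-1, x) + log P(s_t | y_1..t-1, x, r_t).

def _logaddexp(a, b):
    if a == float("-inf"):
        return b
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def beam_decode(expansions, beam_size=1000, max_len=500, eos="<eos>"):
    beams = {"": 0.0}                    # output prefix -> merged log-probability
    finished = {}
    while beams:
        # expand only the n highest-scoring states (n = beam size)
        best = sorted(beams.items(), key=lambda kv: kv[1], reverse=True)[:beam_size]
        merged = defaultdict(lambda: float("-inf"))
        for prefix, score in best:
            for segment, seg_logp in expansions(prefix):
                # states producing the same output are merged by summing probabilities
                new = prefix + segment
                merged[new] = _logaddexp(merged[new], score + seg_logp)
        beams = {}
        for prefix, score in merged.items():
            if prefix.endswith(eos) or len(prefix) >= max_len:
                finished[prefix] = _logaddexp(finished.get(prefix, float("-inf")), score)
            else:
                beams[prefix] = score
    return max(finished, key=finished.get) if finished else None
```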
# 6 Code Compression
As the attention-based model traverses all input units at each generation step, generation becomes quite expensive for datasets such as MTG, where the average card code contains 1,080 characters. While this is not the essential contribution of our paper, we propose a simple method to compress the code while maintaining its structure, allowing us to train on datasets with longer code (e.g., MTG).
The idea behind this method is that many keywords in the programming language (e.g., public and return), as well as frequently used functions and classes (e.g., Card), can be learned without character-level information. We exploit this by mapping such strings onto additional symbols Xi (e.g., public class copy() → "X1 X2 X3()"). Formally, we seek the string v̂ among all strings V(max) up to length max that maximally reduces the size of the corpus:
v̂ = argmax_{v ∈ V(max)} (len(v) − 1) C(v)    (8)
where C(v) is the number of occurrences of v in the training corpus and len(v) its length. (len(v) − 1)C(v) can be seen as the number of characters saved by replacing v with a non-terminal symbol. To find v̂ efficiently, we leverage the fact that C(v) ≤ C(v′) if v contains v′. It follows that (max − 1)C(v) ≤ (max − 1)C(v′), which means that the maximum compression obtainable for v at size max is always bounded by that of v′. Thus, if we can find a v̂ such that (len(v̂) − 1)C(v̂) > (max − 1)C(v′), that is, v̂ at its current size already achieves a better compression rate than v′ could at the maximum length, then all sequences that contain v′ can be discarded as candidates. Based on this idea, our iterative search starts by obtaining the counts C(v)
for all segments of size s = 2 and computing the best-scoring segment v̂. Then, we build a list L(s) of all segments that achieve a better compression rate than v̂ at their maximum size. At size s + 1, only segments that contain an element of L(s) need to be considered, which keeps the number of substrings to be tested tractable as s increases. The algorithm stops once s reaches max or the newly generated list L(s) contains no elements.
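The following sketch illustrates both the scoring criterion of Equation 8 and the length-increasing pruned search described above; it recomputes substring counts per length for clarity and is an illustrative re-implementation, not the paper's code.

```python
from collections import Counter

# A minimal sketch of the compression search: score(v) = (len(v) - 1) * C(v)
# (Equation 8), grown one character at a time, discarding any candidate whose
# best achievable score at length `max_len` cannot beat the best string found.

def find_best_unit(corpus, max_len=20):
    def substring_counts(length):
        counts = Counter()
        for text in corpus:
            for i in range(len(text) - length + 1):
                counts[text[i:i + length]] += 1
        return counts

    best_v, best_score = None, 0
    survivors = None                       # L(s): candidates kept at the previous size
    for size in range(2, max_len + 1):
        counts = substring_counts(size)
        if survivors is not None:          # only strings containing a surviving substring
            counts = Counter({v: c for v, c in counts.items()
                              if v[:-1] in survivors or v[1:] in survivors})
        if not counts:
            break
        for v, c in counts.items():
            if (len(v) - 1) * c > best_score:
                best_v, best_score = v, (len(v) - 1) * c
        # (max_len - 1) * C(v) bounds the score of any superstring of v
        survivors = {v for v, c in counts.items() if (max_len - 1) * c > best_score}
        if not survivors:
            break
    return best_v, best_score
```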
X1–X10: recurring Java fragments such as card)↵{↵super(card);↵}↵@Override↵public, ␣(UUID␣ownerId)↵{↵super(ownerId, and ␣CardType[]{CardType.
Average code size after each replacement: 1041, 1002, 964, 934, 907, 881, 859, 837, 815, 794.
Table 2: First 10 compressed units in MTG. Newlines and spaces in the replaced strings are shown with visible placeholder symbols (here ↵ and ␣).
Once v̂ is obtained, we replace all occurrences of v̂ with a new non-terminal symbol. This process is repeated until a desired average size for the code is reached. While training is performed on the compressed code, decoding undergoes an additional step in which the compressed code is restored by expanding all the Xi. Table 2 shows the first 10 replacements from the MTG dataset, reducing its average size from 1,080 to 794 characters.
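A sketch of this surrounding loop is given below: it repeatedly applies find_best_unit (from the previous sketch), replaces v̂ with a fresh placeholder symbol, and expands the placeholders back after decoding. The "⟨X1⟩" placeholder encoding is an assumption, chosen only so that the new symbols cannot collide with real code.

```python
# A minimal sketch of the compression loop and of the inverse expansion
# applied after decoding. `find_best_unit` is the search sketched above.

def compress_corpus(corpus, target_avg_len, max_len=20):
    table = []                                        # [(symbol, replaced string), ...]
    while sum(map(len, corpus)) / len(corpus) > target_avg_len:
        v, score = find_best_unit(corpus, max_len)
        if v is None or score == 0:
            break
        symbol = "⟨X%d⟩" % (len(table) + 1)           # hypothetical placeholder encoding
        corpus = [text.replace(v, symbol) for text in corpus]
        table.append((symbol, v))
    return corpus, table

def expand(code, table):
    # undo the replacements in reverse order, since later units may contain
    # earlier symbols (e.g. X3 appears inside a later unit in Table 2)
    for symbol, v in reversed(table):
        code = code.replace(symbol, v)
    return code
```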
# 7 Experiments
Datasets Tests are performed on the two datasets provided in this paper, described in Table 1. Additionally, to test the model's ability to generalize to other domains, we report results on the Django dataset (Oda et al., 2015), comprising 16,000 training, 1,000 development and 1,805 test annotations. Each data point consists of a line of Python code together with a manually created natural language description.
Neural Benchmarks We implement two standard neural networks, namely a sequence-to-sequence model (Sutskever et al., 2014) and an attention-based model (Bahdanau et al., 2014). The former is adapted to work with multiple input fields by concatenating them, while the latter uses our proposed attention model. These models are denoted as "Sequence" and "Attention".
Machine Translation Baselines Our problem can also be viewed in the framework of semantic parsing (Wong and Mooney, 2006; Lu et al., 2008; Jones et al., 2012; Artzi et al., 2015). Unfortunately, these approaches make strong assumptions regarding the grammar and structure of the output, which makes it difficult to generalize to other domains (Kwiatkowski et al., 2010). However, the work in Andreas et al. (2013) provides
evidence that using machine translation systems without committing to such assumptions can lead to results competitive with the systems described above. We follow the same approach and create a phrase-based model (Koehn et al., 2007) and a hierarchical model (or PCFG) (Chiang, 2007) as benchmarks for the work presented here. As these models are optimized to generate words, not characters, we implement a tokenizer that splits on all punctuation characters, except for the "_" character. We also facilitate the task by splitting CamelCase words (e.g., class TirionFordring → class Tirion Fordring); otherwise such class names could not be generated correctly by these methods. We used the models implemented in Moses to generate these baselines with standard parameters, using IBM Alignment Model 4 for word alignments (Och and Ney, 2003), MERT for tuning (Sokolov and Yvon, 2011) and a 4-gram Kneser-Ney smoothed language model (Heafield et al., 2013). These models are denoted as "Phrase" and "Hierarchical", respectively.
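A minimal sketch of this preprocessing is shown below, assuming the protected character is the underscore; the regular expressions are ours, not those of the Moses pipeline.

```python
import re

# A minimal sketch of the baseline preprocessing described above: tokenise on
# punctuation (keeping "_" attached) and split CamelCase identifiers so that
# class names such as TirionFordring become recoverable word tokens.

def split_camel_case(token):
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", token).split()

def tokenize(code):
    tokens = []
    # split into runs of word characters vs. single punctuation symbols
    for piece in re.findall(r"[A-Za-z0-9_]+|[^A-Za-z0-9_\s]", code):
        if piece.isalnum() or "_" in piece:
            tokens.extend(split_camel_case(piece))
        else:
            tokens.append(piece)
    return tokens

print(tokenize("class TirionFordring(MinionCard):"))
# ['class', 'Tirion', 'Fordring', '(', 'Minion', 'Card', ')', ':']
```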
Evaluation A typical metric is the accuracy of whether the generated code exactly matches the reference code. This is informative, as it gives an intuition of how many samples can be used without further human post-editing, but it does not illustrate how close the generated code comes to the correct one. Thus, we also evaluate with BLEU-4 (Papineni et al., 2002) at the token level. There are clear problems with these metrics. For instance, source code can be correct without matching the reference: the code in Figure 2 could also have been implemented by calling the draw function in a cycle that exits once both players have the same number of cards in their hands. Some tasks, such as the generation of queries (Zelle and Mooney, 1996), have overcome this problem by executing the query and checking whether the result matches the annotation. We leave the study of such methodologies for future work, as adapting them to our tasks is not trivial. For instance, the correctness of cards with conditional (e.g. if player has no cards, then draw a card) or non-deterministic (e.g. put a random card in your hand) effects cannot be validated simply by running the code.
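For concreteness, the sketch below computes the two reported metrics over a set of generated programs, using NLTK's corpus_bleu as one possible token-level BLEU-4 implementation and the tokenizer sketched earlier; the exact scoring script behind the reported numbers is not specified here.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# A minimal sketch of the two metrics: exact-match accuracy over whole
# programs and token-level BLEU-4. `tokenize` is the punctuation/CamelCase
# tokenizer sketched above; NLTK is one possible BLEU implementation.

def evaluate(references, hypotheses):
    accuracy = sum(r == h for r, h in zip(references, hypotheses)) / len(references)
    bleu = corpus_bleu(
        [[tokenize(r)] for r in references],   # one reference per example
        [tokenize(h) for h in hypotheses],
        smoothing_function=SmoothingFunction().method1,
    )
    return accuracy, bleu
```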
Setup The multiple input types (Figure 3) are hyper-parametrized as follows. The C2W model (cf. "C2W" row) used to obtain continuous vectors for word types uses character embeddings of size 100 and LSTM states of size 300, and generates vectors of size 300. We also report results using word lookup tables of size 300, where we replace singletons with a special unknown token with probability 0.5 during training, which is then used for out-of-vocabulary words. For text fields, the context (cf. "Bi-LSTM" row) is encoded with a Bi-LSTM of size 300 for the forward and backward states. Finally, a linear layer maps the different input tokens into a common space of size 300 (cf. "Linear" row). As for the attention model, we use a hidden layer of size 200 before applying the non-linearity (cf. "Tanh" row). As for the decoder (Figure 4), we encode
output characters with size 100 (cf. "output (y)" row), and use an LSTM state of size 300 and an input representation of size 300 (cf. "State (h+z)" row). For each pointer network (e.g., the "Copy From Name" box), the intersection between the input units and the state units is performed with a vector of size 200. Training is performed using mini-batches of 20 samples with AdaDelta (Zeiler, 2012), and we report results for the iteration with the highest BLEU score on the validation set (tested at intervals of 5,000 mini-batches). Decoding is performed with a beam of 1000. As for compression, we performed a grid search over compressing the code from 0% to 80% of the original average length at intervals of 20% for the HS and Django datasets. On the MTG dataset, we are forced to compress the code up to 80% due to performance issues when training with extremely long sequences.
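Collected in one place, the hyper-parameters above amount to the following configuration; the dictionary and its key names are ours, for reference only, and do not correspond to any released configuration file.

```python
# Hypothetical summary of the hyper-parameters listed above; key names are ours.
CONFIG = {
    "c2w": {"char_emb": 100, "lstm": 300, "word_vec": 300},
    "word_lookup": {"size": 300, "singleton_unk_prob": 0.5},
    "field_bilstm": 300,           # forward and backward state size
    "input_projection": 300,       # shared linear layer
    "attention_hidden": 200,       # before the tanh non-linearity
    "decoder": {"char_emb": 100, "lstm": 300, "input_repr": 300},
    "pointer_intersection": 200,
    "train": {"batch_size": 20, "optimizer": "AdaDelta", "eval_every_batches": 5000},
    "decode": {"beam": 1000},
    "compression_grid": [0.0, 0.2, 0.4, 0.6, 0.8],
}
```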
| System | MTG BLEU | MTG Acc | HS BLEU | HS Acc | Django BLEU | Django Acc |
|---|---|---|---|---|---|---|
| Retrieval | 54.9 | 0.0 | 62.5 | 0.0 | 18.6 | 14.7 |
| Phrase | 49.5 | 0.0 | 34.1 | 0.0 | 47.6 | 31.5 |
| Hierarchical | 50.6 | 0.0 | 43.2 | 0.0 | 35.9 | 9.5 |
| Sequence | 33.8 | 0.0 | 28.5 | 0.0 | 44.1 | 33.2 |
| Attention | 50.1 | 0.0 | 43.9 | 0.0 | 58.9 | 38.8 |
| Our System | 61.4 | 4.8 | 65.6 | 4.5 | 77.6 | 62.3 |
| – C2W | 60.9 | 4.4 | 67.1 | 4.5 | 75.9 | 60.9 |
| – Compress | – | – | 59.7 | 6.1 | 76.3 | 61.3 |
| – LPN | 52.4 | 0.0 | 42.0 | 0.0 | 63.3 | 40.8 |
| – Attention | 39.1 | 0.5 | 49.9 | 3.0 | 48.8 | 34.5 |
Table 3: BLEU and Accuracy scores for the proposed task on two in-domain datasets (HS and MTG) and an out-of-domain dataset (Django).
| Compression | 0% | 20% | 40% | 60% | 80% |
|---|---|---|---|---|---|
| Seconds per card (Softmax) | 2.81 | 2.36 | 1.88 | 1.42 | 0.94 |
| Seconds per card (LPN) | 3.29 | 2.65 | 2.35 | 1.93 | 1.41 |
| BLEU (Softmax) | 44.2 | 46.9 | 47.2 | 51.4 | 52.7 |
| BLEU (LPN) | 59.7 | 62.8 | 61.1 | 66.4 | 67.1 |
Table 4: Results with increasing compression rates with a regular softmax (cf. "Softmax") and an LPN (cf. "LPN"). Performance values (cf. "Seconds Per Card" block) are computed using one CPU.
Unlike the retrieval baseline, the translation models can produce syntactic errors, such as calling a non-existent function or generating incomplete code. As BLEU penalizes length mismatches, generating code that matches the length of the reference provides a large boost. The phrase-based translation model (cf. "Phrase" row) performs well on Django (cf. "Django" column), where the mapping from input to output is mostly monotonic, while the hierarchical model (cf. "Hierarchical" row) yields better performance on the card datasets, as the concatenation of the input fields needs to be reordered extensively in the output sequence. The sequence-to-sequence model (cf. "Sequence" row) yields extremely low results, mainly due to the lack of capacity needed to memorize whole input and output sequences, while the attention-based model (cf. "Attention" row) produces results on par with the phrase-based systems. Finally, we observe that by including all the proposed components (cf. "Our System" row) we obtain significant improvements over all baselines in the three datasets, and ours is the only model that obtains non-zero accuracies on the card datasets.
Component Comparison We present ablation results in order to analyze the contribution of each of our modifications. Removing the C2W model (cf. "– C2W" row) yields a small deterioration, as word lookup tables are more susceptible to sparsity. The only exception is the HS dataset, where lookup tables perform better; we believe this is because the small training set does not provide enough evidence for the character model to scale to unknown words. Surprisingly, running our code compression (cf. "– Compress" row) actually yields better results. Table 4 illustrates the results for different compression rates. We obtain the best results with an 80% compression rate (cf. "BLEU Scores" block), which also minimises the time needed to process each card (cf. "Seconds Per Card" block). While the reason for this is uncertain, it is similar to the finding that
language models that output characters tend to under-perform those that output words (Józefowicz et al., 2016). This applies both when using the regular optimization process with a character softmax (cf. "Softmax" rows) and when using the LPN (cf. "LPN" rows). We also note that the training speed of LPNs is not significantly lower, as marginalization is performed with a dynamic program. Finally, a significant decrease is observed if we remove the pointer networks (cf. "– LPN" row). These improvements also generalize to sequence-to-sequence models (cf. "– Attention" row), as the scores are superior to the sequence-to-sequence benchmark (cf. "Sequence" row).
Result Analysis Examples of the code generated for two cards are illustrated in Figure 5. We obtain the segments that were copied by the pointer networks by computing the most likely predictor for those segments. We observe from the marked segments that the model effectively copies the attributes that match in the output, including the name of the card that must be collapsed. As expected, the majority of the errors originate from inaccuracies in the generation of the effect of the card. While it is encouraging to observe that a small percentage of the cards are generated correctly, it is worth mentioning that these are the result of many cards possessing similar effects. The "Madder Bomber" card is generated correctly because there is a similar card, "Mad Bomber", in the training set, which implements the same effect except that it deals 3 damage instead of 6. Yet, it is a promising result that the model was able to capture
this difference. However, in many cases, effects that differ radically from seen ones tend to be generated incorrectly. For the card "Preparation", we observe that while the properties of the card are generated correctly, the effect implements an unrelated one, with the exception of the value 3, which is correctly copied. Interestingly, it still generates a valid effect, which sets a minion's attack to 3. Investigating better methods to accurately generate these effects will be the object of further studies.
BLEU = 100.0

class MadderBomber(MinionCard):
    def __init__(self):
        super().__init__("Madder Bomber", 5, CHARACTER_CLASS.ALL, CARD_RARITY.RARE, battlecry=Battlecry(Damage(1), CharacterSelector(players=BothPlayer(), picker=RandomPicker(6))))

    def create_minion(self, player):
        return Minion(5, 4)

BLEU = 64.2

class Preparation(SpellCard):
    def __init__(self):
        super().__init__("Preparation", 0, CHARACTER_CLASS.ROGUE, CARD_RARITY.EPIC, target_func=hearthbreaker.targeting.find_minion_spell_target)

    def use(self, player, game):
        super().use(player, game)
        self.target.change_attack(3)
        player.add_aura(AuraUntil(ManaChange(-3), CardSelector(condition=IsSpell()), SpellCast()))
Figure 5: Examples of decoded cards from HS. Copied segments are marked in green and incorrect segments are marked in red.
# 8 Related Work
While we target widely used programming languages, namely Java and Python, our work is related to studies on the generation of any executable code. These include generating regular expressions (Kushman and Barzilay, 2013) and code for parsing input documents (Lei et al., 2013). Much research has also been invested in generating formal languages, such as database queries (Zelle and Mooney, 1996; Berant et al., 2013), agent-specific languages (Kate et al., 2005) or smartphone instructions (Le et al., 2013), and in mapping natural language into a sequence of actions for the generation of executable code (Branavan et al., 2009). Finally, a considerable effort in this task has focused on semantic parsing (Wong and Mooney, 2006; Jones et al., 2012; Lei et al., 2013; Artzi et al., 2015; Quirk et al., 2015). Recently proposed models focus on Combinatory Categorial Grammars (Kushman and Barzilay, 2013; Artzi
et al., 2015), Bayesian Tree Transducers (Jones et al., 2012; Lei et al., 2013) and Probabilistic Context-Free Grammars (Andreas et al., 2013). The work on natural language programming (Vadas and Curran, 2005; Manshadi et al., 2013), where users write lines of code from natural language, is also related to our work. Finally, the reverse mapping from code into natural language is explored by Oda et al. (2015).
Character-based sequence-to-sequence models have previously been used to generate code from natural language in Mou et al. (2015). Inspired by these works, LPNs provide a richer framework by employing attention models (Bahdanau et al., 2014), pointer networks (Vinyals et al., 2015) and character-based embeddings (Ling et al., 2015). Our formulation can also be seen as a generalization of Allamanis et al. (2016), who implement a special case in which two predictors have the same granularity (a sub-token softmax and a pointer network). Finally, HMMs have been employed in neural models to marginalize over label sequences (Collobert et al., 2011; Lample et al., 2016) by modeling transitions between labels.
# 9 Conclusion
We introduced a neural network architecture named the Latent Predictor Network, which allows efficient marginalization over multiple predictors. Under this architecture, we propose a generative model for code generation that combines a character-level softmax to generate language-specific tokens and multiple pointer networks to copy keywords from the input. Along with other extensions, namely structured attention and code compression, our model is applied both to existing datasets and to a newly created one with implementations of TCG game cards. Our experiments show that our model outperforms multiple benchmarks, which demonstrates the importance of combining different types of predictors.
# References
[Allamanis et al.2016] M. Allamanis, H. Peng, and C. Sutton. 2016. A Convolutional Attention Network for Extreme Summarization of Source Code. ArXiv e-prints, February.

[Andreas et al.2013] Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 47–52, August.
[Artzi et al.2015] Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699–1710, September.

[Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.

[Berant et al.2013] Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544.

[Branavan et al.2009] S. R. K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 82–90.
[Chiang2007] David Chiang. 2007. Hierarchical phrase-based translation. Comput. Linguist., 33(2):201–228, June.

[Collobert et al.2011] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, November.

[Heafield et al.2013] Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 690–696.

[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735–1780, November.

[Jones et al.2012] Bevan Keeley Jones, Mark Johnson, and Sharon Goldwater. 2012. Semantic parsing with Bayesian tree transducers. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 488–496.
[Józefowicz et al.2016] Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. CoRR, abs/1602.02410.

[Kate et al.2005] Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05), pages 1062–1068, Pittsburgh, PA, July.

[Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177–180.
[Kushman and Barzilay2013] Nate Kushman and Regina Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 826–836, Atlanta, Georgia, June.

[Kwiatkowski et al.2010] Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1223–1233.

[Lample et al.2016] G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami, and C. Dyer. 2016. Neural Architectures for Named Entity Recognition. ArXiv e-prints, March.

[Le et al.2013] Vu Le, Sumit Gulwani, and Zhendong Su. 2013. SmartSynth: Synthesizing smartphone automation scripts from natural language. In Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services, pages 193–206.
[Lei et al.2013] Tao Lei, Fan Long, Regina Barzilay, and Martin Rinard. 2013. From natural language specifications to program input parsers. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1294–1303, Sofia, Bulgaria, August.

[Ling et al.2015] Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.

[Lu et al.2008] Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S. Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 783–792, Stroudsburg, PA, USA. Association for Computational Linguistics.
1603.06744 | 52 | [Manshadi et al.2013] Mehdi Hafezi Manshadi, Daniel Gildea, and James F. Allen. 2013. Integrating programming by example and natural language programming. In Marie desJardins and Michael L. Littman, editors, AAAI. AAAI Press.
[Mou et al.2015] Lili Mou, Rui Men, Ge Li, Lu Zhang, and Zhi Jin. 2015. On end-to-end program generation from user intention by deep neural networks. CoRR, abs/1510.07211.
[Och and Ney2003] Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Comput. Linguist., 29(1):19–51, March.
[Oda et al.2015] Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation. In 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), Lincoln, Nebraska, USA, November. | 1603.06744#52 | Latent Predictor Networks for Code Generation | Many language generation tasks require the production of text conditioned on
both structured and unstructured inputs. We present a novel neural network
architecture which generates an output sequence conditioned on an arbitrary
number of input functions. Crucially, our approach allows both the choice of
conditioning context and the granularity of generation, for example characters
or tokens, to be marginalised, thus permitting scalable and effective training.
Using this framework, we address the problem of generating programming code
from a mixed natural language and structured specification. We create two new
data sets for this paradigm derived from the collectible trading card games
Magic the Gathering and Hearthstone. On these, and a third preexisting corpus,
we demonstrate that marginalising multiple predictors allows our model to
outperform strong benchmarks. | http://arxiv.org/pdf/1603.06744 | Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin Wang, Phil Blunsom | cs.CL, cs.NE | null | null | cs.CL | 20160322 | 20160608 | [] |
1603.06744 | 53 | [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318.
[Quirk et al.2015] Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 878–888, Beijing, China, July.
and William W. Cohen. 2005. Semi-Markov conditional random fields for information extraction. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1185–1192. MIT Press.
[Sokolov and Yvon2011] Artem Sokolov and François Yvon. 2011. Minimum Error Rate Semi-Ring. In Mikel Forcada and Heidi Depraetere, editors, Proceedings of the European Conference on Machine Translation, pages 241–248, Leuven, Belgium. | 1603.06744#53 | Latent Predictor Networks for Code Generation | Many language generation tasks require the production of text conditioned on
both structured and unstructured inputs. We present a novel neural network
architecture which generates an output sequence conditioned on an arbitrary
number of input functions. Crucially, our approach allows both the choice of
conditioning context and the granularity of generation, for example characters
or tokens, to be marginalised, thus permitting scalable and effective training.
Using this framework, we address the problem of generating programming code
from a mixed natural language and structured specification. We create two new
data sets for this paradigm derived from the collectible trading card games
Magic the Gathering and Hearthstone. On these, and a third preexisting corpus,
we demonstrate that marginalising multiple predictors allows our model to
outperform strong benchmarks. | http://arxiv.org/pdf/1603.06744 | Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin Wang, Phil Blunsom | cs.CL, cs.NE | null | null | cs.CL | 20160322 | 20160608 | [] |
1603.06744 | 54 | [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215.
[Vadas and Curran2005] David Vadas and James R. Curran. 2005. Programming with unrestricted natural language. In Proceedings of the Australasian Language Technology Workshop 2005, pages 191–199, Sydney, Australia, December.
[Vinyals et al.2015] Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2674–2682. Curran Associates, Inc.
[Wong and Mooney2006] Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 439–446. | 1603.06744#54 | Latent Predictor Networks for Code Generation | Many language generation tasks require the production of text conditioned on
both structured and unstructured inputs. We present a novel neural network
architecture which generates an output sequence conditioned on an arbitrary
number of input functions. Crucially, our approach allows both the choice of
conditioning context and the granularity of generation, for example characters
or tokens, to be marginalised, thus permitting scalable and effective training.
Using this framework, we address the problem of generating programming code
from a mixed natural language and structured specification. We create two new
data sets for this paradigm derived from the collectible trading card games
Magic the Gathering and Hearthstone. On these, and a third preexisting corpus,
we demonstrate that marginalising multiple predictors allows our model to
outperform strong benchmarks. | http://arxiv.org/pdf/1603.06744 | Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin Wang, Phil Blunsom | cs.CL, cs.NE | null | null | cs.CL | 20160322 | 20160608 | [] |
1603.06147 | 1 | Kyunghyun Cho New York University
Yoshua Bengio Université de Montréal CIFAR Senior Fellow
# Abstract
The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder–decoder with a subword-level encoder and a character-level decoder on four language pairs (En-Cs, En-De, En-Ru and En-Fi) using the parallel corpora from WMT'15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru. | 1603.06147#1 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 2 | tion, although neural networks do not suffer from character-level modelling and rather suffer from the issues specific to word-level modelling, such as the increased computational complexity from a very large target vocabulary (Jean et al., 2015; Luong et al., 2015b). Therefore, in this paper, we address a question of whether neural machine translation can be done directly on a sequence of characters without any explicit word segmentation.
To answer this question, we focus on representing the target side as a character sequence. We evaluate neural machine translation models with a character-level decoder on four language pairs from WMT'15 to make our evaluation as convincing as possible. We represent the source side as a sequence of subwords extracted using byte-pair encoding from Sennrich et al. (2015), and vary the target side to be either a sequence of subwords or characters. On the target side, we further design a novel recurrent neural network (RNN), called bi-scale recurrent network, that better handles multiple timescales in a sequence, and test it in addition to a naive, stacked recurrent neural network. | 1603.06147#2 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
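The chunk above keeps the source side as byte-pair-encoded subwords and lets the decoder emit raw characters. A minimal Python sketch of what "no explicit segmentation" means for the target representation, using a hypothetical sentence; this illustrates the data format only, not the paper's preprocessing pipeline:

```python
# Word-level decoding needs an explicit segmentation step; character-level
# decoding treats every character (including spaces) as an output symbol.
def word_level(sentence):
    # naive whitespace segmentation, a stand-in for a real tokenizer
    return sentence.split()

def character_level(sentence):
    # no segmentation at all: the raw character sequence is the target
    return list(sentence)

sentence = "ein Beispiel"          # hypothetical target sentence
print(word_level(sentence))        # ['ein', 'Beispiel']
print(character_level(sentence))   # ['e', 'i', 'n', ' ', 'B', 'e', ...]
```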
1603.06147 | 3 | 1 The existing machine translation systems have relied almost exclusively on word-level modelling with explicit segmentation. This is mainly due to the issue of data sparsity, which becomes much more severe, especially for n-grams, when a sentence is represented as a sequence of characters rather than words, as the length of the sequence grows significantly. In addition to data sparsity, we often have a priori belief that a word, or its segmented-out lexeme, is a basic unit of meaning, making it natural to approach translation as mapping from a sequence of source-language words to a sequence of target-language words.
On all of the four language pairs (En-Cs, En-De, En-Ru and En-Fi), the models with a character-level decoder outperformed the ones with a subword-level decoder. We observed a similar trend with the ensemble of each of these configurations, outperforming both the previous best neural and non-neural translation systems on En-Cs, En-De and En-Fi, while achieving a comparable result on En-Ru. We find these results to be strong evidence that neural machine translation can indeed learn to translate at the character level and that, in fact, it benefits from doing so.
# 2 Neural Machine Translation | 1603.06147#3 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 4 | # 2 Neural Machine Translation
This has continued with the more recently proposed paradigm of neural machine translaNeural machine translation refers to a recently proposed approach to machine translation (Forcada and Neco, 1997; Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014). This approach aims at building an end-to-end neu- ral network that takes as input a source sentence X = (a1,...,@7,) and outputs its translation Y = (y1,.--,yr,), Where x; and yy are respec- tively source and target symbols. This neural net- work is constructed as a composite of an encoder network and a decoder network. | 1603.06147#4 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 5 | The encoder network encodes the input sentence X into its continuous representation. In this paper, we closely follow the neural translation model proposed in Bahdanau et al. (2015) and use a bidirectional recurrent neural network, which consists of two recurrent neural networks. The forward network reads the input sentence in a forward direction: $\overrightarrow{\mathbf{z}}_t = \overrightarrow{\phi}(e_x(x_t), \overrightarrow{\mathbf{z}}_{t-1})$, where $e_x(x_t)$ is a continuous embedding of the $t$-th input symbol, and $\phi$ is a recurrent activation function. Similarly, the reverse network reads the sentence in a reverse direction (right to left): $\overleftarrow{\mathbf{z}}_t = \overleftarrow{\phi}(e_x(x_t), \overleftarrow{\mathbf{z}}_{t+1})$. At each location in the input sentence, we concatenate the hidden states from the forward and reverse RNNs to form a context set $C = \{\mathbf{z}_1, \ldots, \mathbf{z}_{T_x}\}$, where $\mathbf{z}_t = [\overrightarrow{\mathbf{z}}_t; \overleftarrow{\mathbf{z}}_t]$.
Then the decoder computes the conditional distribution over all possible translations based on this context set. This is done by first rewriting the conditional probability of a translation: $\log p(Y|X) = \sum_{t'=1}^{T_y} \log p(y_{t'} \mid y_{<t'}, X)$. For each conditional term in the summation, the decoder RNN updates its hidden state by
$\mathbf{h}_{t'} = \phi(e_y(y_{t'-1}), \mathbf{h}_{t'-1}, \mathbf{c}_{t'})$, (1) | 1603.06147#5 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
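A minimal numpy sketch of the bidirectional encoder described in the chunk above: two RNNs read the embedded source in opposite directions and their states are concatenated into the context set C. A plain tanh RNN stands in for the gated units used in the paper, and all sizes and weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_emb, d_hid, T_x = 4, 5, 6
X = rng.normal(size=(T_x, d_emb))          # already-embedded source symbols e_x(x_t)

def rnn(inputs, W_in, W_rec):
    # simple recurrent activation phi; returns the state at every position
    h = np.zeros(W_rec.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h)
    return states

W_in_f, W_rec_f = rng.normal(size=(d_hid, d_emb)), rng.normal(size=(d_hid, d_hid))
W_in_b, W_rec_b = rng.normal(size=(d_hid, d_emb)), rng.normal(size=(d_hid, d_hid))

forward  = rnn(X, W_in_f, W_rec_f)              # reads left to right
backward = rnn(X[::-1], W_in_b, W_rec_b)[::-1]  # reads right to left, re-aligned
C = [np.concatenate([f, b]) for f, b in zip(forward, backward)]  # context set
print(len(C), C[0].shape)                       # T_x vectors of size 2 * d_hid
```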
1603.06147 | 6 | where $e_y$ is the continuous embedding of a target symbol. $\mathbf{c}_{t'}$ is a context vector computed by a soft-alignment mechanism:
where e, is the continuous embedding of a target symbol. c; is a context vector computed by a soft- alignment mechanism:
cy = falign(â¬y(Yrâ1); hy_1,C)). (2)
The soft-alignment mechanism falign weights each vector in the context set C according to its relevance given what has been translated. The weight of each vector zt is computed by
Ong = Zor eee), (3)
where fscore iS a parametric function returning an unnormalized score for z; given hy_; and y_1.
We use a feedforward network with a single hid- den layer in this paper.! Z is a normalization con- stant: Z= ey efscore(ey(Yy a) shy 14") This procedure can be understood as computing the alignment probability between the ¢/-th target symbol and t-th source symbol.
The hidden state hy, together with the previous target symbol y_; and the context vector cy, is fed into a feedforward neural network to result in the conditional distribution: v(ye | yer, X) x efote(eu(Yerâa)sbyr ey). (4) | 1603.06147#6 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
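A minimal numpy sketch of the soft-alignment mechanism in Eqs. (2)-(4) of the chunk above: a single-hidden-layer feedforward scorer produces unnormalized scores for every context vector, a softmax turns them into alignment weights, and their weighted sum gives the context vector. All dimensions and parameters are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_h, d_e, d_score = 6, 5, 4, 7
C = rng.normal(size=(8, d_z))        # context set {z_1, ..., z_Tx}
h_prev = rng.normal(size=d_h)        # decoder state h_{t'-1}
e_prev = rng.normal(size=d_e)        # embedding of the previous target symbol

W1 = rng.normal(size=(d_score, d_e + d_h + d_z))
w2 = rng.normal(size=d_score)

def f_score(e_y, h, z):
    # single-hidden-layer feedforward network returning a scalar score
    return w2 @ np.tanh(W1 @ np.concatenate([e_y, h, z]))

scores = np.array([f_score(e_prev, h_prev, z) for z in C])
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                 # alignment probabilities, Eq. (3)
c_t = alpha @ C                      # context vector: weighted sum over C
print(round(alpha.sum(), 6), c_t.shape)
```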
1603.06147 | 7 | The whole model, consisting of the encoder, decoder and soft-alignment mechanism, is then tuned end-to-end to minimize the negative log-likelihood using stochastic gradient descent.
# 3 Towards Character-Level Translation
# 3.1 Motivation
Let us revisit how the source and target sentences (X and Y) are represented in neural machine translation. For the source side of any given training corpus, we scan through the whole corpus to build a vocabulary $V_x$ of unique tokens to which we assign integer indices. A source sentence X is then built as a sequence of the indices of such tokens belonging to the sentence, i.e., $X = (x_1, \ldots, x_{T_x})$, where $x_t \in \{1, 2, \ldots, |V_x|\}$. The target sentence is similarly transformed into a target sequence of integer indices. | 1603.06147#7 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
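A small Python sketch of the preprocessing described in the chunk above: scan a tiny, hypothetical corpus, build the vocabulary of unique tokens, and encode a sentence as a sequence of integer indices:

```python
# Build V_x from a toy corpus and map a sentence to its index sequence.
corpus = [
    "can neural machine translation be done on characters",
    "neural machine translation can be done end to end",
]
V_x = {tok: i for i, tok in enumerate(sorted({t for line in corpus for t in line.split()}))}

def encode(sentence, vocab):
    # X = (x_1, ..., x_Tx) with each x_t an integer index into V_x
    return [vocab[tok] for tok in sentence.split()]

print(len(V_x))
print(encode("neural machine translation can be done on characters", V_x))
```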
1603.06147 | 8 | Each token, or its index, is then transformed into a so-called one-hot vector of dimensionality $|V_x|$. All but one elements of this vector are set to 0. The only element whose index corresponds to the token's index is set to 1. This one-hot vector is the one which any neural machine translation model sees. The embedding function, $e_x$ or $e_y$, is simply the result of applying a linear transformation (the embedding matrix) to this one-hot vector. The important property of this approach based on one-hot vectors is that the neural network is oblivious to the underlying semantics of the tokens. To the neural network, each and every token in the vocabulary is equal distance away from every other token. The semantics of those tokens are simply learned (into the embeddings) to maximize the translation quality, or the log-likelihood of the model.
This property allows us great freedom in the choice of tokens' unit. Neural networks have been
1 For other possible implementations, see (Luong et al., 2015a). | 1603.06147#8 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
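A short numpy sketch of the point made in the chunk above: the embedding function is a linear transformation of a one-hot vector, so it reduces to selecting one column of the embedding matrix, and the one-hot tokens themselves carry no similarity structure. Sizes are illustrative:

```python
import numpy as np

vocab_size, d = 10, 4
rng = np.random.default_rng(1)
E = rng.normal(size=(d, vocab_size))      # embedding matrix (learned during training)

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

i, j = 3, 7                               # two arbitrary token indices
# applying the embedding matrix to a one-hot vector == picking column i
assert np.allclose(E @ one_hot(i, vocab_size), E[:, i])
# distinct one-hot tokens are orthogonal: no built-in notion of similarity
print(one_hot(i, vocab_size) @ one_hot(j, vocab_size))   # 0.0
```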
1603.06147 | 9 | This property allows us great freedom in the choice of tokens' unit. Neural networks have been
1 For other possible implementations, see (Luong et al., 2015a).
shown to work well with word tokens (Bengio et al., 2001; Schwenk, 2007; Mikolov et al., 2010) but also with finer units, such as subwords (Sennrich et al., 2015; Botha and Blunsom, 2014; Luong et al., 2013) as well as symbols resulting from compression/encoding (Chitnis and DeNero, 2015). Although there have been a number of previous studies reporting the use of neural networks with characters (see, e.g., Mikolov et al. (2012) and Santos and Zadrozny (2014)), the dominant approach has been to preprocess the text into a sequence of symbols, each associated with a sequence of characters, after which the neural network is presented with those symbols rather than with characters. | 1603.06147#9 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 10 | More recently in the context of neural machine translation, two research groups have proposed to directly use characters. Kim et al. (2015) proposed to represent each word not as a single integer index as before, but as a sequence of characters, and use a convolutional network followed by a highway network (Srivastava et al., 2015) to extract a continuous representation of the word. This approach, which effectively replaces the embedding function $e_x$, was adopted by Costa-Jussà and Fonollosa (2016) for neural machine translation. Similarly, Ling et al. (2015b) use a bidirectional recurrent neural network to replace the embedding functions $e_x$ and $e_y$ to respectively encode a character sequence to and from the corresponding continuous word representation. A similar, but slightly different approach was proposed by Lee et al. (2015), where they explicitly mark each character with its relative location in a word (e.g., 'B'eginning and 'I'ntermediate). | 1603.06147#10 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
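A minimal numpy sketch of the character-to-word composition idea surveyed in the chunk above: a small recurrent network reads the character embeddings of a word and its final state serves as the word vector, so morphological variants such as 'run', 'runs' and 'running' share the same parameters. A plain forward tanh RNN is used as a simplified stand-in for the convolutional/highway and bidirectional architectures cited; the character inventory and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
chars = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
d_c, d_h = 4, 6
E_char = rng.normal(size=(d_c, len(chars)))                 # character embeddings
W_in, W_rec = rng.normal(size=(d_h, d_c)), rng.normal(size=(d_h, d_h))

def word_vector(word):
    # one shared network composes every word from its characters
    h = np.zeros(d_h)
    for c in word:
        h = np.tanh(W_in @ E_char[:, chars[c]] + W_rec @ h)
    return h

for w in ["run", "runs", "running"]:
    print(w, np.round(word_vector(w)[:3], 2))
```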
1603.06147 | 11 | Despite the fact that these recent approaches work at the level of characters, it is less satisfying that they all rely on knowing how to segment characters into words. Although it is generally easy for languages like English, this is not always the case. This word segmentation procedure can be as simple as tokenization followed by some punctuation normalization, but also can be as complicated as morpheme segmentation requiring a separate model to be trained in advance (Creutz and Lagus, 2005; Huang and Zhao, 2007). Furthermore, these segmentation2 steps are often tuned or designed separately from the ultimate objective of translation quality, potentially contributing to a
2 From here on, the term segmentation broadly refers to any method that splits a given character sequence into a sequence of subword symbols.
suboptimal quality.
Based on this observation and analysis, in this paper, we ask ourselves and the readers a question which should have been asked much earlier: Is it possible to do character-level translation without any explicit segmentation?
# 3.2 Why Word-Level Translation? | 1603.06147#11 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 12 | (1) Word as a Basic Unit of Meaning A word can be understood in two different senses. In the abstract sense, a word is a basic unit of meaning (lexeme), and in the other sense, can be understood as a "concrete word as used in a sentence." (Booij, 2012). A word in the former sense turns into that in the latter sense via a process of morphology, including inflection, compounding and derivation. These three processes do alter the meaning of the lexeme, but often it stays close to the original meaning. Because of this view of words as basic units of meaning (either in the form of lexemes or derived form) from linguistics, much of previous work in natural language processing has focused on using words as basic units of which a sentence is encoded as a sequence. Also, the potential difficulty in finding a mapping between a word's character sequence and meaning3 has likely contributed to this trend toward word-level modelling.
(2) Data Sparsity There is a further technical reason why much of previous research on machine translation has considered words as a basic unit. This is mainly due to the fact that major components in the existing translation systems, such as language models and phrase tables, are a count-based estimator of probabilities. In other words, a probability of a subsequence of symbols, or pairs of symbols, is estimated by counting the number of its occurrences in a training corpus. This approach severely suffers from the issue of data sparsity, which is due to a large state space which grows exponentially w.r.t. the length of subsequences while growing only linearly w.r.t. the corpus size. This poses a great challenge to character-level modelling, as any subsequence will be on average 4–5 times longer when characters, instead of words, are used. Indeed, Vilar et al. (2007) reported worse performance when the character sequence was directly used by a phrase-based machine translation system. More | 1603.06147#12 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 13 | 3 For instance, 'quit', 'quite' and 'quiet' are one edit-distance away from each other but have distinct meanings.
recently, Neubig et al. (2013) proposed a method to improve character-level translation with phrase-based translation systems, however, with only a limited success. | 1603.06147#13 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 14 | 3 For instance, 'quit', 'quite' and 'quiet' are one edit-distance away from each other but have distinct meanings.
recently, Neubig et al. (2013) proposed a method to improve character-level translation with phrase-based translation systems, however, with only a limited success.
(3) Vanishing Gradient Specifically to neural machine translation, a major reason behind the wide adoption of word-level modelling is due to the difficulty in modelling long-term dependencies with recurrent neural networks (Bengio et al., 1994; Hochreiter, 1998). As the lengths of the sentences on both sides grow when they are represented in characters, it is easy to believe that there will be more long-term dependencies that must be captured by the recurrent neural network for successful translation.
# 3.3 Why Character-Level Translation? | 1603.06147#14 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 15 | # 3.3 Why Character-Level Translation?
Why not Word-Level Translation? The most pressing issue with word-level processing is that we do not have a perfect word segmentation algorithm for any one language. A perfect segmentation algorithm needs to be able to segment any given sentence into a sequence of lexemes and morphemes. This problem is however a difficult problem on its own and often requires decades of research (see, e.g., Creutz and Lagus (2005) for Finnish and other morphologically rich languages and Huang and Zhao (2007) for Chinese). Therefore, many opt to using either a rule-based tokenization approach or a suboptimal, but still available, learning-based segmentation algorithm. | 1603.06147#15 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 16 | The outcome of this naive, sub-optimal segmentation is that the vocabulary is often filled with many similar words that share a lexeme but have different morphology. For instance, if we apply a simple tokenization script to an English corpus, 'run', 'runs', 'ran' and 'running' are all separate entries in the vocabulary, while they clearly share the same lexeme 'run'. This prevents any machine translation system, in particular neural machine translation, from modelling these morphological variants efficiently.
More specifically in the case of neural machine translation, each of these morphological variants – 'run', 'runs', 'ran' and 'running' – will be assigned a d-dimensional word vector, leading to four independent vectors, while it is clear that if we can segment those variants into a lexeme and other morphemes, we can model them more efficiently. For instance, we can have a d-dimensional vector for the lexeme 'run' and much smaller | 1603.06147#16 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 17 | vectors for 's' and 'ing'. Each of those variants will be then a composite of the lexeme vector (shared across these variants) and morpheme vectors (shared across words sharing the same suffix, for example) (Botha and Blunsom, 2014). This makes use of distributed representation, which generally yields better generalization, but seems to require an optimal segmentation, which is unfortunately almost never available.
In addition to inefficiency in modelling, there are two additional negative consequences from using (unsegmented) words. First, the translation system cannot generalize well to novel words, which are often mapped to a token reserved for an unknown word. This effectively ignores any meaning or structure of the word to be incorporated when translating. Second, even when a lexeme is common and frequently observed in the training corpus, its morphological variant may not be. This implies that the model sees this specific, rare morphological variant much less and will not be able to translate it well. However, if this rare morphological variant shares a large part of its spelling with other more common words, it is desirable for a machine translation system to exploit those common words when translating those rare variants. | 1603.06147#17 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 18 | Why Character-Level Translation? All of these issues can be addressed to a certain extent by directly modelling characters. Although the issue of data sparsity arises in character-level translation, it is elegantly addressed by using a parametric approach based on recurrent neural networks instead of a non-parametric count-based approach. Furthermore, in recent years, we have learned how to build and train a recurrent neural network that can well capture long-term dependencies by using more sophisticated activation functions, such as long short-term memory (LSTM) units (Hochreiter and Schmidhuber, 1997) and gated recurrent units (Cho et al., 2014).
Kim et al. (2015) and Ling et al. (2015a) recently showed that by having a neural network that converts a character sequence into a word vector, we avoid the issues from having many morphological variants appearing as separate entities in a vocabulary. This is made possible by sharing the character-to-word neural network across all the unique tokens. A similar approach was applied to machine translation by Ling et al. (2015b).
These recent approaches, however, still rely on | 1603.06147#18 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 20 | It however becomes unnecessary to consider these prior information, if we use a neural net- work, be it recurrent, convolution or their combi- nation, directly on the unsegmented character se- quence. The possibility of using a sequence of un- segmented characters has been studied over many years in the ï¬eld of deep learning. For instance, Mikolov et al. (2012) and Sutskever et al. (2011) trained a recurrent neural network language model (RNN-LM) on character sequences. The latter showed that it is possible to generate sensible text sequences by simply sampling a character at a time from this model. More recently, Zhang et al. (2015) and Xiao and Cho (2016) successfully applied a convolutional net and a convolutional- recurrent net respectively to character-level docu- ment classiï¬cation without any explicit segmenta- tion. Gillick et al. (2015) further showed that it is possible to train a recurrent neural network on unicode bytes, instead of characters or words, to perform part-of-speech tagging and named entity recognition.
These previous works suggest the possibility of applying neural networks for the task of machine translation, which is often considered a substan- tially more difï¬cult problem compared to docu- ment classiï¬cation and language modelling.
# 3.4 Challenges and Questions | 1603.06147#20 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 21 | # 3.4 Challenges and Questions
There are two overlapping sets of challenges for the source and target sides. On the source side, it is unclear how to build a neural network that learns a highly nonlinear mapping from a spelling to the meaning of a sentence.
On the target side, there are two challenges. The first challenge is the same one from the source side, as the decoder neural network needs to summarize what has been translated. In addition to this, the character-level modelling on the target side is more challenging, as the decoder network must be able to generate a long, coherent sequence of characters. This is a great challenge, as the size of the state space grows exponentially w.r.t. the number of symbols, and in the case of characters, it is often 300-1000 symbols long.
All these challenges should first be framed as questions: whether the current recurrent neural networks, which are already widely used in neural machine translation, are able to address these challenges as they are. In this paper, we aim at answering these questions empirically and focus on the challenges on the target side (as the target side shows both of the challenges).
Figure 1: Bi-scale recurrent neural network. (a) Gating units. (b) One-step processing.
# 4 Character-Level Translation | 1603.06147#21 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 22 | # 4 Character-Level Translation
In this paper, we try to answer the questions posed earlier by testing two different types of recurrent neural networks on the target side (decoder).
First, we test an existing recurrent neural network with gated recurrent units (GRUs). We call this decoder a base decoder.
Second, we build a novel two-layer recurrent neural network, inspired by the gated-feedback network from Chung et al. (2015), called a bi-scale recurrent neural network. We design this network to facilitate capturing two timescales, motivated by the fact that characters and words may work at two separate timescales.
We choose to test these two alternatives for the following purposes. Experiments with the base decoder will clearly answer whether the existing neural network is enough to handle character-level decoding, which has not been properly answered in the context of machine translation. The alternative, the bi-scale decoder, is tested in order to see whether it is possible to design a better decoder, if the answer to the first question is positive.
# 4.1 Bi-Scale Recurrent Neural Network | 1603.06147#22 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
In this proposed bi-scale recurrent neural network, there are two sets of hidden units, $h^1$ and $h^2$. They contain the same number of units, i.e., $\dim(h^1) = \dim(h^2)$. The first set $h^1$ models a fast-changing timescale (thereby, a faster layer), and $h^2$ a slower timescale (thereby, a slower layer). For each hidden unit, there is an associated gating unit, to which we refer by $g^1$ and $g^2$. For the description below, we use $y_{t-1}$ and $c_t$ for the previous target symbol and the context vector (see Eq. (2)), respectively.
Let us start with the faster layer. The faster layer outputs two sets of activations, a normal output $h^1_t$ and its gated version $\hat{h}^1_t$. The activation of the faster layer is computed by
$$h^1_t = \tanh\left(W^1_h\left[\mathbf{e}(y_{t-1});\, \hat{h}^1_{t-1};\, \hat{h}^2_{t-1};\, c_t\right]\right),$$
where $\hat{h}^1_{t-1}$ and $\hat{h}^2_{t-1}$ are the gated activations of the faster and slower layers, respectively. These gated activations are computed by
$$\hat{h}^1_t = \left(1 - g^1_t\right) \odot h^1_t, \qquad \hat{h}^2_t = g^1_t \odot h^2_t.$$
In other words, the faster layer's activation is based on the adaptive combination of the faster and slower layers' activations from the previous time step. Whenever the faster layer determines that it needs to reset, i.e., $g^1_{t-1} = 1$, the next activation will be determined based more on the slower layer's activation.
The faster layer's gating unit is computed by
$$g^1_t = \sigma\left(W^1_g\left[\mathbf{e}(y_{t-1});\, \hat{h}^1_{t-1};\, \hat{h}^2_{t-1};\, c_t\right]\right),$$
where $\sigma$ is a sigmoid function.
The slower layer also outputs two sets of activations, a normal output $h^2_t$ and its gated version $\check{h}^2_t$. These activations are computed as follows:
$$h^2_t = \left(1 - g^1_t\right) \odot h^2_{t-1} + g^1_t \odot \tilde{h}^2_t, \qquad \check{h}^2_t = \left(1 - g^2_t\right) \odot h^2_t,$$
where $\tilde{h}^2_t$ is a candidate activation. The slower layer's gating unit $g^2_t$ is computed by
$$g^2_t = \sigma\left(W^2_g\left[\left(g^1_t \odot h^1_t\right);\, \check{h}^2_{t-1};\, c_t\right]\right).$$
This adaptive leaky integration based on the gating unit from the faster layer has the consequence that the slower layer updates its activation only when the faster layer resets. This puts a soft constraint that the faster layer runs at a faster rate, by preventing the slower layer from updating while the faster layer is processing the current chunk.
The candidate activation is then computed by
$$\tilde{h}^2_t = \tanh\left(W^2_h\left[\left(g^1_t \odot h^1_t\right);\, \check{h}^2_{t-1};\, c_t\right]\right). \qquad (5)$$
Figure 2: (left) The BLEU scores on En-Cs w.r.t. the length of source sentences. (right) The difference of word negative log-probabilities between the subword-level decoder and either the character-level base or bi-scale decoder.
$\check{h}^2_{t-1}$ indicates the reset activation from the previous time step, similarly to what happened in the faster layer, and $c_t$ is the input from the context. According to $g^1_t \odot h^1_t$ in Eq. (5), the faster layer influences the slower layer only when the faster layer has finished processing the current chunk and is about to reset itself ($g^1_t = 1$). In other words, the slower layer does not receive any input from the faster layer until the faster layer has quickly processed the current chunk, thereby running at a slower rate than the faster layer does.
At each time step, the final output of the proposed bi-scale recurrent neural network is the concatenation of the output vectors of the faster and slower layers, i.e., $\left[h^1_t; h^2_t\right]$. This concatenated vector is used to compute the probability distribution over all the symbols in the vocabulary, as in Eq. (4). See Fig. 1 for a graphical illustration.
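To make the equations above concrete, the following is a minimal numpy sketch of a single decoding step of the bi-scale decoder. The weight names (`W1_h`, `W1_g`, `W2_h`, `W2_g`), the omission of bias terms and the flat state layout are our own simplifications for illustration; this is not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bi_scale_step(e_y, c, h2_prev, hat_h1_prev, hat_h2_prev, check_h2_prev, params):
    """One step of the bi-scale decoder, following the equations above."""
    x1 = np.concatenate([e_y, hat_h1_prev, hat_h2_prev, c])
    h1 = np.tanh(params["W1_h"] @ x1)            # faster-layer activation
    g1 = sigmoid(params["W1_g"] @ x1)            # faster-layer (reset) gate

    x2 = np.concatenate([g1 * h1, check_h2_prev, c])
    h2_cand = np.tanh(params["W2_h"] @ x2)       # candidate slower activation, Eq. (5)
    g2 = sigmoid(params["W2_g"] @ x2)            # slower-layer gate

    h2 = (1.0 - g1) * h2_prev + g1 * h2_cand     # slower layer updates only when g1 is on
    hat_h1 = (1.0 - g1) * h1                     # gated faster activation
    hat_h2 = g1 * h2                             # gated slower activation
    check_h2 = (1.0 - g2) * h2                   # reset activation used at the next step

    output = np.concatenate([h1, h2])            # fed to the output distribution
    return h2, hat_h1, hat_h2, check_h2, output

# Tiny smoke test with matching dimensions (d units per layer, d-dim embedding and context).
d = 4
rng = np.random.default_rng(0)
params = {
    "W1_h": rng.normal(scale=0.1, size=(d, 4 * d)),
    "W1_g": rng.normal(scale=0.1, size=(d, 4 * d)),
    "W2_h": rng.normal(scale=0.1, size=(d, 3 * d)),
    "W2_g": rng.normal(scale=0.1, size=(d, 3 * d)),
}
state = [np.zeros(d) for _ in range(4)]          # h2, hat_h1, hat_h2, check_h2
h2, hat_h1, hat_h2, check_h2, out = bi_scale_step(
    np.ones(d), np.ones(d), state[0], state[1], state[2], state[3], params)
```

Note how the faster layer's gate `g1` plays three roles at once: it zeroes the faster state that is carried forward, it opens the leaky update of the slower layer, and it decides when the slower layer sees the faster layer's summary of the current chunk.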
# 5 Experiment Settings
For evaluation, we represent a source sentence as a sequence of subword symbols extracted by byte-pair encoding (BPE, Sennrich et al. (2015)) and a target sentence either as a sequence of BPE-based symbols or as a sequence of characters.
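As a small illustration of the character-level target representation (a helper of our own, not part of the paper's pipeline), a target sentence can be mapped to character indices as follows; on the source side, the corresponding unit would instead be a BPE subword symbol.

```python
def to_char_ids(sentence, vocab):
    """Map a target sentence to character indices, growing the vocabulary on the fly."""
    ids = []
    for ch in sentence:
        if ch not in vocab:
            vocab[ch] = len(vocab)
        ids.append(vocab[ch])
    return ids

vocab = {"<eos>": 0}
target = "Zwei Lichtersets so nah an einander ."
ids = to_char_ids(target, vocab) + [vocab["<eos>"]]   # one symbol per character, plus <eos>
```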
Corpora and Preprocessing We use all available parallel corpora for four language pairs from WMT'15: En-Cs, En-De, En-Ru and En-Fi. They consist of 12.1M, 4.5M, 2.3M and 2M sentence pairs, respectively. We tokenize each corpus using a tokenization script included in Moses.4 We only use the sentence pairs in which the source side is up to 50 subword symbols long and the target side is up to either 100 subword symbols or 500 characters. We do not use any monolingual corpus.
4Although tokenization is not necessary for character-level modelling, we tokenize all the target-side corpora to make comparison against word-level modelling easier.
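A sketch of the length filter described above. The thresholds come from the text; the assumption that the 100-subword limit applies to subword-level targets and the 500-character limit to character-level targets is our reading of the sentence.

```python
def keep_pair(src_subwords, tgt_symbols, target_level, max_src=50):
    """Keep a sentence pair only if it satisfies the length limits used for training."""
    max_tgt = 100 if target_level == "bpe" else 500   # subword vs. character targets
    return len(src_subwords) <= max_src and len(tgt_symbols) <= max_tgt
```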
Table 1: BLEU scores of the subword-level, character-level base and character-level bi-scale decoders for both single models and ensembles. The best scores among the single models per language pair are bold-faced, and those among the ensembles are underlined. When available, we report the median value, and the minimum and maximum values as a subscript and a superscript, respectively. (*) http://matrix.statmt.org/ as of 11 March 2016 (constrained only). (1) Freitag et al. (2014). (2, 6) Williams et al. (2015). (3, 5) Durrani et al. (2014). (4) Haddow et al. (2015). (7) Rubino et al. (2015).
For all the pairs other than En-Fi, we use newstest-2013 as a development set, and newstest-2014 (Test1) and newstest-2015 (Test2) as test sets. For En-Fi, we use newsdev-2015 and newstest-2015 as development and test sets, respectively.
The beam widths are 5 and 15, respectively, for the subword-level and character-level decoders; they were chosen based on the translation quality on the development set. The translations are evaluated using BLEU.5
Models and Training We test three model settings: (1) BPE→BPE, (2) BPE→Char (base) and (3) BPE→Char (bi-scale). The latter two differ in the type of recurrent neural network we use. We use GRUs for the encoder in all the settings. We used GRUs for the decoders in the first two settings, (1) and (2), while the proposed bi-scale recurrent network was used in the last setting, (3). The encoder has 512 hidden units for each direction (forward and reverse), and the decoder has 1024 hidden units per layer.
Multilayer Decoder and Soft-Alignment Mechanism When the decoder is a multilayer recurrent neural network (including a stacked network as well as the proposed bi-scale network), the decoder outputs multiple hidden vectors $\{h^1, \ldots, h^L\}$ for $L$ layers at a time. This allows an extra degree of freedom in the soft-alignment mechanism ($f_{\text{score}}$ in Eq. (3)). We evaluate two alternatives: (1) using only $h^L$ (the slower layer) and (2) using the concatenation of all the layers.
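The two alternatives amount to choosing a different query vector for the attention scores. Below is a small illustrative sketch; the additive scoring form and the weight names are standard assumptions rather than the paper's exact $f_{\text{score}}$.

```python
import numpy as np

def soft_alignment(source_states, decoder_layers, W_src, W_dec, v, use_only_slower=True):
    """Compute soft-alignment weights over source positions.

    `decoder_layers` is the list [h^1, ..., h^L]; the query is either the
    slower (last) layer alone or the concatenation of all layers, which is
    the comparison described above.  W_dec must match the chosen query size.
    """
    query = decoder_layers[-1] if use_only_slower else np.concatenate(decoder_layers)
    scores = np.array([v @ np.tanh(W_src @ s + W_dec @ query) for s in source_states])
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()
```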
We train each model using stochastic gradient descent with Adam (Kingma and Ba, 2014). Each update is computed using a minibatch of 128 sentence pairs. The norm of the gradient is clipped with a threshold of 1 (Pascanu et al., 2013).
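A compact sketch of one update under this recipe (the gradient computation itself is omitted, and this mirrors the description rather than the authors' Theano code):

```python
import numpy as np

def clip_global_norm(grads, threshold=1.0):
    """Rescale the gradients when their global L2 norm exceeds the threshold."""
    norm = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    scale = threshold / norm if norm > threshold else 1.0
    return [g * scale for g in grads]

def adam_step(params, grads, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Apply one Adam update in place to a list of float parameter arrays.

    `lr` is a common default; the text does not state the learning rate used.
    """
    state["t"] += 1
    for i, (p, g) in enumerate(zip(params, grads)):
        state["m"][i] = b1 * state["m"][i] + (1 - b1) * g
        state["v"][i] = b2 * state["v"][i] + (1 - b2) * g * g
        m_hat = state["m"][i] / (1 - b1 ** state["t"])
        v_hat = state["v"][i] / (1 - b2 ** state["t"])
        p -= lr * m_hat / (np.sqrt(v_hat) + eps)

# state = {"t": 0, "m": [np.zeros_like(p) for p in params], "v": [np.zeros_like(p) for p in params]}
# Per minibatch of 128 sentence pairs: grads = clip_global_norm(grads, 1.0); adam_step(params, grads, state)
```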
Ensembles We also evaluate an ensemble of neural machine translation models and compare its performance against the state-of-the-art phrase-based translation systems on all four language pairs. We decode from an ensemble by taking the average of the output probabilities at each step.
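Decoding from an ensemble only changes how the next-symbol distribution is obtained at each step. A sketch with a hypothetical per-model `step` interface:

```python
import numpy as np

def ensemble_step(models, states, prev_symbol, context):
    """Average the output distributions of the ensemble members for one step."""
    probs, new_states = [], []
    for model, state in zip(models, states):
        p, s = model.step(prev_symbol, context, state)   # hypothetical single-model API
        probs.append(p)
        new_states.append(s)
    return np.mean(probs, axis=0), new_states
```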
Decoding and Evaluation We use beam search to approximately find the most likely translation given a source sentence.
5We used the multi-bleu.perl script from Moses.
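For completeness, a plain beam-search sketch of the kind used here; the `start` and `step` methods are hypothetical single-sentence interfaces, and the beam widths of 5 and 15 mentioned above would be passed as `beam_width`.

```python
import numpy as np

def beam_search(model, source, beam_width, max_len, eos_id):
    """Plain beam search over target symbols (subwords or characters)."""
    state = model.start(source)                  # hypothetical: encode source, init decoder
    beams = [(0.0, [], state)]                   # (log-prob, symbols so far, decoder state)
    finished = []
    for _ in range(max_len):
        candidates = []
        for logp, symbols, state in beams:
            prev = symbols[-1] if symbols else eos_id   # <eos> doubles as the start symbol here
            probs, new_state = model.step(prev, state)  # distribution over the next symbol
            for k in np.argsort(probs)[-beam_width:]:
                candidates.append((logp + float(np.log(probs[k] + 1e-12)),
                                   symbols + [int(k)], new_state))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = []
        for cand in candidates:
            (finished if cand[1][-1] == eos_id else beams).append(cand)
            if len(beams) == beam_width:
                break
        if not beams:
            break
    best = max(finished or beams, key=lambda c: c[0])
    return best[1]
```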
Figure 3: Alignment matrix of a test example from En-De using the BPE→Char (bi-scale) model.
# 6 Quantitative Analysis
Slower Layer for Alignment On En-De, we test which layer of the decoder should be used for computing soft-alignments. In the case of the subword-level decoder, we observed no difference between choosing either of the two layers of the decoder and using the concatenation of all the layers (Table 1 (a–b)). On the other hand, with the character-level decoder, we noticed an improvement when only the slower layer ($h^2$) was used for the soft-alignment mechanism (Table 1 (c–g)). This suggests that the soft-alignment mechanism benefits from aligning a larger chunk in the target with a subword unit in the source, and we use only the slower layer for all the other language pairs.
Single Models In Table 1, we present a comprehensive report of the translation quality of (1) the subword-level decoder, (2) the character-level base decoder and (3) the character-level bi-scale decoder, for all the language pairs. We see that both types of character-level decoder outperform the subword-level decoder for En-Cs and En-Fi quite significantly. On En-De, the character-level base decoder outperforms both the subword-level decoder and the character-level bi-scale decoder, validating the effectiveness of character-level modelling. On En-Ru, among the single models, the character-level decoders outperform the subword-level decoder, but in general we observe that all three alternatives work comparably to each other.
These results clearly suggest that it is indeed possible to do character-level translation without explicit segmentation. In fact, what we observed is that character-level translation often surpasses the translation quality of word-level translation. Of course, we note once again that our experiment is restricted to using an unsegmented character sequence at the decoder only, and a further exploration toward replacing the source sentence with an unsegmented character sequence is needed.
Ensembles Each ensemble was built using eight independent models. The first observation we make is that, in all the language pairs, neural machine translation performs comparably to, or often better than, the state-of-the-art non-neural translation system. Furthermore, the character-level decoders outperform the subword-level decoder in all the cases.
# 7 Qualitative Analysis
(1) Can the character-level decoder generate a long, coherent sentence? A translation in characters is dramatically longer than one in words, which might make it more difficult for a recurrent neural network to generate a coherent sentence in characters. This belief turned out to be false. As shown in Fig. 2 (left), there is no significant difference between the subword-level and character-level decoders, even though the lengths of the generated translations are generally 5–10 times longer in characters.
(2) Does the character-level decoder help with rare words? One advantage of character-level modelling is that it can model the composition of any character sequence, thereby better modelling rare morphological variants. We empirically confirm this by observing the growing gap in the average negative log-probability of words between the subword-level and character-level decoders as the frequency of the words decreases. This is shown in Fig. 2 (right) and explains one potential cause behind the success of character-level decoding in our experiments (we define diff(x, y) = x − y).
(3) Can the character-level decoder soft-align between a source word and a target character? In Fig. 3 (left), we show an example soft-alignment for the source sentence "Two sets of lights so close to one another". It is clear that the character-level translation model well captured the alignment between the source subwords and target characters. We observe that the character-level decoder correctly aligns to "lights" and "sets of" when generating the German compound word "Lichtersets" (see Fig. 3 (right) for the zoomed-in version). This type of behaviour happens similarly between "one another" and "einander". Of course, this does not mean that there exists an alignment between a source word and a target character. Rather, it suggests that the internal state of the character-level decoder, base or bi-scale, well captures the meaningful chunk of characters, allowing the model to map it to a larger chunk (subword) in the source.
(4) How fast is the decoding speed of the character-level decoder? We evaluate the decoding speed of the subword-level base, character-level base and character-level bi-scale decoders on the newstest-2013 corpus (En-De) with a single Titan X GPU. The subword-level base decoder generates 31.9 words per second, and the character-level base decoder and character-level bi-scale decoder generate 27.5 words per second and 25.6 words per second, respectively. Note that this is evaluated in an online setting, performing consecutive translation, where only one sentence is translated at a time. Translating in a batch setting could differ from these results.
# 8 Conclusion
In this paper, we addressed a fundamental question of whether a recently proposed neural machine translation system can directly handle translation at the level of characters without any word segmentation. We focused on the target side, in which a decoder was asked to generate one character at a time, while soft-aligning between a target character and a source subword. Our extensive experiments on four language pairs (En-Cs, En-De, En-Ru and En-Fi) strongly suggest that it is indeed possible for neural machine translation to translate at the level of characters, and that it actually benefits from doing so.
Our result has one limitation: we used subword symbols on the source side. This has allowed a more fine-grained analysis, but in the future, a setting where the source side is also represented as a character sequence should be investigated.
# Acknowledgments
The authors would like to thank the developers of Theano (Team et al., 2016). We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs, CIFAR and Samsung. KC thanks the support by Facebook, Google (Google Faculty Award 2016) and NVIDIA (GPU Center of Excellence 2015-2016). JC thanks Orhan Firat for his constructive feedback.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR).
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166.
Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. 2001. A neural probabilistic language model. In Advances in Neural Information Processing Systems, pages 932–938.
Geert Booij. 2012. The grammar of words: An introduction to linguistic morphology. Oxford University Press.
Jan A Botha and Phil Blunsom. 2014. Compositional morphology for word representations and language modelling. In ICML 2014.
Rohan Chitnis and John DeNero. 2015. Variable-length word encodings for neural translation models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2088–2093.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), October.
Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2015. Gated feedback recurrent neural networks. In Proceedings of the 32nd International Conference on Machine Learning.
Marta R. Costa-Jussà and José A. R. Fonollosa. 2016. Character-based neural machine translation. arXiv preprint arXiv:1603.00810.
Mathias Creutz and Krista Lagus. 2005. Unsupervised morpheme segmentation and morphology induction from text corpora using Morfessor 1.0. Helsinki University of Technology.
Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the ACL 2014 Ninth Workshop on Statistical Machine Translation, Baltimore, MD, USA, pages 97–104.
Mikel L. Forcada and Ramón P. Ñeco. 1997. Recursive hetero-associative memories for translation. In International Work-Conference on Artificial Neural Networks, pages 453–462. Springer.
Markus Freitag, Stephan Peitz, Joern Wuebker, Hermann Ney, Matthias Huck, Rico Sennrich, Nadir Durrani, Maria Nadejde, Philip Williams, Philipp Koehn, et al. 2014. EU-BRIDGE MT: Combined machine translation.
Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language processing from bytes. arXiv preprint arXiv:1512.00103.