doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1610.10099 | 34 | Table 5 contains some of the unaltered generated translations from the ByteNet that highlight reordering and other phenomena such as transliteration. The character-level aspect of the model makes post-processing unnecessary in principle. We further visualize the sensitivity of the ByteNet's predictions to specific source and target inputs using gradient-based visualization (Simonyan et al., 2013). Figure 6 represents a heatmap of the magnitude of the gradients of the generated outputs with respect to the source and target inputs. For visual clarity, we sum the gradients for all the characters that make up each word and normalize the values along each column. In contrast with the attentional pooling mechanism (Bahdanau et al., 2014), this general technique allows us to inspect not just dependencies of the outputs on the source inputs, but also dependencies of the outputs on previous target inputs, or on any other neural network layers.
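A minimal sketch of this kind of gradient-based sensitivity map (in the spirit of Simonyan et al., 2013), assuming a differentiable `model(src_emb, tgt_emb)` that returns per-step logits; the word-span bookkeeping and the normalization are illustrative choices, not the authors' code:

```python
import torch

def word_level_saliency(model, src_emb, tgt_emb, step, src_word_spans):
    """Gradient magnitude of the character predicted at `step` w.r.t. the
    source embeddings, summed over the characters of each source word."""
    src_emb = src_emb.clone().requires_grad_(True)
    logits = model(src_emb, tgt_emb)               # (1, tgt_len, vocab)
    score = logits[0, step].max()                  # logit of the argmax character
    grad, = torch.autograd.grad(score, src_emb)    # (1, src_len, dim)
    per_char = grad.abs().sum(dim=-1)[0]           # one magnitude per source character
    return torch.stack([per_char[s:e].sum() for s, e in src_word_spans])

# Stacking one such row per output word and dividing each column by its maximum
# yields a heatmap comparable to the one in Figure 6.
```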
# References
Ba, Lei Jimmy, Kiros, Ryan, and Hinton, Geoffrey E. Layer normalization. CoRR, abs/1607.06450, 2016.
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. | 1610.10099#34 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 35 | Bengio, Yoshua, Ducharme, Réjean, Vincent, Pascal, and Jauvin, Christian. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003.
Chen, Liang-Chieh, Papandreou, George, Kokkinos, Iasonas, Murphy, Kevin, and Yuille, Alan L. Semantic image segmentation with deep convolutional nets and fully connected crfs. CoRR, abs/1412.7062, 2014.
Cho, Kyunghyun, van Merrienboer, Bart, Gülçehre, Çağlar, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua.
Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014.
[Figure 6 word labels, garbled by extraction: the English source about "around 3000 demonstrators ... official residency of Prime Minister Nawaz Sharif" and its German target "Gleichzeitig ... etwa 3000 Demonstranten ... offizielle ... Premierministers Nawaz Sharif"] | 1610.10099#35 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 36 | Chung, Junyoung, Gülçehre, Çağlar, Cho, Kyunghyun, and Bengio, Yoshua. Gated feedback recurrent neural networks. CoRR, abs/1502.02367, 2015.
Chung, Junyoung, Ahn, Sungjin, and Bengio, Yoshua. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704, 2016a.
Chung, Junyoung, Cho, Kyunghyun, and Bengio, Yoshua. A character-level decoder without explicit segmentation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, 2016b.
Freitag, Markus, Peitz, Stephan, Wuebker, Joern, Ney, Hermann, Huck, Matthias, Sennrich, Rico, Durrani, Nadir, Nadejde, Maria, Williams, Philip, Koehn, Philipp, Herrmann, Teresa, Cho, Eunah, and Waibel, Alex. EU-BRIDGE MT: Combined machine translation. In ACL 2014 Ninth Workshop on Statistical Machine Translation, 2014.
Graves, Alex. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013. | 1610.10099#36 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 37 | Graves, Alex. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013.
Ha, D., Dai, A., and Le, Q. V. HyperNetworks. ArXiv e-prints, September 2016.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Identity mappings in deep residual networks. CoRR, abs/1603.05027, 2016.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 1997.
Hochreiter, Sepp, Bengio, Yoshua, and Frasconi, Paolo. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In Kolen, J. and Kremer, S. (eds.), Field Guide to Dynamical Recurrent Networks. IEEE Press, 2001.
Hutter, Marcus. The human knowledge compression contest. http://prize.hutter1.net/, 2012. | 1610.10099#37 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 38 | Hutter, Marcus. The human knowledge compression contest. http://prize.hutter1.net/, 2012.
Figure 6. Magnitude of gradients of the predicted outputs with respect to the source and target inputs. The gradients are summed for all the characters in a given word. In the bottom heatmap the magnitudes are nonzero on the diagonal, since the prediction of a target character depends highly on the preceding target character in the same word.
Kaiser, Łukasz and Bengio, Samy. Can active memory replace attention? Advances in Neural Information Processing Systems, 2016.
Kalchbrenner, Nal and Blunsom, Phil. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 2013.
Kalchbrenner, Nal, Danihelka, Ivo, and Graves, Alex. Grid long short-term memory. International Conference on Learning Representations, 2016a.
Kalchbrenner, Nal, van den Oord, Aaron, Simonyan, Karen, Danihelka, Ivo, Vinyals, Oriol, Graves, Alex, and Kavukcuoglu, Koray. Video pixel networks. CoRR, abs/1610.00527, 2016b. | 1610.10099#38 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 39 | Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
Kudo, Taku, Kazawa, Hideto, Stevens, Keith, Kurian, George, Patil, Nishant, Wang, Wei, Young, Cliff, Smith, Jason, Riesa, Jason, Rudnick, Alex, Vinyals, Oriol, Corrado, Greg, Hughes, Macduff, and Dean, Jeffrey. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016a.
Luong, Minh-Thang and Manning, Christopher D. Achieving open vocabulary neural machine translation with hybrid word-character models. In ACL, 2016.
Luong, Minh-Thang, Pham, Hieu, and Manning, Christopher D. Effective approaches to attention-based neural machine translation. In EMNLP, September 2015.
Mikolov, Tomas, Karafiát, Martin, Burget, Lukáš, Černocký, Jan, and Khudanpur, Sanjeev. Recurrent neural network based language model. In INTERSPEECH 2010, pp. 1045–1048, 2010. | 1610.10099#39 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 40 | Wu, Yuhuai, Zhang, Saizheng, Zhang, Ying, Bengio, Yoshua, and Salakhutdinov, Ruslan. On multiplicative integration with recurrent neural networks. CoRR, abs/1606.06630, 2016b.
Yu, Fisher and Koltun, Vladlen. Multi-scale context aggregation by dilated convolutions. CoRR, abs/1511.07122, 2015.
Zhou, Jie, Cao, Ying, Wang, Xuguang, Li, Peng, and Xu, Wei. Deep recurrent models with fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016.
Rocki, Kamil. Recurrent memory array structures. CoRR, abs/1607.03085, 2016.
Simonyan, Karen, Vedaldi, Andrea, and Zisserman, Andrew. Deep inside convolutional networks: Visualising image classification models and saliency maps. CoRR, abs/1312.6034, 2013.
Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway networks. CoRR, abs/1505.00387, 2015. | 1610.10099#40 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 41 | Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.
van den Oord, Aaron, Dieleman, Sander, Zen, Heiga, Simonyan, Karen, Vinyals, Oriol, Graves, Alex, Kalchbrenner, Nal, Senior, Andrew, and Kavukcuoglu, Koray. Wavenet: A generative model for raw audio. CoRR, abs/1609.03499, 2016a.
and Kavukcuoglu, Koray. Pixel recurrent neural networks. In ICML, volume 48, pp. 1747–1756, 2016b.
Williams, Philip, Sennrich, Rico, Nadejde, Maria, Huck, Matthias, and Koehn, Philipp. Edinburgh's syntax-based systems at WMT 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, 2015. | 1610.10099#41 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.07272 | 0 | arXiv:1610.07272v1 [cs.CL] 24 Oct 2016
# Bridging Neural Machine Translation and Bilingual Dictionaries
Jiajun Zhang† and Chengqing Zong†‡. †University of Chinese Academy of Sciences, Beijing, China; National Laboratory of Pattern Recognition, CASIA, Beijing, China; ‡CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China. {jjzhang,cqzong}@nlpr.ia.ac.cn
# Abstract | 1610.07272#0 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 0 | arXiv:1610.07629v5 [cs.CV] 9 Feb 2017
Published as a conference paper at ICLR 2017
# A LEARNED REPRESENTATION FOR ARTISTIC STYLE
Vincent Dumoulin & Jonathon Shlens & Manjunath Kudlur Google Brain, Mountain View, CA [email protected], [email protected], [email protected]
# ABSTRACT
The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style.
# INTRODUCTION | 1610.07629#0 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 1 | # Abstract
Neural Machine Translation (NMT) has become the new state-of-the-art in several language pairs. However, it remains a challenging problem how to integrate NMT with a bilingual dictionary which mainly contains words rarely or never seen in the bilingual training data. In this paper, we propose two methods to bridge NMT and the bilingual dictionaries. The core idea behind is to design novel models that transform the bilingual dictionaries into adequate sentence pairs, so that NMT can distil latent bilingual mappings from the ample and repetitive phenomena. One method leverages a mixed word/character model and the other attempts at synthesizing parallel sentences guaranteeing massive occurrence of the translation lexicon. Extensive experiments demonstrate that the proposed methods can remarkably improve the translation quality, and most of the rare words in the test sentences can obtain correct translations if they are covered by the dictionary.
Typically, NMT adopts the encoder-decoder architecture which consists of two recurrent neural networks. The encoder network models the semantics of the source sentence and transforms the source sentence into the context vector representation, from which the decoder network generates the target translation word by word. | 1610.07272#1 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 1 | # INTRODUCTION
A pastiche is an artistic work that imitates the style of another one. Computer vision and more recently machine learning have a history of trying to automate pastiche, that is, render an image in the style of another one. This task is called style transfer, and is closely related to the texture synthesis task. While the latter tries to capture the statistical relationship between the pixels of a source image which is assumed to have a stationary distribution at some scale, the former does so while also attempting to preserve some notion of content. | 1610.07629#1 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 2 | One important feature of NMT is that each word in the vocabulary is mapped into a low-dimensional continuous representation (word embedding). The use of continuous representations enables NMT to learn latent bilingual mappings for accurate translation and explore the statistical similarity between words (e.g. desk and table) as well. As a disadvantage of the statistical models, NMT can learn good word embeddings and accurate bilingual mappings only when the words occur frequently in the parallel sentence pairs. However, low-frequency words are ubiquitous, especially when the training data is not enough (e.g. low-resource language pairs). Fortunately, in many language pairs and domains, we have handmade bilingual dictionaries which mainly contain words rarely or never seen in the training corpus. Therefore, it remains a big challenge how to bridge NMT and the bilingual dictionaries.
# 1 Introduction | 1610.07272#2 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 2 | On the computer vision side, Efros & Leung (1999) and Wei & Levoy (2000) attempt to "grow" textures one pixel at a time using non-parametric sampling of pixels in an examplar image. Efros & Freeman (2001) and Liang et al. (2001) extend this idea to "growing" textures one patch at a time, and Efros & Freeman (2001) uses the approach to implement "texture transfer", i.e. transfering the texture of an object onto another one. Kwatra et al. (2005) approaches the texture synthesis problem from an energy minimization perspective, progressively refining the texture using an EM-like algorithm. Hertzmann et al. (2001) introduces the concept of "image analogies": given a pair of "unfiltered" and "filtered" versions of an examplar image, a target image is processed to create an analogous "filtered" result. More recently, Frigo et al. (2016) treats style transfer as a local texture transfer (using an adaptive patch partition) followed by a global color transfer, and Elad & Milanfar (2016) extends Kwatra's energy-based method into a style transfer algorithm by taking content similarity into account.
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 3 | 1
# 1 Introduction
Due to its superior ability in modelling the end-to-end translation process, neural machine translation (NMT), recently proposed by (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014), has become the novel paradigm and achieved the new state-of-the-art translation performance for several language pairs, such as English-to-French, English-to-German and Chinese-to-English (Sutskever et al., 2014; Bahdanau et al., 2014; Luong et al., 2015b; Sennrich et al., 2015b; Wu et al., 2016).
Recently, Arthur et al. (2016) attempt at incorporating discrete translation lexicons into NMT. The main idea of their method is leveraging the discrete translation lexicons to positively influence the probability distribution of the output words in the NMT softmax layer. However, their approach only addresses the translation lexicons which are in the restricted vocabulary1 of NMT. The out-of-vocabulary (OOV) words are out of their consideration.
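As a rough illustration of this kind of lexicon integration (a generic sketch, not necessarily Arthur et al.'s exact formulation), the dictionary can be turned into a probability over target words and added as a bias inside the softmax; the smoothing constant below is an assumption:

```python
import torch
import torch.nn.functional as F

def lexicon_biased_log_probs(decoder_logits, lexicon_probs, eps=1e-6):
    """decoder_logits: (batch, vocab) scores from the NMT output layer.
    lexicon_probs: (batch, vocab) target-word probabilities derived from the
    bilingual dictionary (e.g. spread over source words via attention weights)."""
    return F.log_softmax(decoder_logits + torch.log(lexicon_probs + eps), dim=-1)
```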
1NMT usually keeps only the words whose occurrence is more than a threshold (e.g. 10), since very rare words can not yield good embeddings and large vocabulary leads to high computational complexity. | 1610.07272#3 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 3 | On the machine learning side, it has been shown that a trained classifier can be used as a feature extractor to drive texture synthesis and style transfer. Gatys et al. (2015a) uses the VGG-19 network (Simonyan & Zisserman, 2014) to extract features from a texture image and a synthesized texture. The two sets of features are compared and the synthesized texture is modified by gradient descent so that the two sets of features are as close as possible. Gatys et al. (2015b) extends this idea to style transfer by adding the constraint that the synthesized image also be close to a content image with respect to another set of features extracted by the trained VGG-19 classifier.
While very flexible, this algorithm is expensive to run due to the optimization loop being carried. Ulyanov et al. (2016a), Li & Wand (2016) and Johnson et al. (2016) tackle this problem by introducing a feedforward style transfer network, which is trained to go from content to pastiche image in one pass. However, in doing so some of the flexibility of the original algorithm is lost: the style transfer network is tied to a single style, which means that separate networks have to be trained
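A compact sketch of the optimization-based procedure described above, assuming `features(img)` returns a list of feature maps from a trained, frozen classifier (e.g. a VGG); the layer choices, the use of the last feature map for content, and the loss weights are illustrative assumptions rather than Gatys et al.'s exact configuration:

```python
import torch
import torch.nn.functional as F

def gram(feat):                        # (1, C, H, W) -> (C, C) Gram matrix
    _, c, h, w = feat.shape
    flat = feat.view(c, h * w)
    return flat @ flat.t() / (c * h * w)

def optimize_pastiche(features, content_img, style_img, steps=300, style_weight=1e4):
    with torch.no_grad():
        content_feats = features(content_img)
        style_grams = [gram(f) for f in features(style_img)]
    pastiche = content_img.clone().requires_grad_(True)
    opt = torch.optim.LBFGS([pastiche])

    def closure():
        opt.zero_grad()
        feats = features(pastiche)
        content_loss = F.mse_loss(feats[-1], content_feats[-1])
        style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
        loss = content_loss + style_weight * style_loss
        loss.backward()
        return loss

    for _ in range(steps):             # the expensive per-image optimization loop
        opt.step(closure)
    return pastiche.detach()
```

A feedforward style transfer network replaces this per-image loop with a single forward pass of a network trained against the same losses.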
Published as a conference paper at ICLR 2017 | 1610.07629#3 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 4 | [Figure 1 contents, garbled by extraction: a Chinese test sentence with pinyin "zhengzai wei ziji de chuangyi shifang lihua" and English reference "was setting off fireworks for its creativity"; the bilingual dictionary entry mapping the rare word lihua to "fireworks"; (1) the mixed word/character relabeling of that word into boundary-marked characters (<B>... <E>...); (2) the pseudo sentence pair synthesis model with synthesized pairs such as "the fireworks light up the night", "fireworks open in the sky", "they held talks on fireworks" and "fireworks product for London Olympics"; and the resulting translations "is trying to release their own creative fireworks" (mixed word/character) and "is releasing their own creative fireworks" (NMT trained on mixed corpus).]
Figure 1: The framework of our proposed methods.
In this paper, we aim at making full use of all the bilingual dictionaries, especially the ones covering the rare or OOV words. Our basic idea is to transform the low-frequency word pair in bilingual dictionaries into adequate sequence pairs which guarantee the frequent occurrence of the word pair, so that NMT can learn translation mappings between the source word and the target word. | 1610.07272#4 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07272 | 5 | To achieve this goal, we propose two methods, as shown in Fig. 1. In the test sentence, the Chinese word lihua appears only once in our training data and the baseline NMT cannot correctly translate this word. Fortunately, our bilingual dictionary contains this translation lexicon. Our first method extends the mixed word/character model proposed by Wu et al. (2016) to re-label the rare words in both of the dictionary and training data with character sequences in which characters are now frequent and the character translation mappings can be learnt by NMT. Instead of backing off words into characters, our second method is well designed to synthesize adequate pseudo sentence pairs containing the translation lexicon, allowing NMT to learn the word translation mappings.
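The mixed word/character relabeling can be pictured with a small sketch. The boundary markers follow the <B>/<E> convention shown in Figure 1 (with an assumed <M> marker for word-internal characters); the vocabulary handling is an illustrative assumption:

```python
def mixed_word_char(tokens, frequent_vocab):
    """Rewrite tokens outside the frequent vocabulary as boundary-marked characters."""
    out = []
    for tok in tokens:
        if tok in frequent_vocab or len(tok) == 1:
            out.append(tok)
        else:
            chars = list(tok)
            out.append("<B>" + chars[0])
            out.extend("<M>" + c for c in chars[1:-1])
            out.append("<E>" + chars[-1])
    return out

# A two-character rare word such as the Chinese word for "fireworks" becomes
# two tokens, "<B>x" and "<E>y", so its characters stay frequent in the corpus.
```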
We make the following contributions in this paper:
⢠We propose a low-frequency to high- frequency framework to bridge NMT and the bilingual dictionaries.
⢠We propose and investigate two methods to utilize the bilingual dictionaries. One extends the mixed word/character model and the other designs a pseudo sentence pair syn- thesis model. | 1610.07272#5 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 5 | (b) The style representation learned via conditional instance normalization permits the arbitrary combination of artistic styles. Each pastiche in the sequence corresponds to a different step in interpolating between the γ and β values associated with two styles the model was trained on.
Figure 1: Pastiches produced by a style transfer network trained on 32 styles chosen for their variety.
for every style being modeled. Subsequent work has brought some performance improvements to style transfer networks, e.g. with respect to color preservation (Gatys et al., 2016a) or style transfer quality (Ulyanov et al., 2016b), but to our knowledge the problem of the single-purpose nature of style transfer networks remains untackled.
We think this is an important problem that, if solved, would have both scientiï¬c and practical im- portance. First, style transfer has already found use in mobile applications, for which on-device processing is contingent upon the models having a reasonable memory footprint. More broadly, building a separate network for each style ignores the fact that individual paintings share many com- mon visual elements and a true model that captures artistic style would be able to exploit and learn from such regularities. Furthermore, the degree to which an artistic styling model might general- ize across painting styles would directly measure our ability to build systems that parsimoniously capture the higher level features and statistics of photographs and images (Simoncelli & Olshausen, 2001). | 1610.07629#5 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 6 | • We propose and investigate two methods to utilize the bilingual dictionaries. One extends the mixed word/character model and the other designs a pseudo sentence pair synthesis model.
⢠The extensive experiments on Chinese-to- English translation show that our proposed methods signiï¬cantly outperform the strong attention-based NMT. We further ï¬nd that most of rare words can be correctly trans- lated, as long as they are covered by the bilin- gual dictionary.
# 2 Neural Machine Translation
Our framework bridging NMT and the discrete bilingual dictionaries can be applied in any neural machine translation model. Without loss of gen- erality, we use the attention-based NMT proposed by (Luong et al., 2015b), which utilizes stacked Long-Short Term Memory (LSTM, (Hochreiter and Schmidhuber, 1997)) layers for both encoder and decoder as illustrated in Fig. 2. | 1610.07272#6 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 6 | In this work, we show that a simple modiï¬cation of the style transfer network, namely the in- troduction of conditional instance normalization, allows it to learn multiple styles (Figure 1a).We demonstrate that this approach is ï¬exible yet comparable to single-purpose style transfer networks, both qualitatively and in terms of convergence properties. This model reduces each style image into a point in an embedding space. Furthermore, this model provides a generic representation for artistic styles that seems ï¬exible enough to capture new artistic styles much faster than a single-purpose net2
[Figure 2 diagram label: the trained classifier shown is a VGG-16.]
Figure 2: Style transfer network training diagram (Johnson et al., 2016; Ulyanov et al., 2016a). A pastiche image is produced by feeding a content image through the style transfer network. The two images, along with a style image, are passed through a trained classifier, and the resulting intermediate representations are used to compute the content loss Lc and style loss Ls. The parameters of the classifier are kept fixed throughout training.
work. Finally, we show that the embedding space representation permits one to arbitrarily combine artistic styles in novel ways not previously observed (Figure 1b).
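A minimal sketch of the conditional instance normalization idea referred to above: the convolutional weights are shared across styles, and a style is selected by indexing per-style scale (gamma) and shift (beta) parameters. The module layout and shapes are assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class ConditionalInstanceNorm2d(nn.Module):
    def __init__(self, num_styles, num_channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_styles, num_channels))
        self.beta = nn.Parameter(torch.zeros(num_styles, num_channels))

    def forward(self, x, style_idx):
        # x: (batch, channels, height, width); normalize each feature map, then
        # apply the scale and shift belonging to the requested style.
        mean = x.mean(dim=(2, 3), keepdim=True)
        var = ((x - mean) ** 2).mean(dim=(2, 3), keepdim=True)
        x = (x - mean) / torch.sqrt(var + 1e-5)
        g = self.gamma[style_idx].view(1, -1, 1, 1)
        b = self.beta[style_idx].view(1, -1, 1, 1)
        return g * x + b

# Interpolating between two styles, as in Figure 1b, amounts to mixing their
# (gamma, beta) rows, e.g. g = a * gamma[i] + (1 - a) * gamma[j].
```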
# 2 STYLE TRANSFER WITH DEEP NETWORKS | 1610.07629#6 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 7 | The encoder-decoder NMT first encodes the source sentence X = (x_1, x_2, ..., x_{T_x}) into a sequence of context vectors C = (h_1, h_2, ..., h_{T_x}) whose size varies with respect to the source sentence length. Then, the encoder-decoder NMT decodes from the context vectors C and generates the target translation Y = (y_1, y_2, ..., y_{T_y}) one word each time by maximizing the probability of p(y_i|y_{<i}, C). Note that x_j (y_i) is the word embedding corresponding to the jth (ith) word in the source (target) sentence. Next, we briefly review the en- [Figure 2 sketch, garbled by extraction: stacked encoder states h^1_t ... h^m_t over inputs x_1 ... x_{T_x}, feeding a stacked decoder that begins from a start symbol.]
Figure 2: The architecture of the attention-based NMT which has m stacked LSTM layers for the encoder and l stacked LSTM layers for the decoder.
coder introducing how to obtain C and the decoder addressing how to calculate p(yi|y<i, C). | 1610.07272#7 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
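A small sketch of one attention step of the model reviewed above (the weighted-sum context and the attention output that equations (2)-(4) in the next chunk formalize); the dot-product score and the dimensions are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def attention_step(decoder_state, encoder_states, W_c, W_out):
    """decoder_state: (d,) top-layer decoder state z_i; encoder_states: (Tx, d)
    source context vectors h_j; W_c: (d, 2d); W_out: (vocab, d)."""
    scores = encoder_states @ decoder_state                        # unnormalized alignment scores
    alpha = F.softmax(scores, dim=0)                               # alpha_ij
    context = (alpha.unsqueeze(1) * encoder_states).sum(0)         # c_i = sum_j alpha_ij h_j
    attn = torch.tanh(W_c @ torch.cat([decoder_state, context]))   # attention output z~_i
    return F.log_softmax(W_out @ attn, dim=0), alpha               # distribution over target words
```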
1610.07629 | 7 | # 2 STYLE TRANSFER WITH DEEP NETWORKS
Style transfer can be defined as finding a pastiche image p whose content is similar to that of a content image c but whose style is similar to that of a style image s. This objective is by nature vaguely defined, because similarity in content and style are themselves vaguely defined.
The neural algorithm of artistic style proposes the following deï¬nitions:
⢠Two images are similar in content if their high-level features as extracted by a trained classiï¬er are close in Euclidian distance.
⢠Two images are similar in style if their low-level features as extracted by a trained classiï¬er share the same statistics or, more concretely, if the difference between the featuresâ Gram matrices has a small Frobenius norm. | 1610.07629#7 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
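The two similarity definitions quoted in the preceding record (1610.07629#7) can be made concrete in a few lines of NumPy: content similarity as a Euclidean distance between feature maps, and style similarity as the Frobenius norm of the difference of Gram matrices. The feature extractor itself (a trained classifier) is assumed to exist elsewhere; the random arrays below merely stand in for its activations.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of an (H, W, C) activation tensor: (C, C) channel co-occurrences."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat

def content_distance(feat_p, feat_c):
    """Squared Euclidean distance between high-level features of pastiche and content."""
    return float(np.sum((feat_p - feat_c) ** 2))

def style_distance(feat_p, feat_s):
    """Squared Frobenius norm of the difference of the two Gram matrices."""
    diff = gram_matrix(feat_p) - gram_matrix(feat_s)
    return float(np.sum(diff ** 2))

# toy activations standing in for classifier features
rng = np.random.default_rng(0)
feat_pastiche = rng.normal(size=(32, 32, 64))
feat_content = rng.normal(size=(32, 32, 64))
feat_style = rng.normal(size=(32, 32, 64))
print(content_distance(feat_pastiche, feat_content),
      style_distance(feat_pastiche, feat_style))
```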
1610.07272 | 8 | coder introducing how to obtain C and the decoder addressing how to calculate p(yi|y<i, C).
Encoder: the context vectors C = (h^m_1, h^m_2, · · · , h^m_Tx) are generated by the encoder using m stacked LSTM layers, where h^k_j is calculated as follows:
h^k_j = LSTM(h^k_{j-1}, h^{k-1}_j)    (1)
where h^{k-1}_j is the hidden state at position j in layer k-1. Decoder: the probability p(yi|y<i, C) is computed in different ways according to the choice of the context C at time i. In (Cho et al., 2014), the authors choose C = h^m_Tx, while Bahdanau et al. (2014) use a different context ci at each time step, and the conditional probability becomes:
p(yi|y<i, C) = p(yi|y<i, ci) = softmax(W z̃i)    (2)
where z̃i is the attention output:
z̃i = tanh(Wc[z^l_i; ci])    (3)
The attention model calculates ci as the weighted sum of the source-side context vectors, just as illustrated in the middle part of Fig. 2 (a NumPy attention sketch follows this record):
ci = Σ_{j=1}^{Tx} αij · h^m_j    (4)
where αij is a normalized weight calculated as follows: | 1610.07272#8 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
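The attention equations (2)-(4) quoted in the preceding record (1610.07272#8) amount to a softmax over alignment scores, a weighted sum of encoder contexts, and an output projection. The sketch below assumes a simple dot-product score between h^m_j and the decoder state, and illustrative weight shapes; it is not the authors' implementation.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def attention_step(z_i, H, W_out, W_c):
    """One decoding step of global attention.

    z_i   : (d,)      top-layer decoder state at time i
    H     : (Tx, d)   encoder context vectors h^m_1..h^m_Tx
    W_out : (V, 2d)   output projection to the target vocabulary
    W_c   : (2d, 2d)  projection used for the attention output
    """
    scores = H @ z_i                                      # assumed score: (h^m_j)^T z_i
    alpha = softmax(scores)                               # normalized weights, Eq. (5)
    c_i = alpha @ H                                       # weighted sum of contexts, Eq. (4)
    z_tilde = np.tanh(W_c @ np.concatenate([z_i, c_i]))   # attention output, Eq. (3)
    p = softmax(W_out @ z_tilde)                          # distribution over target words, Eq. (2)
    return p, alpha

# toy usage
rng = np.random.default_rng(0)
d, Tx, V = 4, 6, 10
p, alpha = attention_step(rng.normal(size=d), rng.normal(size=(Tx, d)),
                          rng.normal(size=(V, 2 * d)), rng.normal(size=(2 * d, 2 * d)))
print(round(p.sum(), 6), alpha.shape)   # 1.0, (6,)
```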
1610.07629 | 8 | The first point is motivated by the empirical observation that high-level features in classifiers tend to correspond to higher levels of abstractions (see Zeiler & Fergus (2014) for visualizations; see Johnson et al. (2016) for style transfer features). The second point is motivated by the observation that the artistic style of a painting may be interpreted as a visual texture (Gatys et al., 2015a). A visual texture is conjectured to be spatially homogeneous and consist of repeated structural motifs whose minimal sufficient statistics are captured by lower order statistical measurements (Julesz, 1962; Portilla & Simoncelli, 1999).
In its original formulation, the neural algorithm of artistic style proceeds as follows: starting from some initialization of p (e.g. c, or some random initialization), the algorithm adapts p to minimize the loss function
L(s, c, p) = λsLs(p) + λcLc(p),    (1) where Ls(p) is the style loss, Lc(p) is the content loss and λs, λc are scaling hyperparameters. Given a set of "style layers" S and a set of "content layers" C, the style and content losses are themselves defined as | 1610.07629#8 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 9 | ci = Σ_{j=1}^{Tx} αij · h^m_j    (4)
where αij is a normalized weight calculated as follows:
αij = exp((h^m_j)ᵀ z^l_i) / Σ_{j'=1}^{Tx} exp((h^m_{j'})ᵀ z^l_i)    (5)
z^k_i is computed using the following formula:
z^k_i = LSTM(z^k_{i-1}, z^{k-1}_i)    (6)
If k = 1, z^1_i will be calculated by combining z̃_{i-1} as feed input (Luong et al., 2015b):
z^1_i = LSTM(z^1_{i-1}, y_{i-1}, z̃_{i-1})    (7)
Given the sentence-aligned bilingual training data Db = {(X^(n)_b, Y^(n)_b)}^N_{n=1}, all the parameters of the encoder-decoder NMT are optimized to maximize the following conditional log-likelihood (a small sketch of this objective follows this record):
L(θ) = (1/N) Σ_{n=1}^{N} Σ_{i=1}^{Ty} log p(y^(n)_i | y^(n)_{<i}, X^(n)_b)    (8)
# Incorporating Bilingual Dictionaries | 1610.07272#9 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
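Equation (8) in the preceding record (1610.07272#9) is simply the corpus-averaged log-probability that the model assigns to each target word given its prefix and the source sentence. The sketch below assumes a caller-supplied probability function standing in for the NMT model; it only illustrates the bookkeeping of the objective.

```python
import numpy as np

def conditional_log_likelihood(prob_fn, corpus):
    """Average log-probability of target words over sentence pairs (Eq. 8).

    prob_fn(x, y_prefix, y_i) -> p(y_i | y_<i, X); assumed to come from an NMT model.
    corpus: list of (source_sentence, target_sentence) token-list pairs.
    """
    total = 0.0
    for x, y in corpus:
        for i, y_i in enumerate(y):
            total += np.log(prob_fn(x, y[:i], y_i))
    return total / len(corpus)

# toy usage with a uniform "model" over a 10-word vocabulary
uniform = lambda x, prefix, y_i: 0.1
corpus = [(["wo", "ai", "ni"], ["i", "love", "you"]),
          (["ni", "hao"], ["hello"])]
print(conditional_log_likelihood(uniform, corpus))   # (3 + 1) * log(0.1) / 2
```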
1610.07629 | 9 | Ls(p) = Σ_{i∈S} (1/U_i) || G(φi(p)) − G(φi(s)) ||²_F    (2)
Lc(p) = Σ_{j∈C} (1/U_j) || φj(p) − φj(c) ||²_2    (3)
where φl(x) are the classifier activations at layer l, Ul is the total number of units at layer l and G(φl(x)) is the Gram matrix associated with the layer l activations. In practice, we set λc = 1.0 and leave λs as a free hyper-parameter (a NumPy sketch of these losses follows this record).
In order to speed up the procedure outlined above, a feed-forward convolutional network, termed a style transfer network T , is introduced to learn the transformation (Johnson et al., 2016; Li & Wand, 2016; Ulyanov et al., 2016a). It takes as input a content image c and outputs the pastiche image p directly (Figure 2). The network is trained on many content images (Deng et al., 2009) using the same loss function as above, i.e.
L(s, c) = λsLs(T (c)) + λcLc(T (c)). (4) | 1610.07629#9 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
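The style, content and total losses of Eqs. (1)-(3) quoted in the preceding records can be written directly from the Gram-matrix helper shown earlier. The dictionaries of activations below stand in for the classifier features φl(x); the layer names, array sizes and the λs value are illustrative assumptions.

```python
import numpy as np

def gram(feat):
    h, w, c = feat.shape
    flat = feat.reshape(h * w, c)
    return flat.T @ flat

def style_loss(acts_p, acts_s, style_layers):
    """Eq. (2): sum over style layers of the normalized Gram-matrix difference."""
    loss = 0.0
    for l in style_layers:
        u_l = acts_p[l].size                     # U_l, number of units in layer l
        loss += np.sum((gram(acts_p[l]) - gram(acts_s[l])) ** 2) / u_l
    return loss

def content_loss(acts_p, acts_c, content_layers):
    """Eq. (3): sum over content layers of the normalized feature difference."""
    loss = 0.0
    for l in content_layers:
        u_l = acts_p[l].size
        loss += np.sum((acts_p[l] - acts_c[l]) ** 2) / u_l
    return loss

def total_loss(acts_p, acts_s, acts_c, style_layers, content_layers,
               lambda_s=1e-4, lambda_c=1.0):
    """Eq. (1): weighted combination; lambda_s is the free hyperparameter."""
    return (lambda_s * style_loss(acts_p, acts_s, style_layers)
            + lambda_c * content_loss(acts_p, acts_c, content_layers))

# toy activations keyed by layer name, standing in for phi_l(x)
rng = np.random.default_rng(0)
layers = ["conv1", "conv2", "conv3"]
make = lambda: {l: rng.normal(size=(16, 16, 8)) for l in layers}
print(total_loss(make(), make(), make(),
                 style_layers=layers[:2], content_layers=layers[2:]))
```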
1610.07272 | 10 | L(θ) = (1/N) Σ_{n=1}^{N} Σ_{i=1}^{Ty} log p(y^(n)_i | y^(n)_{<i}, X^(n)_b)    (8)
# Incorporating Bilingual Dictionaries
The word translation pairs in bilingual dictionaries are difficult to use in neural machine translation, mainly because they are rarely or never seen in the parallel training corpus. We attempt to build a bridge between NMT and bilingual dictionaries. We believe the bridge is data transformation that can transform rarely or unseen word translation pairs into frequent ones and provide NMT adequate information to learn latent translation mappings. In this work, we propose two methods to perform data transformation from character level and word level respectively.
# 3.1 Mixed Word/Character Model
Given the bilingual dictionary Dic = {(Dic^(i)_x, Dic^(i)_y)}, we focus on a translation lexicon (Dicx, Dicy) if Dicx is a rare or unknown word in the bilingual corpus Db.
We first introduce data transformation using the character-based method. We all know that words
are composed of characters and most of the characters are frequent even though the word is never seen. This idea is popularly used to deploy open-vocabulary NMT (Ling et al., 2015; Costa-Jussà and Fonollosa, 2016; Chung et al., 2016). | 1610.07272#10 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 10 | L(s, c) = λsLs(T (c)) + λcLc(T (c)). (4)
While feedforward style transfer networks solve the problem of speed at test-time, they also suffer from the fact that the network T is tied to one specific painting style. This means that a separate network T has to be trained for every style to be imitated. The real-world impact of this limitation is that it becomes prohibitive to implement a style transfer application on a memory-limited device, such as a smartphone.
# 2.1 N-STYLES FEEDFORWARD STYLE TRANSFER NETWORKS
Our work stems from the intuition that many styles probably share some degree of computation, and that this sharing is thrown away by training N networks from scratch when building an N - styles style transfer system. For instance, many impressionist paintings share similar paint strokes but differ in the color palette being used. In that case, it seems very wasteful to treat a set of N impressionist paintings as completely separate styles. | 1610.07629#10 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 11 | Character translation mappings are much easier to learn for NMT than word translation mappings. However, given a character sequence of a source language word, NMT cannot guarantee the generated character sequence would lead to a valid target language word. Therefore, we prefer the framework mixing the words and characters, which is employed by Wu et al. (2016) to handle OOV words. If it is a frequent word, we keep it unchanged. Otherwise, we fall back to the character sequence.
We perform data transformation on both the parallel training corpus and bilingual dictionaries. Here, English sentences and words are adopted as examples. Suppose we keep the English vocabulary V in which the frequency of each word exceeds a threshold K. For each English word w (e.g. oak) in a parallel sentence pair (Xb, Yb) or in a translation lexicon (Dicx, Dicy), if w ∈ V, w will be left as it is. Otherwise, w is re-labelled by character sequence. For example, oak will be:
oak → (B)o (M)a (E)k    (9)
where (B), (M) and (E) denote respectively the begin, middle and end of a word (a small relabeling sketch follows this record).
# 3.2 Pseudo Sentence Pair Synthesis Model | 1610.07272#11 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
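The mixed word/character transformation described in the preceding record (1610.07272#11) keeps frequent words intact and rewrites everything else as a character sequence with (B)/(M)/(E) markers, as in oak → (B)o (M)a (E)k. The sketch below follows that description; how single-character words are handled is not specified in the excerpt, so keeping them unchanged is an assumption.

```python
def relabel(word, vocab):
    """Keep frequent words; otherwise expand into (B)/(M)/(E)-tagged characters."""
    if word in vocab or len(word) == 1:   # single-char handling is an assumption
        return [word]
    chars = list(word)
    return (["(B)" + chars[0]]
            + ["(M)" + c for c in chars[1:-1]]
            + ["(E)" + chars[-1]])

def transform_sentence(tokens, vocab):
    out = []
    for tok in tokens:
        out.extend(relabel(tok, vocab))
    return out

vocab = {"the", "tree", "is", "tall"}
print(transform_sentence(["the", "oak", "tree", "is", "tall"], vocab))
# ['the', '(B)o', '(M)a', '(E)k', 'tree', 'is', 'tall']
```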
1610.07629 | 11 | To take this into account, we propose to train a single conditional style transfer network T (c, s) for N styles. The conditional network is given both a content image and the identity of the style to apply and produces a pastiche corresponding to that style. While the idea is straightforward on paper, there remains the open question of how conditioning should be done. In exploring this question, we found a very surprising fact about the role of normalization in style transfer networks: to model a style, it is sufficient to specialize scaling and shifting parameters after normalization to each specific style. In other words, all convolutional weights of a style transfer network can be shared across many styles, and it is sufficient to tune parameters for an affine transformation after normalization for each style.
We call this approach conditional instance normalization. The goal of the procedure is to transform a layer's activations x into a normalized activation z specific to painting style s. Building off the instance normalization technique proposed in Ulyanov et al. (2016b), we augment the γ and β parameters so that they're N × C matrices, where N is the number of styles being modeled and C is the number of output feature maps. Conditioning on a style is achieved as follows (a NumPy sketch follows this record):
z = γs (x − µ) / σ + βs    (5) | 1610.07629#11 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
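Conditional instance normalization as described in the preceding record (1610.07629#11) normalizes each channel across the spatial axes and then applies the scale and shift taken from row s of the N × C matrices γ and β. A minimal NumPy sketch, with an assumed epsilon for numerical stability and toy shapes:

```python
import numpy as np

def conditional_instance_norm(x, gamma, beta, s, eps=1e-5):
    """Eq. (5): normalize across spatial axes, then scale/shift with style s's row.

    x     : (H, W, C) activations of one image
    gamma : (N, C) per-style scale matrix
    beta  : (N, C) per-style shift matrix
    s     : integer style index selecting one row of gamma and beta
    """
    mu = x.mean(axis=(0, 1), keepdims=True)       # per-channel spatial mean
    sigma = x.std(axis=(0, 1), keepdims=True)     # per-channel spatial std
    z = (x - mu) / (sigma + eps)
    return gamma[s] * z + beta[s]                 # broadcast over H and W

# toy usage: 3 styles, 8 feature maps
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 32, 8))
gamma = np.ones((3, 8))
beta = np.zeros((3, 8))
out = conditional_instance_norm(x, gamma, beta, s=1)
print(out.shape, out.mean(axis=(0, 1)).round(3))
```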
1610.07272 | 12 | where (B), (M) and (E) denote respectively the begin, middle and end of a word.
# 3.2 Pseudo Sentence Pair Synthesis Model
Since NMT is a data-driven approach, it can learn latent translation mappings for a word pair (Dicx, Dicy) if there exist many parallel sentences containing (Dicx, Dicy). Along this line, we propose the pseudo sentence pair synthesis model. In this model, we aim at synthesizing, for a rare or unknown translation lexicon (Dicx, Dicy), adequate pseudo parallel sentences {(X^j_p, Y^j_p)}^J_{j=1}, each of which contains (Dicx, Dicy).
Although there are not enough bilingual sentence pairs in many languages (and many domains), a huge amount of monolingual data is available on the web. In this paper, we plan to make use of the source-side monolingual data Dsm (with M > N sentences) to synthesize the pseudo bilingual sentence pairs Dbp = {(X^j_p, Y^j_p)}^J_{j=1}.
For constructing Dbp, we resort to statistical machine translation (SMT) and apply a self-learning | 1610.07272#12 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 12 | z = γs (x − µ) / σ + βs    (5)
where µ and σ are x's mean and standard deviation taken across spatial axes and γs and βs are obtained by selecting the row corresponding to s in the γ and β matrices (Figure 3). One added benefit of this approach is that one can stylize a single image into N painting styles with a single feed forward pass of the network with a batch size of N. In contrast, a single-style network requires N feed forward passes to perform N style transfers (Johnson et al., 2016; Li & Wand, 2016; Ulyanov et al., 2016a).
Because conditional instance normalization only acts on the scaling and shifting parameters, training a style transfer network on N styles requires fewer parameters than the naive approach of training N separate networks. In a typical network setup, the model consists of roughly 1.6M parameters, only around 3K (or 0.2%) of which specify individual artistic styles. In fact, because the size of γ and β grows linearly with respect to the number of feature maps in the network, this approach requires O(N × L) parameters, where L is the total number of feature maps in the network. | 1610.07629#12 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 13 | For constructing Dbp, we resort to statistical machine translation (SMT) and apply a self-learning
Algorithm 1 Pseudo Sentence Pair Synthesis. Input: bilingual training data Db; bilingual dictionary Dic; source language monolingual data Dsm; pseudo sentence pair number K for each (Dicx, Dicy);
Output: pseudo sentence pairs Dbp = {(X^j_p, Y^j_p)}^J_{j=1}:
1: Build an SMT system PBMT on {Db, Dic}; 2: Dbp = {}; 3: for each (Dicx, Dicy) in Dic do
4: Retrieve K monolingual sentences {X^k_p}^K_{k=1} containing Dicx; 5: Translate {X^k_p}^K_{k=1} into {Y^k_p}^K_{k=1} using PBMT;
6: Add {(X^k_p, Y^k_p)}^K_{k=1} into Dbp; 7: end for 8: return Dbp (a Python sketch of this algorithm follows this record) | 1610.07272#13 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
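Algorithm 1 in the preceding record (1610.07272#13) reduces to a loop over dictionary entries: retrieve monolingual sentences containing the rare source word, translate them with the PBMT system, and collect the resulting pairs. The sketch below assumes a caller-supplied `pbmt_translate` callable standing in for the SMT system built in step 1, and a simple substring-style retrieval; both are placeholders, not the authors' tooling.

```python
def synthesize_pseudo_pairs(dictionary, monolingual, pbmt_translate, k):
    """Sketch of Algorithm 1: build the pseudo bilingual corpus Dbp.

    dictionary     : iterable of (dic_x, dic_y) rare/unknown translation lexicons
    monolingual    : list of source-language sentences (token lists)
    pbmt_translate : callable sentence -> translation (stands in for PBMT)
    k              : number of pseudo sentence pairs per lexicon
    """
    dbp = []
    for dic_x, dic_y in dictionary:
        # step 4: retrieve up to K monolingual sentences containing dic_x
        retrieved = [s for s in monolingual if dic_x in s][:k]
        # steps 5-6: translate with PBMT and add the resulting pairs to Dbp
        for x_p in retrieved:
            y_p = pbmt_translate(x_p)
            dbp.append((x_p, y_p))
    return dbp

# toy usage with a dummy "translator" that just tags tokens
dummy_pbmt = lambda sent: ["T(" + w + ")" for w in sent]
mono = [["zhe", "shi", "xiangshu"], ["xiangshu", "hen", "gao"], ["ni", "hao"]]
print(synthesize_pseudo_pairs([("xiangshu", "oak")], mono, dummy_pbmt, k=2))
```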
1610.07629 | 13 | In addition, as is discussed in subsection 3.4, conditional instance normalization presents the advantage that integrating an (N + 1)th style into the network is cheap because of the very small number of parameters to train.
Figure 3: Conditional instance normalization. The input activation x is normalized across both spatial dimensions and subsequently scaled and shifted using style-dependent parameter vectors γs, βs where s indexes the style label.
# 3 EXPERIMENTAL RESULTS
3.1 METHODOLOGY
Unless noted otherwise, all style transfer networks were trained using the hyperparameters outlined in the Appendix's Table 1.
We used the same network architecture as in Johnson et al. (2016), except for two key details: zero-padding is replaced with mirror-padding, and transposed convolutions (also sometimes called deconvolutions) are replaced with nearest-neighbor upsampling followed by a convolution. The use of mirror-padding avoids border patterns sometimes caused by zero-padding in SAME-padded convolutions, while the replacement for transposed convolutions avoids checkerboard patterning, as discussed in Odena et al. (2016). We find that with these two improvements training the network no longer requires a total variation loss that was previously employed to remove high frequency noise as proposed in Johnson et al. (2016) (a padding/upsampling sketch follows this record). | 1610.07629#13 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
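The two architectural substitutions mentioned in the preceding record (1610.07629#13), mirror-padding and nearest-neighbor upsampling before a convolution, are easy to express on plain arrays. This is only an illustration of the operations themselves on an (H, W, C) tensor, not the authors' TensorFlow implementation; the pad width and upsampling factor are example values.

```python
import numpy as np

def mirror_pad(x, pad):
    """Reflection ("mirror") padding of an (H, W, C) tensor on the spatial axes."""
    return np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")

def nearest_neighbor_upsample(x, factor=2):
    """Nearest-neighbor upsampling on the spatial axes (to be followed by a convolution)."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

x = np.arange(2 * 2 * 1, dtype=float).reshape(2, 2, 1)
print(mirror_pad(x, 1)[:, :, 0])
print(nearest_neighbor_upsample(x)[:, :, 0])
```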
1610.07272 | 14 | 5: Translate {X^k_p}^K_{k=1} into {Y^k_p}^K_{k=1} using PBMT; 6: Add {(X^k_p, Y^k_p)}^K_{k=1} into Dbp;
7: end for 8: return Dbp
method as illustrated in Algorithm 1. In contrast to NMT, statistical machine translation (SMT, e.g. phrase-based SMT (Koehn et al., 2007; Xiong et al., 2006)) can easily integrate bilingual dictionaries (Wu et al., 2008) as long as we consider the translation lexicons of bilingual dictionaries as phrasal translation rules. Following (Wu et al., 2008), we first merge the bilingual sentence corpus Db with the bilingual dictionaries Dic, and employ the phrase-based SMT to train an SMT system called PBMT (line 1 in Algorithm 1). | 1610.07272#14 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 14 | Our training procedure follows Johnson et al. (2016). Briefly, we employ the ImageNet dataset (Deng et al., 2009) as a corpus of training content images. We train the N-style network with stochastic gradient descent using the Adam optimizer (Kingma & Ba, 2014). Details of the model architecture are in the Appendix. A complete implementation of the model in TensorFlow (Abadi et al., 2016) as well as a pretrained model are available for download [1]. The evaluation images used for this work were resized such that their smaller side has size 512. Their stylized versions were then center-cropped to 512x512 pixels for display.
3.2 TRAINING A SINGLE NETWORK ON N STYLES PRODUCES STYLIZATIONS COMPARABLE TO INDEPENDENTLY-TRAINED MODELS
As a first test, we trained a 10-styles model on stylistically similar images, namely 10 impressionist paintings from Claude Monet. Figure 4 shows the result of applying the trained network on evaluation images for a subset of the styles, with the full results being displayed in the Appendix. The model captures different color palettes and textures. We emphasize that 99.8% of the parameters are shared across all styles in contrast to 0.2% of the parameters which are unique to each painting style. | 1610.07629#14 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 15 | For each rare or unknown word translation pair (Dicx, Dicy), we can easily retrieve adequate source language monolingual sentences {X^k_p}^K_{k=1} from the web or other data collections. PBMT is then applied to translate {X^k_p}^K_{k=1} to generate target language translations {Y^k_p}^K_{k=1}. As PBMT employs the bilingual dictionaries Dic as additional translation rules, each target translation sentence Yp ∈ {Y^k_p}^K_{k=1} will contain Dicy. Then, the sentence pair (X^k_p, Y^k_p) will include the word translation pair (Dicx, Dicy). Finally, we can pair {X^k_p}^K_{k=1} with {Y^k_p}^K_{k=1} to yield pseudo sentence pairs {(X^k_p, Y^k_p)}^K_{k=1}, which will be added into Dbp (lines 2-6 in Algorithm 1).
The original bilingual corpus Db and the pseudo bilingual sentence pairs Dbp are combined together to train a new NMT model. Some may worry that the target parts of Dbp are SMT results rather than well-formed sentences, which would harm NMT training. Fortunately, Sennrich et | 1610.07272#15 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 15 | To get a sense of what is being traded off by folding 10 styles into a single network, we trained a separate, single-style network on each style and compared them to the 10-styles network in terms of style transfer quality and training speed (Figure 5).
The left column compares the learning curves for style and content losses between the single-style networks and the 10-styles network. The losses were averaged over 32 random batches of content images. By visual inspection, we observe that the 10-styles network converges as quickly as the single-style networks in terms of style loss, but lags slightly behind in terms of content loss.
In order to quantify this observation, we compare the final losses for 10-styles and single-style models (center column). The 10-styles network's content loss is around 8.7 ± 3.9% higher than its
[1] https://github.com/tensorflow/magenta
| 1610.07629#15 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 16 | al. (2015b), Cheng et al. (2016b) and Zhang and Zong (2016) observe from large-scale experiments that the synthesized bilingual data using the self-learning framework can substantially improve NMT performance. Since Dbp now contains bilingual dictionaries, we expect that the NMT trained on {Db, Dbp} can not only significantly boost the translation quality, but also solve the problem of rare word translation if they are covered by Dic.
Note that the pseudo sentence pair synthesis model can be further augmented by the mixed word/character model to solve other OOV trans- lations.
# 4 Experimental Settings
In this section we describe the data sets, data preprocessing, the training and evaluation details, and all the translation methods we compare in the experiments.
# 4.1 Dataset | 1610.07272#16 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 16 | Figure 4: A single style transfer network was trained to capture the style of 10 Monet paintings, five of which are shown here. All 10 styles in this single model are in the Appendix. Golden Gate Bridge photograph by Rich Niewiroski Jr.
single-style counterparts, while the difference in style losses (8.9 ± 16.5% lower) is insignificant. While the N-styles network suffers from a slight decrease in content loss convergence speed, this may not be a fair comparison, given that it takes N times more parameter updates to train N single-style networks separately than to train them with an N-styles network.
The right column shows a comparison between the pastiches produced by the 10-styles network and the ones produced by the single-style networks. We see that both results are qualitatively similar.
3.3 THE N-STYLES MODEL IS FLEXIBLE ENOUGH TO CAPTURE VERY DIFFERENT STYLES | 1610.07629#16 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 17 | # 4.1 Dataset
We perform the experiments on Chinese-to- English translation. Our bilingual training data Db includes 630K2 sentence pairs (each sentence length is limited up to 50 words) extracted from LDC corpora3. For validation, we choose NIST 2003 (MT03) dataset. For testing, we use NIST 2004 (MT04), NIST 2005 (MT05), NIST 2006 (MT06) and NIST 2006 (MT08) datasets. The test sentences are remained as their original length. As for the source-side monolingual data Dsm, we collect about 100M Chinese sentences in which approximately 40% are provided by Sogou and the rest are collected by searching the words in the bilingual data from the web. We use two bilingual dictionaries: one is from LDC (LDC2002L27) and the other is manually collected by ourselves. The combined dictionary Dic contains 86,252 translation lexicons in total.
# 4.2 Data Preprocessing
If necessary, the Chinese sentences are word seg- mented using Stanford Word Segmenter4. The En- glish sentences are tokenized using the tokenizer script from the Moses decoder5. We limit the vo- cabulary in both Chinese and English using a fre2Without using very large-scale data, it is relatively easy to evaluate the effectiveness of the bilingual dictionaries. | 1610.07272#17 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
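The vocabulary construction described in the preceding record (1610.07272#17) keeps only words whose corpus frequency exceeds a threshold (u_c = 10 for Chinese, u_e = 8 for English in the paper); everything else is later replaced by UNK or a character sequence. A minimal sketch, with a toy corpus and threshold chosen only for illustration:

```python
from collections import Counter

def build_vocab(sentences, threshold):
    """Keep only words whose corpus frequency exceeds the threshold."""
    counts = Counter(w for sent in sentences for w in sent)
    return {w for w, c in counts.items() if c > threshold}

corpus = [["the", "oak", "tree"], ["the", "tall", "tree"], ["the", "tree"]]
print(build_vocab(corpus, threshold=2))   # {'the', 'tree'} with this toy threshold
```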
1610.07629 | 17 | 3.3 THE N-STYLES MODEL IS FLEXIBLE ENOUGH TO CAPTURE VERY DIFFERENT STYLES
We evaluated the flexibility of the N-styles model by training a style transfer network on 32 works of art chosen for their diversity. Figure 1a shows the result of applying the trained network on evaluation images for a subset of the styles. Once again, the full results are displayed in the Appendix. The model appears to be capable of modeling all 32 styles in spite of the tremendous variation in color palette and the spatial scale of the painting styles.
3.4 THE TRAINED NETWORK GENERALIZES ACROSS PAINTING STYLES
Since all weights in the transformer network are shared between styles, one way to incorporate a new style into a trained network is to keep the trained weights fixed and learn a new set of γ and β parameters. To test the efficiency of this approach, we used it to incrementally incorporate Monet's Plum Trees in Blossom painting into the network trained on 32 varied styles. Figure 6 shows that doing so is much faster than training a new network from scratch (left) while yielding comparable pastiches: even after eight times fewer parameter updates than its single-style counterpart, the fine-tuned model produces comparable pastiches (right). | 1610.07629#17 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 18 | [3] LDC2000T50, LDC2002E18, LDC2002T01, LDC2003E07, LDC2003E14, LDC2003T17, LDC2004T07. [4] http://nlp.stanford.edu/software/segmenter.shtml [5] http://www.statmt.org/moses/
quency threshold u. We choose uc = 10 for Chinese and ue = 8 for English, resulting in |Vc| = 38815 and |Ve| = 30514 for Chinese and English respectively in Db. As we focus on rare or unseen translation lexicons of the bilingual dictionary Dic in this work, we filter Dic and retain the entries (Dicx, Dicy) with Dicx ∉ Vc, resulting in 8306 entries, of which 2831 appear in the validation and test data sets. All the OOV words are replaced with UNK in the word-based NMT and are re-labelled into character sequences in the mixed word/character model.
# 4.3 Training and Evaluation Details | 1610.07272#18 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07272 | 19 | # 4.3 Training and Evaluation Details
We build the described models using Zoph RNN [6], which is written in C++/CUDA and provides training across multiple GPUs. In the NMT architecture as illustrated in Fig. 2, the encoder includes two stacked LSTM layers, followed by a global attention layer, and the decoder also contains two stacked LSTM layers followed by the softmax layer. The word embedding dimension and the size of hidden layers are all set to 1000.
Each NMT model is trained on a K80 GPU using the stochastic gradient descent algorithm AdaGrad (Duchi et al., 2011). We use a mini batch size of B = 128 and we run a total of 20 iterations for all the data sets. The training time for each model ranges from 2 days to 4 days. At test time, we employ beam search with beam size b = 10. We use the case-insensitive 4-gram BLEU score as the automatic metric (Papineni et al., 2002) for translation quality evaluation.
# 4.4 Translation Methods
In the experiments, we compare our method with the conventional SMT model and the baseline attention-based NMT model. We list all the translation methods as follows: | 1610.07272#19 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 19 | [Figure 5 plots: total content loss and total style loss vs. parameter updates, and final content loss / final style loss for the N-styles model vs. single-style models.] | 1610.07629#19 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 20 | In the experiments, we compare our method with the conventional SMT model and the baseline attention-based NMT model. We list all the translation methods as follows:
⢠Moses: It is the state-of-the-art phrase-based SMT system (Koehn et al., 2007). We use its default conï¬guration and train a 4-gram language model on the target portion of the bilingual training data.
⢠Zoph RNN: It is the baseline attention-based NMT system (Luong et al., 2015a; Zoph et al., 2016) using two stacked LSTM layers for both of the encoder and the decoder.
6https://github.com/isi-nlp/Zoph RNN | 1610.07272#20 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 20 |
Figure 5: The N-styles model exhibits learning dynamics comparable to individual models. (Left column) The N-styles model converges slightly slower in terms of content loss (top) and as fast in terms of style loss (bottom) than individual models. Training on a single Monet painting is represented by two curves with the same color. The dashed curve represents the N-styles model, and the full curves represent individual models. Emphasis has been added on the styles for Vetheuil (1902) (teal) and Water Lilies (purple) for visualization purposes; remaining colors correspond to other Monet paintings (see Appendix). (Center column) The N-styles model reaches a slightly higher final content loss than (top, 8.7 ± 3.9% increase) and a final style loss comparable to (bottom, 8.9 ± 16.5% decrease) individual models. (Right column) Pastiches produced by the N-styles network are qualitatively comparable to those produced by individual networks. | 1610.07629#20 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 21 | Method Moses Zoph RNN Zoph RNN-mixed Zoph RNN-mixed-dic Zoph RNN-pseudo (K = 10) Zoph RNN-pseudo-dic (K = 10) Zoph RNN-pseudo (K = 20) Zoph RNN-pseudo-dic (K = 20) Zoph RNN-pseudo (K = 30) Zoph RNN-pseudo-dic (K = 30) Zoph RNN-pseudo (K = 40) Zoph RNN-pseudo-dic (K = 40) Zoph RNN-pseudo-mixed (K = 40) Zoph RNN-pseudoâmixed-dic (K = 40) |Vc| 38815 42769 42892 42133 42133 43080 43080 44162 44162 45195 45195 45436 45436 |Ve| MT03 MT04 MT05 MT06 MT08 23.20 25.93 26.81 27.04 27.65 28.65 26.80 29.53 27.58 30.17 27.80 30.25 28.46 30.64 30.30 34.77 35.57 36.29 35.66 36.48 35.00 36.92 36.07 37.26 35.44 36.93 38.17 38.66 31.04 37.40 | 1610.07272#21 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 21 | [Figure plot data: style loss (log scale) versus parameter updates for "Finetuned" vs. "From scratch" training, with pastiches shown at 5,000 and 40,000 steps.]
Figure 6: The trained network is efficient at learning new styles. (Left column) Learning γ and β from a trained style transfer network converges much faster than training a model from scratch. (Right) Learning γ and β for 5,000 steps from a trained style transfer network produces pastiches comparable to those of a single network trained from scratch for 40,000 steps. Conversely, 5,000 steps of training from scratch lead to a poor pastiche.
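The caption describes learning only γ and β for a new style on top of an already trained network. Below is a hedged, framework-level sketch of that idea in PyTorch (not the authors' TensorFlow code): it appends a fresh (γ, β) pair for the new style and builds an optimizer over just those two tensors, leaving everything else frozen. The names cin_gamma and cin_beta are hypothetical.

```python
import torch
import torch.nn as nn

def add_style_and_freeze(model: nn.Module, cin_gamma: nn.ParameterList,
                         cin_beta: nn.ParameterList, n_channels: int):
    """Append fresh per-channel (gamma, beta) for one new style and freeze the rest."""
    gamma = nn.Parameter(torch.ones(n_channels))   # start as identity scaling
    beta = nn.Parameter(torch.zeros(n_channels))   # start with no shift
    cin_gamma.append(gamma)
    cin_beta.append(beta)
    for p in model.parameters():                   # freeze all existing parameters,
        p.requires_grad = False                    # including previously learned styles
    gamma.requires_grad = True
    beta.requires_grad = True
    # The optimizer sees only the two new tensors, so training on the usual
    # content + style losses updates nothing but the new style's parameters.
    return torch.optim.Adam([gamma, beta], lr=1e-3)
```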
Previous work suggested that cleverly balancing optimization strategies offers an opportunity to blend painting styles 2. To probe the utility of this embedding, we tried convex combinations of the
2 For instance, https://github.com/jcjohnson/neural-style
| 1610.07629#21 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 22 | 36.29 35.66 36.48 35.00 36.92 36.07 37.26 35.44 36.93 38.17 38.66 31.04 37.40 38.07 38.75 38.02 38.59 36.99 38.63 37.74 39.01 37.96 39.15 39.55 40.78 28.19 32.94 34.44 34.86 34.66 35.81 34.22 36.09 34.63 36.64 34.89 36.85 36.86 38.36 30.04 33.85 36.07 36.57 36.51 38.14 36.09 38.13 36.66 38.50 36.92 38.77 38.53 39.56 30514 30630 30630 32300 31734 32813 32255 33357 32797 33961 33399 32659 32421 Ave 28.55 32.98 34.19 34.70 34.50 35.53 33.82 35.86 34.54 36.32 34.60 36.39 36.31 37.60 | 1610.07272#22 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 22 | [Figure plot data: style losses as the interpolation weight α varies from 0.0 to 1.0.]
Figure 7: The N-styles network can arbitrarily combine artistic styles. (Left) Combining four styles, shown in the corners. Each pastiche corresponds to a different convex combination of the four styles' γ and β values. (Right) As we transition from one style to another (Bicentennial Print and Head of a Clown in this case), the style losses vary monotonically. | 1610.07629#22 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 23 | Table 1: Translation results (BLEU score) for different translation methods. K = 10 denotes that we synthesize 10 pseudo sentence pairs for each word translation pair (Dicx, Dicy). The column |Vc| (|Ve|) reports the vocabulary size limited by frequency threshold uc = 10 (ue = 8). Note that all the NMT systems use the single model rather than the ensemble model.
⢠Zoph RNN-mixed-dic: It is our NMT sys- tem which integrates the bilingual dictio- naries by re-labelling the rare or unknown words with character sequence on both bilin- gual training data and bilingual dictionar- ies. Zoph RNN-mixed indicates that mixed word/character model is performed only on the bilingual training data and the bilingual dictionary is not used.
⢠Zoph RNN-pseudo-dic: It is our NMT sys- tem that integrates the bilingual dictionar- ies by synthesizing adequate pseudo sen- tence pairs that contain the focused rare or unseen translation lexicons. Zoph RNN- pseudo means that the target language parts of pseudo sentence pairs are obtained by the SMT system PBMT without using the bilin- gual dictionary Dic.
Can the combination of the two proposed methods further boost the translation performance?
# 5.1 NMT vs. SMT | 1610.07272#23 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 23 | γ and β values to blend very distinct painting styles (Figure 1b; Figure 7, left column). Employing a single convex combination produces a smooth transition from one style to the other. Suppose (γ1, β1) and (γ2, β2) are the parameters corresponding to two different styles. We use γ = α × γ1 + (1 − α) × γ2 and β = α × β1 + (1 − α) × β2 to stylize an image. Employing convex combinations may be extended to an arbitrary number of styles 3. Figure 7 (right column) shows the style loss from the transformer network for a given source image, with respect to the Bicentennial Print and Head of a Clown paintings, as we vary α from 0 to 1. As α increases, the style loss with respect to Bicentennial Print increases, which explains the smooth fading out of that style's artifact in the transformed image.
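As a concrete illustration of the convex combination above, here is a minimal NumPy sketch (not the authors' implementation) that interpolates two styles' γ and β inside an instance normalization step; the tensor layout and the toy parameter values are assumptions.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each (batch, channel) feature map over its spatial dimensions."""
    mu = x.mean(axis=(2, 3), keepdims=True)
    sigma = x.std(axis=(2, 3), keepdims=True)
    return (x - mu) / (sigma + eps)

def blended_conditional_instance_norm(x, gamma1, beta1, gamma2, beta2, alpha):
    """Apply gamma = alpha*gamma1 + (1-alpha)*gamma2 and beta likewise, per channel."""
    gamma = alpha * gamma1 + (1.0 - alpha) * gamma2
    beta = alpha * beta1 + (1.0 - alpha) * beta2
    z = instance_norm(x)
    # Broadcast the per-channel scale and shift over the spatial dimensions.
    return gamma[None, :, None, None] * z + beta[None, :, None, None]

# Toy usage: one 8x8 feature map with 4 channels and two invented styles.
x = np.random.randn(1, 4, 8, 8)
g1, b1 = np.ones(4), np.zeros(4)              # hypothetical style 1 parameters
g2, b2 = 2.0 * np.ones(4), 0.5 * np.ones(4)   # hypothetical style 2 parameters
y = blended_conditional_instance_norm(x, g1, b1, g2, b2, alpha=0.3)
print(y.shape)  # (1, 4, 8, 8)
```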
# 4 DISCUSSION | 1610.07629#23 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 24 | Can the combination of the two proposed methods further boost the translation performance?
# 5.1 NMT vs. SMT
Table 1 reports the detailed translation quality for different methods. Comparing the first two lines in Table 1, it is very obvious that the attention-based NMT system Zoph RNN substantially outperforms the phrase-based SMT system Moses on just 630K bilingual Chinese-English sentence pairs. The gap can be as large as 6.36 absolute BLEU points on MT04. The average improvement is up to 4.43 BLEU points (32.98 vs. 28.55). This is in line with the findings reported in (Wu et al., 2016; Junczys-Dowmunt et al., 2016), which conducted experiments on tens of millions or even more parallel sentence pairs. Our experiments further show that NMT can still be much better even when we have less than 1 million sentence pairs.
is an NMT system combining the two methods Zoph RNN-pseudo and Zoph RNN-mixed. Zoph RNN-pseudo-mixed is otherwise similar to Zoph RNN-pseudo.
# 5 Translation Results and Analysis | 1610.07272#24 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 24 | # 4 DISCUSSION
It seems surprising that such a small proportion of the network's parameters can have such an impact on the overall process of style transfer. A similar intuition has been observed in auto-regressive models of images (van den Oord et al., 2016b) and audio (van den Oord et al., 2016a) where the conditioning process is mediated by adjusting the biases for subsequent samples from the model. That said, in the case of art stylization when posed as a feedforward network, it could be that the specific network architecture is unable to take full advantage of its capacity. We see evidence for this behavior in that pruning the architecture leads to qualitatively similar results. Another interpretation could be that the convolutional weights of the style transfer network encode transformations that represent "elements of style". The scaling and shifting factors would then provide a way for each style to inhibit or enhance the expression of various elements of style to form a global identity of style. While this work does not attempt to verify this hypothesis, we think that this would constitute a very promising direction of research in understanding the computation behind style transfer networks as well as the representation of images in general. | 1610.07629#24 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 25 | # 5 Translation Results and Analysis
For translation quality evaluation, we attempt to figure out the following three questions: 1) Could the employed attention-based NMT outperform SMT even on less than 1 million sentence pairs? 2) Which model is more effective for integrating the bilingual dictionaries: mixed word/character model or pseudo sentence pair synthesis data? 3) Can the combination of the two proposed methods further boost the translation performance?
# 5.2 The Effect of The Mixed W/C Model
The two lines (3-4 in Table 1) present the BLEU scores when applying the mixed word/character model. This model markedly improves the translation quality over the baseline attention-based NMT, although the idea behind it is very simple. | 1610.07272#25 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 25 | Concurrent to this work, Gatys et al. (2016b) demonstrated exciting new methods for revising the loss to selectively adjust the spatial scale, color information and spatial localization of the artistic style information. These methods are complementary to the results in this paper and present an interesting direction for exploring how spatial and color information uniquely factor into artistic style representation.
The question of how predictive each style image is of its corresponding style representation is also of great interest. If it is the case that the style representation can easily be predicted from a style image,
3Please see the code repository for real-time, interactive demonstration. A screen capture is available at https://www.youtube.com/watch?v=6ZHiARZmiUI.
one could imagine building a transformer network which skips learning an individual conditional embedding and instead learns to produce a pastiche directly from a style and a content image, much like in the original neural algorithm of artistic style, but without any optimization loop at test time. | 1610.07629#25 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 26 | the system Zoph RNN-mixed, trained only on the bitext Db, achieves an average improvement of more than 1.0 BLEU point (34.19 vs 32.98) over the baseline Zoph RNN. It indicates that the mixed word/character model can alleviate the OOV translation problem to some ex-
Chinese Word    Translation          Correct
zhùliú          remain               remain
dōngjiā         owner                owner
lièyàn          blaze                blaze
ānwèijì         placebo              placebo
hǎixiào         tsunami              tsunami
jìngmài         intravenous          intravenous
fǎnyìnglú       anti-subsidization   reactor
huángpǔjiāng    lingchiang river     huangpu river
chāochēdào      take-owned lane      overtaking lane
Table 2: The effect of the Zoph RNN-mixed-dic model in using bilingual dictionaries. The Chinese word is written in Pinyin. The first two parts are positive word translation examples, while the third part shows some bad cases. | 1610.07272#26 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 26 | Finally, the learned style representation opens the door to generative models of style: by modeling enough paintings of a given artistic movement (e.g. impressionism), one could build a collection of style embeddings upon which a generative model could be trained. At test time, a style representation would be sampled from the generative model and used in conjunction with the style transfer network to produce a random pastiche of that artistic movement.
In summary, we demonstrated that conditional instance normalization constitutes a simple, efficient and scalable modification of style transfer networks that allows them to model multiple styles at the same time. A practical consequence of this approach is that a new painting style may be transmitted to and stored on a mobile device with a small number of parameters. We showed that despite its simplicity, the method is flexible enough to capture very different styles while having very little impact on training time and final performance of the trained network. Finally, we showed that the learned representation of style is useful in arbitrarily combining artistic styles. This work suggests the existence of a learned representation for artistic styles whose vocabulary is flexible enough to capture a diversity of the painted world.
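To make the "small number of parameters" concrete, the following back-of-the-envelope Python snippet counts the per-style γ and β values under the architecture summarized in the appendix; the layer list is a reconstruction and therefore an assumption.

```python
# Rough count of per-style parameters, assuming conditional instance normalization
# follows every convolution of the appendix architecture (an assumption-based layout).
conv_output_channels = (
    [32, 64, 128]        # the three downsampling/encoding convolutions
    + [128] * (5 * 2)    # two convolutions in each of five residual blocks
    + [64, 32]           # one convolution in each of two upsampling blocks
    + [3]                # final 9x9 convolution producing the RGB pastiche
)
per_style_parameters = 2 * sum(conv_output_channels)  # one gamma and one beta per channel
print(per_style_parameters)  # a few thousand scalars per additional style
```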
# ACKNOWLEDGMENTS | 1610.07629#26 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 27 | tent. For example, the number 31.3 is an OOV word in Chinese. The mixed model transforms this word into (B)3 (M)1 (M). (E)3 and it is correctly copied into the target side, yielding the correct translation 31.3. Moreover, some named entities (e.g. the person name hecker) can be well translated. When adding the bilingual dictionary Dic as training data, the system Zoph_RNN-mixed-dic further gets a moderate improvement of 0.51 BLEU points (34.70 vs 34.19) on average. We find that the mixed model could make use of some rare or unseen translation lexicons in NMT, as illustrated in the first two parts of Table 2. In the first part of Table 2, the English side of the translation lexicon is a frequent word (e.g. remain). The frequent Chinese character in the word shares most of the meaning of the whole word (zhùliú) and thus it could be correctly translated into remain. We are a little surprised by the examples in the second part of Table 2, since the correct English parts are all OOV words which require each English character to be correctly generated. It demonstrates that | 1610.07272#27 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 27 | # ACKNOWLEDGMENTS
We would like to thank Fred Bertsch, Douglas Eck, Cinjon Resnick and the rest of the Google Magenta team for their feedback; Peyman Milanfar, Michael Elad, Feng Yang, Jon Barron, Bhavik Singh, Jennifer Daniel as well as the Google Brain team for their crucial suggestions and advice; an anonymous reviewer for helpful suggestions about applying this model in a mobile domain. Finally, we would like to thank the Google Cultural Institute, whose curated collection of art photographs was very helpful in finding exciting style images to train on.
# REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. | 1610.07629#27 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 28 | examples in the second part of Table 2, since the correct English parts are all OOV words which require each English character to be correctly generated. It demonstrates that the mixed model has some ability to predict the correct character sequence. However, this mixed model fails in many scenarios. The third part in Table 2 gives some bad cases. If the first predicted character is wrong, the final word translation will be incorrect (e.g. take-owned lane vs. overtaking lane). This is the main reason why the mixed model could not obtain large improvements. | 1610.07272#28 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 28 | Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.
In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 341–346. ACM, 2001.
Alexei A Efros and Thomas K Leung. Texture synthesis by non-parametric sampling. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, volume 2, pp. 1033–1038. IEEE, 1999.
Michael Elad and Peyman Milanfar. Style-transfer via texture-synthesis. arXiv preprint arXiv:1609.03057, 2016.
Oriel Frigo, Neus Sabater, Julie Delon, and Pierre Hellier. Split and match: Example-based adaptive patch sampling for unsupervised style transfer. 2016.
Leon Gatys, Alexander S Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 262–270, 2015a. | 1610.07629#28 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 29 | # 5.3 The Effect of Data Synthesis Model
The eight lines (5-12) in Table 1 show the translation performance of the pseudo sentence pair synthesis model. We can analyze the results from three perspectives: 1) the effect of the self-
pseudo-dic  mixed-dic   K = 10  K = 20  K = 30  K = 40   0.76 0.36 0.71 0.78 0.79
Table 3: The hit rate of the bilingual dictionary for different models.
learning method for using the source-side monolingual data; 2) the effect of the bilingual dictionary; and 3) the effect of pseudo sentence pair number.
(lines with Zoph RNN-pseudo) demonstrate that the synthesized parallel sentence pairs using source-side monolingual data can significantly improve the baseline NMT Zoph RNN, and the average improvement can be up to 1.62 BLEU points (34.60 vs. 32.98). This finding is also reported by Cheng et al. (2016b) and Zhang and Zong (2016). | 1610.07272#29 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 29 | Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015b.
Leon A Gatys, Matthias Bethge, Aaron Hertzmann, and Eli Shechtman. Preserving color in neural artistic style transfer. arXiv preprint arXiv:1606.05897, 2016a.
Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann, and Eli Shechtman. Controlling perceptual factors in neural style transfer. CoRR, abs/1611.07865, 2016b. URL http://arxiv.org/abs/1611.07865.
Image analogies. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 327–340. ACM, 2001.
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.
Bela Julesz. Visual pattern discrimination. IRE Trans. Info Theory, 8:84–92, 1962. | 1610.07629#29 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 30 | augmenting Zoph RNN-pseudo with bilingual dictionaries, we can further obtain considerable gains. The largest average improvement can be 3.41 BLEU points when compared to the baseline NMT Zoph RNN and 2.04 BLEU points when compared to Zoph RNN-pseudo (35.86 vs. 33.82).
When investigating the effect of the pseudo sentence pair number (from K = 10 to K = 40), we find that the performance is generally better and better if we synthesize more pseudo sentence pairs for each rare or unseen word translation pair (Dicx, Dicy). We can also notice that the improvement gets smaller and smaller when K grows.
# 5.4 Mixed W/C Model vs. Data Synthesis Model
Comparing the results between the mixed model and the data synthesis model (Zoph RNN-mixed-dic vs. Zoph RNN-pseudo-dic) in Table 1, we can easily see that the data synthesis model is much better at integrating bilingual dictionaries into NMT. Zoph RNN-pseudo-dic can substantially outperform Zoph RNN-mixed-dic by an average improvement of up to 1.69 BLEU points (36.39 vs. 34.70). | 1610.07272#30 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 30 | Bela Julesz. Visual pattern discrimination. IRE Trans. Info Theory, 8:84â92, 1962.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Vivek Kwatra, Irfan Essa, Aaron Bobick, and Nipun Kwatra. Texture optimization for example-based synthesis. ACM Transactions on Graphics (ToG), 24(3):795–802, 2005.
Chuan Li and Michael Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. ECCV, 2016. URL http://arxiv.org/abs/1604.04382.
Lin Liang, Ce Liu, Ying-Qing Xu, Baining Guo, and Heung-Yeung Shum. Real-time texture synthesis by patch-based sampling. ACM Transactions on Graphics (ToG), 20(3):127–150, 2001.
Augustus Odena, Christopher Olah, and Vincent Dumoulin. Avoiding checkerboard artifacts in neural networks. Distill, 2016.
Javier Portilla and Eero Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40:49–71, 1999.
| 1610.07629#30 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 31 | Through a deep analysis, we find that most of the rare or unseen words in the test sets can be well translated by Zoph RNN-pseudo-dic if they are covered by the bilingual dictionary. Table 3 reports the hit rate of the bilingual dictionaries. 0.71 indicates that 2010 (2831 × 0.71) words among the 2831 covered rare or unseen words in the test set can
be correctly translated. This table explains why Zoph RNN-pseudo-dic performs much better than Zoph RNN-mixed-dic.
The last two lines in Table 1 demonstrate that the combined method can further boost the translation quality. The biggest average improvement over the baseline NMT Zoph RNN can be as large as 4.62 BLEU points, which is very promising. We believe that this method fully exploits the capacity of the data synthesis model and the mixed model. Zoph RNN-pseudo-dic can well incorporate the bilingual dictionary and Zoph RNN-mixed can well handle the OOV word translation. Thus, the combined method is the best. | 1610.07272#31 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 31 |
Eero Simoncelli and Bruno Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24:1193–1216, 2001.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed- forward synthesis of textures and stylized images. arXiv preprint arXiv:1603.03417, 2016a.
Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016b.
Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. CoRR, abs/1609.03499, 2016a. URL http://arxiv.org/abs/1609.03499. | 1610.07629#31 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 32 | One may argue that the proposed methods use a bigger vocabulary and that the performance gains may be attributed to the increased vocabulary size. We further conduct an experiment for the baseline NMT Zoph RNN by setting |Vc| = 4600 and |Ve| = 3400. We find that this setting decreases the translation quality by an average of 0.88 BLEU points (32.10 vs. 32.98). This further verifies the superiority of our proposed methods.
# 6 Related Work | 1610.07272#32 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 32 | Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. CoRR, abs/1606.05328, 2016b. URL http://arxiv.org/abs/1606.05328.
In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 479–488. ACM Press/Addison-Wesley Publishing Co., 2000.
Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818–833. Springer, 2014.
# APPENDIX
HYPERPARAMETERS
Operation / Kernel size / Stride / Feature maps / Padding / Nonlinearity:
Convolution, 9, 1, 32, SAME, ReLU
Convolution, 3, 2, 64, SAME, ReLU
Convolution, 3, 2, 128, SAME, ReLU
Residual block (×5), 128 feature maps
Upsampling, 64 feature maps
Upsampling, 32 feature maps
Convolution, 9, 1, 3, SAME, Sigmoid
Residual block (C feature maps): Convolution, 3, 1, C, SAME, ReLU; Convolution, 3, 1, C, SAME, Linear; Add the input and the output
Upsampling (C feature maps): Nearest-neighbor interpolation, factor 2; Convolution, 3, 1, C, SAME, ReLU
Padding mode: REFLECT
Normalization: Conditional instance normalization after every convolution
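The normalization row above is what lets a single network carry many styles: every convolution is followed by conditional instance normalization, i.e. instance normalization whose scale and shift are looked up per style. A minimal PyTorch-style sketch of that operation (the class name, argument names, and shapes are ours, not code from the paper):

```python
import torch
import torch.nn as nn

class ConditionalInstanceNorm2d(nn.Module):
    """Instance normalization with one (gamma, beta) row per style."""
    def __init__(self, num_styles, num_channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_styles, num_channels))
        self.beta = nn.Parameter(torch.zeros(num_styles, num_channels))

    def forward(self, x, style_id):
        # x: (N, C, H, W); normalize over the spatial dimensions of each sample/channel
        mean = x.mean(dim=(2, 3), keepdim=True)
        var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        gamma = self.gamma[style_id].view(1, -1, 1, 1)  # select the style's scale
        beta = self.beta[style_id].view(1, -1, 1, 1)    # and shift
        return gamma * x_hat + beta
```

Switching styles only swaps which row of gamma/beta is used; the convolutional weights stay shared across all paintings.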
# Padding Nonlinearity | 1610.07629#32 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 33 | # 6 Related Work
The recently proposed neural machine translation has drawn more and more attention. Most of the existing methods mainly focus on designing better attention models (Luong et al., 2015b; Cheng et al., 2016a; Cohn et al., 2016; Feng et al., 2016; Liu et al., 2016; Meng et al., 2016; Mi et al., 2016a; Mi et al., 2016b; Tu et al., 2016), better objective functions for BLEU evaluation (Shen et al., 2016), better strategies for handling open vo- cabulary (Ling et al., 2015; Luong et al., 2015c; Jean et al., 2015; Sennrich et al., 2015b; Costa- Juss`a and Fonollosa, 2016; Lee et al., 2016; Li et al., 2016; Mi et al., 2016c; Wu et al., 2016) and exploiting large-scale monolingual data (Gulcehre et al., 2015; Sennrich et al., 2015a; Cheng et al., 2016b; Zhang and Zong, 2016).
Our focus in this work is aiming to fully inte- grate the discrete bilingual dictionaries into NMT. The most related works lie in three aspects: 1) applying the character-based method to deal with open vocabulary; 2) making use of the synthesized data in NMT, and 3) incorporating translation lex- icons in NMT. | 1610.07272#33 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 33 | # Padding Nonlinearity
Network (256 × 256 × 3 input): Convolution, Convolution, Convolution, Residual block, Residual block, Residual block, Residual block, Residual block, Upsampling, Upsampling, Convolution. Residual block (C feature maps): Convolution, Convolution.
Upsampling (C feature maps)
Optimizer Adam (Kingma & Ba, 2014) (α = 0.001, β1 = 0.9, β2 = 0.999)
Parameter updates 40,000
# Batch size 16
# Weight initialization Isotropic gaussian (µ = 0, σ = 0.01)
# Table 1: Style transfer network hyperparameters.
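The residual and upsampling rows of Table 1 translate directly into two small building blocks. A hedged PyTorch sketch (the conditional instance normalization that follows each convolution is omitted for brevity, and all names are ours):

```python
import torch.nn as nn
import torch.nn.functional as F

def conv3x3(c_in, c_out):
    # 3x3 convolution with REFLECT padding, matching the table's padding mode
    return nn.Sequential(nn.ReflectionPad2d(1),
                         nn.Conv2d(c_in, c_out, kernel_size=3, stride=1))

class ResidualBlock(nn.Module):
    """Residual block, C feature maps: conv (ReLU), conv (linear), add input and output."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = conv3x3(channels, channels)
        self.conv2 = conv3x3(channels, channels)

    def forward(self, x):
        h = F.relu(self.conv1(x))
        return x + self.conv2(h)

class UpsamplingBlock(nn.Module):
    """Upsampling, C feature maps: nearest-neighbor interpolation (factor 2), then conv + ReLU."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = conv3x3(c_in, c_out)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        return F.relu(self.conv(x))
```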
MONET PASTICHES | 1610.07629#33 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 34 | Ling et al. (2015), Costa-Jussà and Fonollosa (2016) and Sennrich et al. (2015b) propose purely character-based or subword-based neural machine
translation to circumvent the open word vocabulary problem. Luong et al. (2015c) and Wu et al. (2016) present the mixed word/character model which utilizes a character sequence to replace the OOV words. We introduce the mixed model to integrate the bilingual dictionaries and find that it is useful but not the best method.
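The mixed word/character idea can be pictured with a few lines of Python: in-vocabulary words are kept as-is, and an OOV word is spelled out as a marked character sequence so the original word can be restored after decoding. The markers below follow the common <B>/<M>/<E> convention; the exact scheme used by the cited systems may differ.

```python
def mix_word_char(tokens, vocab):
    """Replace each out-of-vocabulary token by a begin/middle/end-marked character sequence."""
    out = []
    for tok in tokens:
        if tok in vocab or len(tok) == 1:
            out.append(tok)
        else:
            out.append("<B>" + tok[0])
            out.extend("<M>" + ch for ch in tok[1:-1])
            out.append("<E>" + tok[-1])
    return out

vocab = {"the", "cat", "sat"}
print(mix_word_char(["the", "ocelot", "sat"], vocab))
# ['the', '<B>o', '<M>c', '<M>e', '<M>l', '<M>o', '<E>t', 'sat']
```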
Sennrich et al. (2015a) propose an approach to use target-side monolingual data to synthesize the bitexts. They generate the synthetic bilingual data by translating the target monolingual sentences to source language sentences and retrain NMT with the mixture of original bilingual data and the syn- thetic parallel data. Cheng et al. (2016b) and Zhang and Zong (2016) also investigate the effect of the synthesized parallel sentences. They report that the pseudo sentence pairs synthesized using the source-side monolingual data can signiï¬cantly improve the translation quality. These studies in- spire us to leverage the synthesized data to incor- porate the bilingual dictionaries in NMT. | 1610.07272#34 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 34 | Claude Monet, Grainstacks at Giverny; the Evening Sun (1888/1889).
Claude Monet, Plum Trees in Blossom (1879).
Claude Monet, Poppy Field (1873).
Claude Monet, Rouen Cathedral, West Façade (1894).
Claude Monet, Sunrise (Marine) (1873).
Claude Monet, The Road to Vétheuil (1879).
Claude Monet, Three Fishing Boats (1886).
Claude Monet, Vétheuil (1879).
Claude Monet, Vétheuil (1902).
Claude Monet, Water Lilies (ca. 1914-1917).
VARIED PASTICHES
# Roy Lichtenstein, Bicentennial Print (1975).
Ernst Ludwig Kirchner, Boy with Sweets (1918).
Paul Signac, Cassis, Cap Lombard, Opus 196 (1889).
Paul Klee, Colors from a Distance (1932). | 1610.07629#34 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 35 | Very recently, Arthur et al. (2016) try to use discrete translation lexicons in NMT. Their approach attempts to employ the discrete translation lexicons to positively influence the probability distribution of the output words in the NMT softmax layer. However, their approach only focuses on the words that belong to the vocabulary, and the out-of-vocabulary (OOV) words are not considered. In contrast, we concentrate on the word translation lexicons which are rarely or never seen in the bilingual training data. It is a much tougher problem. The extensive experiments demonstrate that our proposed models, especially the data synthesis model, can solve this problem very well.
# 7 Conclusions and Future Work
In this paper, we have presented two models to bridge neural machine translation and the bilingual dictionaries in which translation lexicons are rarely or never seen in the bilingual training data. Our proposed methods focus on a data transformation mechanism which guarantees the massive and repetitive occurrence of the translation lexicon.
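A toy illustration of this data-transformation idea: every dictionary entry is planted into ordinary sentences so that the rare translation pair occurs repeatedly in the training data. This is only a schematic stand-in for the paper's pipeline, which produces the target side of the synthesized sentences with an SMT system; the function, templates, and lexicon entries below are invented for illustration.

```python
def synthesize_pairs(dictionary, templates):
    """Build pseudo sentence pairs by substituting each (source, target) lexicon
    entry into aligned placeholder slots of existing sentence templates."""
    pseudo = []
    for src_word, tgt_word in dictionary.items():
        for src_tpl, tgt_tpl in templates:
            pseudo.append((src_tpl.format(src_word), tgt_tpl.format(tgt_word)))
    return pseudo

dictionary = {"qiaokeli": "chocolate", "qizhongji": "crane"}   # hypothetical rare lexicon entries
templates = [("wo xihuan {} .", "i like {} ."),
             ("ta mai le yi ge {} .", "he bought a {} .")]
for src, tgt in synthesize_pairs(dictionary, templates):
    print(src, "=>", tgt)
```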
The mixed word/character model tackles this problem by re-labelling the OOV words with char- acter sequence, while our data synthesis model constructs adequate pseudo sentence pairs for each translation lexicon. The extensive experiments show that the data synthesis model substantially outperforms the mixed word/character model, and | 1610.07272#35 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 35 | Paul Signac, Cassis, Cap Lombard, Opus 196 (1889).
Paul Klee, Colors from a Distance (1932).
Frederic Edwin Church, Cotopaxi (1855).
Jamini Roy, Crucifixion.
Henri de Toulouse-Lautrec, Divan Japonais (1893).
Egon Schiele, Edith with Striped Dress, Sitting (1915).
Georges Rouault, Head of a Clown (ca. 1907-1908).
William Hoare, Henry Hoare, “The Magnificent”, of Stourhead (about 1750-1760).
Giorgio de Chirico, Horses on the seashore (1927/1928).
Vincent van Gogh, Landscape at Saint-Rémy (Enclosed Field with Peasant) (1889).
Nicolas Poussin, Landscape with a Calm (1650-1651).
Bernardino Fungai, Madonna and Child with Two Hermit Saints (early 1480s).
Max Hermann Maxy, Portrait of a Friend (1926). | 1610.07629#35 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 36 | the combined method performs best. All of the proposed methods obtain promising improvements over the baseline NMT. We further find that more than 70% of the rare or unseen words in test sets can get correct translations as long as they are covered by the bilingual dictionary.
Currently, the data synthesis model does not distinguish the original bilingual training data from the synthesized parallel sentences in which the target sides are SMT translation results. In the future work, we plan to modify the neural network structure to avoid the negative effect of the SMT translation noise.
# References
[Arthur et al.2016] Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. arXiv preprint arXiv:1606.02006.
[Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Shen, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016a. Agreement-based joint training for bidirectional attention-based neural machine translation. In Proceedings of AAAI 2016. | 1610.07272#36 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07629 | 36 | Max Hermann Maxy, Portrait of a Friend (1926).
Juan Gris, Portrait of Pablo Picasso (1912).
Severini Gino, Ritmo plastico del 14 luglio (1913).
Richard Diebenkorn, Seawall (1957).
Alice Bailly, Self-Portrait (1917).
Grayson Perry, The Annunciation of the Virgin Deal (2012).
William Glackens, The Green Boathouse (ca. 1922).
Edvard Munch, The Scream (1910).
Vincent van Gogh, The Starry Night (1889).
Pieter Bruegel the Elder, The Tower of Babel (1563).
Wolfgang Lettl, The Trial (1981).
Douglas Coupland, Thomson No. 5 (Yellow Sunset) (2011).
Claude Monet, Three Fishing Boats (1886).
John Ruskin, Trees in a Lane (1847).
Giuseppe Cades, Tullia about to Ride over the Body of Her Father in Her Chariot (about 1770-1775).
Published as a conference paper at ICLR 2017 | 1610.07629#36 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | [
{
"id": "1603.03417"
},
{
"id": "1603.04467"
},
{
"id": "1606.05897"
},
{
"id": "1609.03057"
},
{
"id": "1603.08155"
},
{
"id": "1607.08022"
},
{
"id": "1508.06576"
}
] |
1610.07272 | 37 | [Cheng et al.2016b] Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016b. Semi-supervised learning for neural machine translation. In Proceedings of ACL 2016.
Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014.
[Chung et al.2016] Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147.
[Cohn et al.2016] Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, Incorporating and Gholamreza Haffari. structural alignment biases into an attentional neural translation model. In Proceedings of NAACL 2016. | 1610.07272#37 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07272 | 38 | [Costa-Juss`a and Fonollosa2016] Marta R Costa-Juss`a and Jos´e AR Fonollosa. Character- based neural machine translation. arXiv preprint arXiv:1603.00810.
[Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159.
[Feng et al.2016] Shi Feng, Shujie Liu, Mu Li, and Ming Zhou. 2016. Implicit distortion and fertility models for attention-based encoder-decoder NMT model. arXiv preprint arXiv:1601.03317.
[Gulcehre et al.2015] Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei- Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual cor- pora in neural machine translation. arXiv preprint arXiv:1503.03535. | 1610.07272#38 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07272 | 39 | [Hochreiter and Schmidhuber1997] Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735â1780.
[Jean et al.2015] Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL 2015.
Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? A case study on 30 translation directions. arXiv preprint arXiv:1610.01108.
[Kalchbrenner and Blunsom2013] Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of EMNLP 2013.
[Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine In Proceedings of ACL 2007, pages translation. 177â180. | 1610.07272#39 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07272 | 40 | and Thomas Hofmann. 2016. Fully character-level neu- ral machine translation without explicit segmenta- tion. arXiv preprint arXiv:1610.03017.
and Chengqing Zong. 2016. Towards zero unknown word in neural machine translation. In Proceedings of IJCAI 2016.
[Ling et al.2015] Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015. Character-based neural machine translation. arXiv preprint arXiv:1511.04586.
[Liu et al.2016] Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. arXiv preprint arXiv:1609.04186.
[Luong et al.2015a] Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015a. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114.
[Luong et al.2015b] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015b. Effective ap- proaches to attention-based neural machine transla- tion. In Proceedings of EMNLP 2015. | 1610.07272#40 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07272 | 41 | Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wo- 2015c. Addressing the rare jciech Zaremba. word problem in neural machine translation. In Proceedings of ACL 2015.
[Meng et al.2016] Fandong Meng, Zhengdong Lu, Hang Li, and Qun Liu. 2016. Interactive attention for neural machine translation. arXiv preprint arXiv:1610.05011.
Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016a. A coverage embedding model for neural machine translation. In Proceedings of EMNLP 2016.
[Mi et al.2016b] Haitao Mi, Zhiguo Wang, Niyu Ge, and Abe Ittycheriah. 2016b. Supervised attentions for neural machine translation. In Proceedings of EMNLP 2016.
[Mi et al.2016c] Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016c. Vocabulary manipulation for large vocabulary neural machine translation. In Proceedings of ACL 2016.
[Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine transla- tion. In Proceedings of ACL 2002, pages 311â318. | 1610.07272#41 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07272 | 42 | [Sennrich et al.2015a] Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.
[Sennrich et al.2015b] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
[Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of ACL 2016.
[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS 2014.
[Tu et al.2016] Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Coverage-based neural machine translation. In Proceedings of ACL 2016.
and Domain adaptation 2008. Chengqing Zong. for statistical machine translation with domain dic- tionary and monolingual corpora. In Proceedings of COLING 2008, pages 993â1000. | 1610.07272#42 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.07272 | 43 | [Wu et al.2016] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Googleâs neural ma- chine translation system: Bridging the gap between arXiv preprint human and machine translation. arXiv:1609.08144.
[Xiong et al.2006] Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proceedings of ACL-COLING, pages 521–528. Association for Computational Linguistics.
[Zhang and Zong2016] Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of EMNLP.
[Zoph et al.2016] Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Multi-source neu- ral translation. In Proceedings of NAACL 2016. | 1610.07272#43 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024 | [
{
"id": "1609.04186"
},
{
"id": "1503.03535"
},
{
"id": "1606.02006"
},
{
"id": "1508.07909"
},
{
"id": "1601.03317"
},
{
"id": "1610.05011"
},
{
"id": "1511.06709"
},
{
"id": "1603.06147"
},
{
"id": "1511.06114"
},
{
"id": "1609.08144"
},
{
"id": "1603.00810"
},
{
"id": "1610.03017"
},
{
"id": "1511.04586"
},
{
"id": "1610.01108"
}
] |
1610.04286 | 0 | arXiv:1610.04286v2 [cs.RO] 22 May 2018
# Sim-to-Real Robot Learning from Pixels with Progressive Nets
Andrei A. Rusu DeepMind London, UK [email protected]
Mel Večerík DeepMind London, UK [email protected]
Thomas Rothörl DeepMind London, UK [email protected]
Nicolas Heess DeepMind London, UK [email protected]
Razvan Pascanu DeepMind London, UK [email protected]
Raia Hadsell DeepMind London, UK [email protected] | 1610.04286#0 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
1610.04286 | 1 | Razvan Pascanu DeepMind London, UK [email protected]
Raia Hadsell DeepMind London, UK [email protected]
Abstract: Applying end-to-end learning to solve complex, interactive, pixel- driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high- level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real- world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model- based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
Keywords: Robot learning, transfer, progressive networks, sim-to-real, CoRL.
# 1 Introduction | 1610.04286#1 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
1610.04286 | 2 | Keywords: Robot learning, transfer, progressive networks, sim-to-real, CoRL.
# 1 Introduction
Deep Reinforcement Learning offers new promise for achieving human-level control in robotics domains, especially for pixel-to-action scenarios where state estimation is from high dimensional sen- sors and environment interaction and feedback are critical. With deep RL, a new set of algorithms has emerged that can attain sophisticated, precise control on challenging tasks, but these accomplishments have been demonstrated primarily in simulation, rather than on actual robot platforms.
While recent advances in simulation-driven deep RL are impressive [1, 2, 3, 4, 5, 6, 7], demonstrating learning capabilities on real robots remains the bar by which we must measure the practical applica- bility of these methods. However, this poses a signiï¬cant challenge, given the "data-hungry" training regime required for current pixel-based deep RL methods, and the relative frailty of research robots and their human handlers. One solution is to use transfer learning methods to bridge the reality gap that separates simulation from real world domains. In this paper, we use progressive networks, a deep learning architecture that has recently been proposed for transfer learning, to demonstrate such an approach, thus providing a proof-of-concept pathway by which deep RL can be used to effect fast policy learning on a real robot. | 1610.04286#2 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
1610.04286 | 3 | Progressive nets have been shown to produce positive transfer between disparate tasks such as Atari games by utilizing lateral connections to previously learnt models [8]. The addition of new capacity for each new task allows specialized input features to be learned, an important advantage for deep RL algorithms which are improved by sharply-tuned perceptual features. An advantage of progressive
nets compared with other methods for transfer learning or domain adaptation is that multiple tasks may be learned sequentially, without needing to specify source and target tasks. | 1610.04286#3 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
1610.04286 | 4 | nets compared with other methods for transfer learning or domain adaptation is that multiple tasks may be learned sequentially, without needing to specify source and target tasks.
This paper presents an approach for transfer from simulation to the real robot that is proven using real-world, sparse-reward tasks. The tasks are learned using end-to-end deep RL, with RGB inputs and joint velocity output actions. First, an actor-critic network is trained in simulation using multiple asynchronous workers [6]. The network has a convolutional encoder followed by an LSTM. From the LSTM state, using a linear layer, we compute a set of discrete action outputs that control the different degrees of freedom of the simulated robot as well as the value function. After training, a new network is initialized with lateral, nonlinear connections to each convolutional and recurrent layer of the simulation-trained network. The new network is trained on a similar task on the real robot. Our initial findings show that the inductive bias imparted by the features and encoded policy of the simulation net is enough to give a dramatic learning speed-up on the real robot.
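A rough PyTorch sketch of the kind of network just described: a convolutional encoder, an LSTM, per-joint discrete action logits, and a value head. The layer sizes, the 84×84 input resolution, and all names are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ActorCriticColumn(nn.Module):
    def __init__(self, num_joints=6, actions_per_joint=3, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(                      # convolutional encoder over RGB frames
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten())
        self.lstm = nn.LSTMCell(32 * 8 * 8, hidden)        # 8x8 feature map for 84x84 inputs
        self.policy_heads = nn.ModuleList(                 # one categorical head per degree of freedom
            [nn.Linear(hidden, actions_per_joint) for _ in range(num_joints)])
        self.value_head = nn.Linear(hidden, 1)             # state-value estimate for the critic

    def forward(self, rgb, lstm_state):
        features = self.encoder(rgb)
        h, c = self.lstm(features, lstm_state)
        action_logits = [head(h) for head in self.policy_heads]
        value = self.value_head(h)
        return action_logits, value, (h, c)

net = ActorCriticColumn()
state = (torch.zeros(1, 128), torch.zeros(1, 128))
logits, value, state = net(torch.randn(1, 3, 84, 84), state)
```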
# 2 Transfer Learning from Simulation to Real | 1610.04286#4 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
1610.04286 | 5 | # 2 Transfer Learning from Simulation to Real
Our approach relies on the progressive nets architecture, which enables transfer learning through lateral connections which connect each layer of previously learnt network columns to each new column, thus supporting rich compositionality of features. We first summarize progressive nets, and then we discuss their application for transfer in robot domains.
# 2.1 Progressive Networks
Progressive networks are ideal for simulation-to-real transfer of policies in robot control domains, for multiple reasons. First, features learnt for one task may be transferred to many new tasks without destruction from ï¬ne-tuning. Second, the columns may be heterogeneous, which may be important for solving different tasks, including different input modalities, or simply to improve learning speed when transferring to the real robot. Third, progressive nets add new capacity, including new input connections, when transferring to new tasks. This is advantageous for bridging the reality gap, to accommodate dissimilar inputs between simulation and real sensors. | 1610.04286#5 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
1610.04286 | 6 | A progressive network starts with a single column: a deep neural network having L layers with hidden activations h_i^{(1)} ∈ R^{n_i}, with n_i the number of units at layer i ≤ L, and parameters Θ^{(1)} trained to convergence. When switching to a second task, the parameters Θ^{(1)} are "frozen" and a new column with parameters Θ^{(2)} is instantiated (with random initialization), where layer h_i^{(2)} receives input from both h_{i-1}^{(2)} and h_{i-1}^{(1)} via lateral connections. Progressive networks can be generalized in a straightforward manner to have arbitrary network width per column/layer, to accommodate varying degrees of task difficulty, or to compile lateral connections from multiple, independent networks in an ensemble setting.
h_i^{(k)} = f\left( W_i^{(k)} h_{i-1}^{(k)} + \sum_{j<k} U_i^{(k:j)} h_{i-1}^{(j)} \right)    (1)
where W_i^{(k)} ∈ R^{n_i × n_{i-1}} are the weights of layer i of column k, U_i^{(k:j)} ∈ R^{n_i × n_j} are the lateral connections from layer i − 1 of column j to layer i of column k, and h_0 is the network input. f is an element-wise non-linearity: we use f(x) = max(0, x) for all intermediate layers. | 1610.04286#6 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
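A minimal NumPy sketch of the column update in equation (1), for fully connected layers with ReLU activations. The `ProgressiveColumn` class name, the initialisation scale, and the layer sizes are illustrative assumptions of this sketch, not the paper's implementation; it assumes all columns share the input size and depth.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class ProgressiveColumn:
    """One column of a progressive net: its own weights W[i] plus lateral
    weights U[i][j] reading the frozen activations of earlier columns."""

    def __init__(self, layer_sizes, prev_columns=(), seed=0):
        rng = np.random.default_rng(seed)
        self.sizes = list(layer_sizes)
        self.prev = list(prev_columns)                       # frozen, earlier columns
        self.W = [rng.normal(0.0, 0.1, size=(m, n))          # W_i^{(k)}
                  for n, m in zip(self.sizes[:-1], self.sizes[1:])]
        # U_i^{(k:j)}: lateral connection from layer i-1 of column j (none at the input).
        self.U = [[] if i == 0 else
                  [rng.normal(0.0, 0.1, size=(self.sizes[i + 1], c.sizes[i]))
                   for c in self.prev]
                  for i in range(len(self.W))]

    def forward(self, x):
        """Return activations [h_0, ..., h_L]; the output layer is kept linear."""
        prev_acts = [c.forward(x) for c in self.prev]        # earlier columns stay frozen
        h = [np.asarray(x, dtype=float)]
        for i, W in enumerate(self.W):
            pre = W @ h[i]
            for U_ij, acts in zip(self.U[i], prev_acts):
                pre = pre + U_ij @ acts[i]                   # h_{i-1}^{(j)} in the paper's indexing
            h.append(relu(pre) if i < len(self.W) - 1 else pre)
        return h

# Example: a frozen simulation column and a second column that reads it laterally.
sim_col = ProgressiveColumn([8, 32, 32, 4])
new_col = ProgressiveColumn([8, 16, 16, 4], prev_columns=[sim_col], seed=1)
outputs = new_col.forward(np.ones(8))[-1]                    # 4 output units
```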
1610.04286 | 7 | In the standard pretrain-and-finetune paradigm, there is often an implicit assumption of "overlap" between the tasks. Finetuning is efficient in this setting, as parameters need only be adjusted slightly to the target domain, and often only the top layer is retrained. In contrast, we make no assumptions about the relationship between tasks, which may in practice be orthogonal or even adversarial. Progressive networks side-step this issue by allocating a new column, potentially with different structure or inputs, for each new task. Columns in progressive networks are free to reuse, modify or ignore previously learned features via the lateral connections.
Application to Reinforcement Learning. Although progressive networks are widely applicable, this paper focuses on their application to deep reinforcement learning. In this case, each column is trained to solve a particular Markov Decision Process (MDP): the k-th column thus defines a policy
π^{(k)}(a | s) taking as input a state s given by the environment, and generating probabilities over actions π^{(k)}(a | s) := h_L^{(k)}(s). At each time-step, an action is sampled from this distribution and taken in the environment, yielding the subsequent state. This policy implicitly defines a stationary distribution ρ^{π^{(k)}}(s, a) over states and actions.
# 2.2 Approach | 1610.04286#7 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
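As a small illustration of the RL reading above, the last layer of a column can be treated as unnormalised action preferences and turned into the sampling distribution π^{(k)}(a | s). The softmax and the `act` helper below are assumptions of this sketch (the excerpt does not specify the output parameterisation), and `ProgressiveColumn` refers to the hypothetical class sketched earlier.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def act(column, state, rng):
    """Sample an action from pi^(k)(a | s), read off the column's last layer h_L^{(k)}(s)."""
    logits = column.forward(state)[-1]     # h_L^{(k)}(s)
    probs = softmax(logits)                # interpreted here as action probabilities
    action = rng.choice(len(probs), p=probs)
    return action, probs

# e.g. action, probs = act(new_col, np.ones(8), np.random.default_rng(0))
```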
1610.04286 | 8 | # 2.2 Approach
The proposed approach for transfer from simulated to real robot domains is based on a progressive network with some specific changes. First, the columns of a progressive net do not need to have identical capacity or structure, and this can be an advantage in sim-to-real situations. Thus, the simulation-trained column is designed to have sufficient capacity and depth to learn the task from scratch, but the robot-trained columns have minimal capacity, to encourage fast learning and limit total parameter growth. Second, the layer-wise adapters proposed for progressive nets are unnecessary for the output layers of complementary sequences of tasks, so they are not used. Third, the output layer of the robot-trained column is initialised from the simulation-trained column in order to improve exploration. These architectural features are shown in Fig. 1.
[Figure 1 schematic: "simulation" and "reality" columns with input and output layers; see the caption below.] | 1610.04286#8 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
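Concretely, in terms of the hypothetical `ProgressiveColumn` sketched earlier, the capacity asymmetry described above amounts to giving the simulation column wide hidden layers and the robot column narrow ones; all sizes below are placeholders, not the paper's architecture.

```python
# Placeholder sizes for illustration only.
obs_dim, n_actions = 64, 4

# Wide, deep column with enough capacity to learn the task from scratch in simulation...
sim_col = ProgressiveColumn([obs_dim, 128, 128, 128, n_actions])
# ...and a deliberately narrow column for the real robot, reading sim_col laterally.
robot_col = ProgressiveColumn([obs_dim, 32, 32, 32, n_actions],
                              prev_columns=[sim_col], seed=1)
```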
1610.04286 | 9 | [Figure 1 schematic: "simulation" and "reality" columns with input and output layers; caption follows.]
Figure 1: Depiction of a progressive network, left, and a modified progressive architecture used for robot transfer learning, right. The first column is trained on Task 1, in simulation, the second column is trained on Task 1 on the robot, and the third column is trained on Task 2 on the robot. Columns may differ in capacity, and the adapter functions (marked "a") are not used for the output layers of this non-adversarial sequence of tasks. | 1610.04286#9 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
1610.04286 | 10 | The greatest risk in this approach to transfer learning is that rewards will be so sparse, or non-existent, in the real domain that the reinforcement learning will not improve a vastly suboptimal initial policy within a practical time frame. Thus, in order to maximise the likelihood of reward during exploration in the real domain, the new column is initialised such that the initial policy of the agent will be identical to the previous column. This is accomplished by initialising the weights coming from the last layer of the previous column to the output layer of the new column with the output weights of the previous column, and the connections incoming from the last hidden layer of the current column are initialised with zero-valued weights. Thus, using the example network in Fig. 1 (right), when parameters Θ^{(2)} are instantiated, layer output^{(2)} receives input from both h_2^{(1)} and h_2^{(2)}. However, unlike the other parameters in Θ^{(2)}, which will be randomly initialised, the weights W_out^{(2)} will be zeros and the weights U_out^{(1:2)} will be initialised with the output weights of the previous column. Note that this only affects the initial policy of the agent and does not prevent the new column from training.
# 3 Related Literature | 1610.04286#10 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
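A sketch of that output-layer initialisation in the two-column case, again using the hypothetical `ProgressiveColumn` from the earlier sketch (attribute names and indexing are assumptions of that sketch, not the paper's code): the new column's own output weights are zeroed and the lateral output weights are copied from the previous column, so the initial composite policy matches the old one.

```python
import numpy as np

def init_policy_from_previous(new_col, prev_col):
    """Make new_col's initial policy identical to prev_col's (two-column case).

    Assumes both columns end in an output layer of the same size (same action set).
    """
    out = len(new_col.W) - 1                       # index of the output layer
    # W_out^{(2)} = 0: the new column's own last hidden layer contributes nothing yet.
    new_col.W[out] = np.zeros_like(new_col.W[out])
    # U_out^{(1:2)} = W_out^{(1)}: the previous column's last hidden activations are
    # mapped through its copied output weights, so output^{(2)}(s) == output^{(1)}(s) at step 0.
    new_col.U[out][0] = prev_col.W[out].copy()

# init_policy_from_previous(robot_col, sim_col)    # then train robot_col with RL as usual
```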
1610.04286 | 11 | # 3 Related Literature
There exist many different paradigms for domain transfer and many approaches designed specifically for deep neural models, but substantially fewer approaches for transfer from simulation to reality for robot domains. Even more rare are methods that can be used for transfer in interactive, rich sensor domains using end-to-end (pixel-to-action) learning.
A growing body of work has been investigating the ability of deep networks to transfer between domains. Some research [9, 10] considers simply augmenting the target domain data with data from the source domain where an alignment exists. Building on this work, [11] starts from the observation that as one looks at higher layers in the model, the transferability of the features decreases quickly. To correct this effect, a soft constraint is added that enforces the distribution of the features to be
more similar. In [11], a "confusion" loss is proposed which forces the model to ignore variations in the data that separate the two domains [12, 13].
Based on [12], [14] attempts to address the simulation to reality gap by using aligned data. The work is focused on pose estimation of the robotic arm, where training happens on a triple loss that looks at aligned simulation to real data, including the domain confusion loss. The paper does not show the efficiency of the method on learning novel complex policies. | 1610.04286#11 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |
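The "confusion"-style objective referred to above can be summarised with a generic, hypothetical sketch; this is neither this paper's method nor the exact loss of any single cited work, just the common shape of the idea: a domain classifier learns to separate simulated from real features, while the feature extractor is trained so that this classifier's prediction stays uninformative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def domain_losses(domain_logits, domain_labels):
    """Two ingredients of a generic domain-confusion objective.

    domain_logits: (batch, n_domains) outputs of a domain classifier on shared features.
    domain_labels: (batch,) integer domain of each sample (e.g. 0 = sim, 1 = real).
    """
    p = softmax(domain_logits)
    n = domain_logits.shape[1]
    # Classifier loss: learn to tell the domains apart (updates the domain classifier).
    clf_loss = -np.log(p[np.arange(len(domain_labels)), domain_labels] + 1e-8).mean()
    # Confusion loss: push the prediction towards uniform over domains
    # (updates the feature extractor, pulling simulated and real features together).
    conf_loss = -(np.log(p + 1e-8) / n).sum(axis=1).mean()
    return clf_loss, conf_loss
```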
1610.04286 | 12 | Several recent works from the supervised learning literature, e.g. [15, 16, 17], demonstrate how ideas from the adversarial training of neural networks can be used to reduce the sensitivity of a trained network to inter-domain variations, without requiring aligned training data. Intuitively these approaches train a representation that makes it hard to distinguish between data points drawn from the different domains. These ideas have, however, not yet been tested in the context of control. Demonstrating the difficulty of the problem, [10] provides evidence that a simple application of a model trained on synthetic data on the real robot fails. The paper also shows that the main failure point is the discrepancy in visual cues between simulation and reality. | 1610.04286#12 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522 | [
{
"id": "1606.04671"
}
] |