id (stringlengths 12–15) | title (stringlengths 8–162) | content (stringlengths 1–17.6k) | prechunk_id (stringlengths 0–15) | postchunk_id (stringlengths 0–15) | arxiv_id (stringlengths 10–10) | references (listlengths 1–1) |
---|---|---|---|---|---|---|
1611.00625#19 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | The state received from TorchCraft can be composed of:
frame_from_bwapi : int // frames in the current game
units_myself : {
  int : { // Unit ID
    target : int // Unit ID
    target_pos : {
      1 : int // Absolute x
      2 : int // Absolute y
    }
    awtype : int // Type of air weapon
    gwtype : int // Type of ground weapon
    awcd : int // Number of frames before next air weapon possible attack
    hp : int // Number of hit points
    energy : int // Number of energy/mana points, if any
    type : int // Unit type
    position : {
      1 : int // Absolute x
      2 : int // Absolute y
    }
    armor : int // Number of armor points
    gwcd : int // Number of frames before next ground weapon possible attack
    gwattack : int // Ground weapon attack damage
    shield : int // Number of shield points (like HP, but with special properties)
    awattack : int // Air weapon attack damage
    size : int // Size of the unit
    enemy : bool // Whether unit is an enemy or not
    idle : bool // Whether unit is idle, i.e. not following any orders currently
    gwrange : int // Ground weapon max range
    awrange : int // Air weapon max range
  }
}
units_enemy : {
  // Same format as "units_myself"
  ...
} | 1611.00625#18 | 1611.00625#20 | 1611.00625 | [
"1606.01540"
]
|
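The chunk above documents the per-unit state table TorchCraft sends each frame. Below is a minimal sketch of how one might inspect such a frame once it has been deserialized into nested Python dicts mirroring that schema; only the field names come from the listing, while the helper name and the use of 0-indexed Python tuples for positions (instead of the 1-indexed table keys shown above) are assumptions for illustration.

```python
# Hypothetical helper for inspecting a deserialized TorchCraft frame.
# Field names follow the schema above; everything else is illustrative.

def summarize_units(frame):
    """Print one line per friendly unit with its key combat stats."""
    for unit_id, u in frame["units_myself"].items():
        x, y = u["position"][0], u["position"][1]
        print(f"unit {unit_id}: type={u['type']} hp={u['hp']} "
              f"shield={u['shield']} pos=({x},{y}) idle={u['idle']}")

frame = {
    "frame_from_bwapi": 1200,
    "units_myself": {
        7: {"type": 0, "hp": 40, "shield": 20, "energy": 0,
            "position": (120, 88), "idle": True,
            "gwattack": 6, "awattack": 0, "gwrange": 4, "awrange": 0,
            "armor": 1, "gwcd": 0, "awcd": 0, "enemy": False,
            "target": -1, "target_pos": (0, 0),
            "gwtype": 3, "awtype": 0, "size": 1},
    },
    "units_enemy": {},
}
summarize_units(frame)
```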
1611.00625#20 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | 6 | 1611.00625#19 | 1611.00625 | [
"1606.01540"
]
|
|
1610.07629#0 | A Learned Representation For Artistic Style | arXiv:1610.07629v5 [cs.CV] 9 Feb 2017 Published as a conference paper at ICLR 2017 # A LEARNED REPRESENTATION FOR ARTISTIC STYLE Vincent Dumoulin & Jonathon Shlens & Manjunath Kudlur Google Brain, Mountain View, CA [email protected], [email protected], [email protected] # ABSTRACT | 1610.07629#1 | 1610.07629 | [
"1603.03417"
]
|
|
1610.07629#1 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style. # INTRODUCTION A pastiche is an artistic work that imitates the style of another one. Computer vision and more recently machine learning have a history of trying to automate pastiche, that is, render an image in the style of another one. This task is called style transfer, and is closely related to the texture synthesis task. While the latter tries to capture the statistical relationship between the pixels of a source image which is assumed to have a stationary distribution at some scale, the former does so while also attempting to preserve some notion of content. On the computer vision side, Efros & Leung (1999) and Wei & Levoy (2000) attempt to “ | 1610.07629#0 | 1610.07629#2 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#2 | A Learned Representation For Artistic Style | grow” textures one pixel at a time using non-parametric sampling of pixels in an exemplar image. Efros & Freeman (2001) and Liang et al. (2001) extend this idea to “growing” textures one patch at a time, and Efros & Freeman (2001) uses the approach to implement “texture transfer”, i.e. transferring the texture of an object onto another one. Kwatra et al. (2005) approaches the texture synthesis problem from an energy minimization perspective, progressively refining the texture using an EM-like algorithm. Hertzmann et al. (2001) introduces the concept of “image analogies”: given a pair of “unfiltered” and “filtered” versions of an exemplar image, a target image is processed to create an analogous “filtered” result. More recently, Frigo et al. (2016) treats style transfer as a local texture transfer (using an adaptive patch partition) followed by a global color transfer, and Elad & Milanfar (2016) extends Kwatra's energy-based method into a style transfer algorithm by taking content similarity into account. On the machine learning side, it has been shown that a trained classifier can be used as a feature extractor to drive texture synthesis and style transfer. Gatys et al. (2015a) uses the VGG-19 network (Simonyan & Zisserman, 2014) to extract features from a texture image and a synthesized texture. The two sets of features are compared and the synthesized texture is modified by gradient descent so that the two sets of features are as close as possible. Gatys et al. (2015b) extends this idea to style transfer by adding the constraint that the synthesized image also be close to a content image with respect to another set of features extracted by the trained VGG-19 classifi | 1610.07629#1 | 1610.07629#3 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#3 | A Learned Representation For Artistic Style | er. While very flexible, this algorithm is expensive to run due to the optimization loop being carried out. Ulyanov et al. (2016a), Li & Wand (2016) and Johnson et al. (2016) tackle this problem by introducing a feedforward style transfer network, which is trained to go from content to pastiche image in one pass. However, in doing so some of the flexibility of the original algorithm is lost: the style transfer network is tied to a single style, which means that separate networks have to be trained (a) With conditional instance normalization, a single style transfer network can capture 32 styles at the same time, five of which are shown here. All 32 styles in this single model are in the Appendix. Golden Gate Bridge photograph by Rich Niewiroski Jr. (b) The style representation learned via conditional instance normalization permits the arbitrary combination of artistic styles. Each pastiche in the sequence corresponds to a different step in interpolating between the γ and β values associated with two styles the model was trained on. Figure 1: Pastiches produced by a style transfer network trained on 32 styles chosen for their variety. for every style being modeled. Subsequent work has brought some performance improvements to style transfer networks, e.g. with respect to color preservation (Gatys et al., 2016a) or style transfer quality (Ulyanov et al., 2016b), but to our knowledge the problem of the single-purpose nature of style transfer networks remains untackled. | 1610.07629#2 | 1610.07629#4 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#4 | A Learned Representation For Artistic Style | We think this is an important problem that, if solved, would have both scientific and practical importance. First, style transfer has already found use in mobile applications, for which on-device processing is contingent upon the models having a reasonable memory footprint. More broadly, building a separate network for each style ignores the fact that individual paintings share many common visual elements and a true model that captures artistic style would be able to exploit and learn from such regularities. Furthermore, the degree to which an artistic styling model might generalize across painting styles would directly measure our ability to build systems that parsimoniously capture the higher level features and statistics of photographs and images (Simoncelli & Olshausen, 2001). In this work, we show that a simple modification of the style transfer network, namely the introduction of conditional instance normalization, allows it to learn multiple styles (Figure 1a). We demonstrate that this approach is flexible yet comparable to single-purpose style transfer networks, both qualitatively and in terms of convergence properties. This model reduces each style image into a point in an embedding space. Furthermore, this model provides a generic representation for artistic styles that seems flexible enough to capture new artistic styles much faster than a single-purpose net- | 1610.07629#3 | 1610.07629#5 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#5 | A Learned Representation For Artistic Style | VGG-16 Figure 2: Style transfer network training diagram (Johnson et al., 2016; Ulyanov et al., 2016a). A pastiche image is produced by feeding a content image through the style transfer network. The two images, along with a style image, are passed through a trained classifier, and the resulting intermediate representations are used to compute the content loss Lc and style loss Ls. The parameters of the classifier are kept fixed throughout training. work. | 1610.07629#4 | 1610.07629#6 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#6 | A Learned Representation For Artistic Style | Finally, we show that the embedding space representation permits one to arbitrarily combine artistic styles in novel ways not previously observed (Figure 1b). # 2 STYLE TRANSFER WITH DEEP NETWORKS Style transfer can be defined as finding a pastiche image p whose content is similar to that of a content image c but whose style is similar to that of a style image s. This objective is by nature vaguely defined, because similarity in content and style are themselves vaguely defined. The neural algorithm of artistic style proposes the following definitions: • Two images are similar in content if their high-level features as extracted by a trained classifier are close in Euclidean distance. • Two images are similar in style if their low-level features as extracted by a trained classifier share the same statistics or, more concretely, if the difference between the features' Gram matrices has a small Frobenius norm. The first point is motivated by the empirical observation that high-level features in classifiers tend to correspond to higher levels of abstractions (see Zeiler & Fergus (2014) for visualizations; see Johnson et al. (2016) for style transfer features). The second point is motivated by the observation that the artistic style of a painting may be interpreted as a visual texture (Gatys et al., 2015a). A visual texture is conjectured to be spatially homogenous and consist of repeated structural motifs whose minimal sufficient statistics are captured by lower order statistical measurements (Julesz, 1962; Portilla & Simoncelli, 1999). In its original formulation, the neural algorithm of artistic style proceeds as follows: starting from some initialization of p (e.g. c, or some random initialization), the algorithm adapts p to minimize the loss function L(s, c, p) = λsLs(p) + λcLc(p), (1) where Ls(p) is the style loss, Lc(p) is the content loss and λs, λc are scaling hyperparameters. Given a set of “style layers” S and a set of “content layers” C, the style and content losses are themselves defined as | 1610.07629#5 | 1610.07629#7 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#7 | A Learned Representation For Artistic Style | Ls(p) = Σi∈S (1/Ui) ‖G(φi(p)) − G(φi(s))‖F² (2) and Lc(p) = Σj∈C (1/Uj) ‖φj(p) − φj(c)‖2² (3), where φl(x) are the classifier activations at layer l, Ul is the total number of units at layer l and G(φl(x)) is the Gram matrix associated with the layer l activations. In practice, we set λc = 1.0 and leave λs as a free hyper-parameter. | 1610.07629#6 | 1610.07629#8 | 1610.07629 | [
"1603.03417"
]
|
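To make Equations (1)–(3) concrete, here is a minimal NumPy sketch of the style and content losses, assuming the feature activations have already been extracted from a trained classifier and flattened to (Ul, Cl) arrays (Ul spatial positions by Cl channels per layer); the helper names and the activation layout are assumptions for illustration, not the paper's released code.

```python
import numpy as np

def gram(phi_l):
    """Gram matrix of one layer's activations, shape (C_l, C_l)."""
    return phi_l.T @ phi_l

def style_loss(phi_p, phi_s, style_layers):
    # Eq. (2): squared Frobenius norm of Gram differences, scaled by 1/U_l.
    return sum(
        np.sum((gram(phi_p[l]) - gram(phi_s[l])) ** 2) / phi_p[l].shape[0]
        for l in style_layers
    )

def content_loss(phi_p, phi_c, content_layers):
    # Eq. (3): squared Euclidean distance of activations, scaled by 1/U_l.
    return sum(
        np.sum((phi_p[l] - phi_c[l]) ** 2) / phi_p[l].shape[0]
        for l in content_layers
    )

def total_loss(phi_p, phi_s, phi_c, S, C, lambda_s):
    # Eq. (1), with lambda_c fixed to 1.0 as in the paper.
    return lambda_s * style_loss(phi_p, phi_s, S) + content_loss(phi_p, phi_c, C)

# Sanity check: identical features give zero loss.
rng = np.random.default_rng(0)
phi = {l: rng.normal(size=(64, 16)) for l in ("conv1", "conv2")}
print(total_loss(phi, phi, phi, S=("conv1",), C=("conv2",), lambda_s=10.0))  # 0.0
```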
1610.07629#8 | A Learned Representation For Artistic Style | In order to speed up the procedure outlined above, a feed-forward convolutional network, termed a style transfer network T, is introduced to learn the transformation (Johnson et al., 2016; Li & Wand, 2016; Ulyanov et al., 2016a). It takes as input a content image c and outputs the pastiche image p directly (Figure 2). The network is trained on many content images (Deng et al., 2009) using the same loss function as above, i.e. L(s, c) = λsLs(T(c)) + λcLc(T(c)). (4) While feedforward style transfer networks solve the problem of speed at test-time, they also suffer from the fact that the network T is tied to one specific painting style. This means that a separate network T has to be trained for every style to be imitated. The real-world impact of this limitation is that it becomes prohibitive to implement a style transfer application on a memory-limited device, such as a smartphone. # 2.1 N-STYLES FEEDFORWARD STYLE TRANSFER NETWORKS Our work stems from the intuition that many styles probably share some degree of computation, and that this sharing is thrown away by training N networks from scratch when building an N-styles style transfer system. For instance, many impressionist paintings share similar paint strokes but differ in the color palette being used. In that case, it seems very wasteful to treat a set of N impressionist paintings as completely separate styles. To take this into account, we propose to train a single conditional style transfer network T(c, s) for N styles. The conditional network is given both a content image and the identity of the style to apply and produces a pastiche corresponding to that style. While the idea is straightforward on paper, there remains the open question of how conditioning should be done. In exploring this question, we found a very surprising fact about the role of normalization in style transfer networks: to model a style, it is sufficient to specialize scaling and shifting parameters after normalization to each specific style. In other words, all convolutional weights of a style transfer network can be shared across many styles, and it is sufficient to tune parameters for an affine transformation after normalization for each style. | 1610.07629#7 | 1610.07629#9 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#9 | A Learned Representation For Artistic Style | We call this approach conditional instance normalization. The goal of the procedure is to transform a layer's activations x into a normalized activation z specific to painting style s. Building off the instance normalization technique proposed in Ulyanov et al. (2016b), we augment the γ and β parameters so that they're N × C matrices, where N is the number of styles being modeled and C is the number of output feature maps. Conditioning on a style is achieved as follows: z = γs (x − μ)/σ + βs, (5) where μ and σ are x's mean and standard deviation taken across spatial axes and γs and βs are obtained by selecting the row corresponding to s in the γ and β matrices (Figure 3). One added benefit of this approach is that one can stylize a single image into N painting styles with a single feed forward pass of the network with a batch size of N. In contrast, a single-style network requires N feed forward passes to perform N style transfers (Johnson et al., 2016; Li & Wand, 2016; Ulyanov et al., 2016a). Because conditional instance normalization only acts on the scaling and shifting parameters, training a style transfer network on N styles requires fewer parameters than the naive approach of training N separate networks. In a typical network setup, the model consists of roughly 1.6M parameters, only around 3K (or 0.2%) of which specify individual artistic styles. In fact, because the size of γ and β grows linearly with respect to the number of feature maps in the network, this approach requires O(N × L) parameters, where L is the total number of feature maps in the network. In addition, as is discussed in subsection 3.4, conditional instance normalization presents the advantage that integrating an (N + 1)th style to the network is cheap because of the very small number of parameters to train. Figure 3: Conditional instance normalization. The input activation x is normalized across both spatial dimensions and subsequently scaled and shifted using style-dependent parameter vectors γs, βs where s indexes the style label. # 3 EXPERIMENTAL RESULTS 3.1 METHODOLOGY | 1610.07629#8 | 1610.07629#10 | 1610.07629 | [
"1603.03417"
]
|
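The chunk above defines conditional instance normalization (Eq. 5). Below is an illustrative NumPy sketch of that operation; the function name, the (H, W, C) activation layout, and the epsilon added for numerical stability are assumptions, while the paper's released TensorFlow code remains the authoritative implementation.

```python
import numpy as np

def conditional_instance_norm(x, gamma, beta, style_idx, eps=1e-5):
    """x: (H, W, C) activations; gamma, beta: (N_styles, C) matrices."""
    mu = x.mean(axis=(0, 1), keepdims=True)          # mean over spatial axes
    sigma = x.std(axis=(0, 1), keepdims=True)        # std over spatial axes
    z = (x - mu) / (sigma + eps)                     # instance-normalize
    return gamma[style_idx] * z + beta[style_idx]    # style-specific affine (Eq. 5)

# Example: 2 styles, a 4x4 feature map with 3 channels.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 4, 3))
gamma = np.ones((2, 3))
beta = np.zeros((2, 3))
out = conditional_instance_norm(x, gamma, beta, style_idx=1)
print(out.shape)  # (4, 4, 3)
```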
1610.07629#10 | A Learned Representation For Artistic Style | Unless noted otherwise, all style transfer networks were trained using the hyperparameters outlined in the Appendix's Table 1. We used the same network architecture as in Johnson et al. (2016), except for two key details: zero-padding is replaced with mirror-padding, and transposed convolutions (also sometimes called deconvolutions) are replaced with nearest-neighbor upsampling followed by a convolution. The use of mirror-padding avoids border patterns sometimes caused by zero-padding in SAME-padded convolutions, while the replacement for transposed convolutions avoids checkerboard patterning, as discussed in Odena et al. (2016). | 1610.07629#9 | 1610.07629#11 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#11 | A Learned Representation For Artistic Style | We find that with these two improvements training the network no longer requires a total variation loss that was previously employed to remove high frequency noise as proposed in Johnson et al. (2016). Our training procedure follows Johnson et al. (2016). Briefly, we employ the ImageNet dataset (Deng et al., 2009) as a corpus of training content images. We train the N-style network with stochastic gradient descent using the Adam optimizer (Kingma & Ba, 2014). Details of the model architecture are in the Appendix. A complete implementation of the model in TensorFlow (Abadi et al., 2016) as well as a pretrained model are available for download.1 The evaluation images used for this work were resized such that their smaller side has size 512. Their stylized versions were then center-cropped to 512x512 pixels for display. 3.2 TRAINING A SINGLE NETWORK ON N STYLES PRODUCES STYLIZATIONS COMPARABLE TO INDEPENDENTLY-TRAINED MODELS As a first test, we trained a 10-styles model on stylistically similar images, namely 10 impressionist paintings from Claude Monet. Figure 4 shows the result of applying the trained network on evaluation images for a subset of the styles, with the full results being displayed in the Appendix. The model captures different color palettes and textures. We emphasize that 99.8% of the parameters are shared across all styles in contrast to 0.2% of the parameters which are unique to each painting style. To get a sense of what is being traded off by folding 10 styles into a single network, we trained a separate, single-style network on each style and compared them to the 10-styles network in terms of style transfer quality and training speed (Figure 5). The left column compares the learning curves for style and content losses between the single-style networks and the 10-styles network. The losses were averaged over 32 random batches of content images. By visual inspection, we observe that the 10-styles network converges as quickly as the single-style networks in terms of style loss, but lags slightly behind in terms of content loss. In order to quantify this observation, we compare the final losses for 10-styles and single-style models (center column). The 10-styles network' | 1610.07629#10 | 1610.07629#12 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#12 | A Learned Representation For Artistic Style | s content loss is around 8.7 ± 3.9% higher than its 1https://github.com/tensorflow/magenta Figure 4: A single style transfer network was trained to capture the style of 10 Monet paintings, five of which are shown here. All 10 styles in this single model are in the Appendix. Golden Gate Bridge photograph by Rich Niewiroski Jr. single-style counterparts, while the difference in style losses (8.9 ± 16.5% lower) is insignificant. While the N-styles network suffers from a slight decrease in content loss convergence speed, this may not be a fair comparison, given that it takes N times more parameter updates to train N single-style networks separately than to train them with an N-styles network. The right column shows a comparison between the pastiches produced by the 10-styles network and the ones produced by the single-style networks. We see that both results are qualitatively similar. 3.3 THE N-STYLES MODEL IS FLEXIBLE ENOUGH TO CAPTURE VERY DIFFERENT STYLES We evaluated the flexibility of the N-styles model by training a style transfer network on 32 works of art chosen for their diversity. Figure 1a shows the result of applying the trained network on evaluation images for a subset of the styles. Once again, the full results are displayed in the Appendix. The model appears to be capable of modeling all 32 styles in spite of the tremendous variation in color palette and the spatial scale of the painting styles. 3.4 THE TRAINED NETWORK GENERALIZES ACROSS PAINTING STYLES Since all weights in the transformer network are shared between styles, one way to incorporate a new style to a trained network is to keep the trained weights fixed and learn a new set of γ and β parameters. To test the efficiency of this approach, we used it to incrementally incorporate Monet's Plum Trees in Blossom painting to the network trained on 32 varied styles. Figure 6 shows that doing so is much faster than training a new network from scratch (left) while yielding comparable pastiches: even after eight times fewer parameter updates than its single-style counterpart, the fine-tuned model produces comparable pastiches (right). | 1610.07629#11 | 1610.07629#13 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#13 | A Learned Representation For Artistic Style | 3.5 THE TRAINED NETWORK CAN ARBITRARILY COMBINE PAINTING STYLES The conditional instance normalization approach raises some interesting questions about style representation. In learning a different set of γ and β parameters for every style, we are in some sense learning an embedding of styles. [Figure 5 plots: total content loss and total style loss vs. parameter updates, and final content/style losses of the N-styles model vs. single-style models.] Figure 5: The N-styles model exhibits learning dynamics comparable to individual models. (Left column) The N-styles model converges slightly slower in terms of content loss (top) and as fast in terms of style loss (bottom) than individual models. Training on a single Monet painting is represented by two curves with the same color. The dashed curve represents the N-styles model, and the full curves represent individual models. Emphasis has been added on the styles for Vétheuil (1902) (teal) and Water Lilies (purple) for visualization purposes; remaining colors correspond to other Monet paintings (see Appendix). (Center column) The N-styles model reaches a slightly higher final content loss than (top, 8.7 ± 3.9% increase) and a final style loss comparable to (bottom, 8.9 ± 16.5% decrease) individual models. (Right column) Pastiches produced by the N-styles network are qualitatively comparable to those produced by individual networks. | 1610.07629#12 | 1610.07629#14 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#14 | A Learned Representation For Artistic Style | [Figure 6 plot: total content loss vs. parameter updates, "From scratch" vs. "Finetuned" curves, with pastiches at 5,000 and 40,000 steps.] Figure 6: The trained network is efficient at learning new styles. (Left column) Learning γ and β from a trained style transfer network converges much faster than training a model from scratch. (Right) Learning γ and β for 5,000 steps from a trained style transfer network produces pastiches comparable to that of a single network trained from scratch for 40,000 steps. Conversely, 5,000 steps of training from scratch produces a poor pastiche. Previous work suggested that cleverly balancing optimization strategies offers an opportunity to blend painting styles.2 To probe the utility of this embedding, we tried convex combinations of the 2For instance, https://github.com/jcjohnson/neural-style [Figure 7 plot: style losses vs. interpolation weight α from 0.0 to 1.0.] Figure 7: The N-styles network can arbitrarily combine artistic styles. (Left) Combining four styles, shown in the corners. Each pastiche corresponds to a different convex combination of the four styles' γ and β values. (Right) As we transition from one style to another (Bicentennial Print and Head of a Clown in this case), the style losses vary monotonically. γ and β values to blend very distinct painting styles (Figure 1b; Figure 7, left column). Employing a single convex combination produces a smooth transition from one style to the other. | 1610.07629#13 | 1610.07629#15 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#15 | A Learned Representation For Artistic Style | Suppose (γ1, β1) and (γ2, β2) are the parameters corresponding to two different styles. We use γ = α × γ1 + (1 − α) × γ2 and β = α × β1 + (1 − α) × β2 to stylize an image. Employing convex combinations may be extended to an arbitrary number of styles.3 Figure 7 (right column) shows the style loss from the transformer network for a given source image, with respect to the Bicentennial Print and Head of a Clown paintings, as we vary α from 0 to 1. As α increases, the style loss with respect to Bicentennial Print increases, which explains the smooth fading out of that style's artifact in the transformed image. # 4 DISCUSSION It seems surprising that such a small proportion of the network's parameters can have such an impact on the overall process of style transfer. A similar intuition has been observed in auto-regressive models of images (van den Oord et al., 2016b) and audio (van den Oord et al., 2016a) where the conditioning process is mediated by adjusting the biases for subsequent samples from the model. That said, in the case of art stylization when posed as a feedforward network, it could be that the specific network architecture is unable to take full advantage of its capacity. We see evidence for this behavior in that pruning the architecture leads to qualitatively similar results. Another interpretation could be that the convolutional weights of the style transfer network encode transformations that represent “elements of style”. The scaling and shifting factors would then provide a way for each style to inhibit or enhance the expression of various elements of style to form a global identity of style. While this work does not attempt to verify this hypothesis, we think that this would constitute a very promising direction of research in understanding the computation behind style transfer networks as well as the representation of images in general. Concurrent to this work, Gatys et al. (2016b) demonstrated exciting new methods for revising the loss to selectively adjust the spatial scale, color information and spatial localization of the artistic style information. | 1610.07629#14 | 1610.07629#16 | 1610.07629 | [
"1603.03417"
]
|
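The chunk above gives the interpolation rule γ = α × γ1 + (1 − α) × γ2 (and likewise for β) for blending two styles. Below is a minimal sketch of that rule, reusing the (N_styles, C) parameter layout from the conditional-instance-norm sketch earlier; the helper name and toy shapes are illustrative assumptions.

```python
import numpy as np

def blend_styles(gamma, beta, s1, s2, alpha):
    """Return interpolated (gamma, beta) row vectors for styles s1 and s2."""
    g = alpha * gamma[s1] + (1.0 - alpha) * gamma[s2]
    b = alpha * beta[s1] + (1.0 - alpha) * beta[s2]
    return g, b

# Sweeping alpha from 0 to 1 moves smoothly from style s2 to style s1,
# as in Figure 7 (right column).
gamma = np.random.default_rng(1).normal(size=(32, 3))
beta = np.zeros((32, 3))
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    g, b = blend_styles(gamma, beta, s1=0, s2=1, alpha=alpha)
    print(alpha, g[:2])
```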
1610.07629#16 | A Learned Representation For Artistic Style | These methods are complementary to the results in this paper and present an interesting direction for exploring how spatial and color information uniquely factor into artistic style representation. The question of how predictive each style image is of its corresponding style representation is also of great interest. If it is the case that the style representation can easily be predicted from a style image, 3Please see the code repository for a real-time, interactive demonstration. A screen capture is available at https://www.youtube.com/watch?v=6ZHiARZmiUI. one could imagine building a transformer network which skips learning an individual conditional embedding and instead learns to produce a pastiche directly from a style and a content image, much like in the original neural algorithm of artistic style, but without any optimization loop at test time. Finally, the learned style representation opens the door to generative models of style: by modeling enough paintings of a given artistic movement (e.g. impressionism), one could build a collection of style embeddings upon which a generative model could be trained. At test time, a style representation would be sampled from the generative model and used in conjunction with the style transfer network to produce a random pastiche of that artistic movement. In summary, we demonstrated that conditional instance normalization constitutes a simple, efficient and scalable modification of style transfer networks that allows them to model multiple styles at the same time. A practical consequence of this approach is that a new painting style may be transmitted to and stored on a mobile device with a small number of parameters. We showed that despite its simplicity, the method is flexible enough to capture very different styles while having very little impact on training time and final performance of the trained network. Finally, we showed that the learned representation of style is useful in arbitrarily combining artistic styles. This work suggests the existence of a learned representation for artistic styles whose vocabulary is flexible enough to capture a diversity of the painted world. # ACKNOWLEDGMENTS | 1610.07629#15 | 1610.07629#17 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#17 | A Learned Representation For Artistic Style | We would like to thank Fred Bertsch, Douglas Eck, Cinjon Resnick and the rest of the Google Magenta team for their feedback; Peyman Milanfar, Michael Elad, Feng Yang, Jon Barron, Bhavik Singh, Jennifer Daniel as well as the Google Brain team for their crucial suggestions and advice; an anonymous reviewer for helpful suggestions about applying this model in a mobile domain. Finally, we would like to thank the Google Cultural Institute, whose curated collection of art photographs was very helpful in finding exciting style images to train on. | 1610.07629#16 | 1610.07629#18 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#18 | A Learned Representation For Artistic Style | # REFERENCES Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009. Alexei A Efros and William T Freeman. Image quilting for texture synthesis and transfer. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 341–346. ACM, 2001. Alexei A Efros and Thomas K Leung. Texture synthesis by non-parametric sampling. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, volume 2, pp. 1033–1038. IEEE, 1999. Michael Elad and Peyman Milanfar. Style-transfer via texture-synthesis. arXiv preprint arXiv:1609.03057, 2016. Oriel Frigo, Neus Sabater, Julie Delon, and Pierre Hellier. Split and match: Example-based adaptive patch sampling for unsupervised style transfer. 2016. Leon Gatys, Alexander S Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 262–270, 2015a. Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015b. Leon A Gatys, Matthias Bethge, Aaron Hertzmann, and Eli Shechtman. Preserving color in neural artistic style transfer. arXiv preprint arXiv:1606.05897, 2016a. Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann, and Eli Shechtman. | 1610.07629#17 | 1610.07629#19 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#19 | A Learned Representation For Artistic Style | Controlling perceptual factors in neural style transfer. CoRR, abs/1611.07865, 2016b. URL http://arxiv.org/abs/1611.07865. Aaron Hertzmann, Charles E Jacobs, Nuria Oliver, Brian Curless, and David H Salesin. Image analogies. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 327–340. ACM, 2001. Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016. Bela Julesz. Visual pattern discrimination. IRE Trans. Info Theory, 8:84–92, 1962. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Vivek Kwatra, Irfan Essa, Aaron Bobick, and Nipun Kwatra. Texture optimization for example-based synthesis. ACM Transactions on Graphics (ToG), 24(3):795–802, 2005. Chuan Li and Michael Wand. | 1610.07629#18 | 1610.07629#20 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#20 | A Learned Representation For Artistic Style | Precomputed real-time texture synthesis with markovian generative adversarial networks. ECCV, 2016. URL http://arxiv.org/abs/1604.04382. Lin Liang, Ce Liu, Ying-Qing Xu, Baining Guo, and Heung-Yeung Shum. Real-time texture synthesis by patch-based sampling. ACM Transactions on Graphics (ToG), 20(3):127–150, 2001. Augustus Odena, Christopher Olah, and Vincent Dumoulin. | 1610.07629#19 | 1610.07629#21 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#21 | A Learned Representation For Artistic Style | Avoiding checkerboard artifacts in neural networks. Distill, 2016. Javier Portilla and Eero Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40:49–71, 1999. Eero Simoncelli and Bruno Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24:1193–1216, 2001. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. arXiv preprint arXiv:1603.03417, 2016a. | 1610.07629#20 | 1610.07629#22 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#22 | A Learned Representation For Artistic Style | Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016b. Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. CoRR, abs/1609.03499, 2016a. URL http://arxiv.org/abs/1609.03499. Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. CoRR, abs/1606.05328, 2016b. URL http://arxiv.org/abs/1606.05328. Li-Yi Wei and Marc Levoy. Fast texture synthesis using tree-structured vector quantization. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 479–488. ACM Press/Addison-Wesley Publishing Co., 2000. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818–833. Springer, 2014. | 1610.07629#21 | 1610.07629#23 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#23 | A Learned Representation For Artistic Style | # APPENDIX HYPERPARAMETERS Table 1: Style transfer network hyperparameters.
Network — 256 × 256 × 3 input:
  Convolution (kernel 9, stride 1, 32 feature maps, SAME padding, ReLU)
  Convolution (kernel 3, stride 2, 64 feature maps, SAME padding, ReLU)
  Convolution (kernel 3, stride 2, 128 feature maps, SAME padding, ReLU)
  Residual block (128 feature maps) × 5
  Upsampling (64 feature maps)
  Upsampling (32 feature maps)
  Convolution (kernel 9, stride 1, 3 feature maps, SAME padding, Sigmoid)
Residual block — C feature maps:
  Convolution (kernel 3, stride 1, C feature maps, SAME padding, ReLU)
  Convolution (kernel 3, stride 1, C feature maps, SAME padding, Linear)
  Add the input and the output
Upsampling — C feature maps:
  Nearest-neighbor interpolation, factor 2
  Convolution (kernel 3, stride 1, C feature maps, SAME padding, ReLU)
Padding mode: REFLECT. Normalization: conditional instance normalization after every convolution. Optimizer: Adam (Kingma & Ba, 2014) (α = 0.001, β1 = 0.9, β2 = 0.999). Parameter updates: 40,000. Batch size: 16. Weight initialization: isotropic Gaussian (μ = 0, σ = 0.01). | 1610.07629#22 | 1610.07629#24 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#24 | A Learned Representation For Artistic Style | MONET PASTICHES Claude Monet, Grainstacks at Giverny; the Evening Sun (1888/1889). Claude Monet, Plum Trees in Blossom (1879). Claude Monet, Poppy Field (1873). Claude Monet, Rouen Cathedral, West Façade (1894). Claude Monet, Sunrise (Marine) (1873). Claude Monet, The Road to Vétheuil (1879). | 1610.07629#23 | 1610.07629#25 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#25 | A Learned Representation For Artistic Style | Claude Monet, Three Fishing Boats (1886). Claude Monet, Vétheuil (1879). Claude Monet, Vétheuil (1902). Claude Monet, Water Lilies (ca. 1914-1917). VARIED PASTICHES Roy Lichtenstein, Bicentennial Print (1975). Ernst Ludwig Kirchner, Boy with Sweets (1918). | 1610.07629#24 | 1610.07629#26 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#26 | A Learned Representation For Artistic Style | Paul Signac, Cassis, Cap Lombard, Opus 196 (1889). Paul Klee, Colors from a Distance (1932). Frederic Edwin Church, Cotopaxi (1855). Jamini Roy, Crucifixion. Henri de Toulouse-Lautrec, Divan Japonais (1893). Egon Schiele, Edith with Striped Dress, Sitting (1915). | 1610.07629#25 | 1610.07629#27 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#27 | A Learned Representation For Artistic Style | Georges Rouault, Head of a Clown (ca. 1907-1908). William Hoare, Henry Hoare, “The Magnificent”, of Stourhead (about 1750-1760). Giorgio de Chirico, Horses on the seashore (1927/1928). Vincent van Gogh, Landscape at Saint-Rémy (Enclosed Field with Peasant) (1889). Nicolas Poussin, Landscape with a Calm (1650-1651). Bernardino Fungai, Madonna and Child with Two Hermit Saints (early 1480s). | 1610.07629#26 | 1610.07629#28 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#28 | A Learned Representation For Artistic Style | Max Hermann Maxy, Portrait of a Friend (1926). Juan Gris, Portrait of Pablo Picasso (1912). Severini Gino, Ritmo plastico del 14 luglio (1913). Richard Diebenkorn, Seawall (1957). Alice Bailly, Self-Portrait (1917). Grayson Perry, The Annunciation of the Virgin Deal (2012). | 1610.07629#27 | 1610.07629#29 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#29 | A Learned Representation For Artistic Style | William Glackens, The Green Boathouse (ca. 1922). Edvard Munch, The Scream (1910). Vincent van Gogh, The Starry Night (1889). Pieter Bruegel the Elder, The Tower of Babel (1563). Wolfgang Lettl, The Trial (1981). Douglas Coupland, Thomson No. 5 (Yellow Sunset) (2011). | 1610.07629#28 | 1610.07629#30 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#30 | A Learned Representation For Artistic Style | Claude Monet, Three Fishing Boats (1886). John Ruskin, Trees in a Lane (1847). Giuseppe Cades, Tullia about to Ride over the Body of Her Father in Her Chariot (about 1770-1775). Berthe Morisot, Under the Orange Tree (1889). Giulio Romano (Giulio Pippi), Victory, Janus, Chronos and Gaea (about 1532-1534). Wassily Kandinsky, White Zig Zags (1922). | 1610.07629#29 | 1610.07629#31 | 1610.07629 | [
"1603.03417"
]
|
1610.07629#31 | A Learned Representation For Artistic Style | 26 | 1610.07629#30 | 1610.07629 | [
"1603.03417"
]
|
|
1610.07272#0 | Bridging Neural Machine Translation and Bilingual Dictionaries | arXiv:1610.07272v1 [cs.CL] 24 Oct 2016 # Bridging Neural Machine Translation and Bilingual Dictionaries Jiajun Zhang† and Chengqing Zong†‡ †University of Chinese Academy of Sciences, Beijing, China National Laboratory of Pattern Recognition, CASIA, Beijing, China ‡CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China {jjzhang,cqzong}@nlpr.ia.ac.cn | 1610.07272#1 | 1610.07272 | [
"1609.04186"
]
|
|
1610.07272#1 | Bridging Neural Machine Translation and Bilingual Dictionaries | # Abstract Neural Machine Translation (NMT) has become the new state-of-the-art in several language pairs. However, it remains a challenging problem how to integrate NMT with a bilingual dictionary which mainly contains words rarely or never seen in the bilingual training data. In this paper, we propose two methods to bridge NMT and the bilingual dictionaries. The core idea behind is to design novel models that transform the bilingual dictionaries into adequate sentence pairs, so that NMT can distil latent bilingual mappings from the ample and repetitive phenomena. One method leverages a mixed word/character model and the other attempts at synthesizing parallel sentences guaranteeing massive occurrence of the translation lexicon. Extensive experiments demonstrate that the proposed methods can remarkably improve the translation quality, and most of the rare words in the test sentences can obtain correct translations if they are covered by the dictionary. Typically, NMT adopts the encoder-decoder architecture which consists of two recurrent neural networks. The encoder network models the semantics of the source sentence and transforms the source sentence into the context vector representation, from which the decoder network generates the target translation word by word. One important feature of NMT is that each word in the vocabulary is mapped into a low-dimensional vector (word embedding). The use of continuous representations enables NMT to learn latent bilingual mappings for accurate translation and explore the statistical similarity between words (e.g. desk and table) as well. As a disadvantage of the statistical models, NMT can learn good word embeddings and accurate bilingual mappings only when the words occur frequently in the parallel sentence pairs. However, low-frequency words are ubiquitous, especially when the training data is not enough (e.g. low-resource language pairs). Fortunately, in many language pairs and domains, we have handmade bilingual dictionaries which mainly contain words rarely or never seen in the training corpus. Therefore, it remains a big challenge how to bridge NMT and the bilingual dictionaries. | 1610.07272#0 | 1610.07272#2 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#2 | Bridging Neural Machine Translation and Bilingual Dictionaries | # 1 Introduction Due to its superior ability in modelling the end-to-end translation process, neural machine translation (NMT), recently proposed by (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014), has become the novel paradigm and achieved the new state-of-the-art translation performance for several language pairs, such as English-to-French, English-to-German and Chinese-to-English (Sutskever et al., 2014; Bahdanau et al., 2014; Luong et al., 2015b; Sennrich et al., 2015b; Wu et al., 2016). Recently, Arthur et al. (2016) attempt at incorporating discrete translation lexicons into NMT. The main idea of their method is leveraging the discrete translation lexicons to positively influence the probability distribution of the output words in the NMT softmax layer. However, their approach only addresses the translation lexicons which are in the restricted vocabulary1 of NMT. The out-of-vocabulary (OOV) words are out of their consideration. 1NMT usually keeps only the words whose occurrence is more than a threshold (e.g. 10), since very rare words can not yield good embeddings and large vocabulary leads to high computational complexity. | 1610.07272#1 | 1610.07272#3 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#3 | Bridging Neural Machine Translation and Bilingual Dictionaries | [Figure 1 graphic: a Chinese source sentence with pinyin "zhengzai wei ziji de chuangyi shifang lihua", the English reference "was setting off fireworks for its creativity", a bilingual dictionary entry (lihua → fireworks), and the two proposed pipelines — (1) the mixed word/character model, whose NMT output reads "is trying to release their own creative fireworks", and (2) the pseudo sentence pair synthesis model, whose NMT trained on the mixed corpus outputs "is releasing their own creative fireworks".] | 1610.07272#2 | 1610.07272#4 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#4 | Bridging Neural Machine Translation and Bilingual Dictionaries | Figure 1: The framework of our proposed methods. In this paper, we aim at making full use of all the bilingual dictionaries, especially the ones covering the rare or OOV words. Our basic idea is to transform the low-frequency word pair in bilingual dictionaries into adequate sequence pairs which guarantee the frequent occurrence of the word pair, so that NMT can learn translation mappings between the source word and the target word. To achieve this goal, we propose two methods, as shown in Fig. 1. In the test sentence, the Chinese word lǐhuā appears only once in our training data and the baseline NMT cannot correctly translate this word. Fortunately, our bilingual dictionary contains this translation lexicon. Our first method extends the mixed word/character model proposed by Wu et al. (2016) to re-label the rare words in both of the dictionary and training data with character sequences in which characters are now frequent and the character translation mappings can be learnt by NMT. Instead of backing off words into characters, our second method is well designed to synthesize adequate pseudo sentence pairs containing the translation lexicon, allowing NMT to learn the word translation mappings. | 1610.07272#3 | 1610.07272#5 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#5 | Bridging Neural Machine Translation and Bilingual Dictionaries | We make the following contributions in this paper: • We propose a low-frequency to high-frequency framework to bridge NMT and the bilingual dictionaries. • We propose and investigate two methods to utilize the bilingual dictionaries. One extends the mixed word/character model and the other designs a pseudo sentence pair synthesis model. • The extensive experiments on Chinese-to-English translation show that our proposed methods significantly outperform the strong attention-based NMT. We further find that most of the rare words can be correctly translated, as long as they are covered by the bilingual dictionary. # 2 Neural Machine Translation Our framework bridging NMT and the discrete bilingual dictionaries can be applied in any neural machine translation model. Without loss of generality, we use the attention-based NMT proposed by (Luong et al., 2015b), which utilizes stacked Long-Short Term Memory (LSTM, (Hochreiter and Schmidhuber, 1997)) layers for both encoder and decoder as illustrated in Fig. 2. | 1610.07272#4 | 1610.07272#6 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#6 | Bridging Neural Machine Translation and Bilingual Dictionaries | The encoder-decoder NMT first encodes the source sentence X = (x1, x2, · · · , xTx) into a sequence of context vectors C = (h1, h2, · · · , hTx) whose size varies with respect to the source sentence length. Then, the encoder-decoder NMT decodes from the context vectors C and generates the target translation Y = (y1, y2, · · · , yTy) one word each time by maximizing the probability of p(yi|y<i, C). Note that xj (yi) is the word embedding corresponding to the jth (ith) word in the source (target) sentence. Next, we briefly review the en- | 1610.07272#5 | 1610.07272#7 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#7 | Bridging Neural Machine Translation and Bilingual Dictionaries | [Figure 2 graphic: encoder hidden states h1 . . . hTx over the inputs x1, x2, . . . , xTx, an attention layer, and the stacked decoder starting from <start>.] Figure 2: The architecture of the attention-based NMT which has m stacked LSTM layers for encoder and l stacked LSTM layers for decoder. coder introducing how to obtain C and the decoder addressing how to calculate p(yi|y<i, C). Encoder: The context vectors C = (h^m_1, h^m_2, · · · , h^m_Tx) are generated by the encoder using m stacked LSTM layers. h^k_j is calculated as follows: h^k_j = LSTM(h^k_(j−1), h^(k−1)_j) (1) where h^(k−1)_j is the hidden state of the (k−1)th layer. Decoder: The probability p(yi|y<i, C) is computed in different ways according to the choice of the context C at time i. In (Cho et al., 2014), the authors choose C = h^m_Tx, while Bahdanau et al. (2014) use a different context ci at each time step and the conditional probability will become: p(yi|y<i, C) = p(yi|y<i, ci) = softmax(W z̃i) (2) | 1610.07272#6 | 1610.07272#8 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#8 | Bridging Neural Machine Translation and Bilingual Dictionaries | where z̃i is the attention output: z̃i = tanh(Wc[z^l_i; ci]) (3) The attention model calculates ci as the weighted sum of the source-side context vectors, just as illustrated in the middle part of Fig. 2: ci = Σ_(j=1..Tx) αij h^m_j (4) where αij is a normalized item calculated as follows: αij = exp((h^m_j)ᵀ z^l_i) / Σ_(j′=1..Tx) exp((h^m_j′)ᵀ z^l_i) (5) z^k_i is computed using the following formula: z^k_i = LSTM(z^k_(i−1), z^(k−1)_i) (6) If k = 1, z^1_i will be calculated by combining z̃_(i−1) as feed input (Luong et al., 2015b): z^1_i = LSTM(z^1_(i−1), y_(i−1), z̃_(i−1)) (7) | 1610.07272#7 | 1610.07272#9 | 1610.07272 | [
"1609.04186"
]
|
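The chunk above specifies the attention step (Eqs. 3–5). Below is a compact NumPy sketch of that step, assuming precomputed encoder top-layer states and a decoder top-layer state; the dot-product scoring (consistent with the Luong-style attention the paper follows, though the exact score in Eq. 5 is partially garbled in extraction) and all variable names are illustrative assumptions.

```python
import numpy as np

def attention_step(h, z, W_c):
    """h: encoder states (Tx, d); z: decoder state (d,); W_c: (d, 2d)."""
    scores = h @ z                                   # (Tx,) unnormalized scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                             # Eq. (5): normalized weights
    c = alpha @ h                                    # Eq. (4): context vector
    z_tilde = np.tanh(W_c @ np.concatenate([z, c]))  # Eq. (3): attention output
    return z_tilde, alpha

Tx, d = 5, 8
rng = np.random.default_rng(0)
z_tilde, alpha = attention_step(rng.normal(size=(Tx, d)),
                                rng.normal(size=d),
                                rng.normal(size=(d, 2 * d)))
print(alpha.round(3), z_tilde.shape)
```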
1610.07272#9 | Bridging Neural Machine Translation and Bilingual Dictionaries | Given the sentence-aligned bilingual training data Db = {(X^(n)_b, Y^(n)_b)}_(n=1..N), all the parameters of the encoder-decoder NMT are optimized to maximize the following conditional log-likelihood: L(θ) = (1/N) Σ_(n=1..N) Σ_(i=1..Ty) log p(y^(n)_i | y^(n)_(<i), X^(n)_b) (8) # 3 Incorporating Bilingual Dictionaries The word translation pairs in bilingual dictionaries are difficult to use in neural machine translation, mainly because they are rarely or never seen in the parallel training corpus. We attempt to build a bridge between NMT and bilingual dictionaries. We believe the bridge is data transformation that can transform rarely or unseen word translation pairs into frequent ones and provide NMT adequate information to learn latent translation mappings. In this work, we propose two methods to perform data transformation from the character level and word level respectively. # 3.1 Mixed Word/Character Model Given a bilingual dictionary Dic = {(Dic_x^(i), Dic_y^(i))}, we focus on the translation lexicons (Dicx, Dicy) if Dicx is a rare or unknown word in the bilingual corpus Db. | 1610.07272#8 | 1610.07272#10 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#10 | Bridging Neural Machine Translation and Bilingual Dictionaries | We first introduce data transformation using the character-based method. We all know that words are composed of characters and most of the characters are frequent even though the word is never seen. This idea is popularly used to deploy open vocabulary NMT (Ling et al., 2015; Costa-Jussà and Fonollosa, 2016; Chung et al., 2016). Character translation mappings are much easier to learn for NMT than word translation mappings. However, given a character sequence of a source language word, NMT cannot guarantee that the generated character sequence would lead to a valid target language word. Therefore, we prefer the framework mixing the words and characters, which is employed by Wu et al. (2016) to handle OOV words. If it is a frequent word, we keep it unchanged. Otherwise, we fall back to the character sequence. We perform data transformation on both the parallel training corpus and the bilingual dictionaries. Here, English sentences and words are adopted as examples. Suppose we keep the English vocabulary V in which the frequency of each word exceeds a threshold K. For each English word w (e.g. oak) in a parallel sentence pair (Xb, Yb) or in a translation lexicon (Dicx, Dicy), if w ∈ V, w will be left as it is. Otherwise, w is re-labelled by its character sequence. For example, oak will be: oak → <B>o <M>a <E>k (9) where <B>, <M> and <E> denote respectively the begin, middle and end of a word. | 1610.07272#9 | 1610.07272#11 | 1610.07272 | [
"1609.04186"
]
|
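The chunk above describes the mixed word/character re-labeling of Eq. (9). Below is an illustrative Python sketch of that transformation; the vocabulary-building step and marker spelling follow the paper's description, while the function names and the handling of single-character words are assumptions for illustration.

```python
from collections import Counter

def build_vocab(sentences, threshold):
    """Keep words whose frequency exceeds the threshold K."""
    counts = Counter(w for s in sentences for w in s.split())
    return {w for w, c in counts.items() if c > threshold}

def relabel_word(w, vocab):
    if w in vocab:
        return [w]                        # frequent word: keep as-is
    if len(w) == 1:
        return [f"<B>{w}"]                # single-char word: an assumed edge case
    return ([f"<B>{w[0]}"] +
            [f"<M>{ch}" for ch in w[1:-1]] +
            [f"<E>{w[-1]}"])              # Eq. (9): oak -> <B>o <M>a <E>k

def relabel_sentence(sentence, vocab):
    return " ".join(tok for w in sentence.split()
                    for tok in relabel_word(w, vocab))

vocab = {"the", "old", "tree"}
print(relabel_sentence("the old oak tree", vocab))
# -> the old <B>o <M>a <E>k tree
```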
1610.07272#11 | Bridging Neural Machine Translation and Bilingual Dictionaries | # 3.2 Pseudo Sentence Pair Synthesis Model Since NMT is a data driven approach, it can learn latent translation mappings for a word pair (Dicx, Dicy) if there exist many parallel sentences containing (Dicx, Dicy). Along this line, we propose the pseudo sentence pair synthesis model. In this model, we aim at synthesizing for a rare or unknown translation lexicon (Dicx, Dicy) adequate pseudo parallel sentences {(X^j_p, Y^j_p)}_(j=1..J), each of which contains (Dicx, Dicy). Although there are not enough bilingual sentence pairs in many languages (and many domains), a huge amount of monolingual data is available on the web. In this paper, we plan to make use of the source-side monolingual data Dsm = {X^(m)_sm}_(m=1..M) (M > N) to synthesize the pseudo bilingual sentence pairs Dbp = {(X^j_p, Y^j_p)}_(j=1..J). For constructing Dbp, we resort to statistical machine translation (SMT) and apply a self-learning Algorithm 1 Pseudo Sentence Pair Synthesis. Input: bilingual training data Db; bilingual dictionary Dic; source language monolingual data Dsm; pseudo sentence pair number K for each (Dicx, Dicy); Output: pseudo sentence pairs Dbp = {(X^j_p, Y^j_p)}_(j=1..J): 1: Build an SMT system PBMT on {Db, Dic}; 2: Dbp = {}; 3: for each (Dicx, Dicy) in Dic do 4: Retrieve K monolingual sentences {X^k_p}_(k=1..K) containing Dicx from Dsm; 5: Translate {X^k_p}_(k=1..K) into {Y^k_p}_(k=1..K) using PBMT; 6: Add {(X^k_p, Y^k_p)}_(k=1..K) into Dbp; 7: end for 8: return Dbp | 1610.07272#10 | 1610.07272#12 | 1610.07272 | [
"1609.04186"
]
|
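To make Algorithm 1 concrete, here is a short Python sketch of its loop under stated assumptions: the retrieval step is a simple token-match over an in-memory corpus, and the phrase-based SMT translator (which the paper builds on {Db, Dic} so its outputs contain the dictionary's target-side entries) is stubbed out as a callable; all names are illustrative.

```python
def synthesize_pseudo_pairs(dic, d_sm, pbmt_translate, k):
    """dic: list of (src_word, tgt_word); d_sm: source monolingual corpus."""
    d_bp = []
    for src_word, _tgt_word in dic:
        # Step 4: retrieve up to K monolingual sentences containing src_word.
        xs = [s for s in d_sm if src_word in s.split()][:k]
        # Step 5: translate with the dictionary-aware SMT system; each output
        # is expected to contain the target-side lexicon entry.
        ys = [pbmt_translate(x) for x in xs]
        # Step 6: pair them up as pseudo bilingual sentences.
        d_bp.extend(zip(xs, ys))
    return d_bp

# Toy usage with a stand-in "translator" that just tags the sentence.
fake_pbmt = lambda s: "<translation of: " + s + ">"
pairs = synthesize_pseudo_pairs([("lihua", "fireworks")],
                                ["ta men shifang lihua", "tian qi hen hao"],
                                fake_pbmt, k=2)
print(pairs)
```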
1610.07272#12 | Bridging Neural Machine Translation and Bilingual Dictionaries | method as illustrated in Algorithm 1. In contrast to NMT, statistical machine translation (SMT, e.g. phrase-based SMT (Koehn et al., 2007; Xiong et al., 2006)) is easy to integrate with bilingual dictionaries (Wu et al., 2008) as long as we consider the translation lexicons of bilingual dictionaries as phrasal translation rules. Following (Wu et al., 2008), we first merge the bilingual sentence corpus Db with the bilingual dictionaries Dic, and employ phrase-based SMT to train an SMT system called PBMT (line 1 in Algorithm 1). For each rare or unknown word translation pair (Dicx, Dicy), we can easily retrieve adequate source language monolingual sentences {X^k_p}_(k=1..K) from the web or other data collections. PBMT is then applied to translate {X^k_p}_(k=1..K) to generate target language translations {Y^k_p}_(k=1..K). As PBMT employs the bilingual dictionaries Dic as additional translation rules, each target translation sentence Yp ∈ {Y^k_p}_(k=1..K) will contain Dicy. Then, the sentence pair (X^k_p, Y^k_p) will include the word translation pair (Dicx, Dicy). Finally, we can pair {X^k_p}_(k=1..K) and {Y^k_p}_(k=1..K) to yield pseudo sentence pairs {(X^k_p, Y^k_p)}_(k=1..K), which will be added into Dbp (lines 2-6 in Algorithm 1). The original bilingual corpus Db and the pseudo bilingual sentence pairs Dbp are combined together to train a new NMT model. Some may worry that the target parts of Dbp are SMT results but not well-formed sentences which would | 1610.07272#11 | 1610.07272#13 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#13 | Bridging Neural Machine Translation and Bilingual Dictionaries | cantly boost the translation quality, but also solve the rare word translation problem for words covered by Dic. Note that the pseudo sentence pair synthesis model can be further augmented by the mixed word/character model to solve the remaining OOV translations. # 4 Experimental Settings In this section we describe the data sets, the data preprocessing, the training and evaluation details, and all the translation methods we compare in the experiments. # 4.1 Dataset We perform the experiments on Chinese-to-English translation. Our bilingual training data Db includes 630K2 sentence pairs (each sentence length is limited to at most 50 words) extracted from LDC corpora3. For validation, we choose the NIST 2003 (MT03) dataset. For testing, we use the NIST 2004 (MT04), NIST 2005 (MT05), NIST 2006 (MT06) and NIST 2008 (MT08) datasets. | 1610.07272#12 | 1610.07272#14 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#14 | Bridging Neural Machine Translation and Bilingual Dictionaries | The test sentences retain their original length. As for the source-side monolingual data Dsm, we collect about 100M Chinese sentences, of which approximately 40% are provided by Sogou and the rest are collected by searching the web for the words of the bilingual data. We use two bilingual dictionaries: one is from LDC (LDC2002L27) and the other is manually collected by ourselves. The combined dictionary Dic contains 86,252 translation lexicons in total. | 1610.07272#13 | 1610.07272#15 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#15 | Bridging Neural Machine Translation and Bilingual Dictionaries | # 4.2 Data Preprocessing If necessary, the Chinese sentences are word segmented using the Stanford Word Segmenter4. The English sentences are tokenized using the tokenizer script from the Moses decoder5. We limit the vocabulary in both Chinese and English using a fre- 2Without using very large-scale data, it is relatively easy to evaluate the effectiveness of the bilingual dictionaries. 3LDC2000T50, LDC2002E18, LDC2002T01, LDC2003E07, LDC2003E14, LDC2003T17, LDC2004T07. 4http://nlp.stanford.edu/software/segmenter.shtml 5http://www.statmt.org/moses/ | 1610.07272#14 | 1610.07272#16 | 1610.07272 | [
"1609.04186"
]
|
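A sketch of the preprocessing step, assuming word counts are already gathered; it applies the frequency thresholds (uc for Chinese, ue for English) and keeps only the dictionary entries whose source side falls outside the Chinese vocabulary:

```python
def limit_vocab(counts, threshold):
    """Vocabulary of words whose frequency exceeds the threshold (uc = 10 or ue = 8)."""
    return {w for w, c in counts.items() if c > threshold}

def filter_dictionary(dic, v_c):
    """Retain entries (Dicx, Dicy) whose source word is rare or unseen (Dicx not in Vc)."""
    return [(x, y) for x, y in dic if x not in v_c]
```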
1610.07272#16 | Bridging Neural Machine Translation and Bilingual Dictionaries | quency threshold u. We choose uc = 10 for Chinese and ue = 8 for English, resulting in |Vc| = 38815 and |Ve| = 30514 for Chinese and English, respectively, in Db. As we focus on the rare or unseen translation lexicons of the bilingual dictionary Dic in this work, we filter Dic and retain an entry (Dicx, Dicy) only if Dicx is not in Vc, resulting in 8306 entries, of which 2831 appear in the validation and test data sets. All the OOV words are replaced with UNK in the word-based NMT and are re-labelled into character sequences in the mixed word/character model. # 4.3 Training and Evaluation Details We build the described models using the Zoph_RNN toolkit6, which is implemented in C++/CUDA and provides training across multiple GPUs. In the NMT architecture, as illustrated in Fig. 2, the encoder includes two stacked LSTM layers, followed by a global attention layer, and the decoder also contains two stacked LSTM layers followed by the softmax layer. The word embedding dimension and the size of the hidden layers are all set to 1000. Each NMT model is trained on a GPU K80 using the stochastic gradient descent algorithm AdaGrad (Duchi et al., 2011). We use a mini-batch size of B = 128 and we run a total of 20 iterations for all the data sets. The training time for each model ranges from 2 days to 4 days. At test time, we employ beam search with beam size b = 10. We use the case-insensitive 4-gram BLEU score as the automatic metric (Papineni et al., 2002) for translation quality evaluation. # 4.4 Translation Methods In the experiments, we compare our method with the conventional SMT model and the baseline attention-based NMT model. We list all the translation methods as follows: | 1610.07272#15 | 1610.07272#17 | 1610.07272 | [
"1609.04186"
]
|
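A sketch of the evaluation metric, assuming NLTK is available; tokens are lowercased to make the 4-gram BLEU case-insensitive as in the paper:

```python
from nltk.translate.bleu_score import corpus_bleu

def evaluate(references, hypotheses):
    """Case-insensitive 4-gram BLEU; `references` is a list of reference sets,
    each reference set a list of tokenized references for one source sentence."""
    refs = [[[t.lower() for t in ref] for ref in ref_set] for ref_set in references]
    hyps = [[t.lower() for t in hyp] for hyp in hypotheses]
    return corpus_bleu(refs, hyps, weights=(0.25, 0.25, 0.25, 0.25))
```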
1610.07272#17 | Bridging Neural Machine Translation and Bilingual Dictionaries | • Moses: It is the state-of-the-art phrase-based SMT system (Koehn et al., 2007). We use its default configuration and train a 4-gram language model on the target portion of the bilingual training data. • Zoph RNN: It is the baseline attention-based NMT system (Luong et al., 2015a; Zoph et al., 2016) using two stacked LSTM layers for both the encoder and the decoder. 6https://github.com/isi-nlp/Zoph_RNN | 1610.07272#16 | 1610.07272#18 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#18 | Bridging Neural Machine Translation and Bilingual Dictionaries |
Method | |Vc| | |Ve| | MT03 | MT04 | MT05 | MT06 | MT08 | Ave
Moses | - | - | 23.20 | 31.04 | 30.30 | 28.19 | 30.04 | 28.55
Zoph RNN | 38815 | 30514 | 25.93 | 37.40 | 34.77 | 32.94 | 33.85 | 32.98
Zoph RNN-mixed | 42769 | 30630 | 26.81 | 38.07 | 35.57 | 34.44 | 36.07 | 34.19
Zoph RNN-mixed-dic | 42892 | 30630 | 27.04 | 38.75 | 36.29 | 34.86 | 36.57 | 34.70
Zoph RNN-pseudo (K = 10) | 42133 | 32300 | 27.65 | 38.02 | 35.66 | 34.66 | 36.51 | 34.50
Zoph RNN-pseudo-dic (K = 10) | 42133 | 31734 | 28.65 | 38.59 | 36.48 | 35.81 | 38.14 | 35.53
Zoph RNN-pseudo (K = 20) | 43080 | 32813 | 26.80 | 36.99 | 35.00 | 34.22 | 36.09 | 33.82
Zoph RNN-pseudo-dic (K = 20) | 43080 | 32255 | 29.53 | 38.63 | 36.92 | 36.09 | 38.13 | 35.86
Zoph RNN-pseudo (K = 30) | 44162 | 33357 | 27.58 | 37.74 | 36.07 | 34.63 | 36.66 | 34.54
Zoph RNN-pseudo-dic (K = 30) | 44162 | 32797 | 30.17 | 39.01 | 37.26 | 36.64 | 38.50 | 36.32
Zoph RNN-pseudo (K = 40) | 45195 | 33961 | 27.80 | 37.96 | 35.44 | 34.89 | 36.92 | 34.60
Zoph RNN-pseudo-dic (K = 40) | 45195 | 33399 | 30.25 | 39.15 | 36.93 | 36.85 | 38.77 | 36.39
Zoph RNN-pseudo-mixed (K = 40) | 45436 | 32659 | 28.46 | 39.55 | 38.17 | 36.86 | 38.53 | 36.31
Zoph RNN-pseudo-mixed-dic (K = 40) | 45436 | 32421 | 30.64 | 40.78 | 38.66 | 38.36 | 39.56 | 37.60
Table 1: Translation results (BLEU score) for different translation methods. K = 10 denotes that we synthesize 10 pseudo sentence pairs for each word translation pair (Dicx, Dicy). The column |Vc| (|Ve|) reports the vocabulary size limited by frequency threshold uc = 10 (ue = 8). Note that all the NMT systems use the single model rather than the ensemble model. | 1610.07272#17 | 1610.07272#19 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#19 | Bridging Neural Machine Translation and Bilingual Dictionaries | • Zoph RNN-mixed-dic: It is our NMT system which integrates the bilingual dictionaries by re-labelling the rare or unknown words with character sequences on both the bilingual training data and the bilingual dictionaries. Zoph RNN-mixed indicates that the mixed word/character model is performed only on the bilingual training data and the bilingual dictionary is not used. | 1610.07272#18 | 1610.07272#20 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#20 | Bridging Neural Machine Translation and Bilingual Dictionaries | • Zoph RNN-pseudo-dic: It is our NMT system that integrates the bilingual dictionaries by synthesizing adequate pseudo sentence pairs that contain the focused rare or unseen translation lexicons. Zoph RNN-pseudo means that the target language parts of the pseudo sentence pairs are obtained by the SMT system PBMT without using the bilingual dictionary Dic. # 5.1 NMT vs. SMT Table 1 reports the detailed translation quality for different methods. Comparing the first two lines in Table 1, it is very obvious that the attention-based NMT system Zoph RNN substantially outperforms the phrase-based SMT system Moses on just 630K bilingual Chinese-English sentence pairs. The gap can be as large as 6.36 absolute BLEU points on MT04. The average improvement is up to 4.43 BLEU points (32.98 vs. 28.55). | 1610.07272#19 | 1610.07272#21 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#21 | Bridging Neural Machine Translation and Bilingual Dictionaries | It is in line with the findings reported in (Wu et al., 2016; Junczys-Dowmunt et al., 2016), which conducted experiments on tens of millions or even more parallel sentence pairs. Our experiments further show that NMT can still be much better even when we have less than 1 million sentence pairs. • Zoph RNN-pseudo-mixed: It is an NMT system combining the two methods Zoph RNN-pseudo and Zoph RNN-mixed; Zoph RNN-pseudo-mixed-dic applies the same combination to Zoph RNN-pseudo-dic. | 1610.07272#20 | 1610.07272#22 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#22 | Bridging Neural Machine Translation and Bilingual Dictionaries | # 5 Translation Results and Analysis For translation quality evaluation, we attempt to figure out the following three questions: 1) Could the employed attention-based NMT outperform SMT even on less than 1 million sentence pairs? 2) Which model is more effective for integrating the bilingual dictionaries: the mixed word/character model or the pseudo sentence pair synthesis model? 3) Could the two proposed methods combined further boost the translation performance? # 5.2 The Effect of The Mixed W/C Model Lines 3-4 in Table 1 present the BLEU scores when applying the mixed word/character model. Although the idea behind it is very simple, this model markedly improves the translation quality over the baseline attention-based NMT: the system Zoph RNN-mixed, trained only on the bitext Db, achieves an average improvement of more than 1.0 BLEU point (34.19 vs 32.98) over the baseline Zoph RNN. It indicates that the mixed word/character model can alleviate the OOV translation problem to some ex-
Chinese Word | Translation | Correct
zhùliú | remain | remain
dōngjiā | owner | owner
lièyàn | blaze | blaze
ānwèijì | placebo | placebo
hǎixiào | tsunami | tsunami
jìngmài | intravenous | intravenous
fǎnyìnglú | anti-subsidization | reactor
huángpǔjiāng | lingchiang river | huangpu river
chāochēdào | take-owned lane | overtaking lane | 1610.07272#21 | 1610.07272#23 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#23 | Bridging Neural Machine Translation and Bilingual Dictionaries | Table 2: The effect of the Zoph RNN-mixed-dic model in using bilingual dictionaries. The Chinese word is written in Pinyin. The first two parts are positive word translation examples, while the third part shows some bad cases. tent. For example, the number 3/.3 is an OOV word in Chinese. The mixed model transforms this word into (B)3 (M)1 (M). (E)3, and it is correctly copied into the target side, yielding the correct translation 3/.3. Moreover, some named entities (e.g. the person name hecker) can be well translated. When adding the bilingual dictionary Dic as training data, the system Zoph RNN-mixed-dic further gets a moderate improvement of 0.51 BLEU points (34.70 vs 34.19) on average. We find that the mixed model can make use of some rare or unseen translation lexicons in NMT, as illustrated in the first two parts of Table 2. In the first part of Table 2, the English side of the translation lexicon is a frequent word (e.g. remain). The frequent Chinese character (e.g. the first character of zhùliú) shares most of the meaning of the whole word, and thus it can be correctly translated into remain. We are a little surprised by the examples in the second part of Table 2, since the correct English translations are all OOV words, which require each English character to be correctly generated. It demonstrates that the mixed model has some ability to predict the correct character sequence. However, this mixed model fails in many scenarios. The third part of Table 2 gives some bad cases. If the first predicted character is wrong, the final word translation will be incorrect (e.g. take-owned lane vs. overtaking lane). This is the main reason why the mixed model could not obtain large improvements. | 1610.07272#22 | 1610.07272#24 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#24 | Bridging Neural Machine Translation and Bilingual Dictionaries | # 5.3 The Effect of Data Synthesis Model The eight lines (5-12) in Table 1 show the translation performance of the pseudo sentence pair synthesis model. We can analyze the results from three perspectives: 1) the effect of the self-
 | K = 10 | K = 20 | K = 30 | K = 40
pseudo-dic | 0.71 | 0.76 | 0.78 | 0.79
mixed-dic | 0.36 | | | | 1610.07272#23 | 1610.07272#25 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#25 | Bridging Neural Machine Translation and Bilingual Dictionaries | Table 3: The hit rate of the bilingual dictionary for different models. learning method for using the source-side monolingual data; 2) the effect of the bilingual dictionary; and 3) the effect of the pseudo sentence pair number. First, the results (lines with Zoph RNN-pseudo) demonstrate that the parallel sentence pairs synthesized from source-side monolingual data can significantly improve the baseline NMT Zoph RNN, and the average improvement can be up to 1.62 BLEU points (34.60 vs. 32.98). This finding is also reported by Cheng et al. (2016b) and Zhang and Zong (2016). Second, when augmenting Zoph RNN-pseudo with bilingual dictionaries, we can obtain further considerable gains. The largest average improvement is 3.41 BLEU points when compared to the baseline NMT Zoph RNN, and 2.04 BLEU points when compared to Zoph RNN-pseudo (35.86 vs. 33.82). When investigating the effect of the pseudo sentence pair number (from K = 10 to K = 40), we find that the performance largely keeps improving as we synthesize more pseudo sentence pairs for each rare or unseen word translation pair (Dicx, Dicy). We can also notice that the improvement gets smaller and smaller as K grows. | 1610.07272#24 | 1610.07272#26 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#26 | Bridging Neural Machine Translation and Bilingual Dictionaries | # 5.4 Mixed W/C Model vs. Data Synthesis Model Comparing the results between the mixed model and the data synthesis model (Zoph RNN-mixed-dic vs. Zoph RNN-pseudo-dic) in Table 1, we can easily see that the data synthesis model is much better at integrating bilingual dictionaries into NMT. Zoph RNN-pseudo-dic substantially outperforms Zoph RNN-mixed-dic by an average improvement of up to 1.69 BLEU points (36.39 vs. 34.70). Through a deep analysis, we find that most of the rare or unseen words in the test sets can be well translated by Zoph RNN-pseudo-dic if they are covered by the bilingual dictionary. Table 3 reports the hit rate of the bilingual dictionaries. A hit rate of 0.71 indicates that 2010 (2831 × 0.71) words among the 2831 covered rare or unseen words in the test set are correctly translated. This table explains why Zoph RNN-pseudo-dic performs much better than Zoph RNN-mixed-dic. The last two lines in Table 1 demonstrate that the combined method can further boost the translation quality. The biggest average improvement over the baseline NMT Zoph RNN is as large as 4.62 BLEU points, which is very promising. We believe that this method fully exploits the capacity of the data synthesis model and the mixed model: Zoph RNN-pseudo-dic can well incorporate the bilingual dictionary and Zoph RNN-mixed can well handle the OOV word translation. Thus, the combined method is the best. One may argue that the proposed methods use a bigger vocabulary and the performance gains may be attributed to the increased vocabulary size. We further conduct an experiment for the baseline NMT Zoph RNN by setting |Vc| = 4600 and |Ve| = 3400. | 1610.07272#25 | 1610.07272#27 | 1610.07272 | [
"1609.04186"
]
|
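The hit-rate bookkeeping behind Table 3 can be reproduced in a few lines; `correct` is a hypothetical predicate checking whether the reference translation of a covered rare word appears in the system output for its test sentence:

```python
def hit_rate(covered_words, outputs, correct):
    """Fraction of covered rare/unseen test words whose translation is correct."""
    hits = sum(1 for w in covered_words if correct(w, outputs[w]))
    return hits / len(covered_words)

# e.g. with 2831 covered words, a hit rate of 0.71 means about 2010 correct translations
```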
1610.07272#27 | Bridging Neural Machine Translation and Bilingual Dictionaries | We find that this setting decreases the translation quality by 0.88 BLEU points on average (32.10 vs. 32.98). This further verifies the superiority of our proposed methods. # 6 Related Work The recently proposed neural machine translation has drawn more and more attention. Most of the existing methods mainly focus on designing better attention models (Luong et al., 2015b; Cheng et al., 2016a; Cohn et al., 2016; Feng et al., 2016; Liu et al., 2016; Meng et al., 2016; Mi et al., 2016a; Mi et al., 2016b; Tu et al., 2016), better objective functions for BLEU evaluation (Shen et al., 2016), better strategies for handling open vocabulary (Ling et al., 2015; Luong et al., 2015c; Jean et al., 2015; Sennrich et al., 2015b; Costa-Jussà and Fonollosa, 2016; Lee et al., 2016; Li et al., 2016; Mi et al., 2016c; Wu et al., 2016) and exploiting large-scale monolingual data (Gulcehre et al., 2015; Sennrich et al., 2015a; Cheng et al., 2016b; Zhang and Zong, 2016). Our focus in this work is to fully integrate the discrete bilingual dictionaries into NMT. The most related works lie in three aspects: 1) applying the character-based method to deal with open vocabulary; 2) making use of synthesized data in NMT; and 3) incorporating translation lexicons in NMT. | 1610.07272#26 | 1610.07272#28 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#28 | Bridging Neural Machine Translation and Bilingual Dictionaries | Ling et al. (2015), Costa-Jussà and Fonollosa (2016) and Sennrich et al. (2015b) propose purely character-based or subword-based neural machine translation to circumvent the open vocabulary problem. Luong et al. (2015c) and Wu et al. (2016) present the mixed word/character model which utilizes character sequences to replace the OOV words. We introduce the mixed model to integrate the bilingual dictionaries and find that it is useful but not the best method. Sennrich et al. (2015a) propose an approach that uses target-side monolingual data to synthesize bitexts. They generate the synthetic bilingual data by translating the target monolingual sentences into source language sentences and retrain NMT on the mixture of the original bilingual data and the synthetic parallel data. Cheng et al. (2016b) and Zhang and Zong (2016) also investigate the effect of synthesized parallel sentences. They report that pseudo sentence pairs synthesized from source-side monolingual data can significantly improve the translation quality. These studies inspire us to leverage synthesized data to incorporate the bilingual dictionaries in NMT. | 1610.07272#27 | 1610.07272#29 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#29 | Bridging Neural Machine Translation and Bilingual Dictionaries | Very recently, Arthur et al. (2016) try to use discrete translation lexicons in NMT. Their approach attempts to employ the discrete translation lexicons to positively influence the probability distribution of the output words in the NMT softmax layer. However, their approach only focuses on words that belong to the vocabulary, and the out-of-vocabulary (OOV) words are not considered. In contrast, we concentrate on the word translation lexicons which are rarely or never seen in the bilingual training data. It is a much tougher problem. The extensive experiments demonstrate that our proposed models, especially the data synthesis model, can solve this problem very well. # 7 Conclusions and Future Work In this paper, we have presented two models to bridge neural machine translation and the bilingual dictionaries whose translation lexicons are rarely or never seen in the bilingual training data. Our proposed methods focus on a data transformation mechanism which guarantees the massive and repetitive occurrence of the translation lexicon. The mixed word/character model tackles this problem by re-labelling the OOV words with character sequences, while our data synthesis model constructs adequate pseudo sentence pairs for each translation lexicon. The extensive experiments show that the data synthesis model substantially outperforms the mixed word/character model, and the combined method performs best. All of the proposed methods obtain promising improvements over the baseline NMT. | 1610.07272#28 | 1610.07272#30 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#30 | Bridging Neural Machine Translation and Bilingual Dictionaries | We further find that more than 70% of the rare or unseen words in the test sets can get correct translations as long as they are covered by the bilingual dictionary. Currently, the data synthesis model does not distinguish the original bilingual training data from the synthesized parallel sentences in which the target sides are SMT translation results. In the future work, we plan to modify the neural network structure to avoid the negative effect of the SMT translation noise. # References [Arthur et al.2016] Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. arXiv preprint arXiv:1606.02006. [Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. [Cheng et al.2016a] Yong Cheng, Shiqi Shen, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016a. Agreement-based joint training for bidirectional attention-based neural machine translation. In Proceedings of AAAI 2016. [Cheng et al.2016b] Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016b. Semi-supervised learning for neural machine translation. In Proceedings of ACL 2016. [Cho et al.2014] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014. [Chung et al.2016] Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147. | 1610.07272#29 | 1610.07272#31 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#31 | Bridging Neural Machine Translation and Bilingual Dictionaries | [Cohn et al.2016] Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In Proceedings of NAACL 2016. [Costa-Jussà and Fonollosa2016] Marta R Costa-Jussà and José AR Fonollosa. 2016. Character-based neural machine translation. arXiv preprint arXiv:1603.00810. [Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. | 1610.07272#30 | 1610.07272#32 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#32 | Bridging Neural Machine Translation and Bilingual Dictionaries | [Feng et al.2016] Shi Feng, Shujie Liu, Mu Li, and Ming Zhou. 2016. Implicit distortion and fertility models for attention-based encoder-decoder nmt model. arXiv preprint arXiv:1601.03317. [Gulcehre et al.2015] Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535. [Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. | 1610.07272#31 | 1610.07272#33 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#33 | Bridging Neural Machine Translation and Bilingual Dictionaries | [Jean et al.2015] Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL 2015. [Junczys-Dowmunt et al.2016] Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? A case study on 30 translation directions. arXiv preprint arXiv:1610.01108. [Kalchbrenner and Blunsom2013] Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of EMNLP 2013. [Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. | 1610.07272#32 | 1610.07272#34 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#34 | Bridging Neural Machine Translation and Bilingual Dictionaries | Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL 2007, pages 177–180. [Lee et al.2016] Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2016. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017. [Li et al.2016] Xiaoqing Li, Jiajun Zhang, and Chengqing Zong. 2016. Towards zero unknown word in neural machine translation. In Proceedings of IJCAI 2016. [Ling et al.2015] Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015. Character-based neural machine translation. arXiv preprint arXiv:1511.04586. [Liu et al.2016] Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. arXiv preprint arXiv:1609.04186. [Luong et al.2015a] Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015a. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114. [Luong et al.2015b] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015b. Effective approaches to attention-based neural machine translation. In Proceedings of EMNLP 2015. [Luong et al.2015c] Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2015c. Addressing the rare word problem in neural machine translation. In Proceedings of ACL 2015. [Meng et al.2016] Fandong Meng, Zhengdong Lu, Hang Li, and Qun Liu. 2016. Interactive attention for neural machine translation. arXiv preprint arXiv:1610.05011. [Mi et al.2016a] Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016a. A coverage embedding model for neural machine translation. In Proceedings of EMNLP 2016. | 1610.07272#33 | 1610.07272#35 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#35 | Bridging Neural Machine Translation and Bilingual Dictionaries | [Mi et al.2016b] Haitao Mi, Zhiguo Wang, Niyu Ge, and Abe Ittycheriah. 2016b. Supervised attentions for neural machine translation. In Proceedings of EMNLP 2016. [Mi et al.2016c] Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016c. Vocabulary manipulation for large vocabulary neural machine translation. In Proceedings of ACL 2016. [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311–318. [Sennrich et al.2015a] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709. [Sennrich et al.2015b] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. [Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of ACL 2016. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS 2014. [Tu et al.2016] Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Coverage-based neural machine translation. In Proceedings of ACL 2016. [Wu et al.2008] Hua Wu, Haifeng Wang, and Chengqing Zong. 2008. Domain adaptation for statistical machine translation with domain dictionary and monolingual corpora. In Proceedings of COLING 2008, pages 993–1000. | 1610.07272#34 | 1610.07272#36 | 1610.07272 | [
"1609.04186"
]
|
1610.07272#36 | Bridging Neural Machine Translation and Bilingual Dictionaries | [Wu et al.2016] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. [Xiong et al.2006] Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proceedings of ACL-COLING, pages 521–528. Association for Computational Linguistics. [Zhang and Zong2016] Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of EMNLP. [Zoph et al.2016] Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of NAACL 2016. | 1610.07272#35 | 1610.07272 | [
"1609.04186"
]
|
|
1610.04286#0 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | arXiv:1610.04286v2 [cs.RO] 22 May 2018 # Sim-to-Real Robot Learning from Pixels with Progressive Nets Andrei A. Rusu DeepMind London, UK [email protected] Mel Večerík DeepMind London, UK [email protected] Thomas Rothörl DeepMind London, UK [email protected] Nicolas Heess DeepMind London, UK [email protected] Razvan Pascanu DeepMind London, UK [email protected] Raia Hadsell DeepMind London, UK [email protected] Abstract: Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards. | 1610.04286#1 | 1610.04286 | [
"1606.04671"
]
|
|
1610.04286#1 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Keywords: Robot learning, transfer, progressive networks, sim-to-real, CoRL. # 1 Introduction Deep Reinforcement Learning offers new promise for achieving human-level control in robotics domains, especially for pixel-to-action scenarios where state estimation is from high-dimensional sensors and environment interaction and feedback are critical. With deep RL, a new set of algorithms has emerged that can attain sophisticated, precise control on challenging tasks, but these accomplishments have been demonstrated primarily in simulation, rather than on actual robot platforms. While recent advances in simulation-driven deep RL are impressive [1, 2, 3, 4, 5, 6, 7], demonstrating learning capabilities on real robots remains the bar by which we must measure the practical applicability of these methods. However, this poses a significant challenge, given the "data-hungry" training regime required for current pixel-based deep RL methods, and the relative frailty of research robots and their human handlers. One solution is to use transfer learning methods to bridge the reality gap that separates simulation from real world domains. In this paper, we use progressive networks, a deep learning architecture that has recently been proposed for transfer learning, to demonstrate such an approach, thus providing a proof-of-concept pathway by which deep RL can be used to effect fast policy learning on a real robot. Progressive nets have been shown to produce positive transfer between disparate tasks such as Atari games by utilizing lateral connections to previously learnt models [8]. The addition of new capacity for each new task allows specialized input features to be learned, an important advantage for deep RL algorithms which are improved by sharply-tuned perceptual features. An advantage of progressive | 1610.04286#0 | 1610.04286#2 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#2 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | nets compared with other methods for transfer learning or domain adaptation is that multiple tasks may be learned sequentially, without needing to specify source and target tasks. This paper presents an approach for transfer from simulation to the real robot that is proven using real-world, sparse-reward tasks. The tasks are learned using end-to-end deep RL, with RGB inputs and joint velocity output actions. First, an actor-critic network is trained in simulation using multiple asynchronous workers [6]. The network has a convolutional encoder followed by an LSTM. From the LSTM state, using a linear layer, we compute a set of discrete action outputs that control the different degrees of freedom of the simulated robot as well as the value function. After training, a new network is initialized with lateral, nonlinear connections to each convolutional and recurrent layer of the simulation-trained network. The new network is trained on a similar task on the real robot. Our initial findings show that the inductive bias imparted by the features and encoded policy of the simulation net is enough to give a dramatic learning speed-up on the real robot. # 2 Transfer Learning from Simulation to Real Our approach relies on the progressive nets architecture, which enables transfer learning through lateral connections which connect each layer of previously learnt network columns to each new column, thus supporting rich compositionality of features. We first summarize progressive nets, and then we discuss their application for transfer in robot domains. # 2.1 Progressive Networks Progressive networks are ideal for simulation-to-real transfer of policies in robot control domains, for multiple reasons. First, features learnt for one task may be transferred to many new tasks without destruction from fine-tuning. Second, the columns may be heterogeneous, which may be important for solving different tasks, including different input modalities, or simply to improve learning speed when transferring to the real robot. Third, progressive nets add new capacity, including new input connections, when transferring to new tasks. This is advantageous for bridging the reality gap, to accommodate dissimilar inputs between simulation and real sensors. | 1610.04286#1 | 1610.04286#3 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#3 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | A progressive network starts with a single column: a deep neural network having L layers with hidden activations h_i^{(1)} ∈ R^{n_i}, with n_i the number of units at layer i ≤ L, and parameters Θ^{(1)} trained to convergence. When switching to a second task, the parameters Θ^{(1)} are "frozen" and a new column with parameters Θ^{(2)} is instantiated (with random initialization), where layer h_i^{(2)} receives input from both h_{i-1}^{(2)} and h_{i-1}^{(1)} via lateral connections. Progressive networks can be generalized in a straightforward manner to have arbitrary network width per column/layer, to accommodate varying degrees of task difficulty, or to compile lateral connections from multiple, independent networks in an ensemble setting. | 1610.04286#2 | 1610.04286#4 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#4 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | h_i^{(k)} = f\Big( W_i^{(k)} h_{i-1}^{(k)} + \sum_{j<k} U_i^{(k:j)} h_{i-1}^{(j)} \Big), (1) where W_i^{(k)} ∈ R^{n_i × n_{i-1}} is the weight matrix of layer i of column k, U_i^{(k:j)} ∈ R^{n_i × n_j} are the lateral connections from layer i − 1 of column j to layer i of column k, and h_0 is the network input. f is an element-wise non-linearity: we use f(x) = max(0, x) for all intermediate layers. In the standard pretrain-and-finetune paradigm, there is often an implicit assumption of "overlap" between the tasks. Finetuning is efficient in this setting, as parameters need only be adjusted slightly to the target domain, and often only the top layer is retrained. In contrast, we make no assumptions about the relationship between tasks, which may in practice be orthogonal or even adversarial. Progressive networks side-step this issue by allocating a new column, potentially with different structure or inputs, for each new task. Columns in progressive networks are free to reuse, modify or ignore previously learned features via the lateral connections. Application to Reinforcement Learning. Although progressive networks are widely applicable, this paper focuses on their application to deep reinforcement learning. In this case, each column is trained to solve a particular Markov Decision Process (MDP): the k-th column thus defines a policy | 1610.04286#3 | 1610.04286#5 | 1610.04286 | [
"1606.04671"
]
|
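A numpy sketch of Eq. (1) for a single ReLU layer of column k, assuming per-column weight lists as a stand-in for the full architecture; `W[k][i]` is the within-column weight and `U[k][j][i]` the lateral connection from column j:

```python
import numpy as np

def progressive_layer(i, k, h_prev, W, U):
    """Compute h_i^(k) = f(W_i^(k) h_{i-1}^(k) + sum_{j<k} U_i^(k:j) h_{i-1}^(j)).

    h_prev[j] holds the layer i-1 activations of column j (frozen for j < k).
    """
    pre = W[k][i] @ h_prev[k]
    for j in range(k):                      # lateral connections from earlier columns
        pre = pre + U[k][j][i] @ h_prev[j]
    return np.maximum(0.0, pre)             # f(x) = max(0, x)
```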
1610.04286#5 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | π^{(k)}(a | s) taking as input a state s given by the environment, and generating probabilities over actions π^{(k)}(a | s) := h_L^{(k)}(s). At each time-step, an action is sampled from this distribution and taken in the environment, yielding the subsequent state. This policy implicitly defines a stationary distribution ρ^{π^{(k)}}(s, a) over states and actions. # 2.2 Approach The proposed approach for transfer from simulated to real robot domains is based on a progressive network with some specific changes. First, the columns of a progressive net do not need to have identical capacity or structure, and this can be an advantage in sim-to-real situations. Thus, the simulation-trained column is designed to have sufficient capacity and depth to learn the task from scratch, but the robot-trained columns have minimal capacity, to encourage fast learning and limit total parameter growth. Secondly, the layer-wise adapters proposed for progressive nets are unnecessary for the output layers of complementary sequences of tasks, so they are not used. Third, the output layer of the robot-trained column is initialised from the simulation-trained column in order to improve exploration. These architectural features are shown in Fig. 1. | 1610.04286#4 | 1610.04286#6 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#6 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | [Figure 1 schematic: two progressive architectures with columns labelled simulation and reality, lateral connections between columns, and per-column inputs and outputs] Figure 1: Depiction of a progressive network, left, and a modified progressive architecture used for robot transfer learning, right. The first column is trained on Task 1, in simulation, the second column is trained on Task 1 on the robot, and the third column is trained on Task 2 on the robot. Columns may differ in capacity, and the adapter functions (marked "a") are not used for the output layers of this non-adversarial sequence of tasks. The greatest risk in this approach to transfer learning is that rewards will be so sparse, or non-existent, in the real domain that the reinforcement learning will not improve a vastly suboptimal initial policy within a practical time frame. Thus, in order to maximise the likelihood of reward during exploration in the real domain, the new column is initialised such that the initial policy of the agent will be identical to the previous column. This is accomplished by initialising the weights coming from the last layer of the previous column to the output layer of the new column with the output weights of the previous column, while the connections incoming from the last hidden layer of the current column are initialised with zero-valued weights. Thus, using the example network in Fig. 1 (right), when parameters Θ^{(2)} are instantiated, layer output^{(2)} receives input from both h_2^{(2)} and h_2^{(1)}. However, unlike the other parameters in Θ^{(2)}, which will be randomly initialised, the weights W_out^{(2)} will be zeros and the weights U_out^{(1:2)} will be copied from the output weights of the previous column. Note that this only affects the initial policy of the agent and does not prevent the new column from training. | 1610.04286#5 | 1610.04286#7 | 1610.04286 | [
"1606.04671"
]
|
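A sketch of this exploration-friendly initialisation, under the assumption of simple linear output heads: the lateral output weights copy the previous column's output weights, while the new column's own output weights start at zero, so the initial policy matches column one exactly:

```python
import numpy as np

def init_output_layer(w_out_prev, hidden_new):
    """Initialise the new column's output layer as described in Sec. 2.2.

    w_out_prev -- output weights W_out^(1) of the frozen column
    hidden_new -- size of the new column's last hidden layer
    """
    n_out = w_out_prev.shape[0]
    u_out = w_out_prev.copy()               # lateral weights U_out^(1:2): copied
    w_out = np.zeros((n_out, hidden_new))   # own weights W_out^(2): zeros
    return w_out, u_out                     # initial policy equals the previous column's
```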
1610.04286#7 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | # 3 Related Literature There exist many different paradigms for domain transfer and many approaches designed specifically for deep neural models, but substantially fewer approaches for transfer from simulation to reality for robot domains. Even more rare are methods that can be used for transfer in interactive, rich sensor domains using end-to-end (pixel-to-action) learning. A growing body of work has been investigating the ability of deep networks to transfer between domains. Some research [9, 10] considers simply augmenting the target domain data with data from the source domain where an alignment exists. Building on this work, [11] starts from the observation that as one looks at higher layers in the model, the transferability of the features decreases quickly. To correct this effect, a soft constraint is added that enforces the distribution of the features to be | 1610.04286#6 | 1610.04286#8 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#8 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | more similar. In [11], a "confusion" loss is proposed which forces the model to ignore variations in the data that separate the two domains [12, 13]. Based on [12], [14] attempts to address the simulation to reality gap by using aligned data. The work is focused on pose estimation of the robotic arm, where training happens on a triple loss that looks at aligned simulation to real data, including the domain confusion loss. The paper does not show the efficiency of the method on learning novel complex policies. Several recent works from the supervised learning literature, e.g. [15, 16, 17], demonstrate how ideas from the adversarial training of neural networks can be used to reduce the sensitivity of a trained network to inter-domain variations, without requiring aligned training data. Intuitively these approaches train a representation that makes it hard to distinguish between data points drawn from the different domains. These ideas have, however, not yet been tested in the context of control. Demonstrating the difficulty of the problem, [10] provides evidence that a simple application of a model trained on synthetic data on the real robot fails. The paper also shows that the main failure point is the discrepancy in visual cues between simulation and reality. Partial success on transferring from simulation to a real robot has been reported [18, 19, 20]. They focus primarily on the problem of transfer from a more restricted, simpler version of a task to the full, more difficult version. While transfer from simulation to reality remains difficult, progress has been made with directly learning neural network control policies on a real robot, both from low-dimensional representations of the state and from visual input (e.g. [21], [22]). While the results are impressive, to achieve sufficient data efficiency these works currently rely on relatively restrictive task setups, specialized visual architectures, and carefully designed training regimes. Alternative approaches embrace big data ideas for robotics ([23, 24]). | 1610.04286#7 | 1610.04286#9 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#9 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | # 4 Experiments For training in simulation, we use the Asynchronous Advantage Actor-Critic (A3C) framework introduced in [6]. Compared to DQN [25], the model simultaneously learns a policy and a value function for predicting expected future rewards, and can be trained with CPUs, using multiple threads. A3C has been shown to converge faster than DQN, which makes it advantageous for research experimentation. For the manipulation domain of the Jaco arm, the agent policy controls nine degrees of freedom using velocity commands. This includes six joints on the arm plus three actuated fingers. The full policy Π(A|s, θ) comprises nine joint policies learnt by the agent, each one a softmax connected to the inputs from the previous layer and any lateral connections. Each joint policy i has three actions (a fixed positive velocity, a fixed negative velocity, and a zero velocity): π_i(a_i|s; θ_i). This discrete action set, while potentially lacking the precision of a continuous control policy, has worked well in practice. There is also a single value function that is linearly connected to the previous layer and lateral layers: V(s, θ_v). We evaluate both feedforward and recurrent neural networks. Both have convolutional input layers followed by either a fully connected layer or an LSTM. A standard-sized network is used for the simulation-trained column and a reduced-capacity network is used for the robot-trained columns, chosen because we found empirically that more capacity does not accelerate learning (see Section 4.2), presumably because of the features reused from the previous column. Details of the architecture are given in Figure 2 and Table 1. In all variants, the input is 3x64x64 pixels and the output is 28 (9 discrete joint policies plus one value function). | 1610.04286#8 | 1610.04286#10 | 1610.04286 | [
"1606.04671"
]
|
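A sketch of the factored policy head, assuming a shared feature vector from the last (possibly laterally connected) layer; nine independent 3-way softmaxes plus a scalar value:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def policy_and_value(features, joint_heads, value_head):
    """Nine 3-action joint policies pi_i(a_i | s) and a scalar value V(s).

    joint_heads -- list of nine (3, d) matrices; value_head -- (d,) vector.
    """
    pis = [softmax(Wi @ features) for Wi in joint_heads]   # one softmax per joint
    value = float(value_head @ features)
    actions = [np.random.choice(3, p=pi) for pi in pis]    # -v, 0, +v per joint
    return actions, pis, value
```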
1610.04286#10 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | The MuJoCo physics simulator [26] is used to train the first column for our experiments, with a rendered camera view to provide observations. In the real domain, a similarly positioned RGB camera provides the input. While the modeled Jaco and its dynamics are quite accurate, the visual discrepancies are obvious, as shown in Figure 3. The experiments are all focused around the task of reaching to a visual target, with only pure rewards provided as feedback (no shaped rewards). Though simple, this task requires that the state of the arm and the position of the target are correctly inferred from visual observations, and that the agent learns robust control over a high-dimensional state space. The arm is set to a random start position at the beginning of every episode, and the target is placed randomly within a 40cm by 30cm area. The agent receives a reward of +1 if its palm is within 10cm of the target, and episodes last for at most | 1610.04286#9 | 1610.04286#11 | 1610.04286 | [
"1606.04671"
]
|
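The reward structure is simple enough to state directly; a sketch assuming access to palm and target positions in metres and a simulator-provided safety flag (the 50-step episode cap is stated in the text that follows):

```python
import numpy as np

def step_reward(palm_xyz, target_xyz, safety_violation):
    """Sparse reacher reward: +1 within 10 cm of the target; violations end the episode."""
    if safety_violation:               # self-intersection, table contact, joint limits
        return 0.0, True               # terminate the episode
    reached = np.linalg.norm(np.asarray(palm_xyz) - np.asarray(target_xyz)) < 0.10
    return (1.0 if reached else 0.0), False
```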
1610.04286#11 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | [Figure 2 schematic: convolutional encoder (conv 1, conv 2) feeding an LSTM, whose state feeds the factored joint policies and the value function, with lateral connections into the progressive column]
            feedforward        recurrent
            wide   narrow      wide   narrow
fc (output)   28     28          28     28
LSTM           -      -         128     16
fc           512     32         128     16
conv 2        32      8          32      8
conv 1        16      8          16      8
params      621K    39K        299K    37K
Figure 2: Detailed schematic of progressive recurrent network architecture. The activations of the LSTM are connected as inputs to the progressive column. | 1610.04286#10 | 1610.04286#12 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#12 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Table 1: Network sizes for wide columns (simulation-trained) and narrow columns (robot-trained). For all networks, the first convolutional layer uses 8x8, stride 4 kernels and the second uses 5x5, stride 2 kernels. The total parameters include the lateral connections. Figure 3: Sample images from the real camera input image and the MuJoCo-rendered image. Though a more realistic model appearance could have been used, the blocky Jaco model was used to accelerate MuJoCo rendering, which was done on CPUs. The images show the diversity of Jaco start positions and target positions. | 1610.04286#11 | 1610.04286#13 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#13 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | 50 steps. Though there is some variance due to randomized starting states, a well-performing agent can achieve an average score of over 30 points by quickly reaching to the target and remaining in safe positions at all times. The episode is terminated if the agent causes a safety violation through self-intersection, by touching the table top, or by exceeding set joint limits. # 4.1 Training in simulation The first column is trained in simulation using A3C, as previously mentioned, using a wide feedforward or recurrent network. Intuitively, it makes sense to use a larger capacity network for training in simulation, to reach maximum performance. We verified this intuition by comparing wide and narrow | 1610.04286#12 | 1610.04286#14 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#14 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | [Figure 4 plots: learning curves (reward vs. training steps) for wide and narrow networks; left panel: feedforward, right panel: LSTM] Figure 4: Learning curves are shown for wide and narrow versions of the feedforward (left) and recurrent (right) models, which are trained with the MuJoCo simulator. The plots show mean and variance over 5 training runs with different seeds and hyperparameters. Stable performance is reached after approximately 50 million steps, which is more than one million episodes. While both the feedforward and the recurrent models learn the task, the recurrent network reaches a higher final mean score. | 1610.04286#13 | 1610.04286#15 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#15 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | [Figure 5 plot: rewards vs. training steps (10000 to 60000) on the real robot, comparing wide and narrow progressive columns, a finetuned wide column, and wide and narrow columns trained from scratch] Figure 5: Real robot training: We compare progressive, finetuning, and "from scratch" learning curves. All experiments use a recurrent architecture, trained on the robot, from RGB inputs. We compare wide and narrow columns for both the progressive experiments and the randomly initialized baseline. For all results, a median-filtered solid curve is shown overlaid on the raw rewards (dotted line). The "from scratch" baseline was a randomly initialized narrow or wide column, both of which fail to get any reward during training. | 1610.04286#14 | 1610.04286#16 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#16 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | network architectures, and found that the narrow network had slower learning and worse performance (see Figure 4). We also see that the LSTM model out-performs the feedforward model by an average of 3 points per episode. Even on this relatively simple task, full performance is only achieved after substantial interaction with the environment, on the order of 50 million steps - a number which is infeasible with a real robot. The simulation training, compared with the real robot, is accelerated because of fast rendering, multithreaded learning algorithms, and the ability to continuously train without human involvement. We calculate that learning this task, which trains to convergence in 24 hours using a CPU compute cluster, would take 53 days on the real robot even with continuous training for 24 hours a day. Moreover, multiple experiments in parallel were used to explore hyperparameters in simulation; this sort of search would multiply the hypothetical real robot training time. In simulation, we explore learning rates and entropy costs, which are sampled uniformly at random on a log scale. Learning rates are sampled between 5e-5 and 5e-3 and entropy costs between 1e-5 and 1e-2. The configuration with the best final performance from a grid of 30 is chosen as first column. For real Jaco experiments, both learning rates and entropy costs were optimized separately using a simulated transfer experiment with a single-threaded agent (A2C). # 4.2 Transfer to the robot To train on the real Jaco, a flat target is manually repositioned within a 40cm by 30cm area on every third episode. Rewards are given automatically by tracking the colored target and giving reward based on the position of the Jaco gripper with respect to it. We train a baseline from scratch, a finetuned first column, and a progressive second column. Each experiment is run for approximately 60000 steps (about four hours). | 1610.04286#15 | 1610.04286#17 | 1610.04286 | [
"1606.04671"
]
|
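A sketch of the hyperparameter search described above; values are drawn log-uniformly within the stated ranges:

```python
import numpy as np

def sample_config(rng):
    """Log-uniform draws: learning rate in [5e-5, 5e-3], entropy cost in [1e-5, 1e-2]."""
    lr = float(np.exp(rng.uniform(np.log(5e-5), np.log(5e-3))))
    entropy_cost = float(np.exp(rng.uniform(np.log(1e-5), np.log(1e-2))))
    return {"learning_rate": lr, "entropy_cost": entropy_cost}

rng = np.random.default_rng(0)
grid = [sample_config(rng) for _ in range(30)]   # grid of 30; best final score kept
```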
1610.04286#17 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | The baseline is trained by randomly initializing a narrow network and then training. We also try a randomly initialized wide network. As seen in Figure 5 (green curve), the randomly initialized column fails to learn and the agent gets zero reward throughout training. The progressive second column gets to 34 points, while the experiment with finetuning, which starts with the simulation-trained column and continues training on the robot, does not reach the same score as the progressive network. Finetuning vs. progressive approaches. The progressive approach is clearly well-suited for continual learning scenarios, where it is important to mitigate forgetting of previous tasks while supporting transfer to new tasks, but the advantage is less intuitive for curricula of tasks where the focus is on | 1610.04286#16 | 1610.04286#18 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#18 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | [Figure 6 plots: final rewards for 300 trials sorted by decreasing final reward, comparing finetuned vs. progressive under four conditions: subtle perspective changes, significant perspective changes, subtle color changes, and significant color changes] Figure 6: To analyse the relative stability and performance of finetuning vs. progressive approaches, we add color or perspective changes to the environment in simulation and then train 300 networks with different random seeds, learning rates, and entropy costs. The progressive networks have significantly higher performance and less sensitivity to hyperparameter selection for all four experiments. | 1610.04286#17 | 1610.04286#19 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#19 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | 6 Subtle perspective changes Significant perspective changes Subtle color changes Significant color changes 25 40 â finetuned â finetuned 35 â progressive 30 35 â finetuned 30 â progressive â finetuned â progressive â progressive $25 Final rewards ° 0 50 100 150 200 250 300 0 50 100 150 200 250 0 50 100 150 200 250 300 Trials sorted by decreasing final reward: Trials sorted by decreasing final rewards Trials sorted by decreasing final rewards Trials sorted by decreasing final 0 50 100 150 200 250 300 # Final rewards Figure 6: To analyse the relative stability and performance of ï¬ | 1610.04286#18 | 1610.04286#20 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#20 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | netuning vs. progressive approaches, we add color or perspective changes to the environment in simulation and then train 300 networks with different random seeds, learning rates, and entropy costs. The progressive networks have signiï¬ cantly higher performance and less sensitivity to hyperparameter selection for all four experiments. maximising transfer learning. To assess this empirically, we start with a simulator-trained ï¬ rst column, as described above, and then either ï¬ netune that column or add a narrow progressive column and retrain for the reacher task under a variety of conditions, including small or large color changes and small or large perspective changes. | 1610.04286#19 | 1610.04286#21 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#21 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | For each of these environment perturbations, we train 300 times with different seeds, learning rates, and entropy costs, which are the most sensitive hyperparameters. As shown in Figure 6, we ï¬ nd that progressive networks are more stable and reach higher ï¬ nal performance than ï¬ netuning. # 4.3 Transfer to a dynamic robot task with proprioception Unlike the ï¬ netuning paradigm, which is unable to accommodate changing network morphology or new input modalities, progressive nets offer a ï¬ exibility that is advantageous for transferring to new data sources while still leveraging previous knowledge. To demonstrate this, we train a second column on the reacher task but add proprioceptive features as an additional input, alongside the RGB images. The proprioceptive features are joint angles and velocities for each of the 9 joints of the arm and ï¬ ngers, 18 in total, input to a MLP (a single linear layer plus ReLU) and joined with the outputs of the convolutional stack. Then, a third progressive column is added that only learns from the proprioceptive features, while the visual input is forwarded through the previous columns and the features are used via the lateral connections. A diagram of this architecture is shown in Figure 7 (left). To evaluate this architecture, we train on a dynamic target task. By employing a small motorized pulley, the red target is smoothly translated across the table with random reversals in the motion, creating a tracking task that requires a different control policy while maintaining a similar visual presentation. Other aspects of the task, including rewards and episode lengths, were kept the same. If the second column is trained on this conveyor task, the learning is relatively slow, and full performance is reached after 50000 steps (about 4 hours). If the second column is instead trained on the static reacher task, and the third column is then trained on the conveyor task, we observe immediate transfer, and full performance is reached almost immediately (Figure 7, right). This demonstrates both the utility of progressive nets for curriculum tasks, as well as the capability of the architecture to immediately reuse previously learnt features. | 1610.04286#20 | 1610.04286#22 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#22 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | # 5 Discussion Transfer learning, the ability to accumulate and transfer knowledge to new domains, is a core characteristic of intelligent beings. Progressive neural networks offer a framework that can be used for continual learning of many tasks and which facilitates transfer learning, even across the divide which separates simulation and robot. We took full advantage of the ï¬ exibility and computational scaling afforded by simulation and compared many hyperparameters and architectures for a random start, random target control task with visual input, then successfully transferred the skill to an agent training on the real robot. In order to fulï¬ ll the potential of deep reinforcement learning applied in real-world robotic domains, learning needs to become many times more efï¬ cient. One route to achieving this is via transfer learning from simulation-trained agents. We have described an initial set of experiments that prove that progressive nets can be used to achieve reliable, fast transfer for pixel-to-action RL policies. | 1610.04286#21 | 1610.04286#23 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#23 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | [Plot: "Real-robot-trained progressive nets (conveyor task)", rewards vs. steps (0-60000), with curves for "Progressive 2 column" and "Progressive 3 column".] Figure 7: Real robot training results are shown for the dynamic "conveyor" task. A three-column architecture is depicted (left), in which vision (x) is used to train column one, vision and proprioception (φ) are used in column two, and only proprioception is used to train column three. Encoder 1 is a convolutional net, encoder 2 is a convolutional net with proprioceptive features added before the LSTM, and encoder 3 is an MLP. The learning curves (right) show the results of training on a conveyor (dynamic target) task. If the conveyor task is learned as the third column, rather than the second, then the learning is significantly faster. | 1610.04286#22 | 1610.04286#24 | 1610.04286 | [
"1606.04671"
]
|
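Per Figure 7, columns one and two stay fixed while only the third, proprioception-only column is optimized on the conveyor task. A minimal sketch of that freezing step is below; the `freeze` helper, the stand-in linear columns, and the RMSProp choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

def freeze(column: nn.Module) -> None:
    """Fix a trained column's weights; its activations are still forwarded
    to newer columns through the lateral connections."""
    for p in column.parameters():
        p.requires_grad = False
    column.eval()

# Hypothetical usage with three columns: only column 3 is optimized.
col1, col2, col3 = nn.Linear(8, 8), nn.Linear(8, 8), nn.Linear(8, 8)
freeze(col1)
freeze(col2)
optimizer = torch.optim.RMSprop(col3.parameters(), lr=1e-4)
```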
1610.04286#24 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | # References [1] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1071–1079. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5444-learning-neural-network-policies-with-guided-policy-search-under-unknown-dynamics.pdf. [2] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015. | 1610.04286#23 | 1610.04286#25 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#25 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | [3] N. Heess, G. Wayne, D. Silver, T. P. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2944–2952, 2015. URL http://papers.nips.cc/paper/5796-learning-continuous-control-policies-by-stochastic-value-gradients. [4] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. | 1610.04286#24 | 1610.04286#26 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#26 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Continuous control with deep reinforcement learning. Proceedings of the International Conference on Learning Representations (ICLR), 2016. URL http://arxiv.org/abs/1509.02971. [5] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. In Proceedings of the International Conference on Learning Representations (ICLR), 2016. [6] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. | 1610.04286#25 | 1610.04286#27 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#27 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Asynchronous methods for deep reinforcement learning. In Int'l Conf. on Machine Learning (ICML), 2016. [7] S. Gu, T. P. Lillicrap, I. Sutskever, and S. Levine. Continuous deep Q-learning with model-based acceleration. In ICML 2016, 2016. [8] A. Rusu, N. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. | 1610.04286#26 | 1610.04286#28 | 1610.04286 | [
"1606.04671"
]
|
1610.04286#28 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016. [9] X. Peng, B. Sun, K. Ali, and K. Saenko. Learning deep object detectors from 3d models. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 1278–1286, 2015. [10] H. Su, C. R. Qi, Y. Li, and L. J. Guibas. | 1610.04286#27 | 1610.04286#29 | 1610.04286 | [
"1606.04671"
]
|