doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1602.00367 | 36 | [Mesnil et al.2014] Grégoire Mesnil, Marc'Aurelio Ranzato, Tomas Mikolov, and Yoshua Bengio. 2014. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprint arXiv:1412.5335.
[Sainath et al.2015] T.N. Sainath, O. Vinyals, A. Senior, and H. Sak. 2015. Convolutional, long short-term memory, fully connected deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 4580–4584, April.
[Socher et al.2013] Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP.
[Srivastava et al.2014] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. | 1602.00367#36 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performance with far fewer parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 37 | [Sundermeyer et al.2015] Martin Sundermeyer, Hermann Ney, and Ralf Schlüter. 2015. From feedforward to recurrent LSTM neural networks for language modeling. Audio, Speech, and Language Processing, IEEE/ACM Transactions on, 23(3):517–529.
[Tang et al.2015] Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422–1432.
[Visin et al.2015] Francesco Visin, Kyle Kastner, Aaron C. Courville, Yoshua Bengio, Matteo Matteucci, and KyungHyun Cho. 2015. ReSeg: A recurrent neural network for object segmentation. CoRR, abs/1511.07053.
[Werbos1990] P. Werbos. 1990. Backpropagation through time: what does it do and how to do it. In Proceedings of IEEE, volume 78, pages 1550–1560.
[Zeiler2012] Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701.
1601.06759 | 0 | arXiv:1601.06759v3 [cs.CV] 19 Aug 2016
# Pixel Recurrent Neural Networks
# Aäron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu
[email protected] [email protected] [email protected]
Google DeepMind
# Abstract
(Figure 1 panels: occluded, completions, original.)
Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent. | 1601.06759#0 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 1 | Figure 1. Image completions sampled from a PixelRNN.
# 1. Introduction
Generative image modeling is a central problem in unsupervised learning. Probabilistic density models can be used for a wide variety of tasks that range from image compression and forms of reconstruction such as image inpainting (e.g., see Figure 1) and deblurring, to generation of new images. When the model is conditioned on external information, possible applications also include creating images based on text descriptions or simulating future frames in a planning task. One of the great advantages in generative modeling is that there are practically endless amounts of image data available to learn from. However, because images are high dimensional and highly structured, estimating the distribution of natural images is extremely challenging.
One of the most important obstacles in generative modeling is building complex and expressive models that are also tractable and scalable. This trade-off has resulted in a large variety of generative models, each having their advantages. Most work focuses on stochastic latent variable models such as VAEs (Rezende et al., 2014; Kingma & Welling, 2013) that aim to extract meaningful representations, but often come with an intractable inference step that can hinder their performance.
1601.06759 | 2 | One effective approach to tractably model a joint distribution of the pixels in the image is to cast it as a product of conditional distributions; this approach has been adopted in autoregressive models such as NADE (Larochelle & Murray, 2011) and fully visible neural networks (Neal, 1992; Bengio & Bengio, 2000). The factorization turns the joint modeling problem into a sequence problem, where one learns to predict the next pixel given all the previously generated pixels. But to model the highly nonlinear and long-range correlations between pixels and the complex conditional distributions that result, a highly expressive sequence model is necessary.
1601.06759 | 3 | Recurrent Neural Networks (RNN) are powerful models that offer a compact, shared parametrization of a series of conditional distributions. RNNs have been shown to excel at hard sequence problems ranging from handwriting generation (Graves, 2013), to character prediction (Sutskever et al., 2011) and to machine translation (Kalchbrenner & Blunsom, 2013). A two-dimensional RNN has produced very promising results in modeling grayscale images and textures (Theis & Bethge, 2015).
In this paper we advance two-dimensional RNNs and apply them to large-scale modeling of natural images.
Figure 2. Left: To generate pixel xi one conditions on all the previously generated pixels left and above of xi. Center: To generate a pixel in the multi-scale case we can also condition on the subsampled image pixels (in light blue). Right: Diagram of the connectivity inside a masked convolution. In the first layer, each of the RGB channels is connected to previous channels and to the context, but is not connected to itself. In subsequent layers, the channels are also connected to themselves.
1601.06759 | 4 | The contributions of the paper are as follows. In Section 3 we design two types of PixelRNNs corresponding to the two types of LSTM layers; we describe the purely convolutional PixelCNN that is our fastest architecture; and we design a Multi-Scale version of the PixelRNN. In Section 5 we show the relative benefits of using the discrete softmax distribution in our models and of adopting residual connections for the LSTM layers. Next we test the models on MNIST and on CIFAR-10 and show that they obtain log-likelihood scores that are considerably better than previous results. We also provide results for the large-scale ImageNet dataset resized to both 32 × 32 and 64 × 64 pixels; to our knowledge likelihood values from generative models have not previously been reported on this dataset. Finally, we give a qualitative evaluation of the samples generated from the PixelRNNs.
1601.06759 | 5 | The resulting PixelRNNs are composed of up to twelve, fast two-dimensional Long Short-Term Memory (LSTM) layers. These layers use LSTM units in their state (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2009) and adopt a convolution to compute at once all the states along one of the spatial dimensions of the data. We design two types of these layers. The first type is the Row LSTM layer where the convolution is applied along each row; a similar technique is described in (Stollenga et al., 2015). The second type is the Diagonal BiLSTM layer where the convolution is applied in a novel fashion along the diagonals of the image. The networks also incorporate residual connections (He et al., 2015) around LSTM layers; we observe that this helps with training of the PixelRNN for up to twelve layers of depth.
1601.06759 | 6 | We also consider a second, simplified architecture which shares the same core components as the PixelRNN. We observe that Convolutional Neural Networks (CNN) can also be used as a sequence model with a fixed dependency range, by using Masked convolutions. The PixelCNN architecture is a fully convolutional network of fifteen layers that preserves the spatial resolution of its input throughout the layers and outputs a conditional distribution at each location.
# 2. Model
Our aim is to estimate a distribution over natural images that can be used to tractably compute the likelihood of images and to generate new ones. The network scans the image one row at a time and one pixel at a time within each row. For each pixel it predicts the conditional distribution over the possible pixel values given the scanned context. Figure 2 illustrates this process. The joint distribution over the image pixels is factorized into a product of conditional distributions. The parameters used in the predictions are shared across all pixel positions in the image.
1601.06759 | 7 | To capture the generation process, Theis & Bethge (2015) propose to use a two-dimensional LSTM network (Graves & Schmidhuber, 2009) that starts at the top left pixel and proceeds towards the bottom right pixel. The advantage of the LSTM network is that it effectively handles long-range dependencies that are central to object and scene understanding. The two-dimensional structure ensures that the signals are well propagated both in the left-to-right and top-to-bottom directions.
In this section we first focus on the form of the distribution, whereas the next section will be devoted to describing the architectural innovations inside PixelRNN.
Both PixelRNN and PixelCNN capture the full generality of pixel inter-dependencies without introducing independence assumptions as in e.g., latent variable models. The dependencies are also maintained between the RGB color values within each individual pixel. Furthermore, in contrast to previous approaches that model the pixels as continuous values (e.g., Theis & Bethge (2015); Gregor et al. (2014)), we model the pixels as discrete values using a multinomial distribution implemented with a simple softmax layer. We observe that this approach gives both representational and training advantages for our models.
# 2.1. Generating an Image Pixel by Pixel
1601.06759 | 8 | The goal is to assign a probability p(x) to each image x formed of n × n pixels. We can write the image x as a one-dimensional sequence x_1, ..., x_{n^2} where pixels are taken from the image row by row. To estimate the joint distribution p(x) we write it as the product of the conditional distributions over the pixels:
$$p(\mathbf{x}) = \prod_{i=1}^{n^2} p(x_i \mid x_1, \ldots, x_{i-1}) \qquad (1)$$
The value p(x_i | x_1, ..., x_{i−1}) is the probability of the i-th pixel x_i given all the previous pixels x_1, ..., x_{i−1}. The generation proceeds row by row and pixel by pixel. Figure 2 (Left) illustrates the conditioning scheme.
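The loop below is a minimal NumPy sketch of this sequential generation; `predict_logits` is a hypothetical placeholder for the trained network, not part of the paper.

```python
# Sequential, pixel-by-pixel sampling: row by row, pixel by pixel, channel by channel.
import numpy as np

def predict_logits(image, row, col, channel):
    # Placeholder for the network: returns 256 unnormalized scores for one channel.
    return np.zeros(256)

def sample_image(n, rng=np.random.default_rng(0)):
    image = np.zeros((n, n, 3), dtype=np.int64)
    for row in range(n):                      # scan the image row by row
        for col in range(n):                  # and pixel by pixel within each row
            for channel in range(3):          # R, then G, then B
                logits = predict_logits(image, row, col, channel)
                probs = np.exp(logits - logits.max())
                probs /= probs.sum()
                image[row, col, channel] = rng.choice(256, p=probs)
    return image

sample = sample_image(8)   # an 8 x 8 RGB image with values in {0, ..., 255}
```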
Each pixel x_i is in turn jointly determined by three values, one for each of the color channels Red, Green and Blue (RGB). We rewrite the distribution p(x_i | x_{<i}) as the following product:
$$p(x_{i,R} \mid \mathbf{x}_{<i})\; p(x_{i,G} \mid \mathbf{x}_{<i}, x_{i,R})\; p(x_{i,B} \mid \mathbf{x}_{<i}, x_{i,R}, x_{i,G}) \qquad (2)$$
Each of the colors is thus conditioned on the other channels as well as on all the previously generated pixels.
1601.06759 | 9 | Figure 3. In the Diagonal BiLSTM, to allow for parallelization along the diagonals, the input map is skewed by offsetting each row by one position with respect to the previous row. When the spatial layer is computed left to right and column by column, the output map is shifted back into the original size. The convolution uses a kernel of size 2 × 1.
Note that during training and evaluation the distributions over the pixel values are computed in parallel, while the generation of an image is sequential.
The kernel of the one-dimensional convolution has size k × 1 where k ≥ 3; the larger the value of k the broader the context that is captured. The weight sharing in the convolution ensures translation invariance of the computed features along each row.
# 2.2. Pixels as Discrete Variables
1601.06759 | 10 | Previous approaches use a continuous distribution for the values of the pixels in the image (e.g. Theis & Bethge (2015); Uria et al. (2014)). By contrast we model p(x) as a discrete distribution, with every conditional distribution in Equation 2 being a multinomial that is modeled with a softmax layer. Each channel variable x_{i,∗} simply takes one of 256 distinct values. The discrete distribution is representationally simple and has the advantage of being arbitrarily multimodal without prior on the shape (see Fig. 6). Experimentally we also find the discrete distribution to be easy to learn and to produce better performance compared to a continuous distribution (Section 5).
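A small sketch of this 256-way discrete parameterization, using random placeholder logits rather than an actual network output:

```python
# Each channel of each pixel is a categorical over {0, ..., 255}, trained with cross-entropy.
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 3, 256))          # toy 4x4 RGB image, 256 scores per channel
targets = rng.integers(0, 256, size=(4, 4, 3))    # ground-truth discrete pixel values

# softmax over the last axis
z = logits - logits.max(axis=-1, keepdims=True)
probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

# negative log-likelihood (in nats) of the observed pixel values
nll = -np.log(np.take_along_axis(probs, targets[..., None], axis=-1)).mean()
print(f"average NLL per channel: {nll:.3f} nats")
```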
1601.06759 | 11 | The computation proceeds as follows. An LSTM layer has an input-to-state component and a recurrent state-to-state component that together determine the four gates inside the LSTM core. To enhance parallelization in the Row LSTM the input-to-state component is first computed for the entire two-dimensional input map; for this a k × 1 convolution is used to follow the row-wise orientation of the LSTM itself. The convolution is masked to include only the valid context (see Section 3.4) and produces a tensor of size 4h × n × n, representing the four gate vectors for each position in the input map, where h is the number of output feature maps.
# 3. Pixel Recurrent Neural Networks
To compute one step of the state-to-state component of the LSTM layer, one is given the previous hidden and cell states h_{i−1} and c_{i−1}, each of size h × n × 1. The new hidden and cell states h_i, c_i are obtained as given in Equation 3.
1601.06759 | 12 | In this section we describe the architectural components that compose the PixelRNN. In Sections 3.1 and 3.2, we describe the two types of LSTM layers that use convolutions to compute at once the states along one of the spatial dimensions. In Section 3.3 we describe how to incorporate residual connections to improve the training of a PixelRNN with many LSTM layers. In Section 3.4 we describe the softmax layer that computes the discrete joint distribution of the colors and the masking technique that ensures the proper conditioning scheme. In Section 3.5 we describe the PixelCNN architecture. Finally in Section 3.6 we describe the multi-scale architecture.
# 3.1. Row LSTM
The Row LSTM is a unidirectional layer that processes the image row by row from top to bottom computing features for a whole row at once; the computation is performed with a one-dimensional convolution. For a pixel x_i the layer captures a roughly triangular context above the pixel as shown in Figure 4 (center). The gates and states of the layer are computed as:
$$[\mathbf{o}_i, \mathbf{f}_i, \mathbf{i}_i, \mathbf{g}_i] = \sigma(\mathbf{K}^{ss} \circledast \mathbf{h}_{i-1} + \mathbf{K}^{is} \circledast \mathbf{x}_i), \qquad \mathbf{c}_i = \mathbf{f}_i \odot \mathbf{c}_{i-1} + \mathbf{i}_i \odot \mathbf{g}_i, \qquad \mathbf{h}_i = \mathbf{o}_i \odot \tanh(\mathbf{c}_i) \qquad (3)$$
1601.06759 | 13 | where x_i of size h × n × 1 is row i of the input map, ⊛ represents the convolution operation and ⊙ the element-wise multiplication. The weights K^{ss} and K^{is} are the kernel weights for the state-to-state and the input-to-state components, where the latter is precomputed as described above. In the case of the output, forget and input gates o_i, f_i and i_i, the activation σ is the logistic sigmoid function, whereas for the content gate g_i, σ is the tanh function. Each step computes at once the new state for an entire row of the input map. Because the Row LSTM has a triangular receptive field (Figure 4), it is unable to capture the entire available context.
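The following NumPy sketch illustrates one Row LSTM step in the spirit of Equation 3. It omits masking, batching and the precomputation of the input-to-state map over the whole image; the `conv_row` helper and the weight shapes are illustrative assumptions, not the paper's code.

```python
import numpy as np

def conv_row(weights, x):
    # 1-D "same" convolution along the row: weights (out, in, k), x (in, n) -> (out, n)
    out_ch, in_ch, k = weights.shape
    n = x.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros((out_ch, n))
    for tap in range(k):
        out += np.einsum('oi,in->on', weights[:, :, tap], xp[:, tap:tap + n])
    return out

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def row_lstm_step(x_row, h_prev, c_prev, K_is, K_ss):
    # Gate pre-activations: k x 1 convolution of the previous row's hidden state
    # (state-to-state) plus a k x 1 convolution of the current input row (input-to-state).
    gates = conv_row(K_ss, h_prev) + conv_row(K_is, x_row)
    o, f, i, g = np.split(gates, 4, axis=0)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)   # whole row updated at once
    h = sigmoid(o) * np.tanh(c)
    return h, c

h, n, k = 8, 16, 3
rng = np.random.default_rng(0)
K_is = rng.normal(scale=0.1, size=(4 * h, h, k))
K_ss = rng.normal(scale=0.1, size=(4 * h, h, k))
h_state, c_state = np.zeros((h, n)), np.zeros((h, n))
for x_row in rng.normal(size=(5, h, n)):                # scan five rows top to bottom
    h_state, c_state = row_lstm_step(x_row, h_state, c_state, K_is, K_ss)
```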
(Figure 4 panels: PixelCNN, Row LSTM, Diagonal BiLSTM.)
1601.06759 | 14 | Figure 4. Visualization of the input-to-state and state-to-state mappings for the three proposed architectures.
# 3.2. Diagonal BiLSTM
The Diagonal BiLSTM is designed to both parallelize the computation and to capture the entire available context for any image size. Each of the two directions of the layer scans the image in a diagonal fashion starting from a corner at the top and reaching the opposite corner at the bottom. Each step in the computation computes at once the LSTM state along a diagonal in the image. Figure 4 (right) illustrates the computation and the resulting receptive field.
1601.06759 | 15 | The diagonal computation proceeds as follows. We first skew the input map into a space that makes it easy to apply convolutions along diagonals. The skewing operation offsets each row of the input map by one position with respect to the previous row, as illustrated in Figure 3; this results in a map of size n × (2n − 1). At this point we can compute the input-to-state and state-to-state components of the Diagonal BiLSTM. For each of the two directions, the input-to-state component is simply a 1 × 1 convolution K^{is} that contributes to the four gates in the LSTM core; the operation generates a 4h × n × n tensor. The state-to-state recurrent component is then computed with a column-wise convolution K^{ss} that has a kernel of size 2 × 1. The step takes the previous hidden and cell states, combines the contribution of the input-to-state component and produces the next hidden and cell states, as defined in Equation 3. The output feature map is then skewed back into an n × n map by removing the offset positions. This computation is repeated for each of the two directions.
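A minimal sketch of the skew and unskew operations described above (the function names are ours, not the paper's):

```python
import numpy as np

def skew(x):
    # x: (h, n, n) feature map -> (h, n, 2n - 1); row r is shifted right by r positions,
    # so that the image diagonals become columns of the skewed map.
    h, n, _ = x.shape
    out = np.zeros((h, n, 2 * n - 1), dtype=x.dtype)
    for r in range(n):
        out[:, r, r:r + n] = x[:, r, :]
    return out

def unskew(x_skewed):
    # Inverse of `skew`: drop the offset positions to recover an (h, n, n) map.
    h, n, _ = x_skewed.shape
    out = np.zeros((h, n, n), dtype=x_skewed.dtype)
    for r in range(n):
        out[:, r, :] = x_skewed[:, r, r:r + n]
    return out

x = np.arange(2 * 4 * 4).reshape(2, 4, 4).astype(float)
assert np.allclose(unskew(skew(x)), x)
```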
1601.06759 | 17 | Besides reaching the full dependency field, the Diagonal BiLSTM has the additional advantage that it uses a convolutional kernel of size 2 × 1 that processes a minimal amount of information at each step yielding a highly nonlinear computation. Kernel sizes larger than 2 × 1 are not particularly useful as they do not broaden the already global receptive field of the Diagonal BiLSTM.
# 3.3. Residual Connections
1601.06759 | 18 | We train PixelRNNs of up to twelve layers of depth. As a means to both increase convergence speed and propagate signals more directly through the network, we deploy residual connections (He et al., 2015) from one LSTM layer to the next. Figure 5 shows a diagram of the residual blocks. The input map to the PixelRNN LSTM layer has 2h features. The input-to-state component reduces the number of features by producing h features per gate. After applying the recurrent layer, the output map is upsampled back to 2h features per position via a 1 × 1 convolution and the input map is added to the output map. This method is related to previous approaches that use gating along the depth of the recurrent network (Kalchbrenner et al., 2015; Zhang et al., 2016), but has the advantage of not requiring additional gates. Apart from residual connections, one can also use learnable skip connections from each layer to the output. In the experiments we evaluate the relative effectiveness of residual and layer-to-output skip connections.
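A sketch of this residual wiring around a recurrent layer; the `input_to_state` and `lstm_layer` callables are hypothetical stand-ins for the real masked convolution and Row/Diagonal LSTM, and the per-gate feature split is collapsed for brevity.

```python
import numpy as np

def conv1x1(weights, x):
    # 1x1 convolution: weights (out, in), x (in, n, n) -> (out, n, n)
    return np.einsum('oi,ixy->oxy', weights, x)

def residual_lstm_block(x, input_to_state, lstm_layer, w_up):
    # Block input has 2h features; the recurrent layer runs on h features; a 1x1
    # convolution maps the output back to 2h features and the input is added.
    h_features = lstm_layer(input_to_state(x))    # (h, n, n)
    return x + conv1x1(w_up, h_features)          # back to (2h, n, n)

two_h, h, n = 16, 8, 12
rng = np.random.default_rng(0)
w_down = rng.normal(scale=0.1, size=(h, two_h))
w_up = rng.normal(scale=0.1, size=(two_h, h))
x = rng.normal(size=(two_h, n, n))
out = residual_lstm_block(
    x,
    input_to_state=lambda t: conv1x1(w_down, t),
    lstm_layer=np.tanh,                           # placeholder for the recurrent layer
    w_up=w_up,
)
assert out.shape == x.shape
```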
Figure 5. Residual blocks for a PixelCNN (left) and PixelRNNs.
1601.06759 | 19 | # 3.4. Masked Convolution
The h features for each input position at every layer in the network are split into three parts, each corresponding to one of the RGB channels. When predicting the R channel for the current pixel x_i, only the generated pixels left and above of x_i can be used as context. When predicting the G channel, the value of the R channel can also be used as context in addition to the previously generated pixels. Likewise, for the B channel, the values of both the R and G channels can be used. To restrict connections in the network to these dependencies, we apply a mask to the input-to-state convolutions and to other purely convolutional layers in a PixelRNN.
1601.06759 | 20 | We use two types of masks that we indicate with mask A and mask B, as shown in Figure 2 (Right). Mask A is applied only to the first convolutional layer in a PixelRNN and restricts the connections to those neighboring pixels and to those colors in the current pixels that have already been predicted. On the other hand, mask B is applied to all the subsequent input-to-state convolutional transitions and relaxes the restrictions of mask A by also allowing the connection from a color to itself. The masks can be easily implemented by zeroing out the corresponding weights in the input-to-state convolutions after each update.
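One possible way to build such masks (a sketch; the helper name and the channel-grouping convention are assumptions for illustration, not the paper's code):

```python
import numpy as np

def build_mask(kernel_size, in_channels, out_channels, mask_type):
    # Spatially, a position may only see pixels above it and to its left. Along the
    # channel axis, features are split into (R, G, B) groups; mask 'A' forbids the
    # connection of a colour to itself (first layer), mask 'B' allows it (later layers).
    k = kernel_size
    mask = np.ones((out_channels, in_channels, k, k), dtype=np.float32)
    centre = k // 2
    mask[:, :, centre + 1:, :] = 0.0          # rows below the centre
    mask[:, :, centre, centre + 1:] = 0.0     # pixels to the right in the centre row

    def group(channel, n_channels):           # 0 = R, 1 = G, 2 = B
        return channel * 3 // n_channels

    for o in range(out_channels):
        for i in range(in_channels):
            if mask_type == 'A' and group(i, in_channels) >= group(o, out_channels):
                mask[o, i, centre, centre] = 0.0
            if mask_type == 'B' and group(i, in_channels) > group(o, out_channels):
                mask[o, i, centre, centre] = 0.0
    return mask

mask_a = build_mask(7, 3, 12, 'A')   # first layer: RGB input, e.g. 12 feature maps
mask_b = build_mask(3, 12, 12, 'B')  # subsequent layers
# Applied by element-wise multiplying the convolution weights with the mask
# ("zeroing out the corresponding weights") after each update.
```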
| | PixelCNN | Row LSTM | Diagonal BiLSTM |
|---|---|---|---|
| First layer | 7 × 7 conv, mask A | 7 × 7 conv, mask A | 7 × 7 conv, mask A |
| Multiple residual blocks (see Fig. 5) | 3 × 3 conv, mask B | i-s: 3 × 1, mask B; s-s: 3 × 1, no mask | i-s: 1 × 1, mask B; s-s: 1 × 2, no mask |
| Output layers | ReLU followed by 1 × 1 conv, mask B (2 layers) | ReLU followed by 1 × 1 conv, mask B (2 layers) | ReLU followed by 1 × 1 conv, mask B (2 layers) |
| Final layer | 256-way softmax for each RGB color (natural images) or sigmoid (MNIST) | 256-way softmax for each RGB color (natural images) or sigmoid (MNIST) | 256-way softmax for each RGB color (natural images) or sigmoid (MNIST) |
Table 1. Details of the architectures. In the LSTM architectures i-s and s-s stand for input-state and state-state convolutions.
1601.06759 | 21 | Then, in the biasing process, for each layer in the conditional PixelRNN, one simply maps the c × n × n conditioning map into a 4h × n × n map that is added to the input-to-state map of the corresponding layer; this is performed using a 1 × 1 unmasked convolution. The larger n × n image is then generated as usual.
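A minimal sketch of this biasing step (shapes and names are illustrative assumptions):

```python
import numpy as np

# Map the c x n x n conditioning features to 4h x n x n with an unmasked 1x1 convolution
# and add them to the input-to-state gate pre-activations of one conditional layer.
rng = np.random.default_rng(0)
c, h, n = 4, 8, 16
conditioning_map = rng.normal(size=(c, n, n))        # upsampled s x s image features
gates_from_input = rng.normal(size=(4 * h, n, n))    # input-to-state component of a layer
w_bias = rng.normal(scale=0.1, size=(4 * h, c))      # 1x1 convolution weights

gate_bias = np.einsum('oc,cxy->oxy', w_bias, conditioning_map)
biased_gates = gates_from_input + gate_bias          # fed into the LSTM gates as usual
```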
# 4. Specifications of Models
In this section we give the specifications of the PixelRNNs used in the experiments. We have four types of networks: the PixelRNN based on Row LSTM, the one based on Diagonal BiLSTM, the fully convolutional one and the Multi-Scale one.
Similar masks have also been used in variational autoencoders (Gregor et al., 2014; Germain et al., 2015).
# 3.5. PixelCNN
1601.06759 | 22 | The Row and Diagonal LSTM layers have a potentially unbounded dependency range within their receptive field. This comes with a computational cost as each state needs to be computed sequentially. One simple workaround is to make the receptive field large, but not unbounded. We can use standard convolutional layers to capture a bounded receptive field and compute features for all pixel positions at once. The PixelCNN uses multiple convolutional layers that preserve the spatial resolution; pooling layers are not used. Masks are adopted in the convolutions to avoid seeing the future context; masks have previously also been used in non-convolutional models such as MADE (Germain et al., 2015). Note that the advantage of parallelization of the PixelCNN over the PixelRNN is only available during training or during evaluation of test images. The image generation process is sequential for both kinds of networks, as each sampled pixel needs to be given as input back into the network.
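A toy, NumPy-only sketch of a PixelCNN-style stack of masked convolutions. Layer counts and widths are illustrative, and the per-colour channel masking from the earlier sketch is omitted here for brevity.

```python
import numpy as np

def spatial_mask(k, centre_allowed):
    # Spatial part of masks A/B: only pixels above and to the left are visible;
    # `centre_allowed` distinguishes mask B (True) from mask A (False).
    m = np.ones((k, k), dtype=np.float32)
    c = k // 2
    m[c + 1:, :] = 0.0
    m[c, c + 1:] = 0.0
    if not centre_allowed:
        m[c, c] = 0.0
    return m

def masked_conv(w, mask, x):
    # "same" convolution with masked weights: w (out, in, k, k), x (in, n, n).
    out_ch, in_ch, k, _ = w.shape
    n = x.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((out_ch, n, n))
    wm = w * mask
    for dy in range(k):
        for dx in range(k):
            out += np.einsum('oi,ixy->oxy', wm[:, :, dy, dx], xp[:, dy:dy + n, dx:dx + n])
    return out

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
n, width = 8, 16
x = rng.normal(size=(3, n, n))                                   # RGB input
h = relu(masked_conv(rng.normal(scale=0.1, size=(width, 3, 7, 7)), spatial_mask(7, False), x))
for _ in range(3):                                               # a few 3x3 mask-B layers
    h = relu(masked_conv(rng.normal(scale=0.1, size=(width, width, 3, 3)), spatial_mask(3, True), h))
for _ in range(2):                                               # ReLU + 1x1 conv, mask B
    h = relu(masked_conv(rng.normal(scale=0.1, size=(width, width, 1, 1)), spatial_mask(1, True), h))
logits = masked_conv(rng.normal(scale=0.1, size=(3 * 256, width, 1, 1)), spatial_mask(1, True), h)
print(logits.shape)   # (768, 8, 8): 256 scores for each of R, G, B at every position
```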
1601.06759 | 23 | Table 1 specifies each layer in the single-scale networks. The first layer is a 7 × 7 convolution that uses the mask of type A. The two types of LSTM networks then use a variable number of recurrent layers. The input-to-state convolution in this layer uses a mask of type B, whereas the state-to-state convolution is not masked. The PixelCNN uses convolutions of size 3 × 3 with a mask of type B. The top feature map is then passed through a couple of layers consisting of a Rectified Linear Unit (ReLU) and a 1 × 1 convolution. For the CIFAR-10 and ImageNet experiments, these layers have 1024 feature maps; for the MNIST experiment, the layers have 32 feature maps. Residual and layer-to-output connections are used across the layers of all three networks.
1601.06759 | 24 | The networks used in the experiments have the following hyperparameters. For MNIST we use a Diagonal BiLSTM with 7 layers and a value of h = 16 (Section 3.3 and Figure 5 right). For CIFAR-10 the Row and Diagonal BiLSTMs have 12 layers and a number of h = 128 units. The PixelCNN has 15 layers and h = 128. For 32 × 32 ImageNet we adopt a 12 layer Row LSTM with h = 384 units and for 64 × 64 ImageNet we use a 4 layer Row LSTM with h = 512 units; the latter model does not use residual connections.
# 3.6. Multi-Scale PixelRNN
The Multi-Scale PixelRNN is composed of an unconditional PixelRNN and one or more conditional PixelRNNs. The unconditional network first generates in the standard way a smaller s × s image that is subsampled from the original image. The conditional network then takes the s × s image as an additional input and generates a larger n × n image, as shown in Figure 2 (Middle).
# 5. Experiments
1601.06759 | 25 | In this section we describe our experiments and results. We begin by describing the way we evaluate and compare our results. In Section 5.2 we give details about the training. Then we give results on the relative effectiveness of architectural components and our best results on the MNIST, CIFAR-10 and ImageNet datasets.
The conditional network is similar to a standard PixelRNN, but each of its layers is biased with an upsampled version of the small s × s image. The upsampling and biasing processes are defined as follows. In the upsampling process, one uses a convolutional network with deconvolutional layers to construct an enlarged feature map of size c × n × n, where c is the number of features in the output map of the upsampling network.
# 5.1. Evaluation
All our models are trained and evaluated on the log-likelihood loss function coming from a discrete distribution. Although natural image data is usually modeled with continuous distributions using density functions, we can compare our results with previous art in the following way.
1601.06759 | 26 | Pixel Recurrent Neural Networks
In the literature it is currently best practice to add real-valued noise to the pixel values to dequantize the data when using density functions (Uria et al., 2013). When uniform noise is added (with values in the interval [0, 1]), then the log-likelihoods of continuous and discrete models are directly comparable (Theis et al., 2015). In our case, we can use the values from the discrete distribution as a piecewise-uniform continuous function that has a constant value for every interval [i, i + 1], i = 1, 2, . . . 256. This corresponding distribution will have the same log-likelihood (on data with added noise) as the original discrete distribution (on discrete data).
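The equivalence can be checked numerically with a few lines of numpy; the toy categorical distribution below is an arbitrary stand-in for the model's per-pixel softmax, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete distribution over the 256 pixel values (illustrative only).
p = rng.random(256)
p /= p.sum()

x = rng.integers(0, 256, size=10_000)   # integer "pixel" samples
u = rng.random(x.shape)                  # uniform dequantization noise in [0, 1)

# Discrete log-likelihood of the integer data ...
ll_discrete = np.log(p[x]).mean()
# ... equals the continuous log-likelihood of the piecewise-uniform density
# (constant value p[i] on each unit interval) evaluated at the dequantized points.
ll_continuous = np.log(p[np.floor(x + u).astype(int)]).mean()

assert np.allclose(ll_discrete, ll_continuous)
```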
In Figure 6 we show a few softmax activations from the model. Although we donât embed prior information about the meaning or relations of the 256 color categories, e.g. that pixel values 51 and 52 are neighbors, the distributions predicted by the model are meaningful and can be multi- modal, skewed, peaked or long tailed. Also note that values 0 and 255 often get a much higher probability as they are more frequent. Another advantage of the discrete distribu- tion is that we do not worry about parts of the distribution mass lying outside the interval [0, 255], which is something that typically happens with continuous distributions. | 1601.06759#26 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 27 | For MNIST we report the negative log-likelihood in nats as it is common practice in literature. For CIFAR-10 and ImageNet we report negative log-likelihoods in bits per di- mension. The total discrete log-likelihood is normalized by the dimensionality of the images (e.g., 32 Ã 32 Ã 3 = 3072 for CIFAR-10). These numbers are interpretable as the number of bits that a compression scheme based on this model would need to compress every RGB color value (van den Oord & Schrauwen, 2014b; Theis et al., 2015); in practice there is also a small overhead due to arithmetic coding.
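The bits-per-dimension numbers reported for CIFAR-10 and ImageNet are just the total discrete negative log-likelihood in nats divided by the number of RGB values and by ln 2; a small sketch of that conversion (with an illustrative NLL value, not a reported one) is shown below.

```python
import numpy as np

def bits_per_dim(total_nll_nats, image_shape=(32, 32, 3)):
    """Convert a per-image NLL in nats into bits per dimension, normalizing by the
    number of RGB values (e.g. 32 * 32 * 3 = 3072 for CIFAR-10)."""
    num_dims = np.prod(image_shape)
    return total_nll_nats / (num_dims * np.log(2.0))

# e.g. a hypothetical CIFAR-10 image with a total NLL of 6390 nats -> about 3.0 bits/dim
print(round(bits_per_dim(6390.0), 2))
```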
# 5.2. Training Details | 1601.06759#27 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 28 | # 5.2. Training Details
Our models are trained on GPUs using the Torch toolbox. From the different parameter update rules tried, RMSProp gives best convergence performance and is used for all experiments. The learning rate schedules were manually set for every dataset to the highest values that allowed fast convergence. The batch sizes also vary for different datasets. For smaller datasets such as MNIST and CIFAR-10 we use smaller batch sizes of 16 images as this seems to regularize the models. For ImageNet we use as large a batch size as allowed by the GPU memory; this corresponds to 64 images/batch for 32 × 32 ImageNet, and 32 images/batch for 64 × 64 ImageNet. Apart from scaling and centering the images at the input of the network, we don't use any other preprocessing or augmentation. For the multinomial loss function we use the raw pixel color values as categories. For all the PixelRNN models, we learn the initial recurrent state of the network.
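A hedged sketch of the input pipeline implied by the paragraph above: the exact scaling/centering constants are not given in the text, so the [-1, 1] mapping here is an assumption; the targets, however, are the raw 0–255 colour values used as softmax categories, as stated.

```python
import numpy as np

def preprocess(images_uint8):
    """Scale and centre the inputs (assumed [-1, 1] range) and keep the raw
    0-255 intensities as the multinomial targets; no other augmentation."""
    x = images_uint8.astype(np.float32) / 255.0 * 2.0 - 1.0
    targets = images_uint8.astype(np.int64)
    return x, targets
```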
Figure 6. Example softmax activations from the model. The top left shows the distribution of the first pixel red value (first value to sample).
# 5.4. Residual Connections | 1601.06759#28 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 29 | # 5.4. Residual Connections
Another core component of the networks is residual connections. In Table 2 we show the results of having residual connections, having standard skip connections or having both, in the 12-layer CIFAR-10 Row LSTM model. We see that using residual connections is as effective as using skip connections; using both is also effective and preserves the advantage.
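The wiring being compared can be sketched as follows; the 1×1 convolutions merely stand in for the Row LSTM layers (which are not reproduced here), so this is a generic illustration of residual-plus-skip connections rather than the paper's exact network.

```python
import torch
import torch.nn as nn

class ResidualStack(nn.Module):
    """Residual and skip connections around a stack of layers: each layer's input is
    added back to its output (residual), and every layer's output also feeds the
    final representation directly (skip)."""
    def __init__(self, channels=128, num_layers=12):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1) for _ in range(num_layers))

    def forward(self, x):
        h, skip_sum = x, 0
        for layer in self.layers:
            out = torch.relu(layer(h))
            h = h + out                 # residual connection
            skip_sum = skip_sum + out   # skip connection to the output
        return h, skip_sum
```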
# 5.3. Discrete Softmax Distribution
Apart from being intuitive and easy to implement, we find that using a softmax on discrete pixel values instead of a mixture density approach on continuous pixel values gives better results. For the Row LSTM model with a softmax output distribution we obtain 3.06 bits/dim on the CIFAR-10 validation set. For the same model with a Mixture of Conditional Gaussian Scale Mixtures (MCGSM) (Theis & Bethge, 2015) we obtain 3.22 bits/dim.
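The discrete output head amounts to a 256-way classification problem per colour value; a minimal sketch is below, with random tensors standing in for real activations and targets (shapes are illustrative assumptions).

```python
import torch
import torch.nn.functional as F

# 256 logits per RGB value, trained with cross-entropy against the raw 0-255 intensities.
batch, channels, height, width = 4, 3, 32, 32
logits = torch.randn(batch, 256, channels, height, width, requires_grad=True)
targets = torch.randint(0, 256, (batch, channels, height, width))

nll_nats = F.cross_entropy(logits, targets)            # mean NLL per colour value, in nats
bits_per_dim = nll_nats / torch.log(torch.tensor(2.0)) # same quantity in bits
nll_nats.backward()
```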
              No skip   Skip
No residual:  3.22      3.09
Residual:     3.07      3.06
Table 2. Effect of residual and skip connections in the Row LSTM network evaluated on the CIFAR-10 validation set in bits/dim.
When using both the residual and skip connections, we see in Table 3 that performance of the Row LSTM improves with increased depth. This holds for up to the 12 LSTM layers that we tried. | 1601.06759#29 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 31 | Figure 7. Samples from models trained on CIFAR-10 (left) and ImageNet 32x32 (right) images. In general we can see that the models capture local spatial dependencies relatively well. The ImageNet model seems to be better at capturing more global structures than the CIFAR-10 model. The ImageNet model was larger and trained on much more data, which explains the qualitative difference in samples.
# layers:   1      2      3      6      9      12
NLL:        3.30   3.20   3.17   3.09   3.08   3.06
Table 3. Effect of the number of layers on the negative log likelihood evaluated on the CIFAR-10 validation set (bits/dim).
# 5.5. MNIST
Although the goal of our work was to model natural images on a large scale, we also tried our model on the binary ver- sion (Salakhutdinov & Murray, 2008) of MNIST (LeCun et al., 1998) as it is a good sanity check and there is a lot of previous art on this dataset to compare with. In Table 4 we report the performance of the Diagonal BiLSTM model and that of previous published results. To our knowledge this is the best reported result on MNIST so far. | 1601.06759#31 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 32 | Model NLL Test DBM 2hl [1]: DBN 2hl [2]: NADE [3]: EoNADE 2hl (128 orderings) [3]: EoNADE-5 2hl (128 orderings) [4]: DLGM [5]: DLGM 8 leapfrog steps [6]: DARN 1hl [7]: MADE 2hl (32 masks) [8]: DRAW [9]: PixelCNN: Row LSTM: Diagonal BiLSTM (1 layer, h = 32): Diagonal BiLSTM (7 layers, h = 16): â 84.62 â 84.55 88.33 85.10 84.68 â 86.60 â 85.51 â 84.13 86.64 ⤠80.97 81.30 80.54 80.75 79.20
# 5.6. CIFAR-10 | 1601.06759#32 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 33 | # 5.6. CIFAR-10
Next we test our models on the CIFAR-10 dataset (Krizhevsky, 2009). Table 5 lists the results of our mod- els and that of previously published approaches. All our results were obtained without data augmentation. For the proposed networks, the Diagonal BiLSTM has the best performance, followed by the Row LSTM and the Pixel- CNN. This coincides with the size of the respective recep- tive ï¬elds: the Diagonal BiLSTM has a global view, the Row LSTM has a partially occluded view and the Pixel- CNN sees the fewest pixels in the context. This suggests that effectively capturing a large receptive ï¬eld is impor- tant. Figure 7 (left) shows CIFAR-10 samples generated | 1601.06759#33 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 34 | Table 4. Test set performance of different models on MNIST in nats (negative log-likelihood). Prior results taken from [1] (Salakhutdinov & Hinton, 2009), [2] (Murray & Salakhutdinov, 2009), [3] (Uria et al., 2014), [4] (Raiko et al., 2014), [5] (Rezende et al., 2014), [6] (Salimans et al., 2015), [7] (Gregor et al., 2014), [8] (Germain et al., 2015), [9] (Gregor et al., 2015).
from the Diagonal BiLSTM.
# 5.7. ImageNet
Although to our knowledge there are no published results on the ILSVRC ImageNet dataset (Russakovsky et al., 2015) that we can compare our models with, we give our Ima- | 1601.06759#34 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 35 | Figure 8. Samples from models trained on ImageNet 64x64 images. Left: normal model, right: multi-scale model. The single-scale model trained on 64x64 images is less able to capture global structure than the 32x32 model. The multi-scale model seems to resolve this problem. Although these models get similar performance in log-likelihood, the samples on the right do seem globally more coherent.
Model                      NLL Test (Train)
Uniform Distribution:      8.00
Multivariate Gaussian:     4.70
NICE [1]:                  4.48
Deep Diffusion [2]:        4.20
Deep GMMs [3]:             4.00
RIDE [4]:                  3.47
PixelCNN:                  3.14 (3.08)
Row LSTM:                  3.07 (3.00)
Diagonal BiLSTM:           3.00 (2.93)
(Figure 9 panel labels: occluded, completions, original) | 1601.06759#35 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 36 | occluded
# completions
# original
Table 5. Test set performance of different models on CIFAR-10 in bits/dim. For our models we give training performance in brack- ets. [1] (Dinh et al., 2014), [2] (Sohl-Dickstein et al., 2015), [3] (van den Oord & Schrauwen, 2014a), [4] personal communication (Theis & Bethge, 2015).
Image size    NLL Validation (Train)
32x32:        3.86 (3.83)
64x64:        3.63 (3.57)
Figure 9. Image completions sampled from a model that was trained on 32x32 ImageNet images. Note that diversity of the completions is high, which can be attributed to the log-likelihood loss function used in this generative model, as it encourages models with high entropy. As these are sampled from the model, we can easily generate millions of different completions. It is also interesting to see that textures such as water, wood and shrubbery are also imputed relatively well (see Figure 1).
Table 6. Negative log-likelihood performance on 32Ã32 and 64Ã 64 ImageNet in bits/dim. | 1601.06759#36 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 37 | Table 6. Negative log-likelihood performance on 32Ã32 and 64Ã 64 ImageNet in bits/dim.
geNet log-likelihood performance in Table 6 (without data augmentation). On ImageNet the current PixelRNNs do not appear to overfit, as we saw that their validation performance improved with size and depth. The main constraints on model size are currently computation time and GPU memory.
Note that the ImageNet models are in general less compressible than the CIFAR-10 images. ImageNet has a greater variety of images, and the CIFAR-10 images were most
likely resized with a different algorithm than the one we used for ImageNet images. The ImageNet images are less blurry, which means neighboring pixels are less correlated to each other and thus less predictable. Because the down- sampling method can inï¬uence the compression perfor- mance, we have made the used downsampled images avail- able1.
Figure 7 (right) shows 32 × 32 samples drawn from our model trained on ImageNet. Figure 8 shows 64 × 64 samples from the same model with and without multi-scale
1http://image-net.org/small/download.php
conditioning. Finally, we also show image completions sampled from the model in Figure 9.
# 6. Conclusion | 1601.06759#37 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 38 | Pixel Recurrent Neural Networks
conditioning. Finally, we also show image completions sampled from the model in Figure 9.
# 6. Conclusion
Graves, Alex and Schmidhuber, J¨urgen. Ofï¬ine handwrit- ing recognition with multidimensional recurrent neural networks. In Advances in Neural Information Process- ing Systems, 2009.
In this paper we significantly improve and build upon deep recurrent neural networks as generative models for natural images. We have described novel two-dimensional LSTM layers: the Row LSTM and the Diagonal BiLSTM, that scale more easily to larger datasets. The models were trained to model the raw RGB pixel values. We treated the pixel values as discrete random variables by using a softmax layer in the conditional distributions. We employed masked convolutions to allow PixelRNNs to model full dependencies between the color channels. We proposed and evaluated architectural improvements in these models resulting in PixelRNNs with up to 12 LSTM layers.
Gregor, Karol, Danihelka, Ivo, Mnih, Andriy, Blundell, Charles, and Wierstra, Daan. Deep autoregressive net- works. In Proceedings of the 31st International Confer- ence on Machine Learning, 2014. | 1601.06759#38 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 39 | Gregor, Karol, Danihelka, Ivo, Graves, Alex, and Wierstra, Daan. DRAW: A recurrent neural network for image generation. Proceedings of the 32nd International Con- ference on Machine Learning, 2015.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
We have shown that the PixelRNNs significantly improve the state of the art on the MNIST and CIFAR-10 datasets. We also provide new benchmarks for generative image modeling on the ImageNet dataset. Based on the samples and completions drawn from the models we can conclude that the PixelRNNs are able to model both spatially local and long-range correlations and are able to produce images that are sharp and coherent. Given that these models improve as we make them larger and that there is practically unlimited data available to train on, more computation and larger models are likely to further improve the results.
Hochreiter, Sepp and Schmidhuber, J¨urgen. Long short- term memory. Neural computation, 1997. | 1601.06759#39 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 40 | Hochreiter, Sepp and Schmidhuber, J¨urgen. Long short- term memory. Neural computation, 1997.
Kalchbrenner, Nal and Blunsom, Phil. Recurrent continu- ous translation models. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Pro- cessing, 2013.
Kalchbrenner, Nal, Danihelka, Ivo, and Graves, Alex. arXiv preprint Grid long short-term memory. arXiv:1507.01526, 2015.
# Acknowledgements
The authors would like to thank Shakir Mohamed and Guil- laume Desjardins for helpful input on this paper and Lu- cas Theis, Alex Graves, Karen Simonyan, Lasse Espeholt, Danilo Rezende, Karol Gregor and Ivo Danihelka for in- sightful discussions.
# References
Kingma, Diederik P and Welling, Max. Auto-encoding arXiv preprint arXiv:1312.6114, variational bayes. 2013.
Krizhevsky, Alex. Learning multiple layers of features from tiny images. 2009.
Larochelle, Hugo and Murray, Iain. The neural autore- gressive distribution estimator. The Journal of Machine Learning Research, 2011. | 1601.06759#40 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 41 | Larochelle, Hugo and Murray, Iain. The neural autore- gressive distribution estimator. The Journal of Machine Learning Research, 2011.
Bengio, Yoshua and Bengio, Samy. Modeling high- dimensional discrete data with multi-layer neural net- works. pp. 400â406. MIT Press, 2000.
LeCun, Yann, Bottou, L´eon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Dinh, Laurent, Krueger, David, and Bengio, Yoshua. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
Evaluat- ing probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems, 2009.
Iain, and Larochelle, Hugo. MADE: Masked autoencoder for dis- tribution estimation. arXiv preprint arXiv:1502.03509, 2015.
Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
Neal, Radford M. Connectionist learning of belief net- works. Artiï¬cial intelligence, 1992. | 1601.06759#41 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 42 | Neal, Radford M. Connectionist learning of belief net- works. Artiï¬cial intelligence, 1992.
Raiko, Tapani, Li, Yao, Cho, Kyunghyun, and Bengio, Yoshua. Iterative neural autoregressive distribution es- In Advances in Neural Information timator NADE-k. Processing Systems, 2014.
Rezende, Danilo J, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference In Proceedings of the 31st in deep generative models. International Conference on Machine Learning, 2014.
Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpa- thy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015.
the 31st International Conference on Machine Learning, 2014.
van den Oord, A¨aron and Schrauwen, Benjamin. Factoring variations in natural images with deep gaussian mixture models. In Advances in Neural Information Processing Systems, 2014a. | 1601.06759#42 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 43 | van den Oord, A¨aron and Schrauwen, Benjamin. The student-t mixture as a natural image patch prior with ap- plication to image compression. The Journal of Machine Learning Research, 2014b.
Salakhutdinov, Ruslan and Hinton, Geoffrey E. Deep boltz- mann machines. In International Conference on Artiï¬- cial Intelligence and Statistics, 2009.
Salakhutdinov, Ruslan and Murray, Iain. On the quantita- tive analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning, 2008.
Zhang, Yu, Chen, Guoguo, Yu, Dong, Yao, Kaisheng, Khu- danpur, Sanjeev, and Glass, James. Highway long short- term memory RNNs for distant speech recognition. In Proceedings of the International Conference on Acous- tics, Speech and Signal Processing, 2016.
Salimans, Tim, Kingma, Diederik P, and Welling, Max. Markov chain monte carlo and variational inference: Bridging the gap. Proceedings of the 32nd International Conference on Machine Learning, 2015. | 1601.06759#43 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 44 | Sohl-Dickstein, Jascha, Weiss, Eric A., Maheswaranathan, Niru, and Ganguli, Surya. Deep unsupervised learning using nonequilibrium thermodynamics. Proceedings of the 32nd International Conference on Machine Learn- ing, 2015.
Stollenga, Marijn F, Byeon, Wonmin, Liwicki, Marcus, and Schmidhuber, Juergen. Parallel multi-dimensional lstm, with application to fast biomedical volumetric im- In Advances in Neural Information age segmentation. Processing Systems 28. 2015.
Sutskever, Ilya, Martens, James, and Hinton, Geoffrey E. Generating text with recurrent neural networks. In Pro- ceedings of the 28th International Conference on Ma- chine Learning, 2011.
Theis, Lucas and Bethge, Matthias. Generative image mod- eling using spatial LSTMs. In Advances in Neural Infor- mation Processing Systems, 2015.
Theis, Lucas, van den Oord, A¨aron, and Bethge, Matthias. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
Iain, and Larochelle, Hugo. RNADE: The real-valued neural autoregressive density- estimator. In Advances in Neural Information Processing Systems, 2013. | 1601.06759#44 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06759 | 45 | Iain, and Larochelle, Hugo. RNADE: The real-valued neural autoregressive density- estimator. In Advances in Neural Information Processing Systems, 2013.
Uria, Benigno, Murray, Iain, and Larochelle, Hugo. A deep and tractable density estimator. In Proceedings of
Figure 10. Additional samples from a model trained on ImageNet 32x32 (right) images. | 1601.06759#45 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 | [
{
"id": "1511.01844"
},
{
"id": "1502.03509"
},
{
"id": "1507.01526"
},
{
"id": "1512.03385"
}
] |
1601.06071 | 1 | [email protected]
# Abstract
Based on the assumption that there exists a neu- ral network that efï¬ciently represents a set of Boolean functions between all binary inputs and outputs, we propose a process for developing and deploying neural networks whose weight param- eters, bias terms, input, and intermediate hid- den layer output signals, are all binary-valued, and require only basic bit logic for the feedfor- ward pass. The proposed Bitwise Neural Net- work (BNN) is especially suitable for resource- constrained environments, since it replaces ei- ther ï¬oating or ï¬xed-point arithmetic with signif- icantly more efï¬cient bitwise operations. Hence, the BNN requires for less spatial complexity, less memory bandwidth, and less power consumption in hardware. In order to design such networks, we propose to add a few training schemes, such as weight compression and noisy backpropaga- tion, which result in a bitwise network that per- forms almost as well as its corresponding real- valued network. We test the proposed network on the MNIST dataset, represented using binary features, and show that BNNs result in compet- itive performance while offering dramatic com- putational savings. | 1601.06071#1 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
represents a set of Boolean functions between all binary inputs and outputs, we
propose a process for developing and deploying neural networks whose weight
parameters, bias terms, input, and intermediate hidden layer output signals,
are all binary-valued, and require only basic bit logic for the feedforward
pass. The proposed Bitwise Neural Network (BNN) is especially suitable for
resource-constrained environments, since it replaces either floating or
fixed-point arithmetic with significantly more efficient bitwise operations.
Hence, the BNN requires for less spatial complexity, less memory bandwidth, and
less power consumption in hardware. In order to design such networks, we
propose to add a few training schemes, such as weight compression and noisy
backpropagation, which result in a bitwise network that performs almost as well
as its corresponding real-valued network. We test the proposed network on the
MNIST dataset, represented using binary features, and show that BNNs result in
competitive performance while offering dramatic computational savings. | http://arxiv.org/pdf/1601.06071 | Minje Kim, Paris Smaragdis | cs.LG, cs.AI, cs.NE | This paper was presented at the International Conference on Machine
Learning (ICML) Workshop on Resource-Efficient Machine Learning, Lille,
France, Jul. 6-11, 2015 | International Conference on Machine Learning (ICML) Workshop on
Resource-Efficient Machine Learning, Lille, France, Jul. 6-11, 2015 | cs.LG | 20160122 | 20160122 | [] |
1601.06071 | 3 | Although DNNs are extending the state of the art results for various tasks, such as image classiï¬cation (Goodfel- low et al., 2013), speech recognition (Hinton et al., 2012), speech enhancement (Xu et al., 2014), etc, it is also the case that the relatively bigger networks with more parame- ters than before call for more resources (processing power, memory, battery time, etc), which are sometimes critically constrained in applications running on embedded devices. Examples of those applications span from context-aware computing, collecting and analysing a variety of sensor sig- nals on the device (Baldauf et al., 2007), to always-on com- puter vision applications (e.g. Google glasses), to speech- driven personal assistant services, such as âHey, Siri.â A primary concern that hinders those applications from be- ing more successful is that they assume an always-on pat- tern recognition engine on the device, which will drain the battery fast unless it is carefully implemented to minimize the use of resources. Additionally, even in an environment with the necessary resources being available, speeding up a DNN can greatly improve the user experience when it comes to tasks like searching big databases (Salakhutdinov & Hinton, 2009). In either case, a more compact yet still well-performing DNN is a welcome improvement.
# 1. Introduction | 1601.06071#3 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
represents a set of Boolean functions between all binary inputs and outputs, we
propose a process for developing and deploying neural networks whose weight
parameters, bias terms, input, and intermediate hidden layer output signals,
are all binary-valued, and require only basic bit logic for the feedforward
pass. The proposed Bitwise Neural Network (BNN) is especially suitable for
resource-constrained environments, since it replaces either floating or
fixed-point arithmetic with significantly more efficient bitwise operations.
Hence, the BNN requires for less spatial complexity, less memory bandwidth, and
less power consumption in hardware. In order to design such networks, we
propose to add a few training schemes, such as weight compression and noisy
backpropagation, which result in a bitwise network that performs almost as well
as its corresponding real-valued network. We test the proposed network on the
MNIST dataset, represented using binary features, and show that BNNs result in
competitive performance while offering dramatic computational savings. | http://arxiv.org/pdf/1601.06071 | Minje Kim, Paris Smaragdis | cs.LG, cs.AI, cs.NE | This paper was presented at the International Conference on Machine
Learning (ICML) Workshop on Resource-Efficient Machine Learning, Lille,
France, Jul. 6-11, 2015 | International Conference on Machine Learning (ICML) Workshop on
Resource-Efficient Machine Learning, Lille, France, Jul. 6-11, 2015 | cs.LG | 20160122 | 20160122 | [] |
1601.06071 | 4 | # 1. Introduction
According to the universal approximation theorem, a single hidden layer with a finite number of units can approximate a continuous function with some mild assumptions (Cybenko, 1989; Hornik, 1991). While this theorem implies a shallow network with a potentially intractable number of hidden units when it comes to modeling a compli-
Efficient computational structures for deploying artificial neural networks have long been studied in the literature. Most of the effort is focused on training networks whose weights can be transformed into some quantized representations with a minimal loss of performance (Fiesler et al., 1990; Hwang & Sung, 2014). They typically use the quantized weights in the feedforward step at every training iteration, so that the trained weights are robust to the known quantization noise caused by a limited precision. It was also shown that 10 bits and 12 bits are enough to represent gradients and store weights for implementing the state-of-the-art maxout networks even for training the network (Courbariaux et al., 2014). However, in those quantized
Bitwise Neural Networks | 1601.06071#4 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
represents a set of Boolean functions between all binary inputs and outputs, we
propose a process for developing and deploying neural networks whose weight
parameters, bias terms, input, and intermediate hidden layer output signals,
are all binary-valued, and require only basic bit logic for the feedforward
pass. The proposed Bitwise Neural Network (BNN) is especially suitable for
resource-constrained environments, since it replaces either floating or
fixed-point arithmetic with significantly more efficient bitwise operations.
Hence, the BNN requires for less spatial complexity, less memory bandwidth, and
less power consumption in hardware. In order to design such networks, we
propose to add a few training schemes, such as weight compression and noisy
backpropagation, which result in a bitwise network that performs almost as well
as its corresponding real-valued network. We test the proposed network on the
MNIST dataset, represented using binary features, and show that BNNs result in
competitive performance while offering dramatic computational savings. | http://arxiv.org/pdf/1601.06071 | Minje Kim, Paris Smaragdis | cs.LG, cs.AI, cs.NE | This paper was presented at the International Conference on Machine
Learning (ICML) Workshop on Resource-Efficient Machine Learning, Lille,
France, Jul. 6-11, 2015 | International Conference on Machine Learning (ICML) Workshop on
Resource-Efficient Machine Learning, Lille, France, Jul. 6-11, 2015 | cs.LG | 20160122 | 20160122 | [] |
1601.06071 | 6 | With the proposed Bitwise Neural Networks (BNN), we take a more extreme view that every input node, output node, and weight, is represented by a single bit. For ex- ample, a weight matrix between two hidden layers of 1024 units is a 1024 à 1025 matrix of binary values rather than quantized real values (including the bias). Although learn- ing those bitwise weights as a Boolean concept is an NP- complete problem (Pitt & Valiant, 1988), the bitwise net- works have been studied in the limited setting, such as µ- perceptron networks where an input node is allowed to be connected to one and only one hidden node and its ï¬nal layer is a union of those hidden nodes (Golea et al., 1992). A more practical network was proposed in (Soudry et al., 2014) recently, where the posterior probabilities of the bi- nary weights were sought using the Expectation Back Prop- agation (EBP) scheme, which is similar to backpropagation in its form, but has some advantages, such as parameter- free learning and a straightforward discretization of the weights. Its promising results on binary text classiï¬cation tasks however, rely on the real-valued bias terms and aver- aging of predictions from differently sampled parameters.
# 2. Feedforward in Bitwise Neural Networks | 1601.06071#6 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
represents a set of Boolean functions between all binary inputs and outputs, we
propose a process for developing and deploying neural networks whose weight
parameters, bias terms, input, and intermediate hidden layer output signals,
are all binary-valued, and require only basic bit logic for the feedforward
pass. The proposed Bitwise Neural Network (BNN) is especially suitable for
resource-constrained environments, since it replaces either floating or
fixed-point arithmetic with significantly more efficient bitwise operations.
Hence, the BNN requires for less spatial complexity, less memory bandwidth, and
less power consumption in hardware. In order to design such networks, we
propose to add a few training schemes, such as weight compression and noisy
backpropagation, which result in a bitwise network that performs almost as well
as its corresponding real-valued network. We test the proposed network on the
MNIST dataset, represented using binary features, and show that BNNs result in
competitive performance while offering dramatic computational savings. | http://arxiv.org/pdf/1601.06071 | Minje Kim, Paris Smaragdis | cs.LG, cs.AI, cs.NE | This paper was presented at the International Conference on Machine
Learning (ICML) Workshop on Resource-Efficient Machine Learning, Lille,
France, Jul. 6-11, 2015 | International Conference on Machine Learning (ICML) Workshop on
Resource-Efficient Machine Learning, Lille, France, Jul. 6-11, 2015 | cs.LG | 20160122 | 20160122 | [] |
1601.06071 | 7 | # 2. Feedforward in Bitwise Neural Networks
It has long been known that any Boolean function, which takes binary values as input and produces binary outputs as well, can be represented as a bitwise network with one hidden layer (McCulloch & Pitts, 1943), for example, by merely memorizing all the possible mappings between input and output patterns. We define the forward propagation procedure as follows based on the assumption that we have trained such a network with bipolar binary parameters:
$$a_i^l = b_i^l + \sum_{j}^{K^{l-1}} w_{ij}^l \otimes z_j^{l-1}, \qquad (1)$$

$$z_i^l = \mathrm{sign}\big(a_i^l\big), \qquad (2)$$

$$z^l \in \mathbb{B}^{K^l}, \quad W^l \in \mathbb{B}^{K^l \times K^{l-1}}, \quad b^l \in \mathbb{B}^{K^l}, \qquad (3)$$
where B is the set of bipolar binaries, i.e. ±11, and â stands for the bitwise XNOR operation (see Figure 1 (a)). l, j, and i indicate a layer, input and output units of the layer, respec- tively. We use bold characters for a vector (or a matrix if capicalized). K l is the number of input units at l-th layer. Therefore, z0 equals to an input vector, where we omit the sample index for the notational convenience. We use the sign activation function to generate the bipolar outputs. | 1601.06071#7 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
represents a set of Boolean functions between all binary inputs and outputs, we
propose a process for developing and deploying neural networks whose weight
parameters, bias terms, input, and intermediate hidden layer output signals,
are all binary-valued, and require only basic bit logic for the feedforward
pass. The proposed Bitwise Neural Network (BNN) is especially suitable for
resource-constrained environments, since it replaces either floating or
fixed-point arithmetic with significantly more efficient bitwise operations.
Hence, the BNN requires for less spatial complexity, less memory bandwidth, and
less power consumption in hardware. In order to design such networks, we
propose to add a few training schemes, such as weight compression and noisy
backpropagation, which result in a bitwise network that performs almost as well
as its corresponding real-valued network. We test the proposed network on the
MNIST dataset, represented using binary features, and show that BNNs result in
competitive performance while offering dramatic computational savings. | http://arxiv.org/pdf/1601.06071 | Minje Kim, Paris Smaragdis | cs.LG, cs.AI, cs.NE | This paper was presented at the International Conference on Machine
Learning (ICML) Workshop on Resource-Efficient Machine Learning, Lille,
France, Jul. 6-11, 2015 | International Conference on Machine Learning (ICML) Workshop on
Resource-Efficient Machine Learning, Lille, France, Jul. 6-11, 2015 | cs.LG | 20160122 | 20160122 | [] |
1601.06071 | 8 | This paper presents a completely bitwise network where all participating variables are bipolar binaries. Therefore, in its feedforward only XNOR and bit counting operations are used instead of multiplication, addition, and a nonlinear activation on ï¬oating or ï¬xed-point variables. For training, we propose a two-stage approach, whose ï¬rst part is typical network training with a weight compression technique that helps the real-valued model to easily be converted into a BNN. To train the actual BNN, we use those compressed weights to initialize the BNN parameters, and do noisy backpropagation based on the tentative bitwise parameters. To binarize the input signals, we can adapt any binariza- tion techniques, e.g. ï¬xed-point representations and hash codes. Regardless of the binarization scheme, each input node is given only a single bit at a time, as opposed to a bit packet representing a ï¬xed-point number. This is signiï¬- cantly different from the networks with quantized inputs, where a real-valued signal is quantized into a set of bits, and then all those bits are fed to an input node in place of their corresponding single real value. | 1601.06071#8 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
1601.06071 | 9 | where a real-valued signal is quantized into a set of bits, and then all those bits are fed to an input node in place of their corresponding single real value. Lastly, we apply the sign function as our activation function instead of a sig- moid to make sure the input to the next layer is bipolar bi- nary as well. We compare the performance of the proposed BNN with its corresponding ordinary real-valued networks on hand-written digit recognition tasks, and show that the bitwise operations can do the job with a very small perfor- mance loss, while providing a large margin of improvement in terms of the necessary computational resources. | 1601.06071#9 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
1601.06071 | 10 | We can check the prediction error E by measuring the bit- wise agreement of target vector t and the output units of L-th layer using XNOR as a multiplication operator,
E = \sum_{i}^{K^{L+1}} \bigl(1 - t_i \odot z_i^{L+1}\bigr)/2, \quad (4)
but this error function can be tentatively replaced by involving a softmax layer during the training phase.
The XNOR operation is a faster substitute for binary multiplication. Therefore, (1) and (2) can be seen as a special version of the ordinary feedforward step that only works when the inputs, weights, and bias are all bipolar binaries. Note that these bipolar bits will in practice be implemented using 0/1 binary values, where the activation in (2) is equivalent to counting the number of 1's and then checking whether the accumulation is larger than half of the number of input units plus 1. With no loss of generality, in this paper we will use the ±1 bipolar representation since it is more flexible in defining hyperplanes and examining the network behavior. | 1601.06071#10 | Bitwise Neural Networks |
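The remark that the bipolar product is just XNOR plus bit counting can be checked with a small sketch (the 0/1 packing below is an illustrative choice):

```python
import numpy as np

def bipolar_dot(w, z):
    return int(np.dot(w, z))                          # w, z entries in {-1, +1}

def xnor_popcount(w01, z01):
    agree = np.logical_not(np.logical_xor(w01, z01))  # XNOR of the 0/1 encodings
    k = w01.size
    return 2 * int(agree.sum()) - k                   # agreements minus disagreements

rng = np.random.default_rng(1)
w = rng.choice([-1, 1], size=8)
z = rng.choice([-1, 1], size=8)
w01, z01 = (w > 0), (z > 0)                           # map -1/+1 to 0/1
assert bipolar_dot(w, z) == xnor_popcount(w01, z01)
```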
1601.06071 | 11 | Sometimes a BNN can solve the same problem as a real- valued network without any size modiï¬cations, but in gen- eral we should expect that a BNN could require larger net- work structures than a real-valued one. For example, the XOR problem in Figure 1 (b) can have an inï¬nite num- ber of solutions with real-valued parameters once a pair
1 In the bipolar binary representation, +1 stands for the "TRUE" status, while −1 is for "FALSE."
Figure 1. (a) An XNOR table. (b) The XOR problem that needs two hyperplanes. (c) A multi-layer perceptron that solves the XOR problem. (d) A linearly separable problem for which bitwise networks nevertheless need two hyperplanes (y = x_2). (e) A bitwise network with zero weights that solves the y = x_2 problem. | 1601.06071#11 | Bitwise Neural Networks |
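As a sanity check of the XOR construction in Figure 1 (c), the two hidden hyperplanes quoted later in the text (x1 − x2 + 1 ≥ 0 and −x1 + x2 + 1 ≥ 0) can be combined by one possible binary output unit chosen here for illustration; this only demonstrates that binary parameters suffice, it is not taken from the paper's figure.

```python
def sign(v):
    return 1 if v >= 0 else -1

def xor_bnn(x1, x2):
    h1 = sign(x1 - x2 + 1)       # first hyperplane from the text
    h2 = sign(-x1 + x2 + 1)      # second hyperplane from the text
    return sign(-h1 - h2 + 1)    # one possible binary output unit

for x1 in (-1, 1):
    for x2 in (-1, 1):
        print((x1, x2), xor_bnn(x1, x2))   # +1 exactly when x1 != x2
```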
1601.06071 | 12 | tions of a real-valued system, such as the power consump- tion of multipliers and adders for the ï¬oating-point opera- tions, various dynamic ranges of the ï¬xed-point representa- tions, erroneous ï¬ips of the most signiï¬cant bits, etc. Note that if the bitwise parameters are sparse, we can further reduce the number of hyperplanes. For example, for an in- active element in the weight matrix W due to the sparsity, we can simply ignore the computation for it similarly to the operations on the sparse representations. Conceptually, we can say that those inactive weights serve as zero weights, so that a BNN can solve the problem in Figure 1 (d) by using only one hyperplane as in (e). From now on, we will use this extended version of BNN with inactive weights, yet there are some cases where BNN needs more hyperplanes than a real-valued network even with the sparsity.
# 3. Training Bitwise Neural Networks
We ï¬rst train some compressed network parameters, and then retrain them using noisy backpropagation for BNNs.
# 3.1. Real-valued Networks with Weight Compression | 1601.06071#12 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
1601.06071 | 13 | # 3.1. Real-valued Networks with Weight Compression
First, we train a real-valued network that takes either bitwise inputs or real-valued inputs ranged between −1 and +1. A special part of this network is that we constrain the weights to have values between −1 and +1 as well by wrapping them with tanh. Similarly, if we choose tanh for the activation, we can say that the network is a relaxed version of the corresponding bipolar BNN. With this weight compression technique, the relaxed forward pass during training is defined as follows:
of hyperplanes can successfully discriminate (1, 1) and (−1, −1) from (1, −1) and (−1, 1). Among all the possible solutions, we can see that binary weights and bias are enough to define the hyperplanes, x_1 − x_2 + 1 > 0 and −x_1 + x_2 + 1 > 0 (dashes). Likewise, the separation performance of the particular BNN defined in (c) has the same classification power once the inputs are binary as well.
a_i^l = \tanh(\bar{b}_i^l) + \sum_{j}^{K^{l-1}} \tanh(\bar{w}_{ij}^l)\, \bar{z}_j^{l-1}, \quad (5)
\bar{z}_i^l = \tanh(a_i^l), \quad (6) | 1601.06071#13 | Bitwise Neural Networks |
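In code, the relaxed forward pass (5)–(6) looks roughly as follows; wrapping the bias with tanh as well is a reading of the gradient formulas below and should be taken as an assumption:

```python
import numpy as np

def compressed_forward(z_prev, W_bar, b_bar):
    """Relaxed forward pass: real-valued parameters squashed into (-1, 1) by tanh,
    so the trained network stays close to a feasible bipolar BNN."""
    a = np.tanh(b_bar) + np.tanh(W_bar) @ z_prev
    return np.tanh(a), a        # keep the pre-activation; backprop reuses it

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=4)          # inputs ranged between -1 and +1
W_bar = rng.normal(size=(3, 4))
b_bar = rng.normal(size=3)
z1, a1 = compressed_forward(x, W_bar, b_bar)
```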
1601.06071 | 14 | a_i^l = \tanh(\bar{b}_i^l) + \sum_{j}^{K^{l-1}} \tanh(\bar{w}_{ij}^l)\, \bar{z}_j^{l-1}, \quad (5)
\bar{z}_i^l = \tanh(a_i^l), \quad (6)
where all the binary values in (1) and (2) are real for the time being: \bar{W}^l \in \mathbb{R}^{K^l \times K^{l-1}}, and \bar{z}^l \in \mathbb{R}^{K^l}. The bars on top of the notations are for the distinction.
Figure 1 (d) shows another example where a BNN requires more hyperplanes than a real-valued network. This linearly separable problem is solvable with only one hyperplane, such as −0.1x_1 + x_2 + 0.5 > 0, but it is impossible to describe such a hyperplane with binary coefficients. We can instead come up with a solution by combining multiple binary hyperplanes that will eventually increase the perceived complexity of the model. However, even with a larger number of nodes, the BNN is not necessarily more complex than the smaller real-valued network. This is because a parameter or a node of a BNN requires only one bit to represent, while a real-valued node generally requires more than that, up to 64 bits. Moreover, the simple XNOR and bit counting operations of the BNN bypass the computational complications of a real-valued system. Weight compression needs some changes in the backpropagation procedure. In a hidden layer we calculate the error, | 1601.06071#14 | Bitwise Neural Networks |
1601.06071 | 15 | \delta_j^l(n) = \Bigl(\sum_{i}^{K^{l+1}} \tanh(\bar{w}_{ij}^{l+1})\, \delta_i^{l+1}(n)\Bigr) \cdot \bigl(1 - \tanh^2(a_j^l)\bigr).
Note that the errors from the next layer are multiplied with the compressed versions of the weights. Hence, the gradients of the parameters in the case of batch learning are
\nabla \bar{w}_{ij}^l = \Bigl(\sum_n \delta_i^l(n)\, \bar{z}_j^{l-1}(n)\Bigr) \cdot \bigl(1 - \tanh^2(\bar{w}_{ij}^l)\bigr), \quad \nabla \bar{b}_i^l = \Bigl(\sum_n \delta_i^l(n)\Bigr) \cdot \bigl(1 - \tanh^2(\bar{b}_i^l)\bigr),
with the additional term from the chain rule on the compressed weights.
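A per-sample sketch of these compressed-weight gradients (the batch-learning version in the text additionally sums over samples n); the function and variable names below are illustrative, not the paper's:

```python
import numpy as np

def hidden_delta(W_bar_next, delta_next, a):
    """Error of a hidden layer: errors from the next layer are multiplied
    by the compressed weights tanh(W_bar_next), then by the tanh derivative."""
    return (np.tanh(W_bar_next).T @ delta_next) * (1.0 - np.tanh(a) ** 2)

def param_grads(delta, z_prev, W_bar, b_bar):
    """Gradients w.r.t. the underlying real-valued parameters; the extra
    (1 - tanh^2) factors are the chain-rule terms through the compression."""
    grad_W = np.outer(delta, z_prev) * (1.0 - np.tanh(W_bar) ** 2)
    grad_b = delta * (1.0 - np.tanh(b_bar) ** 2)
    return grad_W, grad_b
```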
Table 1. Classiï¬cation errors for real-valued and bitwise networks on different types of bitwise features
# 3.2. Training BNN with Noisy Backpropagation | 1601.06071#15 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
1601.06071 | 16 | # 3.2. Training BNN with Noisy Backpropagation
Since we have trained a real-valued network with a proper range of weights, what we do next is to train the actual bitwise network. The training procedure is similar to the ones with quantized weights (Fiesler et al., 1990; Hwang & Sung, 2014), except that the values we deal with are all bits, and the operations on them are bitwise. To this end, we first initialize all the real-valued parameters, \bar{W} and \bar{b}, with the ones learned from the previous section. Then, we set up a sparsity parameter λ which gives the proportion of zeros after the binarization. We then divide the parameters into three groups: +1, 0, or −1. Therefore, λ decides the boundaries β, e.g. w_{ij}^l = −1 if \bar{w}_{ij}^l < −β. Note that the number of zero weights (those with |\bar{w}_{ij}^l| < β) equals λK^l K^{l−1}. The main idea of this second training phase is to feedforward using the binarized weights and the bit operations as in (1) and (2). Then, during noisy backpropagation the errors and gradients are calculated using those binarized weights and signals as well: | 1601.06071#16 | Bitwise Neural Networks |
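The binarization into {−1, 0, +1} driven by the sparsity parameter λ can be sketched as follows; choosing β as the λ-quantile of the absolute weights is one way to match the stated zero-weight count and is an assumption of this sketch:

```python
import numpy as np

def binarize_with_sparsity(W_bar, sparsity):
    """Map real-valued weights to {-1, 0, +1}; beta is chosen so that roughly
    a `sparsity` fraction of the entries falls into the zero group."""
    beta = np.quantile(np.abs(W_bar), sparsity)
    W_bin = np.sign(W_bar)
    W_bin[np.abs(W_bar) < beta] = 0
    return W_bin.astype(int), beta

rng = np.random.default_rng(0)
W_bar = rng.normal(size=(4, 5))
W_bin, beta = binarize_with_sparsity(W_bar, sparsity=0.3)
```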
1601.06071 | 17 | \delta_i^l(n) = \sum_{j}^{K^{l+1}} w_{ji}^{l+1}\, \delta_j^{l+1}(n), \quad \nabla \bar{w}_{ij}^l = \sum_n \delta_i^l(n)\, z_j^{l-1}(n), \quad \nabla \bar{b}_i^l = \sum_n \delta_i^l(n). \quad (7)
| NETWORKS | BIPOLAR | 0 OR 1 | FIXED-POINT (2 BITS) |
| FLOATING-POINT NETWORKS (64 BITS) | 1.17% | 1.32% | 1.36% |
| BNN | 1.33% | 1.36% | 1.47% |
the network suitable for initializing the following bipolar bitwise network. The number of iterations, from 500 to 1,000, was enough to build a baseline. The first row of Table 1 shows the performance of the baseline real-valued network with 64-bit floating-point parameters. As for the input to the real-valued networks, we rescale the pixel intensities into the bipolar range, i.e. from −1 to +1, for the bipolar case (the first column). In the second column, we use the original input between 0 and 1 as it is. For the third column, we encode the four equally spaced regions between 0 and 1 into two bits, and feed each bit into its own input node. Hence, the baseline network for the third input type has 1,568 binary input nodes rather than 784 as in the other cases. | 1601.06071#17 | Bitwise Neural Networks |
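The 2-bit fixed-point input encoding described above (four equally spaced regions of [0, 1], each bit fed to its own input node) can be sketched as:

```python
import numpy as np

def two_bit_encode(pixels):
    """Encode intensities in [0, 1] into 2 bits per pixel (784 -> 1568 inputs);
    for the BNN these 0/1 bits would then be mapped to -1/+1."""
    levels = np.clip((pixels * 4).astype(int), 0, 3)    # region index 0..3
    high, low = levels // 2, levels % 2                 # two bits per pixel
    return np.stack([high, low], axis=-1).reshape(-1)

x = np.array([0.0, 0.2, 0.6, 0.99])
print(two_bit_encode(x))    # -> [0 0 0 0 1 0 1 1]
```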
1601.06071 | 18 | In this way, the gradients and errors properly take the bina- rization of the weights and the signals into account. Since the gradients can get too small to update the binary param- eters W and b, we instead update their corresponding real- valued parameters,
\bar{w}_{ij}^l \leftarrow \bar{w}_{ij}^l - \eta \nabla \bar{w}_{ij}^l, \quad \bar{b}_i^l \leftarrow \bar{b}_i^l - \eta \nabla \bar{b}_i^l, \quad (8)
with η as a learning rate parameter. Finally, at the end of each update we binarize them again with β. We repeat this procedure at every epoch.
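Putting the update (8) and the re-binarization together, one step of the second training stage might look like the following sketch (the naming is illustrative; the gradients are assumed to come from the bitwise forward/backward pass):

```python
import numpy as np

def noisy_backprop_step(W_bar, grad_W, lr, sparsity):
    """Update the underlying real-valued weights, then re-binarize with beta."""
    W_bar = W_bar - lr * grad_W                     # update (8)
    beta = np.quantile(np.abs(W_bar), sparsity)     # boundary from the sparsity
    W_bin = np.sign(W_bar).astype(int)
    W_bin[np.abs(W_bar) < beta] = 0                 # weights in (-beta, beta) -> 0
    return W_bar, W_bin
```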
Once we learn the real-valued parameters, we now train the BNN, but with binarized inputs. For instance, instead of real values between −1 and +1 in the bipolar case, we take their sign as the bipolar binary features. As for the 0/1 binaries, we simply round the pixel intensity. Fixed-point inputs are already binarized. Now we train the new BNN with the noisy backpropagation technique as described in 3.2. The second row of Table 1 shows the BNN results. We see that the bitwise networks perform well with very small additional errors. Note that the performance of the original real-valued dropout network with a similar network topology (logistic units without the max-norm constraint) is 1.35%.
# 4. Experiments | 1601.06071#18 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
1601.06071 | 19 | # 4. Experiments
In this section we go over the details and results of the hand-written digit recognition task on the MNIST data set (LeCun et al., 1998) using the proposed BNN system. Throughout the training, we adopt the softmax output layer for these multiclass classification cases. All the networks have three hidden layers with 1024 units per layer.
From the first round of training, we get a regular dropout network with the same setting suggested in (Srivastava et al., 2014), except for the fact that we used the hyperbolic tangent for both weight compression and activation to make
# 5. Conclusion
In this work we propose a bitwise version of artificial neural networks, where all the inputs, weights, biases, hidden units, and outputs can be represented with single bits and operated on using simple bitwise logic. Such a network is very computationally efficient and can be valuable for resource-constrained situations, particularly in cases where floating-point / fixed-point variables and operations are prohibitively expensive. In the future we plan to investigate a bitwise version of convolutive neural networks, where efficient computing is more desirable.
# References | 1601.06071#19 | Bitwise Neural Networks | Based on the assumption that there exists a neural network that efficiently
1601.06071 | 20 | Bitwise Neural Networks
# References
Baldauf, M., Dustdar, S., and Rosenberg, F. A survey on context-aware systems. International Journal of Ad Hoc and Ubiquitous Computing, 2(4):263–277, January 2007.
McCulloch, W. S. and Pitts, W. H. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4):115–133, 1943.
Pitt, L. and Valiant, L. G. Computational limitations on learning from examples. Journal of the Association for Computing Machinery, 35:965–984, 1988.
Bengio, Y. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
Courbariaux, M., Bengio, Y., and David, J.-P. Low precision arithmetic for deep learning. arXiv preprint arXiv:1412.7024, 2014.
Cybenko, G. Approximations by superpositions of sigmoidal functions. Mathematics of Control, Signals, and Systems, 2(4):303–314, 1989.
Salakhutdinov, R. and Hinton, G. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978, 2009. | 1601.06071#20 | Bitwise Neural Networks |
1601.06071 | 21 | Salakhutdinov, R. and Hinton, G. Semantic hashing. Inter- national Journal of Approximate Reasoning, 50(7):969 â 978, 2009.
Soudry, D., Hubara, I., and Meir, R. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In Advances in Neural Information Processing Systems (NIPS), 2014.
Fiesler, E., Choudry, A., and Caulfield, H. J. Weight discretization paradigm for optical neural networks. In The Hague '90, 12-16 April, pp. 164–173. International Society for Optics and Photonics, 1990.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, January 2014.
Golea, M., Marchand, M., and Hancock, T. R. On learning µ-perceptron networks with binary weights. In Advances in Neural Information Processing Systems (NIPS), pp. 591–598, 1992. | 1601.06071#21 | Bitwise Neural Networks |
1601.06071 | 22 | Xu, Y., Du, J., Dai, L.-R., and Lee, C.-H. An experimen- tal study on speech enhancement based on deep neural networks. IEEE Signal Processing Letters, 21(1):65â68, 2014.
Goodfellow, I. J., Warde-Farley, D., Mirza, M., Courville, A., and Bengio, Y. Maxout networks. In Proceedings of the International Conference on Machine Learning (ICML), 2013.
Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-R., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T., and Kingsbury, B. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
Hinton, G. E., Osindero, S., and Teh, Y. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257, 1991. | 1601.06071#22 | Bitwise Neural Networks |
1601.06071 | 23 | Hornik, K. Approximation capabilities of multilayer feed- forward networks. Neural Networks, 4(2):251â257, 1991.
Hwang, K. and Sung, W. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In 2014 IEEE Workshop on Signal Processing Systems (SiPS), Oct 2014.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998. | 1601.06071#23 | Bitwise Neural Networks |
1601.04468 | 0 |
# Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation
Artem Sokolov Stefan Riezlerâ Computational Linguistics and IWRâ Heidelberg University, 69120 Heidelberg, Germany
[email protected] [email protected]
# Tanguy Urvoy Orange Labs, 2 Avenue Pierre Marzin, 22307 Lannion, France
[email protected]
# Abstract | 1601.04468#0 | Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation | We present an approach to structured prediction from bandit feedback, called
Bandit Structured Prediction, where only the value of a task loss function at a
single predicted point, instead of a correct structure, is observed in
learning. We present an application to discriminative reranking in Statistical
Machine Translation (SMT) where the learning algorithm only has access to a
1-BLEU loss evaluation of a predicted translation instead of obtaining a gold
standard reference translation. In our experiment bandit feedback is obtained
by evaluating BLEU on reference translations without revealing them to the
algorithm. This can be thought of as a simulation of interactive machine
translation where an SMT system is personalized by a user who provides single
point feedback to predicted translations. Our experiments show that our
approach improves translation quality and is comparable to approaches that
employ more informative feedback in learning. | http://arxiv.org/pdf/1601.04468 | Artem Sokolov, Stefan Riezler, Tanguy Urvoy | cs.CL, cs.LG | In Proceedings of MT Summit XV, 2015. Miami, FL | null | cs.CL | 20160118 | 20160118 | [] |
1601.04468 | 1 | # Tanguy Urvoy Orange Labs, 2 Avenue Pierre Marzin, 22307 Lannion, France
[email protected]
# Abstract
We present an approach to structured prediction from bandit feedback, called Bandit Structured Prediction, where only the value of a task loss function at a single predicted point, instead of a correct structure, is observed in learning. We present an application to discriminative reranking in Statistical Machine Translation (SMT) where the learning algorithm only has access to a 1 − BLEU loss evaluation of a predicted translation instead of obtaining a gold standard reference translation. In our experiment bandit feedback is obtained by evaluating BLEU on reference translations without revealing them to the algorithm. This can be thought of as a simulation of interactive machine translation where an SMT system is personalized by a user who provides single point feedback to predicted translations. Our experiments show that our approach improves translation quality and is comparable to approaches that employ more informative feedback in learning.
# 1 Introduction | 1601.04468#1 | Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation | We present an approach to structured prediction from bandit feedback, called
1601.04468 | 2 | # 1 Introduction
Learning from bandit1 feedback describes an online learning scenario, where on each of a se- quence of rounds, a learning algorithm makes a prediction, and receives partial information in terms of feedback to a single predicted point. In difference to the full information supervised scenario, the learner does not know what the correct prediction looks like, nor what would have happened if it had predicted differently. This scenario has (ï¬nancially) important real world ap- plications such as online advertising (Chapelle et al., 2014) that showcases a tradeoff between exploration (a new ad needs to be displayed in order to learn its click-through rate) and exploita- tion (displaying the ad with the current best estimate is better in the short term). Crucially, in this scenario it is unrealistic to expect more detailed feedback than a user click on the displayed ad. Similar to the online advertising scenario, there are many potential applications of bandit learning to NLP situations where feedback is limited for various reasons. For example, on- line learning has been applied successfully in interactive statistical machine translation (SMT) (Bertoldi et al., 2014; Denkowski et al., 2014; Green et al., 2014). Post-editing feedback clearly is limited by its high cost and by the required expertise of users, however, current approaches force the full information supervised scenario onto the problem of learning from user post-edits. | 1601.04468#2 | Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation | We present an approach to structured prediction from bandit feedback, called
1601.04468 | 3 | 1The name is inherited from a model where in each round a gambler pulls an arm of a different slot machine (âone-armed banditâ), with the goal of maximizing his reward relative to the maximal possible reward, without apriori knowledge of the optimal slot machine.
Bandit learning would allow to learn from partial user feedback that is easier and faster to ob- tain than full information. An example where user feedback is limited by a time constraint is simultaneous translation of a speech input stream (Cho et al., 2013). Clearly, it is unrealistic to expect user feedback that goes beyond a one-shot user quality estimate of the predicted trans- lation in this scenario. Another example is SMT domain adaptation where the translations of a large out-of-domain model are re-ranked based on bandit feedback on in-domain data. This can also be seen as a simulation of personalized machine translation where a given large SMT system is adapted to a user solely by single-point user feedback to predicted structures. | 1601.04468#3 | Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation | We present an approach to structured prediction from bandit feedback, called
1601.04468 | 4 | The goal of this paper is to develop algorithms for structured prediction from bandit feed- back, tailored to NLP problems. We investigate possibilities to âbanditizeâ objectives such as expected loss (Och, 2003; Smith and Eisner, 2006; Gimpel and Smith, 2010) that have been proposed for structured prediction in NLP. Since most current approaches to bandit optimiza- tion rely on a multiclass classiï¬cation scenario, the ï¬rst challenge of our work is to adapt bandit learning to structured prediction over exponentially large structured output spaces (Taskar et al., 2004; Tsochantaridis et al., 2005). Furthermore, most theoretical work on online learning with bandit feedback relies on convexity assumptions about objective functions, both in the non- stochastic adversarial setting (Flaxman et al., 2005; Shalev-Shwartz, 2012) as well as in the stochastic optimization framework (Spall, 2003; Nemirovski et al., 2009; Bach and Moulines, 2011). Our case is a non-convex optimization problem, which we analyze in the simple and elegant framework of pseudogradient adaptation that allows us to show convergence of the pre- sented algorithm (Polyak and Tsypkin, 1973; Polyak, 1987).
The central contributions of this paper are: | 1601.04468#4 | Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation | We present an approach to structured prediction from bandit feedback, called
1601.04468 | 5 | The central contributions of this paper are:
An algorithm for minimization of expected loss for structured prediction from bandit feed- back, called Bandit Structured Prediction.
An analysis of convergence of our algorithm in the stochastic optimization framework of pseudogradient adaptation.
An experimental evaluation on structured learning in SMT. Our experiment follows a sim- ulation design that is standard in bandit learning, namely by simulating bandit feedback by evaluating task loss functions against gold standard structures without revealing them to the learning algorithm.
As a disclaimer, we would like to note that improvements over traditional full-information structured prediction cannot be expected from learning from partial feedback. Instead, the goal is to investigate learning situations in which full information is not available. Similarly, a comparison between our approach and dueling bandits (Yue and Joachims, 2009) is skewed towards the latter approach that has access to two-point feedback instead of one-point feedback as in our case. While it has been shown that querying the loss function at two points leads to convergence results that closely resemble bounds for the full information case (Agarwal et al., 2010), such feedback is clearly twice as expensive and, depending on the application, might not be elicitable from users.
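To make the first contribution concrete, the following is a rough, hypothetical sketch of the kind of single-point-feedback update such an expected-loss (pseudogradient) algorithm performs over a list of candidate structures; the feature representation, sampling distribution, and step size are assumptions of this sketch, not the authors' exact algorithm.

```python
import numpy as np

def bandit_structured_update(w, feats, loss_of, lr, rng):
    """One round: sample a structure from a softmax over candidate scores,
    observe only its task loss (e.g. 1 - BLEU), and take a stochastic step
    on the expected loss using the score-function (log-derivative) trick."""
    scores = feats @ w
    p = np.exp(scores - scores.max())
    p /= p.sum()
    k = rng.choice(len(p), p=p)              # structure shown to the user
    loss = loss_of(k)                        # single-point bandit feedback
    grad = loss * (feats[k] - p @ feats)     # stochastic estimate of the gradient
    return w - lr * grad

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))             # features of 10 candidate outputs
w = np.zeros(4)
w = bandit_structured_update(w, feats, lambda k: rng.uniform(), 0.01, rng)
```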
# 2 Related Work | 1601.04468#5 | Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation | We present an approach to structured prediction from bandit feedback, called
1601.04468 | 6 | # 2 Related Work
Stochastic Approximation. Online learning from bandit feedback dates back to Robbins (1952) who formulated the task as a problem of sequential decision making. His analysis, as ours, is in a stochastic setting, i.e., certain assumptions are made on the probability distri- bution of feedback and its noisy realization. Stochastic approximation covers bandit feedback as noisy observations which only allow to compute noisy gradients that equal true gradients
in expectation. While the stochastic approximation framework is quite general, most theoreti- cal analyses of convergence and convergence rate are based on (strong) convexity assumptions (Polyak and Juditsky, 1992; Spall, 2003; Nemirovski et al., 2009; Bach and Moulines, 2011, 2013) and thus not applicable to our case. | 1601.04468#6 | Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation | We present an approach to structured prediction from bandit feedback, called
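The property used here — a noisy, single-sample gradient whose expectation equals the true gradient of the expected loss — can be checked numerically with a synthetic toy example (all quantities below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 3))       # candidate features
losses = rng.uniform(size=5)          # their (unknown to the learner) losses
w = rng.normal(size=3)

scores = feats @ w
p = np.exp(scores - scores.max()); p /= p.sum()
mean_feat = p @ feats

exact = (p * losses) @ (feats - mean_feat)          # true gradient of E_p[loss]
ks = rng.choice(5, size=100_000, p=p)               # simulated bandit rounds
noisy = (losses[ks][:, None] * (feats[ks] - mean_feat)).mean(axis=0)
print(exact, noisy)                                 # the two should be close
```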
1601.04468 | 7 | Non-Stochastic Bandits. Auer et al. (2002) initiated an active area of research on non- stochastic bandit learning, i.e., no statistical assumptions are made on the distribution of feed- back, including models of feedback as a malicious choice of an adaptive adversary. The ad- versarial bandit setting has been extended to take context or side information into account, using models based on general linear classiï¬ers (Auer et al., 2002; Langford and Zhang, 2007; Chu et al., 2011). However, they formalize a multi-class classiï¬cation problem that is not easily scalable to general exponentially large structured output spaces. Furthermore, most theoretical analyses rely on online (strongly) convex optimization (Flaxman et al., 2005; Shalev-Shwartz, 2012) thus limiting the applicability to our case. | 1601.04468#7 | Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation | We present an approach to structured prediction from bandit feedback, called
Neurodynamic Programming. Bertsekas and Tsitsiklis (1996) cover optimization for neural networks and reinforcement learning under the name of "neurodynamic programming". Both areas deal with non-convex objectives that lead to stochastic iterative algorithms. Interestingly, the available analyses of non-convex optimization for neural networks and reinforcement learning in Bertsekas and Tsitsiklis (1996), Sutton et al. (2000), or Bottou (2004) rely heavily on Polyak and Tsypkin (1973)'s pseudogradient framework. We apply their simple and elegant framework directly to give asymptotic guarantees for our algorithm.
NLP Applications. In the area of NLP, algorithms for response-based learning have recently been proposed to alleviate the supervision problem by extracting supervision signals from task-based feedback to system predictions. For example, Goldwasser and Roth (2013) presented an online structured learning algorithm that uses positive executability of a semantic parse against a database to convert a predicted parse into a gold standard structure for learning. Riezler et al. (2014) apply a similar idea to SMT by using the executability of a semantic parse of a translated database query as a signal to convert a predicted translation into a gold standard reference in structured learning. Sokolov et al. (2015) present a coactive learning approach to structured learning in SMT where, instead of a gold standard reference, a slight improvement over the prediction is shown to be sufficient for learning. Saluja and Zhang (2014) present an incorporation of binary feedback into a latent structured SVM for discriminative SMT training. NLP applications based on reinforcement learning have been presented by Branavan et al. (2009) or Chang et al. (2015). Their model differs from ours in that it is structured as a sequence of states at which actions and rewards are computed; however, the theoretical foundation of both types of models can be traced back to Polyak and Tsypkin (1973)'s pseudogradient framework.
# 3 Expected Loss Minimization under Full Information
The expected loss learning criterion for structured prediction is defined as a minimization of the expectation of a task loss function with respect to the conditional distribution over structured outputs (Gimpel and Smith, 2010; Yuille and He, 2012). More formally, let $\mathcal{X}$ be a structured input space, let $\mathcal{Y}(x)$ be the set of possible output structures for an input $x$, and let $\Delta_y : \mathcal{Y}(x) \rightarrow [0, 1]$ quantify the loss $\Delta_y(y')$ suffered for making errors in predicting $y'$ instead of $y$; as a rule, $\Delta_y(y') = 0$ iff $y = y'$. Then, for a data distribution $p(x, y)$, the learning criterion is defined as minimization of the expected loss
$$\mathbb{E}_{p(x,y)\,p_w(y'|x)}\big[\Delta_y(y')\big] \;=\; \sum_{x,y} p(x,y) \sum_{y' \in \mathcal{Y}(x)} \Delta_y(y')\, p_w(y' \mid x). \qquad (1)$$
Assume further that output structures given inputs are distributed according to an underlying Gibbs distribution (a.k.a. conditional exponential or log-linear model)
$$p_w(y \mid x) \;=\; \exp\big(w^{\top}\phi(x,y)\big) / Z_w(x),$$
where $\phi : \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}^d$ is a joint feature representation of inputs and outputs, $w \in \mathbb{R}^d$ is a weight vector, and $Z_w(x)$ is a normalization constant.
The natural rule for prediction or inference is according to the minimum Bayes risk principle
$$\hat{y}_w(x) \;=\; \arg\min_{y \in \mathcal{Y}(x)} \sum_{y' \in \mathcal{Y}(x)} \Delta_y(y')\, p_w(y' \mid x). \qquad (2)$$
This requires an evaluation of $\Delta_y(y')$ over the full output space, which is standardly avoided in practice by performing inference according to a maximum a posteriori (MAP) criterion (which equals criterion (2) for the special case of the 0/1 loss $\Delta_y(y') = 1[y \neq y']$, where $1[s]$ evaluates to 1 if statement $s$ is true, 0 otherwise):
$$\hat{y}_w(x) \;=\; \arg\max_{y \in \mathcal{Y}(x)} p_w(y \mid x) \;=\; \arg\max_{y \in \mathcal{Y}(x)} w^{\top}\phi(x,y). \qquad (3)$$
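To make these definitions concrete, the following sketch (our own illustration, not code from the paper) computes the Gibbs probabilities, the MAP prediction of (3), and the minimum Bayes risk prediction of (2) for a reranking setting where the output space is a fixed k-best list represented by a NumPy feature matrix; all function and variable names are assumptions.

```python
import numpy as np

def gibbs_probs(w, Phi):
    """Gibbs (log-linear) distribution p_w(y | x) over a k-best list.

    w   : (d,) weight vector
    Phi : (k, d) matrix of joint feature vectors phi(x, y) for the k candidates
    """
    scores = Phi @ w
    scores -= scores.max()          # subtract max for numerical stability
    p = np.exp(scores)
    return p / p.sum()

def map_predict(w, Phi):
    """MAP inference (3): index of the highest-scoring candidate."""
    return int(np.argmax(Phi @ w))

def mbr_predict(w, Phi, Delta):
    """Minimum Bayes risk inference (2) over the k-best list.

    Delta : (k, k) matrix with Delta[i, j] = loss of predicting candidate j
            instead of candidate i.
    """
    p = gibbs_probs(w, Phi)
    risk = Delta @ p                # risk[i] = sum_j Delta[i, j] * p_w(y_j | x)
    return int(np.argmin(risk))
```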
While being continuous and differentiable, the expected loss criterion is typically non-convex. For example, in SMT, expected loss training for the standard task loss BLEU leads to highly non-convex optimization problems. Despite this, most approaches rely on gradient-descent techniques for optimization (see Och (2003), Smith and Eisner (2006), He and Deng (2012), Auli et al. (2014), Wuebker et al. (2015), inter alia) by following the opposite direction of the gradient of (4):
$$\nabla \mathbb{E}_{\tilde{p}(x,y)\,p_w(y'|x)}\big[\Delta_y(y')\big] \;=\; \mathbb{E}_{\tilde{p}(x,y)}\Big[ \mathbb{E}_{p_w(y'|x)}\big[\Delta_y(y')\,\phi(x,y')\big] - \mathbb{E}_{p_w(y'|x)}\big[\Delta_y(y')\big]\, \mathbb{E}_{p_w(y'|x)}\big[\phi(x,y')\big] \Big]$$
$$\;=\; \mathbb{E}_{\tilde{p}(x,y)\,p_w(y'|x)}\Big[ \Delta_y(y')\,\big(\phi(x,y') - \mathbb{E}_{p_w(y'|x)}[\phi(x,y')]\big) \Big].$$
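As an illustration of this gradient in the k-best reranking setting used above (again our own sketch, not the paper's code; the softmax over candidate scores is repeated inline to keep the example self-contained), the inner expression can be computed for a single training pair as follows:

```python
import numpy as np

def expected_loss_gradient(w, Phi, losses):
    """Gradient of E_{p_w(y'|x)}[Delta_y(y')] for a single input x.

    Phi    : (k, d) feature vectors of the k candidates
    losses : (k,) task losses Delta_y(y') of each candidate w.r.t. the reference y
    """
    scores = Phi @ w
    p = np.exp(scores - scores.max())
    p /= p.sum()                                 # p_w(y' | x)
    expected_phi = p @ Phi                       # E_{p_w}[phi(x, y')]
    expected_loss = p @ losses                   # E_{p_w}[Delta_y(y')]
    expected_loss_phi = (p * losses) @ Phi       # E_{p_w}[Delta_y(y') phi(x, y')]
    return expected_loss_phi - expected_loss * expected_phi
```

Averaging this vector over training pairs drawn from the empirical distribution gives the full-information gradient above.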
# 4 Bandit Structured Prediction
Bandit feedback in structured prediction means that the gold standard output structure $y$, with respect to which the objective function is evaluated, is not revealed to the learner. Thus we can neither calculate the gradient of the objective function (4) nor evaluate the task loss $\Delta$ as in the full information case. A solution to this problem is to pass the evaluation of the loss function to the user, i.e., we access the loss directly through user feedback without assuming the existence of a fixed reference $y$. We indicate this by dropping the subscript $y$ in $\Delta(y')$. Assuming a fixed,
1601.04468 | 17 | â
â
unknown distribution p(x) over input structures, we can formalize the following new objective for expected loss minimization in a bandit setup
$$J(w) \;=\; \mathbb{E}_{p(x)\,p_w(y'|x)}\big[\Delta(y')\big] \;=\; \sum_{x} p(x) \sum_{y' \in \mathcal{Y}(x)} \Delta(y')\, p_w(y' \mid x). \qquad (5)$$
Optimization of this objective is then as follows:
1. We assume a sequence of input structures $x_t$, $t = 1, \ldots, T$ that are generated by a fixed, unknown distribution $p(x)$.
2. We use a Gibbs distribution estimate as a sampling distribution to perform simultaneous exploration / exploitation on output structures (Abernethy and Rakhlin, 2009).
3. We use feedback to the sampled output structures to construct a parameter update rule that is an unbiased estimate of the true gradient of objective (5).
# 4.1 Algorithm
1601.04468 | 18 | 3. We use feedback to the sampled output structures to construct a parameter update rule that is an unbiased estimate of the true gradient of objective (5).
# 4.1 Algorithm
Algorithm 1 implements these ideas as follows: We assume as input a given learning rate schedule (line 1) and a deterministic initialization $w_0$ of the weight vector (line 2). For each random i.i.d. input structure $x_t$, we calculate the expected feature count (line 5). This can be done exactly, provided the underlying graphical model permits a tractable calculation, or, for intractable models, with MCMC sampling. We then sample an output structure $\hat{y}_t$ from the Gibbs model (line 6). If the number of output options is small, this is done by sampling from a multinomial distribution. Otherwise, we use a Perturb-and-MAP approach (Papandreou and Yuille, 2011), restricted to unary potentials, to obtain an approximate Gibbs sample without waiting for the MC chain to mix. Finally, an update in the negative direction of the instantaneous gradient, evaluated at the input structure $x_t$ (line 8), is performed.
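A minimal sketch of this procedure for the k-best reranking case (our own illustration; the explicit candidate list, the `feedback` callback standing in for the user, and all names are assumptions not taken from the paper):

```python
import numpy as np

def bandit_structured_prediction(data, feedback, w0, learning_rate, T):
    """Sketch of Algorithm 1 over k-best lists.

    data           : iterator over (x, Phi) pairs, Phi being the (k, d) candidate features
    feedback(x, i) : returns the loss Delta(y_i) in [0, 1] for the sampled candidate i
    learning_rate  : function t -> gamma_t (a decreasing schedule, cf. Section 4.2)
    """
    w = w0.copy()
    for t, (x, Phi) in zip(range(T), data):
        scores = Phi @ w
        p = np.exp(scores - scores.max())
        p /= p.sum()                               # Gibbs model over candidates
        expected_phi = p @ Phi                     # expected feature count (line 5)
        i = np.random.choice(len(p), p=p)          # sample y_t ~ p_w(. | x_t) (line 6)
        loss = feedback(x, i)                      # one-point bandit feedback
        w -= learning_rate(t) * loss * (Phi[i] - expected_phi)   # gradient step (line 8)
    return w
```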
1601.04468 | 19 | Intuitively, the algorithm compares the sampled feature vector to the average feature vector, and performs a step into the opposite direction of this difference, the more so the higher the loss of the sampled structure is. In the extreme case, if the sampled structure is correct (â(Ëyt) = 0), no update is performed.
# 4.2 Stochastic Approximation Analysis
The construction of the update in Algorithm 1 as a stochastic realization of the true gradient allows us to analyze the algorithm as a stochastic approximation algorithm. We show how our case can be fit into the pseudogradient adaptation framework of Polyak and Tsypkin (1973), which gives asymptotic guarantees for non-convex and convex objectives. They characterize an iterative process
$$w_{t+1} \;=\; w_t - \gamma_t s_t, \qquad (6)$$

where $\gamma_t \geq 0$ is a learning rate, $w_t$ and $s_t$ are vectors in $\mathbb{R}^d$ with fixed $w_0$, and the distribution of $s_t$ depends on $w_0, \ldots, w_t$. For a given lower bounded and differentiable function $J(w)$ with Lipschitz continuous gradient

$$\big\| \nabla J(w + w') - \nabla J(w) \big\| \;\leq\; L \, \|w'\|, \qquad (7)$$
1601.04468 | 20 | â
â¥
J(w + wâ²) J(w) L wâ² , (7)
# kâ
â â
# k â¤
# k
# k
the vector st in process (6) is said to be a pseudogradient of J(w) if
$$\nabla J(w_t)^{\top}\, \mathbb{E}[s_t] \;\geq\; 0, \qquad (8)$$
where the expectation is taken over all sources of randomness. Intuitively, the pseudogradient st is on average at an acute angle with the true gradient, meaning that st is on average a direction of decrease of the functional J(w).
In order to show convergence of the iterative process (6), besides conditions (7) and (8), only mild conditions on boundedness of the pseudogradient
$$\mathbb{E}\big[\|s_t\|^2\big] \;<\; \infty, \qquad (9)$$

and on the use of a decreasing learning rate satisfying

$$\gamma_t \geq 0, \qquad \sum_{t=0}^{\infty} \gamma_t = \infty, \qquad \sum_{t=0}^{\infty} \gamma_t^2 < \infty, \qquad (10)$$
are necessary. Under the exclusion of trivial solutions such as $s_t = 0$, the following convergence assertion can be made:

Theorem 1 (Polyak and Tsypkin (1973), Thm. 1) Under conditions (7)–(10), for any $w_0$ in process (6):
1601.04468 | 21 | J(wt) â J â a.s., and lim tââ â J(wt)â¤E(st) = 0.
The significance of the theorem is that its conditions can be checked easily, and it applies to a wide range of cases, including non-convex functions, in which case the convergence point $J^{\infty}$ is a critical point of $J(w)$.
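For concreteness, a standard schedule such as $\gamma_t = \gamma_0/(t+1)$ (our own example, not taken from the paper) satisfies condition (10):

$$\gamma_t = \frac{\gamma_0}{t+1} \;\geq\; 0, \qquad \sum_{t=0}^{\infty} \frac{\gamma_0}{t+1} = \infty \;\;\text{(harmonic series)}, \qquad \sum_{t=0}^{\infty} \frac{\gamma_0^2}{(t+1)^2} = \frac{\gamma_0^2\,\pi^2}{6} < \infty.$$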
Note that we can define our functional $J(w)$ with respect to expectations over the full space of $\mathcal{X}$ as $J(w) = \mathbb{E}_{p(x)p_w(y'|x)}[\Delta(y')]$. This means that convergence of the algorithm can be understood directly as a generalization result that extends to unseen data. In order to show this result, we have to verify conditions (7)–(10). It is easy to show that condition (7) holds for our functional $J(w)$. Next we match the update in Algorithm 1 to a vector
$$s_t \;=\; \Delta(\hat{y}_t)\,\big(\phi(x_t, \hat{y}_t) - \mathbb{E}_{p_{w_t}(y'|x_t)}[\phi(x_t, y')]\big).$$

Taking the expectation of $s_t$ yields $\mathbb{E}_{p(x)p_{w_t}(y'|x)}[s_t] = \nabla J(w_t)$, such that condition (8) is satisfied.
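Spelling out this step (our own derivation of the identity stated above), the claim follows from the standard log-linear gradient identity $\nabla p_w(y'|x) = p_w(y'|x)\big(\phi(x,y') - \mathbb{E}_{p_w(\cdot|x)}[\phi(x,\cdot)]\big)$:

$$\mathbb{E}_{p(x)p_{w_t}(y'|x)}[s_t] \;=\; \sum_{x} p(x) \sum_{y'} p_{w_t}(y'\mid x)\, \Delta(y')\,\big(\phi(x,y') - \mathbb{E}_{p_{w_t}(\cdot|x)}[\phi(x,\cdot)]\big) \;=\; \sum_{x} p(x) \sum_{y'} \Delta(y')\, \nabla p_{w_t}(y'\mid x) \;=\; \nabla J(w_t).$$

Condition (8) then holds since $\nabla J(w_t)^{\top}\mathbb{E}[s_t] = \|\nabla J(w_t)\|^2 \geq 0$.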
1601.04468 | 23 | Algorithm Structured Dueling Bandits
1: Input: γ, δ, w0 2: for t = 0, . . . , T do Observe xt 3: Sample unit vector ut uniformly Set wâ² Compare â(Ëywt (xt)) to â(Ëywâ² if wâ²
# t = wt + δut
5: 6:
t wins then wt+1 = wt + γut
8: 9:
Weyl = We + YUE
else
# t (xt))
10:
10:
# wt+1 = wt
# 5 Structured Dueling Bandits
For purposes of comparison, we present an extension of Yue and Joachims (2009)'s dueling bandits algorithm to structured prediction problems. The original algorithm is not specifically designed for structured prediction problems, but it is generic enough to be applicable to such problems when the quality of a parameter vector can be proxied through loss evaluation of an inferred structure.
1601.04468 | 24 | The Structured Dueling Bandits algorithm compares a current weight vector wt with a neighboring point wâ² t along a direction ut, performing exploration (controlled by δ, line 5) by probing random directions, and exploitation (controlled by γ, line 8) by taking a step into the winning direction. The comparison step in line 6 is adapted to structured prediction from the original algorithm of Yue and Joachims (2009) by comparing the quality of wt and wâ² t via an evaluation of the losses â(Ëywt (xt)) and â(Ëywâ² t (xt)) of the structured arms corresponding to MAP prediction (3) under wt and wâ²
Further, note that the Structured Dueling Bandits algorithm requires access to two-point feedback instead of the one-point feedback used by Bandit Structured Prediction (Algorithm 1). It has been shown that two-point feedback leads to convergence results that are close to those for learning from full information (Agarwal et al., 2010). However, two-point feedback is twice as expensive as one-point feedback, and, most importantly, such feedback might not be elicitable from users in real-world situations where feedback is limited by time and resource constraints. This limits the range of applications of Dueling Bandits to real-world interactive scenarios.
# 6 Experiments
1601.04468 | 25 | # 6 Experiments
Our experimental design follows the standard of simulating bandit feedback by evaluating task loss functions against gold standard structures without revealing them to the learner. We compare the proposed Bandit Structured Prediction algorithm to Structured Dueling Bandits, and report results by test set evaluations of the respective loss functions under MAP inference. Furthermore, we evaluate models at different iterations according to their loss on the test set in order to visualize the empirical convergence behavior of the algorithms.
All experiments with bandit algorithms perform online learning for parameter estimation, and apply early stopping to choose the last model in a learning sequence for online-to-batch conversion at test time. Final results for bandit algorithms are averaged over 5 independent runs.
The task loss is the 1 − BLEU loss used in SMT. The setup is a reranking approach to SMT domain adaptation where the k-best list of an out-of-domain model is re-ranked (without re-decoding) based on bandit feedback from in-domain data.

Table 1: Corpus BLEU (under MAP decoding) on test set for SMT domain adaptation from Europarl to NewsCommentary by k-best reranking.

| feedback | system | corpus BLEU |
|---|---|---|
| full information | in-domain SMT | 0.2854 |
| full information | out-domain SMT | 0.2579 |
| bandit information | DuelingBandit | 0.2731 ± 0.001 |
| bandit information | BanditStruct | 0.2705 ± 0.001 |
1601.04468 | 26 | Table 1: Corpus BLEU (under MAP decoding) on test set for SMT domain adaptation from Europarl to NewsCommentary by k-best reranking.
[Figure 1 plots corpus BLEU (y-axis, 0.255–0.275) against training iterations × 5000 (x-axis) for Dueling, BanditStruct, and the out-domain SMT baseline.]
Figure 1: Corpus-BLEU on test set for early stopping at different iterations for the SMT task.
This can also be seen as a simulation of personalized machine translation where a given large SMT system is adapted to a user solely by single-point user feedback to predicted structures.
We use the data from the WMT 2007 shared task for domain adaptation experiments in a popular benchmark setup from Europarl to NewsCommentary for French-to-English (Koehn and Schroeder, 2007; Daumé and Jagarlamudi, 2011). We tokenized and lowercased our data using the moses toolkit, and prepared word alignments with fast_align (Dyer et al., 2013). The SMT setup is phrase-based translation using non-unique 5,000-best lists from moses (Koehn et al., 2007) and a 4-gram language model (Heafield et al., 2013).
1601.04468 | 27 | The out-of-domain baseline SMT model is trained on 1.6 million parallel Europarl data and includes the English side of Europarl and in-domain NewsCommentary in the language model. The model uses 15 dense features (6 lexicalized reordering features, 1 distortion, 1 out- of-domain and 1 in-domain language model, 1 word penalty, 5 translation model features) that are tuned with MERT (Och, 2003) on a dev set of Europarl data (dev2006, 2,000 sentences). The full-information in-domain SMT model gives an upper bound by MERT tuning the out-of- domain model on in-domain development data (nc-dev2007, 1,057 sentences). MERT runs for both baseline models were repeated 7 times and median results are reported.
Learning under bandit feedback started at the learned weights of the out-of-domain median model. It uses the parallel NewsCommentary data (news-commentary, 43,194 sentences) to simulate bandit feedback, evaluating the sampled translation against the gold standard reference using as loss function $\Delta$ a smoothed per-sentence $1-$BLEU (by flooring zero n-gram counts to 0.01).
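A sketch of such a smoothed sentence-level BLEU (our own re-implementation of the idea as described; the exact smoothing and clipping details of the paper's implementation may differ):

```python
import math
from collections import Counter

def smoothed_sentence_bleu(hyp, ref, max_n=4, floor=0.01):
    """Per-sentence BLEU with zero n-gram match counts floored to `floor`.

    hyp, ref : lists of tokens; the simulated bandit loss is then
               1 - smoothed_sentence_bleu(hyp, ref).
    """
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        matches = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        log_precisions.append(math.log(max(matches, floor) / total))
    brevity = 1.0 if len(hyp) >= len(ref) else math.exp(1.0 - len(ref) / max(len(hyp), 1))
    return brevity * math.exp(sum(log_precisions) / max_n)
```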
1601.04468 | 28 | â
The meta-parameters of Dueling Bandits and Bandit Structured Prediction were adjusted by online optimization of cumulative per-sentence $1-$BLEU on a small in-domain dev set (nc-devtest2007, 1,064 parallel sentences). The final results are obtained by online-to-batch conversion where the model trained for 100 epochs on the 43,194 in-domain training sentences is evaluated on a separate in-domain test set (nc-test2007, 2,007 sentences).
1601.04468 | 29 | Table 1 shows that the results for Bandit Structured Prediction and Dueling Bandits are very close, however, both are signiï¬cant improvements over the out-of-domain SMT model that even includes an in-domain language model. We show the standard evaluation of the corpus- BLEU metric evaluated under MAP inference. The range of possible improvements is given by the difference of the BLEU score of the in-domain model and the BLEU score of the out-of- domain model â nearly 3 BLEU points. Bandit learning can improve the out-of-domain baseline by about 1.26 BLEU points (Bandit Structured Prediction) and by about 1.52 BLEU points (Du- eling Bandits). All result differences are statistically signiï¬cant at a p-value of 0.0001, using an Approximate Randomization test (Riezler and Maxwell, 2005; Clark et al., 2011). Figure 1 shows that per-sentence BLEU is a difï¬cult metric to provide single-point feedback, yield- ing a non-smooth progression of loss values against iterations for Bandit Structured Prediction. The progression of loss values is smoother and empirical convergence speed is faster for Du- eling Bandits since it can exploit preference judgements instead of having to trust real-valued feedback.
# 7 Discussion
1601.04468 | 30 | # 7 Discussion
We presented an approach to Bandit Structured Prediction that is able to learn from feedback in the form of an evaluation of a task loss function for single predicted structures. Our experimental evaluation showed promising results, both compared to Structured Dueling Bandits, which employ two-point feedback, and compared to full information scenarios where the correct structure is revealed.
Our approach shows its strength where correct structures are unavailable and two-point feedback is infeasible. In future work we would like to apply bandit learning to scenarios with limited human feedback such as the interactive SMT applications discussed above. In such scenarios, per-sentence BLEU might not be the best metric to quantify feedback. We will instead investigate feedback based on HTER (Snover et al., 2006), or based on judgements according to Likert scales (Likert, 1932).
# Acknowledgements
This research was supported in part by DFG grant RI-2221/2-1 "Grounding Statistical Machine Translation in Perception and Action".
# References
Abernethy, J. and Rakhlin, A. (2009). An efficient bandit algorithm for √T regret in online multiclass prediction? In Conference on Learning Theory (COLT), Montreal, Canada.
1601.04468 | 31 | Agarwal, A., Dekel, O., and Xiao, L. (2010). Optimal algorithms for online convex optimization with multi-point bandit feedback. In Conference on Learning Theory (COLT), Haifa, Israel.
Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. (2002). The nonstochastic multi-armed bandit problem. SIAM Journal on Computing, 32(1):48–77.
Auli, M., Galley, M., and Gao, J. (2014). Large-scale expected BLEU training of phrase-based reordering models. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Bach, F. and Moulines, E. (2011). Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Advances in Neural Information Processing Systems (NIPS), Granada, Spain.

Bach, F. and Moulines, E. (2013). Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). In Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, CA, USA.
1601.04468 | 32 | Bertoldi, N., Simianer, P., Cettolo, M., W¨aschle, K., Federico, M., and Riezler, S. (2014). Online adaptation to post-edits for phrase-based statistical machine translation. Machine Translation, 29:309â339.
Bertsekas, D. P. and Tsitsiklis, J. N. (1996). Neuro-Dynamic Programming. Athena Scientiï¬c.
Bottou, L. (2004). Stochastic learning. In Bousquet, O. and von Luxburg, U., editors, Advanced Lectures on Machine Learning, pages 146â168.
Branavan, S., Chen, H., Zettlemoyer, L. S., and Barzilay, R. (2009). Reinforcement learning for mapping instructions to actions. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, Suntec, Singapore.
Chang, K.-W., Krishnamurthy, A., Agarwal, A., Daume, H., and Langford, J. (2015). Learning to search better than your teacher. In International Conference on Machine Learning (ICML), Lille, France.
1601.04468 | 33 | Chapelle, O., Manavaglu, E., and Rosales, R. (2014). Simple and scalable response prediction for display advertising. ACM Transactions on Intelligent Systems and Technology, 5(4).
Cho, E., Fügen, C., Hermann, T., Kilgour, K., Mediani, M., Mohr, C., Niehues, J., Rottman, K., Saam, C., Stüker, S., and Waibel, A. (2013). A real-world system for simultaneous translation of German lectures. In Interspeech, Lyon, France.

Chu, W., Li, L., Reyzin, L., and Schapire, R. E. (2011). Contextual bandits with linear payoff functions. In International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, FL, USA.

Clark, J., Dyer, C., Lavie, A., and Smith, N. (2011). Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL'11), Portland, OR.
1601.04468 | 34 | Daum´e, H. and Jagarlamudi, J. (2011). Domain adaptation for machine translation by min- In Meeting of the Association for Computational Linguistics: Human ing unseen words. Language Technologies (ACL-HLT), Portland, OR, USA.
Denkowski, M., Dyer, C., and Lavie, A. (2014). Learning from post-editing: Online model adaptation for statistical machine translation. In Conference of the European Chapter of the Association for Computational Linguistics (EACL), Gothenburg, Sweden.
Dyer, C., Chahuneau, V., and Smith, N. A. (2013). A simple, fast, and effective reparameterization of IBM Model 2. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Atlanta, GA, USA.
Flaxman, A. D., Kalai, A. T., and McMahan, H. B. (2005). Online convex optimization in the bandit setting: gradient descent without a gradient. In ACM-SIAM Symposium on Discrete Algorithms (SODA), Philadelphia, PA.
1601.04468 | 35 | Gimpel, K. and Smith, N. A. (2010). Softmax-margin training for structured log-linear models. Technical Report CMU-LTI-10-008, Carnegie Mellon University, Pittsburgh, PA, USA.
Goldwasser, D. and Roth, D. (2013). Learning from natural instructions. Machine Learning, 94(2):205–232.
Green, S., Wang, S. I., Chuang, J., Heer, J., Schuster, S., and Manning, C. D. (2014). Human effort and machine learnability in computer aided translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar.
He, X. and Deng, L. (2012). Maximum expected BLEU training of phrase and lexicon translation models. In Meeting of the Association for Computational Linguistics (ACL), Jeju Island, Korea.

Heafield, K., Pouzyrevsky, I., Clark, J. H., and Koehn, P. (2013). Scalable modified Kneser-Ney language model estimation. In Meeting of the Association for Computational Linguistics (ACL), Sofia, Bulgaria.
1601.04468 | 36 | Koehn, P., Hoang, H., Birch, A., Callison-Birch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. (2007). In ACL Demo and Poster Moses: Open source toolkit for statistical machine translation. Sessions, Prague, Czech Republic.
Koehn, P. and Schroeder, J. (2007). Experiments in domain adaptation for statistical machine translation. In Workshop on Statistical Machine Translation, Prague, Czech Republic.
Langford, J. and Zhang, T. (2007). The epoch-greedy algorithm for multi-armed bandits with side information. In Advances in Neural Information Processing Systems (NIPS). Vancouver, Canada.
Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140:5–55.

Nemirovski, A., Juditsky, A., Lan, G., and Shapiro, A. (2009). Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609.