Dataset schema (field name, type, and min/max of the string length or integer value):

| Field | Type | Min | Max |
|---|---|---|---|
| doi | string (length) | 10 | 10 |
| chunk-id | int64 (value) | 0 | 936 |
| chunk | string (length) | 401 | 2.02k |
| id | string (length) | 12 | 14 |
| title | string (length) | 8 | 162 |
| summary | string (length) | 228 | 1.92k |
| source | string (length) | 31 | 31 |
| authors | string (length) | 7 | 6.97k |
| categories | string (length) | 5 | 107 |
| comment | string (length) | 4 | 398 |
| journal_ref | string (length) | 8 | 194 |
| primary_category | string (length) | 5 | 17 |
| published | string (length) | 8 | 8 |
| updated | string (length) | 8 | 8 |
| references | list | - | - |
1505.00855
5
Artists use different concepts to describe paintings. In particular, stylistic elements such as space, texture, form, shape, color, tone, and line are used. Other principles include movement, unity, harmony, variety, balance, contrast, proportion, and pattern. To this might be added physical attributes, like brush strokes, as well as subject matter and other descriptive concepts [13]. For the task of computer analysis of art, researchers have engineered and investigated various visual features^3 that encode some of these artistic concepts, in particular brush strokes and color, which are encoded as low-level features such as texture statistics and color histograms (e.g. [19, 20]). Color and texture are highly prone to variations
^3 In contrast to art disciplines, in the fields of computer vision and machine learning, researchers use the term "visual features" to denote statistical measurements that are extracted from images for the task of classification. In this paper we stick to this typical terminology.
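For illustration, a minimal numpy sketch of one of the low-level color descriptors mentioned above: a joint RGB color histogram. The function name, bin count, and normalization are our choices, not details from the cited works [19, 20].

```python
import numpy as np

def color_histogram(image, bins=8):
    """Joint RGB histogram as a low-level color descriptor.

    image: uint8 array of shape (H, W, 3).
    Returns an L1-normalized vector of length bins**3.
    """
    pixels = image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    hist = hist.flatten()
    return hist / hist.sum()

painting = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # stand-in image
print(color_histogram(painting).shape)  # (512,)
```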
1505.00855#5
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
6
# 2.3. Parametric Rectified Linear Unit

The parametric rectified linear unit (PReLU) was proposed by (He et al., 2015). The authors reported that its performance is much better than ReLU's in large-scale image classification tasks. It is the same as leaky ReLU (Eqn. 2), except that a_i is learned during training via backpropagation.

# 2.4. Randomized Leaky Rectified Linear Unit

The randomized leaky rectified linear unit (RReLU) is a randomized version of leaky ReLU. It was first proposed and used in the Kaggle NDSB competition. The highlight of RReLU is that during training, a_{ji} is a random number sampled from a uniform distribution U(l, u). Formally, we have:

$$
y_{ji} = \begin{cases} x_{ji} & \text{if } x_{ji} \geq 0 \\ a_{ji} x_{ji} & \text{if } x_{ji} < 0 \end{cases} \qquad (3)
$$

[Figure 1: ReLU, Leaky ReLU, PReLU and RReLU. For PReLU, a_i is learned, and for Leaky ReLU, a_i is fixed. For RReLU, a_{ji} is a random variable that is repeatedly sampled from a given range during training and remains fixed during testing.]
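A minimal numpy sketch of the RReLU behavior described in this chunk, following the NDSB-winner convention quoted later in the paper (negative inputs divided by a random a_ji ~ U(l, u) during training, and by the fixed average (l + u)/2 at test time); the function name and defaults are illustrative, not from the paper.

```python
import numpy as np

def rrelu(x, l=3.0, u=8.0, training=True, rng=None):
    """Sketch of RReLU as described in the text (Eqns. 3-5).

    During training, each negative entry is divided by a random
    a_ji ~ U(l, u); at test time a_ji is fixed to (l + u) / 2, giving
    y = x / 5.5 for the default l = 3, u = 8.
    """
    rng = np.random.default_rng() if rng is None else rng
    if training:
        a = rng.uniform(l, u, size=x.shape)  # one a_ji per activation
    else:
        a = (l + u) / 2.0                    # deterministic test-time divisor
    return np.where(x >= 0, x, x / a)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(rrelu(x, training=True))   # random negative slopes in (1/8, 1/3)
print(rrelu(x, training=False))  # x / 5.5 on the negative side
```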
1505.00853#6
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
6
Task Name — List of Members

Style: Abstract Expressionism(1); Action Painting(2); Analytical Cubism(3); Art Nouveau-Modern Art(4); Baroque(5); Color Field Painting(6); Contemporary Realism(7); Cubism(8); Early Renaissance(9); Expressionism(10); Fauvism(11); High Renaissance(12); Impressionism(13); Mannerism-Late Renaissance(14); Minimalism(15); Primitivism-Naive Art(16); New Realism(17); Northern Renaissance(18); Pointillism(19); Pop Art(20); Post Impressionism(21); Realism(22); Rococo(23); Romanticism(24); Symbolism(25); Synthetic Cubism(26); Ukiyo-e(27)

Genre: Abstract painting(1); Cityscape(2); Genre painting(3); Illustration(4); Landscape(5); Nude painting(6); Portrait(7); Religious painting(8); Sketch and Study(9); Still Life(10)

Artist: Albrecht Durer(1); Boris Kustodiev(2); Camille Pissarro(3); Childe Hassam(4); Claude Monet(5); Edgar
1505.00855#6
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
7
$$
y_{ji} = \begin{cases} x_{ji} & \text{if } x_{ji} \geq 0 \\ a_{ji} x_{ji} & \text{if } x_{ji} < 0 \end{cases} \qquad (3)
$$

where

$$
a_{ji} \sim U(l, u), \quad l < u \ \text{and} \ l, u \in [0, 1) \qquad (4)
$$

In the test phase, we take the average of all the a_{ji} seen in training, as in the method of dropout (Srivastava et al., 2014), and thus set a_{ji} to (l + u)/2 to obtain a deterministic result. As suggested by the NDSB competition winner, a_{ji} is sampled from U(3, 8). We use the same configuration in this paper. At test time, we use:

$$
y_{ji} = \frac{x_{ji}}{(l + u)/2} \qquad (5)
$$

(For l = 3 and u = 8, this gives y_{ji} = x_{ji}/5.5 on the negative side.)

# 2.1. Rectified Linear Unit

The rectified linear unit was first used in Restricted Boltzmann Machines (Nair & Hinton, 2010). Formally, the rectified linear activation is defined as:

$$
y_i = \begin{cases} x_i & \text{if } x_i \geq 0 \\ 0 & \text{if } x_i < 0 \end{cases} \qquad (1)
$$

# 3. Experiment Settings
1505.00853#7
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
7
Durer(1); Boris Kustodiev(2); Camille Pissarro(3); Childe Hassam(4); Claude Monet(5); Edgar Degas(6); Eugene Boudin(7); Gustave Dore(8); Ilya Repin(9); Ivan Aivazovsky(10); Ivan Shishkin(11); John Singer Sargent(12); Marc Chagall(13); Martiros Saryan(14); Nicholas Roerich(15); Pablo Picasso(16); Paul Cezanne(17); Pierre-Auguste Renoir(18); Pyotr Konchalovsky(19); Raphael Kirchner(20); Rembrandt(21); Salvador Dali(22); Vincent van Gogh(23)
1505.00855#7
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
8
$$
y_i = \begin{cases} x_i & \text{if } x_i \geq 0 \\ 0 & \text{if } x_i < 0 \end{cases} \qquad (1)
$$

# 3. Experiment Settings

We evaluate classification performance with the same convolutional network structure and different activation functions. Due to the large parameter search space, we use two state-of-the-art convolutional network structures and the same hyperparameters across the different activation settings. All models are trained with CXXNET^2.

^1 Kaggle National Data Science Bowl Competition: https://www.kaggle.com/c/datasciencebowl
^2 CXXNET: https://github.com/dmlc/cxxnet

# 3.1. CIFAR-10 and CIFAR-100

The CIFAR-10 and CIFAR-100 datasets (Krizhevsky & Hinton, 2009) are tiny natural-image datasets. CIFAR-10 contains images of 10 different classes, and CIFAR-100 contains images of 100 different classes. Each image is a 32×32 RGB image. There are 50,000 training images and 10,000 test images. We use the raw images directly, without any pre-processing or augmentation. The results are from a single-view test without any ensemble.
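The paper's experiments were run in CXXNET; purely as a hedged illustration of the protocol (one fixed architecture, only the activation swapped), the sketch below uses PyTorch modules. Note that nn.RReLU's default range U(1/8, 1/3) corresponds to the reciprocal of the U(3, 8) sampling used in the paper.

```python
import torch.nn as nn

# One factory per activation under comparison; everything else
# (architecture, hyperparameters) is held fixed, as in Section 3.
activations = {
    "relu": lambda: nn.ReLU(),
    "leaky_relu_a100": lambda: nn.LeakyReLU(negative_slope=1 / 100),
    "very_leaky_relu_a5.5": lambda: nn.LeakyReLU(negative_slope=1 / 5.5),
    "prelu": lambda: nn.PReLU(),
    # nn.RReLU samples the negative slope from U(lower, upper) during
    # training and uses the average slope at evaluation time.
    "rrelu_U(3,8)": lambda: nn.RReLU(lower=1 / 8, upper=1 / 3),
}

def make_block(in_ch, out_ch, act_factory):
    """A conv layer followed by the activation being evaluated."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), act_factory())

models = {name: make_block(3, 192, act) for name, act in activations.items()}
```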
1505.00853#8
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
8
Table 1: List of styles, genres and artists in our collection of fine-art paintings. Numbers in parentheses are the indices of the rows/columns in confusion matrices 5, 6 & 7, respectively.

during the digitization of paintings; color is also affected by a painting's age. The effect of digitization on the computational analysis of paintings is investigated in great depth by Polatkan et al. [24]. This highlights the need to carefully design visual features that are suitable for the analysis of paintings. Clearly, it would be a cumbersome process to engineer visual features that encode all the aforementioned artistic concepts. Recent advances in computer vision, using deep neural networks, have shown the advantage of "learning" the features from data instead of engineering them. However, it would also be impractical to learn visual features that encode such artistic concepts, since that would require extensive annotation of these concepts in each image within a large training and testing dataset. Obtaining such annotations requires expertise in the field of art history that cannot be achieved with typical crowd-sourcing annotators.
1505.00855#8
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
9
The network structure is shown in Table 1. It is taken from Network in Network (NIN) (Lin et al., 2013).

| Input Size | NIN |
|---|---|
| 32 × 32 | 5×5, 192 |
| 32 × 32 | 1×1, 160 |
| 32 × 32 | 1×1, 96 |
| 32 × 32 | 3×3 max pooling, /2 |
| 16 × 16 | dropout, 0.5 |
| 16 × 16 | 5×5, 192 |
| 16 × 16 | 1×1, 192 |
| 16 × 16 | 1×1, 192 |
| 16 × 16 | 3×3 avg pooling, /2 |
| 8 × 8 | dropout, 0.5 |
| 8 × 8 | 3×3, 192 |
| 8 × 8 | 1×1, 192 |
| 8 × 8 | 1×1, 10 |
| 8 × 8 | 8×8 avg pooling, /1 |
| 10 or 100 | softmax |

We adopt the network and augmentation settings from team AuroraXie^4, one of the competition winners. The network structure is shown in Table 2. We only use a single-view test in our experiment, which differs from the original multi-view, multi-scale test.
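As a hedged illustration, the NIN structure in Table 1 can be written out as follows (PyTorch is our substitute for the CXXNET used in the paper; the padding values are our assumptions, chosen to reproduce the listed feature-map sizes):

```python
import torch
import torch.nn as nn

def nin(num_classes=10, act=nn.ReLU):
    """Network-in-Network per Table 1; each conv is followed by an activation."""
    return nn.Sequential(
        nn.Conv2d(3, 192, 5, padding=2), act(),    # 32x32
        nn.Conv2d(192, 160, 1), act(),             # 32x32
        nn.Conv2d(160, 96, 1), act(),              # 32x32
        nn.MaxPool2d(3, stride=2, padding=1),      # 32x32 -> 16x16
        nn.Dropout(0.5),
        nn.Conv2d(96, 192, 5, padding=2), act(),   # 16x16
        nn.Conv2d(192, 192, 1), act(),
        nn.Conv2d(192, 192, 1), act(),
        nn.AvgPool2d(3, stride=2, padding=1),      # 16x16 -> 8x8
        nn.Dropout(0.5),
        nn.Conv2d(192, 192, 3, padding=1), act(),  # 8x8
        nn.Conv2d(192, 192, 1), act(),
        nn.Conv2d(192, num_classes, 1), act(),
        nn.AvgPool2d(8, stride=1),                 # 8x8 -> 1x1 class maps
        nn.Flatten(),                              # logits; softmax lives in the loss
    )

logits = nin()(torch.randn(2, 3, 32, 32))
assert logits.shape == (2, 10)
```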
1505.00853#9
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
9
Given the aforementioned challenges to engineering or learning suitable visual features for paintings, in this paper we follow an alternative strategy. We mainly investigate different state-of-the-art visual elements, ranging from low-level elements to semantic-level elements. We then use metric learning to achieve optimal similarity metrics between paintings that are optimized for specific prediction tasks, namely style, genre, and artist classification. We chose these tasks to optimize and evaluate the metrics since, ultimately, the goal of any art recommendation system would be to retrieve artworks that are similar along the directions of these high-level semantic concepts. Moreover, annotations for these tasks are widely available and more often agreed upon by art historians and critics, which facilitates training and testing the metrics. In this paper we investigate a large space of visual features and learning methodologies for the aforementioned prediction tasks. We propose and compare three learning methodologies to optimize such tasks. We present the results of a comprehensive comparative study that spans four state-of-the-art visual features, five metric learning approaches and the proposed three learning methodologies, evaluated on the aforementioned three artistic prediction tasks.

# 2 Related Work
1505.00855#9
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
10
| Input Size | NDSB Net |
|---|---|
| 70 × 70 | 3×3, 32 |
| 70 × 70 | 3×3, 32 |
| 70 × 70 | 3×3 max pooling, /2 |
| 35 × 35 | 3×3, 64 |
| 35 × 35 | 3×3, 64 |
| 35 × 35 | 3×3, 64 |
| 35 × 35 | 3×3 max pooling, /2 |
| 17 × 17 | split: branch 1 — branch 2 |
| 17 × 17 | 3×3, 96 — 3×3, 96 |
| 17 × 17 | 3×3, 96 — 3×3, 96 |
| 17 × 17 | 3×3, 96 — 3×3, 96 |
| 17 × 17 | 3×3, 96 |
| 17 × 17 | channel concat, 192 |
| 17 × 17 | 3×3 max pooling, /2 |
| 8 × 8 | 3×3, 256 |
| 8 × 8 | 3×3, 256 |
| 8 × 8 | 3×3, 256 |
| 8 × 8 | 3×3, 256 |
| 8 × 8 | 3×3, 256 |
| 8 × 8 | SPP (He et al., 2014) {1, 2, 4} |
| 12544 × 1 | flatten |
| 1024 × 1 | fc1 |
| 1024 × 1 | fc2 |
| 121 | softmax |

Table 1. CIFAR-10/CIFAR-100 network structure. Each layer is a convolutional layer if not otherwise specified. Each convolutional layer is followed by an activation function.
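To clarify the split/concat rows in the middle of this table, a hedged PyTorch sketch of the two-branch pattern is shown below; reading the lone "3×3, 96" row as an extra conv on branch 1 is our assumption.

```python
import torch
import torch.nn as nn

class SplitConcat(nn.Module):
    """Two parallel conv branches whose outputs are concatenated on channels."""

    def __init__(self, in_ch=64):
        super().__init__()
        def conv(c_in, c_out):
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())
        # Branch 1 gets the extra 3x3, 96 layer listed on its own row.
        self.branch1 = nn.Sequential(conv(in_ch, 96), conv(96, 96),
                                     conv(96, 96), conv(96, 96))
        self.branch2 = nn.Sequential(conv(in_ch, 96), conv(96, 96), conv(96, 96))

    def forward(self, x):
        # 96 + 96 = 192 channels after concatenation
        return torch.cat([self.branch1(x), self.branch2(x)], dim=1)

out = SplitConcat()(torch.randn(1, 64, 17, 17))
assert out.shape == (1, 192, 17, 17)
```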
1505.00853#10
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
10
# 2 Related Work

On the subject of painting, computers have been used for a diverse set of tasks. Traditionally, image processing techniques have been used to provide art historians with quantification tools, such as pigmentation analysis, statistical quantification of brush strokes, etc. We refer the reader to [28, 5] for comprehensive surveys on this subject. Several studies have addressed the question of which features should be used to encode information in paintings. Most of the research concerning the classification of paintings utilizes low-level features encoding color, shadow, texture, and edges. For example, Lombardi [20] presented a study of the performance of these types of features for the task of artist classification among a small set of artists, using several supervised and unsupervised learning methodologies. In that paper, the style of a painting was identified as a result of recognizing the artist.
1505.00855#10
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
11
In the CIFAR-100 experiment, we also tested RReLU on the Batch Norm Inception Network (Ioffe & Szegedy, 2015). We use a subset of the Inception network that starts from the inception-3a module. This network achieved 75.68% test accuracy without any ensemble or multiple-view test^3.

# 3.2. National Data Science Bowl Competition

The task of the National Data Science Bowl competition is to classify plankton animals from images, with an award of $170k. There are 30,336 labeled grayscale images in 121 classes, and 130,400 test images. Since the test set is private, we divide the training set into two parts: 25,000 images for training and 5,336 images for validation. The competition uses the multi-class log-loss to evaluate classification performance.

^3 CIFAR-100 reproduction code: https://github.com/dmlc/mxnet/blob/master/example/notebooks/cifar-100.ipynb

Table 2. National Data Science Bowl Competition network. All layers are convolutional layers if not otherwise specified. Each convolutional layer is followed by an activation function.

# 4. Result and Discussion
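As a quick illustration of the competition's evaluation metric, a sketch of multi-class log-loss (the standard definition; the clipping constant is our own guard against log(0)):

```python
import numpy as np

def multiclass_log_loss(y_true, probs, eps=1e-15):
    """Mean negative log-probability of the true class.

    y_true: int array of shape (n,) with class indices in [0, k).
    probs:  float array of shape (n, k); rows should sum to 1.
    """
    probs = np.clip(probs, eps, 1 - eps)
    probs = probs / probs.sum(axis=1, keepdims=True)  # renormalize after clipping
    return -np.mean(np.log(probs[np.arange(len(y_true)), y_true]))

y = np.array([0, 2, 1])
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.1, 0.8],
              [0.3, 0.6, 0.1]])
print(multiclass_log_loss(y, p))  # ~0.3635
```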
1505.00853#11
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
11
Since brushstrokes provide a signature that can help identify the artist, designing visual features that encode brushstrokes has been widely adopted (e.g. [25, 18, 22, 15, 6, 19]). Typically, texture statistics are used for that purpose. However, as mentioned earlier, texture features are highly affected by the digitization resolution. Researchers have also investigated the use of features based on local edge orientation histograms, such as SIFT [21] and HOG [10]. For example, [12] used SIFT features within a bag-of-words pipeline to discriminate among a set of eight artists. Arora et al. [3] presented a comparative study for the task of style classification, which evaluated low-level features, such as SIFT and Color SIFT [1], against semantic-level features, namely Classemes [29], which encode object presence in the image. It was found that semantic-level features significantly outperform low-level features for this task. However, the evaluation was conducted on a small dataset of 7 styles, with 70 paintings in each style. Carneiro et al. [9] also concluded that low-level texture and color features are not effective, because of the inconsistent color and texture patterns that describe the visual classes in paintings.
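For context, a hedged sketch of extracting a HOG descriptor of the kind cited above [10], using scikit-image; the parameter values are common defaults, not the settings used in the cited works:

```python
import numpy as np
from skimage.feature import hog

# A stand-in grayscale "painting"; in practice this would be a
# digitized artwork resized to a fixed resolution.
image = np.random.rand(128, 128)

descriptor = hog(
    image,
    orientations=9,          # bins of the edge-orientation histogram
    pixels_per_cell=(8, 8),  # local cells over which gradients are pooled
    cells_per_block=(2, 2),  # blocks used for contrast normalization
)
print(descriptor.shape)      # (8100,) = 15*15 blocks * 2*2 cells * 9 bins
```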
1505.00855#11
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
12
# 4. Result and Discussion

Tables 3 and 4 show the results on the CIFAR-10 and CIFAR-100 datasets, respectively. Table 5 shows the NDSB result. We use the ReLU network as the baseline and compare its convergence curves with those of the other three activations pairwise in Figs. 2, 3 and 4, respectively. All three leaky ReLU variants are better than the baseline on the test set. We have the following observations based on our experiments:

1. Not surprisingly, we find that the performance of the normal leaky ReLU (a = 100) is similar to that of ReLU, but the very leaky ReLU with a larger negative slope (a = 5.5) is much better.

2. On the training set, the error of PReLU is always the lowest, and the errors of Leaky ReLU and RReLU are higher than that of ReLU. This indicates that PReLU may suffer from a severe overfitting issue on small-scale datasets.

^4 Winning documentation of AuroraXie: https://github.com/auroraxie/Kaggle-NDSB
1505.00853#12
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
12
More recently, Saleh et al. [26] used metric learning approaches for finding influence paths between painters based on their paintings. They evaluated three metric learning approaches to optimize a metric over low-level HOG features. In contrast to that work, the evaluation presented in this paper is much wider in scope, since we address three tasks (style, genre and artist prediction), we cover features spanning from low-level to semantic-level, and we evaluate five metric learning approaches. Moreover, the dataset of [26] has only 1,710 images from 66 artists, while we conducted our experiments on 81,449 images painted by 1,119 artists. Bar et al. [4] proposed an approach for style classification based on features obtained from a convolutional neural network pre-trained on an image categorization task. In contrast, we show that we can achieve better results with much lower-dimensional features that are directly optimized for style and genre classification. Lower dimensionality of the features is preferred for indexing large image collections.

# 3 Methodology

In this section we explain the methodology that we follow to find the most appropriate combination of visual features and metrics that produce accurate similarity measurements.
1505.00855#12
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
13
reasons for their superior performance still lack rigorous theoretical justification. Also, how these activations perform on large-scale data still needs to be investigated. This is an open question worth pursuing in the future.

3. The superiority of RReLU is more significant than on CIFAR-10/CIFAR-100. We conjecture that this is because in the NDSB dataset the training set is smaller than that of CIFAR-10/CIFAR-100, while the network we use is even bigger. This validates the effectiveness of RReLU in combating overfitting.

4. For RReLU, we still need to investigate how the randomness influences the network training and testing process.

# Acknowledgement

We would like to thank Jason Rolfe from D-Wave Systems for helpful discussion on the test network for randomized leaky ReLU.

# References

Girshick, Ross, Donahue, Jeff, Darrell, Trevor, and Malik, Jitendra. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pp. 580–587, 2014.
1505.00853#13
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
13
[Fig. 2: Illustration of our second methodology — Feature Fusion.]

We acquire these measurements to mimic the art historian's ability to categorize paintings based on their style, genre and the artist who made them. In the first step, we extract visual features from the image. These visual features range from low-level (e.g. edges) to high-level (e.g. objects in the painting). More importantly, in the next step we learn how to adjust these features for different classification tasks by learning the appropriate metrics. Given the learned metric, we are able to project paintings from a high-dimensional space of raw visual information to a meaningful space with much lower dimensionality. Additionally, learning a classifier in this low-dimensional space can easily scale up to large collections.
1505.00855#13
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
14
Table 3. Error rate on CIFAR-10 (Network in Network) with different activation functions:

| Activation | Training Error | Test Error |
|---|---|---|
| ReLU | 0.00318 | 0.1245 |
| Leaky ReLU, a = 100 | 0.0031 | 0.1266 |
| Leaky ReLU, a = 5.5 | 0.00362 | 0.1120 |
| PReLU | 0.00178 | 0.1179 |
| RReLU (y_ji = x_ji / ((l+u)/2)) | 0.00550 | 0.1119 |

Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, JMLR W&CP volume 15, pp. 315–323, 2011.

Table 4. Error rate on CIFAR-100 (Network in Network) with different activation functions:

| Activation | Training Error | Test Error |
|---|---|---|
| ReLU | 0.1356 | 0.429 |
| Leaky ReLU, a = 100 | 0.11552 | 0.4205 |
| Leaky ReLU, a = 5.5 | 0.08536 | 0.4042 |
| PReLU | 0.0633 | 0.4163 |
| RReLU (y_ji = x_ji / ((l+u)/2)) | 0.1141 | 0.4025 |
1505.00853#14
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
14
In the rest of this section, we first introduce our collection of fine-art paintings and explain the tasks that we target in this work. We then explore the methodologies that we consider for finding the most accurate system for the aforementioned tasks. Finally, we explain the different types of visual features that we use to represent images of paintings, and discuss the metric learning approaches that we applied to find the proper notion of similarity between paintings.

# 3.1 Dataset and Proposed Tasks

In order to gather our collection of fine-art paintings, we used the publicly available "Wikiart paintings" dataset^4, which, to the best of our knowledge, is the largest online public collection of digitized artworks. This collection has images of 81,449 fine-art paintings from 1,119 artists, ranging from the fifteenth century to contemporary artists. These paintings come from 27 different styles (Abstract, Byzantine, Baroque, etc.) and 45 different genres (Interior, Landscape, etc.). Previous work [26, 9] used different resources and built smaller collections with limited variability in terms of style, genre and artists. The work of [4] is the closest to ours in terms of data collection procedure, but the number of images in their collection is half of ours.
1505.00855#14
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
15
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, pp. 346–361, 2014.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv preprint arXiv:1502.01852, 2015.

Table 5. Multi-class log-loss of the NDSB network with different activation functions:

| Activation | Train Log-Loss | Val Log-Loss |
|---|---|---|
| ReLU | 0.8092 | 0.7727 |
| Leaky ReLU, a = 100 | 0.7846 | 0.7601 |
| Leaky ReLU, a = 5.5 | 0.7831 | 0.7391 |
| PReLU | 0.7187 | 0.7454 |
| RReLU (y_ji = x_ji / ((l+u)/2)) | 0.8090 | 0.7292 |

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
1505.00853#15
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
15
We target the automatic classification of paintings based on their style, genre and artist, using visual features that are automatically extracted by computer vision algorithms. Each of these tasks has its own challenges and limitations. For example, there are large variations in visual appearance among paintings of one specific style. However, this variation is much more limited for paintings by one artist. These larger intra-class variations suggest that style classification based on visual features is more challenging than artist classification. For each of the tasks we selected a subset of the data that ensures enough samples for training and testing. In particular, for style classification we use a subset of the data with 27 styles, where each style has at least 1,500 paintings, with no restriction on genre or artist, for a total of 78,449 images. For genre classification we use a subset with 10 genre classes, where each genre has at least 1,500 paintings, with no restriction on style or artist, for a total of 63,691 images. Similarly, for artist classification we use a subset of 23 artists, where each of them has at least 500 paintings, for a total of 18,599 images. Table 1 lists the sets of style, genre, and artist labels.

^4 http://www.wikiart.org/
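A small sketch of the subset-selection rule described above; the record layout (dicts with style/genre/artist keys) is hypothetical:

```python
from collections import Counter

def select_subset(records, label_key, min_count):
    """Keep only records whose class has at least `min_count` examples.

    records: list of dicts, e.g. {"style": "Baroque", "genre": ..., "artist": ...}
    """
    counts = Counter(r[label_key] for r in records)
    kept_labels = {label for label, n in counts.items() if n >= min_count}
    return [r for r in records if r[label_key] in kept_labels]

# Per the text: styles and genres need >= 1500 paintings, artists >= 500.
# style_subset  = select_subset(paintings, "style", 1500)   # 27 styles
# genre_subset  = select_subset(paintings, "genre", 1500)   # 10 genres
# artist_subset = select_subset(paintings, "artist", 500)   # 23 artists
```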
1505.00855#15
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
16
Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images. Computer Science Department, University of Toronto, Tech. Rep, 1(4):7, 2009.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012.

Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Maas, Andrew L, Hannun, Awni Y, and Ng, Andrew Y. Rectifier nonlinearities improve neural network acoustic models. In ICML, volume 30, 2013.

Table 5. Multi-class log-loss of the NDSB network with different activation functions.

# 5. Conclusion

In this paper, we analyzed four rectified activation functions using various network architectures on three datasets. Our findings strongly suggest that the most popular activation function, ReLU, is not the end of the story: three types of (modified) leaky ReLU all consistently outperform the original ReLU. However, the
1505.00853#16
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
16
# 3.2 Classification Methodology

In order to classify paintings based on their style, genre or artist, we followed three methodologies.

Metric learning: First, as depicted in Figure 1, we extract visual features from images of paintings. For each of the prediction tasks, we learn a similarity metric optimized for it, i.e. a style-optimized metric, a genre-optimized metric and an artist-optimized metric. Each metric induces a projection to a corresponding feature space optimized for the corresponding task. Having learned the metric, we project the raw visual features into the new optimized feature space and learn classifiers for the corresponding prediction task. For that purpose we learn a set of one-vs-all SVM classifiers for each of the labels in Table 1 for each of the tasks. While our first strategy focuses on classification based on combinations of a metric and a visual feature, the next two methodologies fuse different features or different metrics.
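A hedged scikit-learn sketch of this first methodology; Neighborhood Components Analysis stands in for the metric learning step (the five approaches actually evaluated are not named in this chunk), followed by one-vs-all linear SVMs on the projected features:

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))    # raw visual features (placeholder data)
y = rng.integers(0, 3, size=300)  # task labels, e.g. style classes

# Learn a task-optimized metric; its linear map projects features
# into a lower-dimensional space tuned for this prediction task.
nca = NeighborhoodComponentsAnalysis(n_components=16, random_state=0)
X_proj = nca.fit_transform(X, y)

# One-vs-all SVM classifiers on the projected features.
clf = OneVsRestClassifier(LinearSVC()).fit(X_proj, y)
print(clf.predict(nca.transform(X[:5])))
```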
1505.00855#16
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
17
[Figure 2: Convergence curves for training and test sets of different activations on CIFAR-10 Network in Network. Panels compare ReLU against Leaky ReLU (a = 100), Leaky ReLU (a = 5.5), PReLU, and RReLU [3, 8], with train/val error plotted over epochs.]
1505.00853#17
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings cast doubt on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting, and are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
17
Feature fusion: The second methodology that we used for classification is depicted in Figure 2. In this case, we extract different types of visual features (four types, as will be explained next). Based on the prediction task (e.g. style), we learn a metric for each type of feature as before. After projecting these features separately, we concatenate them to form the final feature vector. The classification is then based on training classifiers on these final features. This feature fusion is important, as we want to capture different types of visual information by using different types of features. Concatenating all features together and learning a metric on top of this huge feature vector would be computationally intractable. Because of this, we learn the metrics on each feature separately, and after projecting the features with these metrics, we concatenate them for classification purposes.
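A matching sketch of the feature-fusion step, with one learned projection per feature type (again NCA as a stand-in metric learner; dimensions are illustrative):

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis

rng = np.random.default_rng(1)
y = rng.integers(0, 3, size=300)
# Four feature types per painting, each with its own dimensionality.
feature_banks = [rng.normal(size=(300, d)) for d in (64, 128, 32, 256)]

# Learn one metric per feature type, project separately, then concatenate.
projections = [
    NeighborhoodComponentsAnalysis(n_components=16, random_state=0).fit(F, y)
    for F in feature_banks
]
fused = np.hstack([p.transform(F) for p, F in zip(projections, feature_banks)])
print(fused.shape)  # (300, 64): 4 feature types x 16 projected dims each
```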
1505.00855#17
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, and can benefit higher-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features of paintings and find the best approach to learning a similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related, semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
18
[Figure 3: Convergence curves for training and test sets of different activations on CIFAR-100 Network in Network. Panels compare train/validation error over epochs for ReLU, PReLU, and RReLU [3,8].]
1505.00853#18
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on standard image classification tasks. Our experiments suggest that incorporating a non-zero slope for the negative part of rectified activation units can consistently improve the results. Our findings thus contradict the common belief that sparsity is the key to ReLU's good performance. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting and are not as effective as their randomized counterpart. By using RReLU, we achieved 75.68\% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
18
Metric-fusion: The third methodology (figure 3) projects each visual feature using multiple metrics (in our experiments, five metrics, as will be explained next) and then fuses the resulting optimized feature spaces to obtain a final feature vector for classification. This is an important strategy because each of the metric learning approaches uses a different criterion to learn the similarity measure. By learning all metrics individually (on the same type of feature), we make sure that all criteria are taken into account (e.g. information theory along with neighborhood analysis); a sketch of this procedure is given below. # Fig. 3: Illustration of our third methodology – Metric Fusion. # 3.3 Visual Features
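Returning to the metric-fusion methodology above (before section 3.3 resumes), here is a minimal sketch under the assumption that the metric-learn package is used; the paper relies on the authors' original implementations instead, so class names and defaults here are assumptions about that package's API.

```python
# Sketch of metric fusion: project one feature type with several learned
# metrics (each with its own criterion) and fuse the projections.
import numpy as np
from metric_learn import ITML_Supervised, LMNN, MLKR, NCA

def metric_fusion(X, y):
    learners = [LMNN(), NCA(), MLKR(), ITML_Supervised()]
    projections = []
    for learner in learners:
        learner.fit(X, y)                   # a different criterion per metric
        projections.append(learner.transform(X))
    return np.hstack(projections)           # fused vector for classification
```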
1505.00855#18
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
19
[Figure 4: Convergence curves for training and test sets of different activations on NDSB Net. Panels compare train/validation error over 300 epochs for ReLU, Leaky ReLU, and RReLU [3,8].]

Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted Boltzmann machines. In ICML, pp. 807–814, 2010.

Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
1505.00853#19
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural network: standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear units (RReLU). We evaluate these activation function on standard image classification task. Our experiments suggest that incorporating a non-zero slope for negative part in rectified activation units could consistently improve the results. Thus our findings are negative on the common belief that sparsity is the key of good performance in ReLU. Moreover, on small scale dataset, using deterministic negative slope or learning it are both prone to overfitting. They are not as effective as using their randomized counterpart. By using RReLU, we achieved 75.68\% accuracy on CIFAR-100 test set without multiple test or ensemble.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
19
Fig. 3: Illustration of our third methodology – Metric Fusion. # 3.3 Visual Features Visual features in the computer vision literature are either engineered and extracted in an unsupervised way (e.g. HOG, GIST) or learned by optimizing a specific task, typically categorization of objects or scenes (e.g. CNN-based features). This results in high-dimensional feature vectors that might not necessarily correspond to nameable (semantic-level) characteristics of an image. Based on their interpretability, visual features can be categorized into low-level and high-level. Low-level features are visual descriptors whose individual dimensions carry no explicit meaning, while high-level visual features are designed to capture some notions (usually objects). For this work, we investigated some state-of-the-art representatives of these two categories: Low-level Features: On one hand, in order to capture low-level visual information we extracted GIST features [23], which are holistic features designed for scene categorization. GIST features provide a 512-dimensional real-valued representation that implicitly captures the dominant spatial structure of the image.
1505.00855#19
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00853
20
Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015. doi: 10.1007/s11263-015-0816-y.

Wang, Naiyan, Li, Siyi, Gupta, Abhinav, and Yeung, Dit-Yan. Transferring rich feature hierarchies for robust visual tracking. arXiv preprint arXiv:1501.04587, 2015.

Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
1505.00853#20
Empirical Evaluation of Rectified Activations in Convolutional Network
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on standard image classification tasks. Our experiments suggest that incorporating a non-zero slope for the negative part of rectified activation units can consistently improve the results. Our findings thus contradict the common belief that sparsity is the key to ReLU's good performance. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting and are not as effective as their randomized counterpart. By using RReLU, we achieved 75.68\% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
http://arxiv.org/pdf/1505.00853
Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20150505
20151127
[ { "id": "1502.03167" }, { "id": "1501.04587" }, { "id": "1502.01852" } ]
1505.00855
20
Learned Semantic-level Features: On the other hand, for the purpose of semantic representation of the images, we extracted three object-based representations: Classeme [29], Picodes [8], and CNN-based features [16]. In all three features, each element of the feature vector represents the confidence of the presence of an object category in the image, so they provide a semantic encoding of the images. However, the object categories used to learn these features are generic and not art-specific. The first two features are designed to capture the presence of a set of basic-level object categories as follows: a list of entry-level categories (e.g. horse and cross) is used to download a large collection of images from the web. For each image a comprehensive set of low-level visual features is extracted, and one classifier is learned per category. For a given test image, these classifiers are applied to the image and their responses (confidences) form the final feature vector; a sketch of this construction is given below. We followed the implementation of [7] and for each image extracted a 2659-dimensional real-valued Classeme feature vector and a 2048-dimensional binary-valued Picodes feature vector.
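As a toy sketch of this Classeme-style construction (the actual implementation is [7]; the web-image inputs and helper names here are hypothetical):

```python
# One classifier per entry-level category, trained on web images;
# the stacked confidences form the semantic image descriptor.
import numpy as np
from sklearn.svm import LinearSVC

def train_category_bank(X_web, y_web, n_categories):
    bank = []
    for c in range(n_categories):
        clf = LinearSVC().fit(X_web, (y_web == c).astype(int))
        bank.append(clf)
    return bank

def classeme_features(bank, X):
    # One dimension per category: the confidence that the category is present.
    return np.column_stack([clf.decision_function(X) for clf in bank])
```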
1505.00855#20
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
22
Fig. 4: PCA coefficients for CNN features. The output of these fully connected layers achieves superior performance for the task of style classification of paintings. Following this observation we used the last layer of a pre-trained CNN [16] (1000-dimensional real-valued vectors) as another feature vector. # 3.4 Metric Learning The purpose of metric learning is to find a pairwise real-valued function d_M(x, x') that is non-negative, symmetric, obeys the triangle inequality, and returns zero if and only if x and x' are the same point. Training such a function in its general form can be cast as the following optimization problem: min_M l(M, D) + λ R(M) (1) This optimization has two sides: it minimizes the loss l(M, D) of the metric M over the data samples D, while the regularization term R(M) adjusts the model. The first term measures the accuracy of the trained metric; the second estimates its generalization to new data and avoids overfitting. Depending on the enforced constraints, the resulting metric can be linear or non-linear, and depending on the amount of labeled data used for training, it can be supervised or unsupervised.
1505.00855#22
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
23
For consistency across the metric learning algorithms, we first fix the notation. We learn the matrix M that is used in the generalized Mahalanobis distance: d_M(x, x') = sqrt((x − x')^T M (x − x')), where M is by definition a positive semi-definite matrix and can be decomposed as M = G^T G. We use this matrix G to project the raw visual features; measuring similarity in the projected space then reduces to computing the Euclidean distance between two items (see the sketch below). Interestingly, we can also reduce the feature dimensionality while learning the metric when M is a low-rank matrix. More importantly, the ground-truth annotations associated with the paintings carry significant information that we use to learn a more reliable metric in a supervised fashion, in both the linear and non-linear cases. We consider the following approaches, which differ in the form of M or in the amount of regularization.
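A minimal sketch of this equivalence, assuming M is strictly positive definite so a Cholesky factor exists (for a merely PSD M one would use an eigendecomposition instead):

```python
# d_M(x, x') equals the Euclidean distance after projecting with G,
# where M = G^T G.
import numpy as np

def mahalanobis_distance(x, xp, M):
    d = x - xp
    return np.sqrt(d @ M @ d)

def projection_from_metric(M):
    L = np.linalg.cholesky(M)   # M = L L^T
    return L.T                  # G = L^T, so G^T G = L L^T = M

# ||G x - G x'|| then reproduces d_M(x, x').
```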
1505.00855#23
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
24
Fig. 5: Confusion matrix for Style classification. Confusions are meaningful only when seen in color. Neighborhood Component Analysis (NCA) The objective function of NCA is related to nearest-neighbor analysis. The idea is to project the data by the matrix M and train a leave-one-out classifier. The probability of correctly classifying x_i is then P_i = Σ_{j: y_j = y_i} P_{ij}, where P_{ij} is the probability of classifying x_i as a member of the class of x_j. The metric is learned by solving max_M Σ_i P_i; a sketch of this objective is given below. We can decompose M as L^T L, and choosing a rectangular L results in a low-rank matrix M. Although this method is easy to understand and implement, it is subject to local minima, due to the non-convexity of the proposed optimization problem. The next approach has the advantage of solving a convex optimization.
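Here is a minimal sketch of the NCA objective under a linear map L; a full implementation would maximize this by gradient ascent, and the softmax form of the neighbor probabilities follows the original NCA paper (an assumption about the exact form used here):

```python
# Sum over points of the probability of being correctly classified by a
# soft (leave-one-out) nearest-neighbor rule in the projected space.
import numpy as np
from scipy.spatial.distance import cdist

def nca_objective(L, X, y):
    Z = X @ L.T                                  # project the data
    D = cdist(Z, Z, 'sqeuclidean')
    np.fill_diagonal(D, np.inf)                  # a point never picks itself
    P = np.exp(-D)
    P /= P.sum(axis=1, keepdims=True)            # P_ij: soft neighbor choice
    same_class = (y[:, None] == y[None, :])
    return (P * same_class).sum()                # sum_i P_i, to be maximized
```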
1505.00855#24
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
25
Large Margin Nearest Neighbors (LMNN) LMNN [32] is an approach for learning a Mahalanobis distance, widely used because of its globally optimal solution and superior performance in practice. Learning this metric involves a set of constraints, all of which are defined locally: LMNN enforces that the k nearest neighbors of any training instance belong to the same class (these instances are called "target neighbors"), while instances of other classes, referred to as "impostors", should be far from this point. Target neighbors are found using the Euclidean distance between pairs of samples, resulting in the following formulation:

min_M (1 − µ) Σ_{(x_i, x_j) ∈ T} d²_M(x_i, x_j) + µ Σ_{i,j,k} η_{i,j,k}

s.t.: d²_M(x_i, x_k) − d²_M(x_i, x_j) ≥ 1 − η_{i,j,k}, ∀(x_i, x_j, x_k) ∈ I.
1505.00855#25
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
26
Here T stands for the set of target neighbors and I represents the impostors. Since these constraints are defined locally, the optimization is convex and admits a global solution; a sketch of the resulting objective is given below. This metric learning approach is related in principle to Support Vector Machines (SVM), which theoretically motivates using it alongside SVMs for classification. Due to the popularity of LMNN, different variations of it have been introduced, including a non-linear version called gb-LMNN [32], which we used in our experiments
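A sketch of the corresponding objective for a fixed candidate metric M, with the target-neighbor pairs and impostor triples assumed to be precomputed (at the optimum the slack η_{i,j,k} equals the hinge term below):

```python
# LMNN objective: pull target neighbors close, push impostors beyond the
# unit margin; mu trades off the two terms.
import numpy as np

def lmnn_objective(M, X, target_pairs, impostor_triples, mu=0.5):
    def d2(i, j):
        v = X[i] - X[j]
        return v @ M @ v
    pull = sum(d2(i, j) for i, j in target_pairs)
    push = sum(max(0.0, 1.0 + d2(i, j) - d2(i, k))
               for i, j, k in impostor_triples)
    return (1.0 - mu) * pull + mu * push
```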
1505.00855#26
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
27
Fig. 6: Confusion matrix for Genre classification. Confusions are meaningful only when seen in color. as well. However, its performance on the classification tasks was worse than that of linear LMNN. We suspect this poor performance is rooted in the nature of the visual features that we extract for paintings. Boost Metric This approach is based on the fact that a positive semi-definite matrix can be decomposed into a linear combination of trace-one rank-one matrices. Shen et al. [27] use this fact and, instead of learning M directly, find a set of weaker metrics whose combination gives the final metric. They treat each of these matrices as a weak learner, in the sense used in the boosting literature. The resulting algorithm applies the idea of AdaBoost to the Mahalanobis distance, which has been shown to be quite efficient in practice; a toy sketch of the combination step is given below. This method is of particular interest to us because we can learn an individual metric for each style of painting and finally merge these metrics into a unique final metric. Theoretically, the final metric can also perform well at finding similarities within each style/genre of paintings.
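A toy sketch of the decomposition BoostMetric exploits; the boosting loop that selects the rank-one weak learners is omitted, and the helper only forms the final combination:

```python
# The metric is a nonnegative combination of trace-one rank-one matrices
# Z_r = u_r u_r^T, each acting as a "weak learner".
import numpy as np

def combine_weak_metrics(directions, weights):
    d = directions[0].shape[0]
    M = np.zeros((d, d))
    for u, w in zip(directions, weights):
        u = u / np.linalg.norm(u)       # ensures trace(u u^T) = 1
        M += w * np.outer(u, u)         # PSD by construction (w >= 0)
    return M
```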
1505.00855#27
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
28
Information Theoretic Metric Learning (ITML) This metric learning algorithm is based on information theory rather than plain Mahalanobis distances; in other words, the optimization problem for learning the metric involves an information-theoretic measure. Davis et al. introduce the LogDet divergence D_ld(M', M) between two matrices M' and M (which can be interpreted as metrics) as a regularizer. Using this measure, learning the metric can be written as:

min_{M'} D_ld(M', M)

s.t.: d_{M'}(x_i, x_j) ≤ u, ∀(x_i, x_j) ∈ S

d_{M'}(x_i, x_j) ≥ v, ∀(x_i, x_j) ∈ D.

Learning ITML via this formulation aims to satisfy a set of similarity (S) and dissimilarity (D) constraints while keeping the new metric M' close to the initial metric M.
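For reference, a minimal sketch of the LogDet divergence used as the regularizer above, assuming both matrices are positive definite:

```python
# D_ld(A, B) = tr(A B^{-1}) - log det(A B^{-1}) - d
import numpy as np

def logdet_divergence(A, B):
    C = A @ np.linalg.inv(B)
    _, logdet = np.linalg.slogdet(C)     # log|det C|, stable for large d
    return np.trace(C) - logdet - A.shape[0]
```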
1505.00855#28
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
29
Fig. 7: Confusion matrix for Artist classification. Confusions are meaningful only when seen in color. There are two key features of the LogDet divergence: 1) it is finite if and only if the matrices are positive semi-definite (PSD); 2) it is rank-preserving. These properties imply that if we start learning the metric M' by setting the initial matrix M to the identity matrix I, ITML returns a metric of the same rank that is very similar to the Euclidean distance. Although this iterative process converges to a global minimum and performs well in practice, it is very sensitive to the choice of the initial metric M. Metric Learning for Kernel Regression (MLKR) Similar to the NCA objective, which minimizes the classification error, Weinberger and Tesauro [31] learn a metric by optimizing the leave-one-out error for the task of kernel regression. Kernel regression essentially needs proper distances between points, which are used to weight the sample data. MLKR learns this distance by minimizing the leave-one-out regression error on the training data; a sketch of this loss is given below. Although this metric learning method is designed for kernel regression, the resulting distance function can be used in a variety of tasks. # 4 Experiments # 4.1 Experimental Setting
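Returning to MLKR above, here is a minimal sketch of its leave-one-out loss under a linear map A with a Gaussian kernel (the kernel choice and helper name are assumptions; MLKR optimizes A by gradient descent on this quantity):

```python
# Leave-one-out kernel-regression error in the space projected by A.
import numpy as np
from scipy.spatial.distance import cdist

def mlkr_loss(A, X, y):
    Z = X @ A.T
    K = np.exp(-cdist(Z, Z, 'sqeuclidean'))
    np.fill_diagonal(K, 0.0)                 # leave each point out
    y_hat = (K @ y) / K.sum(axis=1)          # Nadaraya-Watson prediction
    return np.sum((y - y_hat) ** 2)
```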
1505.00855#29
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
30
# 4 Experiments # 4.1 Experimental Setting Visual Features As explained in section 3, we extract GIST features as low-level visual features, and Classeme, Picodes and CNN-based features as high-level semantic features. We followed the original implementation of Oliva and Torralba [23] to get a 512-dimensional feature vector. For Classeme and Picodes we used the implementation of Bergamo et al. [29], resulting in 2659-dimensional Classeme features and 2048-dimensional Picodes features. We used the implementation of Vedaldi and Lenc [30] to extract 1000-dimensional feature vectors from the last layer of the CNN. Object-based representations of the images produce feature vectors of much higher dimensionality than GIST descriptors. For the sake of a fair comparison of all feature types in metric learning, we transformed all feature vectors to the same size as GIST (512 dimensions). We did this by applying Principal Component Analysis (PCA) to each feature type and projecting the original features onto the first 512 eigenvectors; a sketch of this step is given below. [Table 2: Accuracy for the task of style classification; rows Baseline, Boost, ITML, LMNN, MLKR, NCA vs. columns GIST, Classemes, Picodes, CNN, Dim. (cell values not recovered).]
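A sketch of this PCA reduction step with scikit-learn; the 512-component target matches the GIST dimensionality, and the variance check mirrors the analysis reported in the next chunk:

```python
# Project each feature type onto its first 512 principal components.
from sklearn.decomposition import PCA

def reduce_to_512(X):
    pca = PCA(n_components=512)
    X_red = pca.fit_transform(X)
    # Fraction of variance retained; the paper reports the first 500
    # components carry ~96% of the total for CNN features.
    retained = pca.explained_variance_ratio_.sum()
    return X_red, retained
```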
1505.00855#30
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
31
[Table 2: Accuracy for the task of style classification; rows Baseline, Boost, ITML, LMNN, MLKR, NCA vs. columns GIST, Classemes, Picodes, CNN, Dim. (cell values not recovered).] 512 eigenvectors (those with the largest eigenvalues). To verify the quality of the projection, we looked at the coefficients corresponding to the eigenvalues of the PCA projections. Independent of the feature type, the value of these coefficients drops significantly after the first 500 eigenvectors. For example, figure 4 plots these PCA coefficients for the CNN features; the sum of the first 500 coefficients accounts for 95.88% of the total. This shows that our projections (onto 512 eigenvectors) capture the true underlying space of the original features. Using these reduced features speeds up the metric learning process as well.
1505.00855#31
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
32
Metric Learning We used the implementation of [32] to learn the LMNN metric (both the linear and non-linear versions) and MLKR.5 For BoostMetric we slightly adjusted the implementation of [27]. For NCA we adapted the implementation by Fowlkes6 to work smoothly on large-scale feature vectors. For ITML, we followed the authors' original implementation with the default settings. For the remaining methods, parameters were chosen through a grid search that minimizes the nearest-neighbor classification error. Regarding training time, learning the ITML metric was the fastest, and learning NCA and LMNN were the slowest. Due to computational constraints, we set the parameters of the LMNN metric to reduce the feature dimensionality to 100. The NCA metric reduces the feature dimensionality to the number of categories for each task: 27 for style classification, 23 for artist classification and 10 for genre classification. We randomly picked 3000 samples for metric learning; these samples follow the same distribution as the original data and are not used in the classification experiments. # 4.2 Classification Experiments
1505.00855#32
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
33
# 4.2 Classification Experiments For the purpose of metric learning, we conducted experiments with labels for three different tasks: style, genre and artist prediction. In the following sections we investigate the performance of these metrics, on different features, for classifying the aforementioned concepts. We learned all the metrics of section 3 for all 27 styles of paintings in our dataset (e.g. Expressionism, Realism, etc.). However, we did not use all the genres for learning metrics. In fact, our dataset contains 45 genres, some of which have fewer than 20 images; this makes metric learning impractical and highly biased toward the genres

5 http://www.cse.wustl.edu/~kilian/index.html
6 http://www.ics.uci.edu/~fowlkes/

[Table 3: Accuracy for the task of genre classification; rows Baseline, Boost, ITML, LMNN, MLKR, NCA vs. columns GIST, Classemes, Picodes, CNN, Dim. (cell values not recovered).]
1505.00855#33
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
34
[Table 3: Accuracy for the task of genre classification; rows Baseline, Boost, ITML, LMNN, MLKR, NCA vs. columns GIST, Classemes, Picodes, CNN, Dim. (cell values not recovered).] with a larger number of paintings. Because of this issue, we focus on the 10 genres with more than 1500 paintings each; these genres are listed in table 1. In all experiments we conducted 3-fold cross-validation and report the average accuracy over all partitions. We found the best value for the SVM penalty term (C = 10) by 3-fold cross-validation; a sketch of this protocol is given below. In the next three sections, we explain the settings and findings for each task independently.
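A sketch of this evaluation protocol with one-vs-all linear SVMs and the reported C = 10 (loading of the projected features is assumed):

```python
# 3-fold cross-validated accuracy of one-vs-rest linear SVMs.
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def evaluate(X, y):
    clf = LinearSVC(C=10)                    # one-vs-rest by default
    scores = cross_val_score(clf, X, y, cv=3)
    return scores.mean()                     # average accuracy over folds
```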
1505.00855#34
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
35
Style Classification Table 2 reports the accuracy (percentage) of style classification (SVM) after applying different metrics to a set of features. Columns correspond to different features and rows to the different metrics used to project the features before learning the style classifiers. In order to quantify the improvement from learning similarity metrics, we conducted a baseline experiment (first row of the table): for each feature type, we learn a set of one-vs-all classifiers on the raw feature vectors. Overall, the Boost metric and ITML give the highest accuracy for style classification across the different visual features, but the greatest improvement over the baseline is obtained by applying the Boost metric to Classeme features. We visualized the confusion matrix for style classification with the Boost metric learned on Classeme features.
1505.00855#35
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
36
Figure 5 shows this matrix, where red represents higher values. Further analysis of some of the confusions captured in this matrix leads to interesting findings; we discuss several cases in the rest of this paragraph. First, we found a large confusion between "Abstract Expressionism" (first row) and "Action painting" (second column). Art historians confirm that this confusion is meaningful and somewhat expected: "Action painting" is a type or subgenre of "Abstract Expressionism", characterized by paintings created through a much more active process: drips, flung paint, stepping on the canvas.
1505.00855#36
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
37
Another confusion occurs between "Expressionism" (column 10) and "Fauvism" (row 11), which is expected based on the art history literature. "Mannerism" (row 14) is a style of art from the late "Renaissance" (column 12), whose works show unusual effects of scale and are less naturalistic than the "Early Renaissance". This similarity between "Mannerism" (row 14) and "Renaissance" (column 12) is captured by our system as well, where it results in confusion during style classification. "Minimalism" (column 15) and "Color Field painting" (6th row) are mostly confused with each other; we can agree with this finding by looking at members of these styles and noting their similarity in terms of simple forms and distributions of color. Lastly, some of the confusions are completely acceptable based on the origins of these styles (art movements) that are [Table 4: Accuracy for the task of artist classification; rows Baseline, Boost, ITML, LMNN, MLKR, NCA vs. columns GIST, Classemes, Picodes, CNN, Dim. (cell values not recovered).]
1505.00855#37
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
38
[Table 4: Accuracy for the task of artist classification; rows Baseline, Boost, ITML, LMNN, MLKR, NCA vs. columns GIST, Classemes, Picodes, CNN, Dim. (cell values not recovered).] noted in the art history literature. For example, "Renaissance" (column 18) and "Early Renaissance" (row 9); "Post-Impressionism" (column 21) and "Impressionism" (row 13); "Cubism" (8th row) and "Synthetic Cubism" (column 26). Synthetic Cubism is the later phase of Cubism, with more color and continued use of collage and pasted papers, but less linear perspective than Cubism.
1505.00855#38
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
39
Genre Classification We narrowed down the list of all genres in our dataset (45 in total) to obtain a reasonable number of samples per genre (the 10 selected genres are listed in table 1). We trained ten one-vs-all SVM classifiers and compare their performance in Table 3, where columns represent the different features and rows the different metrics used to compute distances. As table 3 shows, we achieved the best genre classification performance by learning the Boost metric on top of Classeme features. Overall, these classifiers perform better than those trained for style classification, which is expected since there are fewer genres than styles in our dataset.
1505.00855#39
Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
http://arxiv.org/pdf/1505.00855
Babak Saleh, Ahmed Elgammal
cs.CV, cs.IR, cs.LG, cs.MM
21 pages
null
cs.CV
20150505
20150505
[]
1505.00855
40
Figure 6 shows the confusion matrix for genre classification with the Boost metric learned on Classeme features. Investigating the confusions in this matrix reveals interesting results. For example, our system confuses "Landscape" (5th row) with "Cityscape" (2nd column) and "Genre painting" (3rd column). This confusion is expected, however, as art historians can find common elements in these genres. On one hand, "Landscape" paintings usually show rivers, mountains and valleys, with no significant figures in them, and are frequently very similar to "Genre paintings", which capture daily life; the difference lies in the fact that, unlike "Genre paintings", "Landscape" paintings are idealized. On the other hand, "Landscape" and "Cityscape" paintings are very similar as both depict open space and use realistic color tonalities.
Artist Classification. For the task of artist classification, we trained one-vs-all SVM classifiers, one for each of the 23 artists. For each test image, we determine its artist by finding the classifier that produces the maximum confidence (a sketch of this prediction rule follows Table 5). Table 4 shows the performance of different combinations of features and metrics for this task. In general, learning the Boost metric improves artist classification more than the other metrics, except in the case of CNN features, where the ITML metric achieves the best performance. We plot the confusion matrix for this classification task in Figure 7; some of the confusions between artists are clearly reasonable.

Table 5: Classification performance for the metric fusion methodology (features: GIST, Classemes, PiCoDes, CNN; tasks: Style, Genre, Artist). Only one column of values is recoverable from the source: Style 21.99, Genre 47.05, Artist 33.62.
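Below is a minimal sketch of the one-vs-all prediction scheme described above. The paper does not state which SVM implementation or hyperparameters it uses, so scikit-learn's LinearSVC and the default regularization are assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_one_vs_all(X_train, y_train, n_classes):
    """Train one binary SVM per artist (class c vs. the rest)."""
    classifiers = []
    for c in range(n_classes):
        clf = LinearSVC(C=1.0)
        clf.fit(X_train, (y_train == c).astype(int))
        classifiers.append(clf)
    return classifiers

def predict_artist(classifiers, X_test):
    """Assign each painting to the classifier with maximum confidence."""
    # decision_function gives a signed distance to the separating hyperplane,
    # used here as the confidence score.
    scores = np.stack([clf.decision_function(X_test) for clf in classifiers], axis=1)
    return scores.argmax(axis=1)
```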
We investigated two cases. First, "Claude Monet" (5th row) and "Camille Pissarro" (3rd column): both are Impressionist artists who lived in the late nineteenth and early twentieth centuries. Interestingly, according to the art history literature, Monet and Pissarro became friends when they both attended the "Académie Suisse" in Paris; this long-lasting friendship resulted in some noticeable interactions between them. Second, paintings by "Childe Hassam" (4th row) are mostly confused with ones by "Monet" (5th column). This confusion is understandable, as Hassam was an American Impressionist who declared himself influenced by the French Impressionists; he called himself an "Extreme Impressionist" and painted some flag-themed artworks similar to Monet's. Looking at the performances reported in Tables 2-4, we conclude that all three classification tasks can benefit from learning
the appropriate metric. This means that we can improve the accuracy of the baseline classification by learning metrics, independent of the type of visual feature or the concept on which we classify paintings. Experimental results show that, independent of the task, the NCA and MLKR approaches perform worse than the other metrics. Additionally, the Boost metric always gives the best or second-best results for all classification tasks.
Regarding the analysis of feature importance, we verify that Classeme and PiCoDes features are better image representations for classification purposes. Based on these classification experiments, we claim that Classemes and PiCoDes features perform better than CNN features. This is rooted in the fact that the amount of supervision used to train Classemes and PiCoDes exceeds that used for CNN training. Also, unlike Classemes and PiCoDes, the CNN feature is designed to categorize the object inside a given bounding box; in the case of paintings, however, we cannot assume that bounding boxes around the objects are given.
Integration of Features and Metrics. We have investigated the performance of different metric learning approaches and visual features individually. In the next step, we find the best performance for the aforementioned classification tasks by combining different visual features. Toward this goal, we followed two strategies. First, for a given metric, we project each type of visual feature with that metric and concatenate the projected features (a sketch of this follows). Second, we fix the type of visual feature, project it with each of the different metrics, and concatenate these projections. Given these larger feature vectors (from either strategy), we train SVM classifiers for the three tasks of style, genre, and artist classification. Table 6 shows the results for the first strategy, and Table 5 shows the results for the second. In general, we obtain better results by fixing the metric and concatenating the projected feature vectors (the first strategy).
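A sketch of the first (feature fusion) strategy is shown below. It assumes each learned metric is given as a linear projection matrix L, so that distances are Euclidean after projection (as in LMNN); the variable names are illustrative, not the paper's:

```python
import numpy as np
from sklearn.svm import LinearSVC

def fuse_features(feature_blocks, projections):
    """Project each feature block with its learned metric and concatenate.

    feature_blocks: list of (n_samples, d_i) arrays (GIST, Classemes, ...).
    projections:    list of (k, d_i) matrices L learned per feature type,
                    so that d(x, y) = ||L x - L y|| is the learned metric.
    """
    projected = [X @ L.T for X, L in zip(feature_blocks, projections)]
    return np.concatenate(projected, axis=1)  # (n_samples, sum of k's)

# Usage: four 100-dimensional projections give a 400-dimensional fused vector.
# X_fused = fuse_features([gist, classemes, picodes, cnn], learned_Ls)
# clf = LinearSVC().fit(X_fused, style_labels)
```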
Concept / Metric | Boost | ITML  | LMNN  | MLKR  | NCA
Style            | 41.74 | 45.05 | 45.97 | 38.91 | 40.61
Genre            | 58.51 | 60.28 | 58.48 | 55.79 | 54.82
Artist           | 61.24 | 60.46 | 63.06 | 53.19 | 55.83

Table 6: Classification results for the feature fusion methodology.

The work of Bar et al. [4] is the most similar to ours, and we compare the final results of these experiments with their reported performance. [4] only performed the task
of style classification, on half of the images in our dataset, and achieved an accuracy of 43% using two variations of PiCoDes features and two layers of a CNN. We outperform their approach, achieving 45.97% accuracy for style classification when we use the LMNN metric to project GIST, Classeme, PiCoDes, and CNN features and concatenate them, as reported in the third column of Table 6. Our contribution goes beyond outperforming the state of the art: we also learn a more compact feature representation. Our best performance for style classification is obtained by concatenating four 100-dimensional feature vectors, giving a 400-dimensional feature vector on which we train SVM classifiers, whereas [4] extract a 3882-dimensional feature vector for their best reported performance. As a result, we not only outperform the state of the art but also present a more compact image representation that reduces the feature size by roughly 90% (400 vs. 3882 dimensions). This efficient feature vector is a highly useful image representation that attains the best classification accuracy, and we consider its application to the task of image retrieval as future work.
To qualitatively evaluate the extracted visual features and learned metrics, we ran a prototype image search task. As feature fusion with the LMNN metric gives the best performance for style classification, we used this setting as our similarity measurement model; a sketch of the retrieval rule follows. Figure 8 shows some sample outputs of this image search task. For each pair, the image on the left is the query, for which we find the closest match (the image on the right) based on LMNN and feature fusion. However, we force the system to pick the closest match that does not belong to the same style as the query image. This verifies that although we learn the metric based on style labels, the learned projection can find similarity across styles.
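A minimal sketch of this cross-style retrieval: nearest neighbour under the fused (LMNN-projected) representation, skipping candidates that share the query's style label. Variable names are illustrative assumptions:

```python
import numpy as np

def closest_cross_style_match(query_vec, query_style, gallery_vecs, gallery_styles):
    """Return the index of the nearest gallery painting with a different style."""
    dists = np.linalg.norm(gallery_vecs - query_vec, axis=1)
    dists[np.asarray(gallery_styles) == query_style] = np.inf  # forbid same style
    return int(dists.argmin())
```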
# 5 Conclusion and Future Works

In this paper we investigated the applicability of metric learning approaches and the performance of different visual features for learning similarity in a collection of fine-art paintings. We implemented meaningful metrics for measuring similarity between paintings. These metrics are learned in a supervised manner to place paintings from one concept close to each other and far from the others. In this work we used three concepts: style, genre, and artist. We used these learned metrics to transform raw visual features into another space in which we can significantly improve performance on the three important tasks of style, genre, and artist classification. We conducted our comparative experiments on the largest publicly available dataset of fine-art paintings to evaluate performance on the aforementioned tasks. We conclude that:

– Classeme features show superior performance for all three tasks of style, genre, and artist classification, independent of the type of metric that has been learned.
– When working on an individual type of visual feature, the Boost metric and Information-Theoretic Metric Learning (ITML) improve the accuracy of the classification tasks across all features.
– When using different types of features together (feature fusion), Large-Margin Nearest-Neighbor (LMNN) metric learning achieves the best performance in all classification experiments.
– By learning an LMNN metric on Classeme features, we find an optimized representation that not only outperforms the state of the art for the task of style classification but also reduces the size of the feature vector by 90%. We consider verifying the applicability of this representation to image retrieval and recommendation systems as future work.

As further future work, we would like to learn metrics based on other annotations (e.g., time period).
[Table 7, body flattened in extraction. Each row pairs two paintings with their titles, artists, and styles; entries recoverable from the source include works by C. W. Eckersberg (Neoclassicism), C. Vreedenburgh (Impressionism), Camille Pissarro (Pointillism / Impressionism), Edgar Degas (Impressionism), Tivadar Kosztka Csontváry (Post-Impressionism), Paul Klee (Cubism), Walasse Ting (Pop Art), Albrecht Dürer (Northern Renaissance), Alexey Venetsianov (Realism), Rosso Fiorentino (Mannerism / Late Renaissance), Bartolomé Esteban Murillo (Baroque), Camille Corot (Realism), and Max Slevogt (Impressionism).]
[Table 7 continues with pairs involving Ferdinand Hodler ("Lake Geneva from Chexbres"), Arkhip Kuindzhi (Impressionism), and Andrea Mantegna (High and Early Renaissance).]

Table 7: Annotation of paintings in Figure 8. Each row corresponds to one pair of images, labeled with the name of the painting, its style, and its artist. The first six rows correspond to the six pairs on the left in Figure 8, and the next six rows correspond to the pairs on the right.

# Bibliography

[1] A. E. Abdel-Hakim and A. A. Farag. CSIFT: A SIFT descriptor with color invariant characteristics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2006.
[2] R. Arnheim. Visual Thinking. Univ of California Press, 1969.
[3] R. S. Arora and A. M. Elgammal. Towards automated classification of fine-art painting style: A comparative study. In ICPR, 2012.
[4] Y. Bar, N. Levy, and L. Wolf. Classification of artistic styles using binarized features derived from a deep neural network. 2014.
[5] A. Bentkowska-Kafel and J. Coddington. Computer Vision and Image Analysis of Art: Proceedings of the SPIE Electronic Imaging Symposium, San Jose Convention Center, 18-22 January 2010. Proceedings of SPIE, 2010.
[6] I. E. Berezhnoy, E. O. Postma, and H. J. van den Herik. Automatic extraction of brushstroke orientation from paintings. Machine Vision and Applications, 20(1):1-9, 2009.
[7] A. Bergamo and L. Torresani. Classemes and other classifier-based features for efficient object categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014.
[8] A. Bergamo, L. Torresani, and A. W. Fitzgibbon. PiCoDes: Learning a compact code for novel-category recognition. In Advances in Neural Information Processing Systems, pages 2088-2096, 2011.
[9] G. Carneiro, N. P. da Silva, A. D. Bue, and J. P. Costeira. Artistic image classification: An analysis on the PrintArt database. In ECCV, 2012.
[10] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In International Conference on Computer Vision & Pattern Recognition, volume 2, pages 886-893, June 2005.
[11] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In ICML, 2007.
[12] F. S. Khan, J. van de Weijer, and M. Vanrell. Who painted this painting? 2010.
[13] L. Fichner-Rathus. Foundations of Art and Design. Clark Baxter, 2008.
[14] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In NIPS, 2004.
[15] C. R. Johnson, E. Hendriks, I. J. Berezhnoy, E. Brevdo, S. M. Hughes, I. Daubechies, J. Li, E. Postma, and J. Z. Wang. Image processing for artist identification. IEEE Signal Processing Magazine, 25(4):37-48, 2008.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[17] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[18] J. Li and J. Z. Wang. Studying digital imagery of ancient paintings by mixtures of stochastic models. IEEE Transactions on Image Processing, 13(3):340-353, 2004.
[19] J. Li, L. Yao, E. Hendriks, and J. Z. Wang. Rhythmic brushstrokes distinguish van Gogh from his contemporaries: Findings via automated brushstroke extraction. IEEE Trans. Pattern Anal. Mach. Intell., 2012.
[20] T. E. Lombardi. The classification of style in fine-art painting. ETD Collection for Pace University, Paper AAI3189084, 2005.
[21] D. G. Lowe. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision, 2004.
[22] S. Lyu, D. Rockmore, and H. Farid. A digital technique for art authentication. Proceedings of the National Academy of Sciences of the United States of America, 101(49):17006-17010, 2004.
[23] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 2001.
[24] G. Polatkan, S. Jafarpour, A. Brasoveanu, S. Hughes, and I. Daubechies. Detection of forgery in paintings using supervised learning. In 16th IEEE International Conference on Image Processing (ICIP), 2009.
[25] R. Sablatnig, P. Kammerer, and E. Zolda. Hierarchical classification of paintings using face- and brush stroke models. 1998.
[26] B. Saleh, K. Abe, and A. Elgammal. Knowledge discovery of artistic influences: A metric learning approach. In ICCC, 2014.
[27] C. Shen, J. Kim, L. Wang, and A. van den Hengel. Positive semidefinite metric learning using boosting-like algorithms. Journal of Machine Learning Research, 13:1007-1036, 2012.
[28] D. G. Stork. Computer vision and computer graphics analysis of paintings and drawings: An introduction to the literature. In Computer Analysis of Images and Patterns, pages 9-24. Springer, 2009.
[29] L. Torresani, M. Szummer, and A. Fitzgibbon. Efficient object category recognition using classemes. In ECCV, 2010.
[30] A. Vedaldi and K. Lenc. MatConvNet: Convolutional neural networks for MATLAB. CoRR, abs/1412.4564, 2014.
[31] K. Weinberger and G. Tesauro. Metric learning for kernel regression. In Eleventh International Conference on Artificial Intelligence and Statistics, pages 608-615, 2007.
[32] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. JMLR, 2009.
Reinforcement Learning Neural Turing Machines - Revised
Wojciech Zaremba, Ilya Sutskever
http://arxiv.org/pdf/1505.00521 (cs.LG; published 20150504, updated 20160112)
Ilya Sutskever
Google Brain
[email protected]

# ABSTRACT

The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine the feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
# INTRODUCTION

Graves et al. (2014b)'s Neural Turing Machine (NTM) is a model that learns to interact with an external memory that is differentiable and continuous. An external memory extends the capabilities of the NTM, allowing it to solve tasks that were previously unsolvable by conventional machine learning methods. This is the source of the NTM's expressive power. In general, it appears that ML models become significantly more powerful if they are able to learn to interact with external interfaces.

There exist a vast number of Interfaces that could be used with our models. For example, the Google search engine is an example of such an Interface. The search engine consumes queries (which are actions) and outputs search results. However, the search engine is not differentiable, and the model interacts with the Interface using discrete actions. This work examines the feasibility of learning to interact with discrete Interfaces using the Reinforce algorithm.
Discrete Interfaces cannot be trained directly with standard backpropagation because they are not differentiable. It is most natural to learn to interact with discrete Interfaces using Reinforcement Learning methods. In this work, we consider an Input Tape and a Memory Tape interface with discrete access. Our concrete proposal is to use the Reinforce algorithm to learn where to access the discrete interfaces, and to use the backpropagation algorithm to determine what to write to the memory and to the output. We call this model the RL-NTM.

Discrete Interfaces are computationally attractive because the cost of accessing a discrete Interface is often independent of its size. This is not the case for continuous Interfaces, where the cost of access scales linearly with size. That is a significant disadvantage, since slow models cannot scale to large, difficult problems that require intensive training on large datasets. In addition, an output Interface that lets the model decide when it wants to make a prediction allows the model's runtime to be, in principle, unbounded. If the model has an output interface of this kind together with an interface to an unbounded memory, the model becomes Turing complete.
We evaluate the RL-NTM on a number of simple algorithmic tasks. The RL-NTM succeeds on problems such as copying an input several times to the output tape (the "repeat copy" task from Graves et al. (2014b)), reversing a sequence, and a few more tasks of comparable difficulty. However, its success is highly dependent on the architecture of the "controller". We discuss this in more detail in Section 8.

1 Work done while the author was at Google.
2 Both authors contributed equally to this work.

Finally, we found it non-trivial to correctly implement the RL-NTM due to its large number of interacting components. We developed a simple procedure to numerically check the gradients of the Reinforce algorithm (Section 5); a toy illustration of the idea appears below. The procedure can be applied to problems unrelated to NTMs and is of independent interest. The code for this work can be found at https://github.com/ilyasu123/rlntm.
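The paper's own checking procedure is given in its Section 5; the sketch below only illustrates the generic idea under simplifying assumptions: for a tiny action space and a softmax policy, the expected reward can be computed exactly, so its finite-difference gradient can be compared against the analytic score-function (Reinforce) gradient.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def expected_reward(theta, rewards):
    # Exact objective J(theta) = sum_a p_theta(a) * R(a) over a tiny action space.
    return float(softmax(theta) @ rewards)

def reinforce_grad(theta, rewards):
    # Exact expectation of the score-function estimator for a softmax policy:
    # E_a[ R(a) * grad log p_theta(a) ] = p * R - p * (p . R).
    p = softmax(theta)
    return p * rewards - p * (p @ rewards)

theta = np.random.randn(4)
rewards = np.array([1.0, 0.0, 2.0, -1.0])
eps = 1e-6
numeric = np.array([
    (expected_reward(theta + eps * e, rewards)
     - expected_reward(theta - eps * e, rewards)) / (2 * eps)
    for e in np.eye(4)
])
assert np.allclose(numeric, reinforce_grad(theta, rewards), atol=1e-6)
```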
# 2 THE MODEL

Many difficult tasks require a prolonged, multi-step interaction with an external environment. Examples of such environments include computer games (Mnih et al., 2013), the stock market, an advertisement system, or the physical world (Levine et al., 2015). A model can observe a partial state from the environment and influence the environment through its actions. This can be seen as a general reinforcement learning problem. However, our setting departs from classical RL in that we have the freedom to design the tools available to solve a given problem. Tools might cooperate with the model (i.e., backpropagation through memory), and the tools specify the actions over the environment. We formalize this concept under the name Interface-Controller interaction.

The external environment is exposed to the model through a number of Interfaces, each with its own API. For instance, a human perceives the world through its senses, which include the vision Interface and the touch Interface. The touch Interface provides methods for contracting the various muscles, and methods for sensing the current state of the muscles, pain level, temperature, and a few others. In this work, we explore a number of simple Interfaces that allow the controller to access an input tape, a memory tape, and an output tape; a toy rendering of such a tape Interface follows.
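The paper specifies the abstraction only at the level of Fig. 1, so the class and method names below are our own: each Interface pairs a read method, whose result is fed to the Controller, with a write method that consumes one of the Controller's discrete actions.

```python
class Tape:
    """A one-dimensional tape Interface with a movable read head."""

    def __init__(self, cells):
        self.cells = list(cells)
        self.head = 0

    def read(self):
        # Read method: the observation handed to the Controller.
        return self.cells[self.head]

    def move(self, delta):
        # Write method: consumes a discrete action delta in {-1, 0, +1}
        # and moves the head, clipped to the tape boundaries.
        assert delta in (-1, 0, 1)
        self.head = min(max(self.head + delta, 0), len(self.cells) - 1)

class MemoryTape(Tape):
    """The memory tape additionally supports overwriting the current cell."""

    def write_value(self, value):
        self.cells[self.head] = value
```

An output tape would differ only in that it is write-only and its head may either stay or move forward (deltas in {0, +1}).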
The part of the model that communicates with Interfaces is called the Controller, which is the only part of the system that learns. The Controller can have prior knowledge about the behavior of its Interfaces, but this is not the case in our experiments. The Controller learns to interact with Interfaces in a way that allows it to solve a given task. Fig. 1 illustrates the complete Interfaces-Controller abstraction.

[Figure 1, two panels: left, an abstract Interface-Controller model (Input, Output, and Memory Interfaces feeding a Controller with past and future state); right, our model as an Interface-Controller, in which the LSTM controller reads the current input and current memory and emits an input-position increment in {-1, 0, 1}, a target prediction together with an output-or-not decision in {0, 1}, a memory-address increment in {-1, 0, 1}, and a new memory value vector.]
Figure 1: (Left) The Interface-Controller abstraction; (Right) an instantiation of our model as an Interface-Controller. The bottom boxes are the read methods, and the top are the write methods. The RL-NTM makes discrete decisions regarding the move over the input tape, the memory tape, and whether to make a prediction at a given timestep. During training, the model's prediction is compared with the desired output, and is used to train the model when the RL-NTM chooses to advance its position on the output tape; otherwise it is ignored. The memory value vector is a vector of content that is stored in the memory cell.

We now describe the RL-NTM. As a controller, it uses either an LSTM, direct access, or a direct access LSTM (see Sec. 8.1 for definitions). It has a one-dimensional input tape, a one-dimensional memory, and a one-dimensional output tape as Interfaces. Both the input tape and the memory tape have a head that reads the Tape's content at the current location. The heads of the input tape and the memory tape can move in any direction. However, the output tape is a write-only tape, and its head can either stay at the current position or move forward. Fig. 2 shows an example execution trace of the entire RL-NTM on the reverse task (Sec. 6).
At the core of the RL–NTM is an LSTM controller which receives multiple inputs and has to generate multiple outputs at each timestep. Table 1 summarizes the controller's inputs and outputs, and the way in which the RL–NTM is trained to produce them. The objective function of the RL–NTM is the expected log probability of the desired outputs, where the expectation is taken over all possible sequences of actions, weighted with the probability of taking these actions. Both backpropagation and Reinforce maximize this objective. Backpropagation maximizes the log probabilities of the model's predictions, while the Reinforce algorithm influences the probabilities of action sequences.
1505.00521#8
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
9
[Figure 2: execution trace of the RL–NTM on the ForwardReverse task, unrolled over timesteps t1 through t5. Each panel shows the Input, Memory, and Output Tapes together with the controller's hidden state; the full caption follows below.]
1505.00521#9
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
10
Figure 2: Execution of the RL–NTM on the ForwardReverse task. At each timestep, the RL-NTM consumes the value of the current input tape, the value of the current memory cell, and a representation of all the actions that have been taken in the previous timestep (not marked on the figures). The RL-NTM then outputs a new value for the current memory cell (marked with a star), a prediction for the next target symbol, and discrete decisions for changing the positions of the heads on the various tapes. The RL-NTM learns to make discrete decisions using the Reinforce algorithm, and learns to produce continuous outputs using backpropagation. The global objective can be written formally as:

$$\sum_{[a_1, a_2, \ldots, a_n] \in A^\dagger} p_{\text{reinforce}}(a_1, a_2, \ldots, a_n \mid \theta) \sum_{i=1}^{n} \log p_{\text{bp}}(y_i \mid x_1, \ldots, x_i, a_1, \ldots, a_i, \theta)$$

A† represents the space of sequences of actions that lead to the end of the episode. The probabilities in the above equation are parametrized with a neural network (the Controller). We have marked with p_reinforce the part of the equation which is learned with Reinforce, and with p_bp the part of the equation optimized with classical backpropagation.
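As an illustration of how the two parts of the objective interact, the following toy sketch accumulates the backpropagation term (the sum of log p_bp terms) for one action sequence sampled from Reinforce-trained emission decisions; the probabilities and names are invented stand-ins, not the paper's model:

```python
import math
import random

random.seed(0)

def sampled_log_p_bp(emit_probs, pred_probs, targets):
    """For one sampled action sequence, accumulate sum_i log p_bp(y_i | ...)
    over the steps at which the model chose to emit an output symbol.
    All inputs are toy stand-ins."""
    total, i = 0.0, 0
    for t in range(len(emit_probs)):
        emit = random.random() < emit_probs[t]  # discrete action ~ p_reinforce
        if emit and i < len(targets):
            total += math.log(pred_probs[t][targets[i]])
            i += 1
    return total

emit_probs = [0.9, 0.1, 0.8]
pred_probs = [{"a": 0.7, "b": 0.3}, {"a": 0.5, "b": 0.5}, {"a": 0.2, "b": 0.8}]
targets = ["a", "b"]
samples = [sampled_log_p_bp(emit_probs, pred_probs, targets) for _ in range(1000)]
print(sum(samples) / 1000)  # Monte Carlo estimate of the expectation above
```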
1505.00521#10
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
11
Interface | Part | Read | Write | Training Type
--- | --- | --- | --- | ---
Input Tape | Head | window of values surrounding the current position | distribution over [−1, 0, 1] | Reinforce
Output Tape | Head | ∅ | distribution over [0, 1] | Reinforce
Output Tape | Content | ∅ | distribution over output vocabulary | Backpropagation
Memory Tape | Head | window of memory values surrounding the current address | distribution over [−1, 0, 1] | Reinforce
Memory Tape | Content | ∅ | vector of real values to store | Backpropagation
Miscellaneous | | all actions taken in the previous time step | ∅ | ∅

Table 1: Table summarizes what the Controller reads at every time step, and what it has to produce. The "training" column indicates how the given part of the model is trained.
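A hypothetical sketch of a single Controller step that produces exactly the outputs listed in Table 1; the weight-matrix names are ours, for illustration only:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def controller_step(hidden, params):
    """One hypothetical Controller step producing the outputs of Table 1."""
    out = {}
    # Discrete decisions, trained with Reinforce: distributions to sample from.
    out["input_move"] = softmax(params["W_in"] @ hidden)     # over [-1, 0, 1]
    out["output_move"] = softmax(params["W_out"] @ hidden)   # over [0, 1]
    out["memory_move"] = softmax(params["W_mem"] @ hidden)   # over [-1, 0, 1]
    # Continuous outputs, trained with backpropagation.
    out["prediction"] = softmax(params["W_pred"] @ hidden)   # over the vocabulary
    out["memory_value"] = np.tanh(params["W_val"] @ hidden)  # vector to store
    return out

# Example usage with random parameters (sizes are arbitrary):
rng = np.random.default_rng(0)
h = rng.standard_normal(16)
params = {"W_in": rng.standard_normal((3, 16)),
          "W_out": rng.standard_normal((2, 16)),
          "W_mem": rng.standard_normal((3, 16)),
          "W_pred": rng.standard_normal((30, 16)),
          "W_val": rng.standard_normal((8, 16))}
outputs = controller_step(h, params)
```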
1505.00521#11
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
12
The RL–NTM receives a direct learning signal only when it decides to make a prediction. If it chooses not to make a prediction at a given timestep, then it will not receive a direct learning signal. Theoretically, we can allow the RL–NTM to run for an arbitrary number of steps without making any prediction, hoping that after sufficiently many steps it would decide to make a prediction. Doing so would also provide the RL–NTM with arbitrary computational capability. However, this strategy is both unstable and computationally infeasible. Thus, we resort to limiting the total number of computational steps to a fixed upper bound, and force the RL–NTM to predict the next desired output whenever the number of remaining desired outputs is equal to the number of remaining computational steps.

# 3 RELATED WORK

This work is the most similar to the Neural Turing Machine (Graves et al., 2014b). The NTM is an ambitious, computationally universal model that can be trained (or "automatically programmed") with the backpropagation algorithm using only input-output examples.
1505.00521#12
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
13
Following the introduction of the NTM, several other memory-based models have been introduced. All of them can be seen as part of a larger community effort. These models are constructed according to the Interface–Controller abstraction (Section 2). The Neural Turing Machine (NTM) (Graves et al., 2014a) has a modified LSTM as the Controller, and the following three Interfaces: a sequential input, a delayed Output, and a differentiable Memory.
1505.00521#13
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
14
The Weakly Supervised Memory Network (Sukhbaatar et al., 2015) uses a feedforward network as the Controller, and has a differentiable soft-attention Input and a Delayed Output as Interfaces. The Stack RNN (Joulin & Mikolov, 2015) has an RNN as the Controller, and a sequential input, a differentiable memory stack, and a sequential output as Interfaces; it also uses search to improve its performance. The Neural DeQue (Grefenstette et al., 2015) has an LSTM as the Controller, and a Sequential Input, a differentiable Memory Queue, and a Sequential Output as Interfaces. Our model fits into the Interfaces–Controller abstraction. It has a direct access controller (or an LSTM, or a feedforward network) as the Controller, and its three Interfaces are the Input Tape, the Memory Tape, and the Output Tape. All three Interfaces of the RL–NTM are discrete and cannot be trained only with backpropagation.
1505.00521#14
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
15
This prior work investigates continuous and differentiable Interfaces, while we consider discrete Interfaces. Discrete Interfaces are more challenging to train because backpropagation cannot be used. However, many external Interfaces are inherently discrete, even though humans can easily use them (apparently without using continuous backpropagation). For instance, one interacts with the Google search engine through discrete actions. This work examines the possibility of learning models that interact with discrete Interfaces using the Reinforce algorithm. The Reinforce algorithm (Williams, 1992) is a classical RL algorithm, which has been applied to a broad spectrum of planning problems (Peters & Schaal, 2006; Kohl & Stone, 2004; Aberdeen & Baxter, 2002). In addition, it has been applied in object recognition to implement visual attention (Mnih et al., 2014; Ba et al., 2014). This work uses Reinforce to train an attention mechanism: we use it to train how to access the various tapes provided to the model.
1505.00521#15
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
16
The RL–NTM can postpone prediction for an arbitrary number of timesteps, and in principle has access to unbounded memory. As a result, the RL-NTM is Turing complete in principle. There have been very few prior models that are Turing complete (Schmidhuber, 2004; 2012). Although our model is Turing complete, it is not very powerful, because it is very difficult to train and can solve only relatively simple problems. Moreover, the RL–NTM does not exploit its Turing completeness, as none of the tasks that it solves require superlinear runtime to be solved.
1505.00521#16
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
17
# 4 THE REINFORCE ALGORITHM

Notation. Let $\mathcal{A}$ be a space of actions, and $\mathcal{A}^\dagger$ the space of all sequences of actions that cause an episode to end (so $\mathcal{A}^\dagger \subset \mathcal{A}^*$). An action at time-step $t$ is denoted by $a_t$. We denote the time at the end of an episode by $T$ (this is not completely formal, as episodes can vary in length). Let $a_{1:t}$ stand for a sequence of actions $[a_1, a_2, \ldots, a_t]$. Let $r(a_{1:t})$ denote the reward achieved at time $t$, having executed the sequence of actions $a_{1:t}$, and let $R(a_{1:T})$ be the cumulative reward, namely $R(a_{1:T}) = \sum_{t=1}^{T} r(a_{1:t})$. Let $p_\theta(a_t \mid a_{1:(t-1)})$ be a parametric conditional probability of an action $a_t$ given all previous actions $a_{1:(t-1)}$. Finally, $p_\theta$ is a policy parametrized by $\theta$. This work relies on learning discrete actions with the Reinforce algorithm (Williams, 1992). We now describe this algorithm in detail. Moreover, the supplementary materials include descriptions of techniques for reducing the variance of the gradient estimators.
1505.00521#17
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
18
The goal of reinforcement learning is to maximize the sum of future rewards. The Reinforce algorithm (Williams, 1992) does so directly by optimizing the parameters of the policy $p_\theta(a_t \mid a_{1:(t-1)})$. Reinforce follows the gradient of the sum of the future rewards. The objective function for episodic Reinforce can be expressed as the sum over all sequences of valid actions that cause the episode to end:

$$J(\theta) = \sum_{[a_1, a_2, \ldots, a_T] \in A^\dagger} p_\theta(a_1, a_2, \ldots, a_T) R(a_1, a_2, \ldots, a_T) = \sum_{a_{1:T} \in A^\dagger} p_\theta(a_{1:T}) R(a_{1:T})$$

This sum iterates over all possible sequences of actions. This set is usually exponential or even infinite, so it cannot be computed exactly and cheaply for most problems. However, it can be written as an expectation, which can be approximated with an unbiased estimator. We have that:
1505.00521#18
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
19
$$J(\theta) = \sum_{a_{1:T} \in A^\dagger} p_\theta(a_{1:T}) R(a_{1:T}) = \mathbb{E}_{a_{1:T} \sim p_\theta} \sum_{t=1}^{T} r(a_{1:t}) = \mathbb{E}_{a_1 \sim p_\theta(a_1)} \mathbb{E}_{a_2 \sim p_\theta(a_2 \mid a_1)} \cdots \mathbb{E}_{a_T \sim p_\theta(a_T \mid a_{1:(T-1)})} \sum_{t=1}^{T} r(a_{1:t})$$

The last expression suggests a procedure to estimate $J(\theta)$: simply sequentially sample each $a_t$ from the model distribution $p_\theta(a_t \mid a_{1:(t-1)})$ for $t$ from 1 to $T$. The unbiased estimator of $J(\theta)$ is the sum of $r(a_{1:t})$. This gives us an algorithm to estimate $J(\theta)$. However, the main interest is in training a model to maximize this quantity. The Reinforce algorithm maximizes $J(\theta)$ by following its gradient:

$$\partial_\theta J(\theta) = \sum_{a_{1:T} \in A^\dagger} \left[ \partial_\theta p_\theta(a_{1:T}) \right] R(a_{1:T})$$
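The sampling procedure just described can be written down directly; below is a minimal sketch with a toy memoryless policy and reward (our own names, not the paper's tasks):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_J(policy, reward, T, num_samples=10_000):
    """Monte Carlo estimate of J(theta): sample each a_t from the policy for
    t = 1..T and average the cumulative rewards sum_t r(a_{1:t}).
    `policy(prefix)` returns a probability vector over actions and
    `reward(prefix)` returns r(a_{1:t}); both are toy stand-ins."""
    total = 0.0
    for _ in range(num_samples):
        prefix = []
        for _ in range(T):
            p = policy(prefix)
            prefix.append(int(rng.choice(len(p), p=p)))
            total += reward(prefix)
    return total / num_samples

# Toy check: two actions, reward 1 whenever action 1 is taken.
policy = lambda prefix: np.array([0.3, 0.7])
reward = lambda prefix: float(prefix[-1] == 1)
print(estimate_J(policy, reward, T=3))  # close to 3 * 0.7 = 2.1
```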
1505.00521#19
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
20
However, the above expression is a sum over the set of possible action sequences, so it cannot be computed directly for most $A^\dagger$. Once again, the Reinforce algorithm rewrites this sum as an expectation that is approximated with sampling. It relies on the identity:

$$\partial_\theta f(\theta) = f(\theta) \frac{\partial_\theta f(\theta)}{f(\theta)} = f(\theta) \, \partial_\theta [\log f(\theta)]$$

This identity is valid as long as $f(x) \neq 0$. As typical neural network parametrizations of distributions assign non-zero probability to every action, this condition holds for $f = p_\theta$. We have that:

$$\partial_\theta J(\theta) = \sum_{a_{1:T} \in A^\dagger} \left[ \partial_\theta p_\theta(a_{1:T}) \right] R(a_{1:T}) = \sum_{a_{1:T} \in A^\dagger} p_\theta(a_{1:T}) \left[ \partial_\theta \log p_\theta(a_{1:T}) \right] R(a_{1:T})$$
$$= \sum_{a_{1:T} \in A^\dagger} p_\theta(a_{1:T}) \Big[ \sum_{t=1}^{T} \partial_\theta \log p_\theta(a_t \mid a_{1:(t-1)}) \Big] R(a_{1:T})$$
$$= \mathbb{E}_{a_1 \sim p_\theta(a_1)} \mathbb{E}_{a_2 \sim p_\theta(a_2 \mid a_1)} \cdots \mathbb{E}_{a_T \sim p_\theta(a_T \mid a_{1:(T-1)})} \Big[ \sum_{t=1}^{T} \partial_\theta \log p_\theta(a_t \mid a_{1:(t-1)}) \Big] \Big[ \sum_{t=1}^{T} r(a_{1:t}) \Big]$$
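The likelihood-ratio identity above is easy to verify numerically; here is a quick sanity check on an arbitrary smooth, nonzero function (a sigmoid):

```python
import numpy as np

# Numeric sanity check of d f(theta)/d theta = f(theta) * d log f(theta)/d theta,
# valid wherever f(theta) != 0.
def f(theta):
    return 1.0 / (1.0 + np.exp(-theta))  # a sigmoid: smooth and never zero

theta, eps = 0.3, 1e-6
num_grad = (f(theta + eps) - f(theta - eps)) / (2 * eps)
score_grad = f(theta) * (np.log(f(theta + eps)) - np.log(f(theta - eps))) / (2 * eps)
assert abs(num_grad - score_grad) < 1e-8
print(num_grad, score_grad)  # the two estimates agree
```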
1505.00521#20
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
21
∂θJ(θ) = The last expression gives us an algorithm for estimating 0gJ(@). We have sketched it at the left side of the Figure|3} It’s easiest to describe it with respect to computational graph behind a neural network. Reinforce can be implemented as follows. A neural network outputs: 1; = log pg (az |a1.(¢1)). Sequen- tially sample action a, from the distribution e’*, and execute the sampled action a,. Simultaneously, experience a reward r(aj,,). Backpropagate the sum of the rewards al r(a1:4) to the every node 0p log po (ar1:(-1))We have derived an unbiased estimator for the sum of future rewards, and the unbiased estimator of its gradient. However, the derived gradient estimator has high variance, which makes learning difficult. RL–NTM employs several techniques to reduce gradient estimator variance: (1) future rewards back- propagation, (2) online baseline prediction, and (3) offline baseline prediction. All these techniques are crucial to solve our tasks. We provide detailed description of techniques in the Supplementary material.
1505.00521#21
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
22
Finally, we needed a way of verifying the correctness of our implementation. We discovered a technique that makes it possible to easily implement a gradient checker for nearly any model that uses Reinforce. Section 5 describes this technique.

# 5 GRADIENT CHECKING

The RL–NTM is complex, so we needed to find an automated way of verifying the correctness of our implementation. We discovered a technique that makes it possible to easily implement a gradient checker for nearly any model that uses Reinforce. This discovery is an independent contribution of this work. [Figure 3: left panel, the Reinforce algorithm; right panel, gradient checking of Reinforce. Both loop until the end of the episode: sample an action, execute it in the environment, accumulate the reward, and backpropagate through the log-probability nodes; in the checker, the sampler instead deterministically enumerates the action sequences in A† and weights each gradient by the sequence probability. The caption follows below.]
1505.00521#22
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
23
Figure 3: Sketches of the algorithms: (Left) the Reinforce algorithm, (Right) gradient checking for the Reinforce algorithm. The red color indicates the steps needed to turn Reinforce into a gradient checker for Reinforce.

This Section describes gradient checking for any implementation of the Reinforce algorithm that uses a general function for sampling from a multinomial distribution. The Reinforce gradient verification should ensure that the expected gradient over all sequences of actions matches the numerical derivative of the expected objective. However, even for a tiny problem, we would need to draw billions of samples to achieve estimates accurate enough to state whether there is a match or mismatch. Instead, we developed a technique which avoids sampling, and allows for gradient verification of Reinforce within seconds on a laptop. First, we have to reduce the size of our task to make sure that the number of possible actions is manageable (e.g., < 10^4). This is similar to conventional gradient checkers, which can only be applied to small models. Next, we enumerate all possible sequences of actions that terminate the episode. By definition, these are precisely all the elements of A†.
1505.00521#23
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
24
The key idea is the following: we override the sampling function, which turns a multinomial distribution into a random sample, with a deterministic function that deterministically chooses actions from an appropriate action sequence from A†, while accumulating their probabilities. By calling the modified sampler, it will produce every possible action sequence from A† exactly once. For efficiency, it is desirable to use a single minibatch whose size is |A†|. The sampling function needs to be adapted in such a way that it incrementally outputs the appropriate sequence from A† as we repeatedly call it. At the end of the minibatch, the sampling function will have access to the total probability of each action sequence ($\prod_t p_\theta(a_t \mid a_{1:(t-1)})$), which in turn can be used to exactly compute $J(\theta)$ and its derivative. To compute the derivative, the Reinforce gradient produced by each sequence $a_{1:T} \in A^\dagger$ should be weighted by its probability $p_\theta(a_{1:T})$. We summarize this procedure in Figure 3.
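A minimal sketch of this gradient-checking idea for a toy memoryless softmax policy: enumerate all of A† (here, all length-T sequences), weight each sequence's Reinforce gradient by its probability, and compare against a numerical derivative of the exact J(θ). The setup and names are ours, not the paper's implementation:

```python
import numpy as np
from itertools import product

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def exact_J_and_grad(theta, reward, T):
    """Enumerate every action sequence in A-dagger and accumulate the exact
    J(theta) together with the probability-weighted reinforce gradient."""
    n = len(theta)
    p = softmax(theta)
    J, grad = 0.0, np.zeros(n)
    for seq in product(range(n), repeat=T):
        prob = np.prod([p[a] for a in seq])
        R = sum(reward(list(seq[:t + 1])) for t in range(T))
        score = sum(np.eye(n)[a] - p for a in seq)  # d log p(seq) / d theta
        J += prob * R
        grad += prob * R * score
    return J, grad

theta = np.array([0.2, -0.1])
reward = lambda prefix: float(prefix[-1] == 1)
J, grad = exact_J_and_grad(theta, reward, T=2)

# Compare the reinforce gradient with the numerical derivative of J(theta).
eps = 1e-6
for i in range(len(theta)):
    d = np.zeros_like(theta)
    d[i] = eps
    num = (exact_J_and_grad(theta + d, reward, 2)[0]
           - exact_J_and_grad(theta - d, reward, 2)[0]) / (2 * eps)
    assert abs(num - grad[i]) < 1e-6
print("reinforce gradient matches the numerical derivative")
```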
1505.00521#24
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
25
The gradient checking is critical for ensuring the correctness of our implementation. While the basic Reinforce algorithm is conceptually simple, the RL–NTM is fairly complicated, as Reinforce is used to train several Interfaces of our model. Moreover, the RL–NTM uses three separate techniques for reducing the variance of the gradient estimators. The model's high complexity greatly increases the probability of a code error. In particular, our early implementations were incorrect, and we were able to fix them only after implementing gradient checking.

# 6 TASKS

This section defines the tasks used in the experiments. Figure 4 shows exemplary instantiations of our tasks. Table 2 summarizes the Interfaces that are available for each task.

Task | Input Tape | Memory Tape
--- | --- | ---
Copy | ✓ | ✗
DuplicatedInput | ✓ | ✗
Reverse | ✓ | ✗
RepeatCopy | ✓ | ✗
ForwardReverse | ✗ | ✓

Table 2: This table marks the available Interfaces for each task. The difficulty of a task depends on the type of Interfaces available to the model.

[Figure 4: initial states, one panel per task: Copy, DuplicatedInput, Reverse, RepeatCopy, ForwardReverse; the caption follows below.]
1505.00521#25
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
26
Figure 4: This Figure presents the initial state for every task. The yellow box indicates the starting position of the reading head over the Input Interface. The gray characters on the Output Tape represent the target symbols. Our tasks involve reordering symbols, and the symbols xi have been picked uniformly from a set of size 30.

Copy. A generic input is x1x2x3 . . . xC∅ and the desired output is x1x2 . . . xC∅. Thus the goal is to repeat the input. The length of the input sequence is variable. The input sequence and the desired output both terminate with a special end-of-sequence symbol ∅.

DuplicatedInput. A generic input has the form x1x1x1x2x2x2x3 . . . xC−1xCxCxC∅ while the desired output is x1x2x3 . . . xC∅. Thus each input symbol is replicated three times, so the RL-NTM must emit every third input symbol.

Reverse. A generic input is x1x2 . . . xC−1xC∅ and the desired output is xCxC−1 . . . x2x1∅.
1505.00521#26
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
27
RepeatCopy. A generic input is mx1x2x3 . . . xC∅ and the desired output is x1x2 . . . xCx1 . . . xCx1 . . . xC∅, where the number of copies is given by m. Thus the goal is to copy the input m times, where m can be only 2 or 3.

ForwardReverse. The task is identical to Reverse, but the RL-NTM is only allowed to move its input tape pointer forward. This means that a perfect solution must use the NTM's external memory.
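Hypothetical input/target generators for the tasks defined above (a symbol set of size 30, as in Figure 4; the END marker stands in for ∅; names are ours):

```python
import random

random.seed(0)
VOCAB = list(range(1, 31))  # 30 symbols
END = 0                     # stand-in for the end-of-sequence symbol

def copy_example(C):
    xs = [random.choice(VOCAB) for _ in range(C)]
    return xs + [END], xs + [END]

def duplicated_input_example(C):
    xs = [random.choice(VOCAB) for _ in range(C)]
    inp = [x for x in xs for _ in range(3)]  # each symbol replicated 3 times
    return inp + [END], xs + [END]

def reverse_example(C):
    xs = [random.choice(VOCAB) for _ in range(C)]
    return xs + [END], xs[::-1] + [END]

def repeat_copy_example(C):
    m = random.choice([2, 3])
    xs = [random.choice(VOCAB) for _ in range(C)]
    return [m] + xs + [END], xs * m + [END]

print(reverse_example(4))  # e.g. ([x1, x2, x3, x4, 0], [x4, x3, x2, x1, 0])
```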
1505.00521#27
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
28
# 7 CURRICULUM LEARNING

Humans and animals learn much better when the examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones. . . . and call them "curriculum learning". Bengio et al. (2009)

We were unable to solve the tasks with the RL–NTM by training it on the difficult instances of the problems (where difficult usually means long). To succeed, we had to create a curriculum of tasks of increasing complexity. We verified that our tasks were completely unsolvable (in an all-or-nothing sense) for all but the shortest sequences when we did not use a curriculum. In our experiments, we measure the complexity of a problem instance by the maximal length of the desired output on typical inputs. During training, we maintain a distribution over the task complexity. We shift the distribution over the task complexities whenever the performance of the RL–NTM exceeds a threshold. Then, our model focuses on more difficult problem instances as its performance improves.

Probability | Procedure to pick complexity d
--- | ---
10% | uniformly at random from the possible task complexities
25% | uniformly from [1, C + e]
65% | d = C + e
1505.00521#28
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
29
Table 3: The curriculum learning distribution, indexed by C. Here e is a sample from a geometric distribution whose success probability is 1/2, i.e., p(e = k) = 1/2^k.

The distribution over task complexities is indexed with an integer C, and is defined in Table 3. While we have not tuned the coefficients in the curriculum learning setup, we experimentally verified that it is critical to always maintain non-negligible mass over the hardest difficulty levels (Zaremba & Sutskever, 2014). Removing it makes the curriculum much less effective. Whenever the average zero-one loss (normalized by the length of the target sequence) of our RL–NTM decreases below 0.2, we increase C by 1. We keep doing so until C reaches its maximal allowable value. Finally, we enforced a refractory period to ensure that successive increments of C are separated by at least 100 parameter updates, since we encountered situations where C increased in rapid succession, which consistently caused learning to fail.
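A sketch of the sampling procedure in Table 3, reading the third row as d = C + e and drawing e from the geometric distribution described in the caption (the function names and the max-complexity cap are our own assumptions):

```python
import random

random.seed(0)

def geometric():
    """e ~ Geometric(1/2): p(e = k) = 1 / 2**k for k = 1, 2, ..."""
    k = 1
    while random.random() >= 0.5:
        k += 1
    return k

def sample_complexity(C, max_complexity):
    """Sample a task complexity d following Table 3, given the current
    curriculum level C."""
    u = random.random()
    if u < 0.10:   # 10%: any possible complexity
        return random.randint(1, max_complexity)
    if u < 0.35:   # 25%: uniformly from [1, C + e]
        return min(random.randint(1, C + geometric()), max_complexity)
    return min(C + geometric(), max_complexity)  # 65%: d = C + e

print([sample_complexity(C=5, max_complexity=50) for _ in range(10)])
```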
1505.00521#29
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
30
# 8 CONTROLLERS

The success of reinforcement learning training depends highly on the complexity of the controller and its ease of training. It is common either to limit the number of parameters of the network, or to constrain it by initializing it from a model pretrained on some other task (for instance, an object recognition network for robotics). Ideally, models should be generic enough not to need such "tricks". However, some tasks still require building task-specific architectures.

[Figure 5: LSTM as a controller.] [Figure 6: The direct access controller.]

This work considers two controllers. The first is an LSTM (Fig. 5), and the second is a direct access controller (Fig. 6). The LSTM is a generic controller that in principle should be powerful enough to solve any of the considered tasks; however, it has trouble solving many of them. The direct access controller is a much better fit for symbol rearrangement tasks, but it is not a generic solution.

8.1 DIRECT ACCESS CONTROLLER
1505.00521#30
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
31
All the tasks that we consider involve rearranging the input symbols in some way. For example, a typical task is to reverse a sequence (section 6 lists the tasks). For such tasks, the controller would benefit from a built-in mechanism for directly copying an appropriate input to memory and to the output. Such a mechanism would free the LSTM controller from remembering the input symbol in its control variables ("registers"), and would shorten the backpropagation paths and therefore make learning easier. We implemented this mechanism by adding the input to the memory and the output, and also adding the memory to the output and to the adjacent memories (Figure 6), while modulating these additive contributions by a dynamic scalar (a sigmoid) computed from the controller's state. This way, the controller can decide to effectively not add the current input to the output at a given timestep. Unfortunately, the necessity of this architectural modification is a drawback of our implementation, since it is not domain independent and would therefore not improve the performance of the RL–NTM on many tasks of interest.

Task | LSTM | Direct Access
--- | --- | ---
Copy | ✓ | ✓
DuplicatedInput | ✓ | ✓
Reverse | ✗ | ✓
ForwardReverse | ✗ | ✓
RepeatCopy | ✗ | ✓

Table 4: Success of training on various tasks for a given controller.
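A minimal sketch of the gated additive copies described above, omitting the adjacent-memory contributions for brevity; all weight names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def direct_access_step(state, x_t, mem_t, W):
    """The input contributes additively to the output and to memory, and the
    memory contributes to the output, each contribution scaled by a sigmoid
    gate computed from the controller state."""
    g_in_out = sigmoid(W["g_in_out"] @ state)    # gate: input -> output
    g_mem_out = sigmoid(W["g_mem_out"] @ state)  # gate: memory -> output
    g_in_mem = sigmoid(W["g_in_mem"] @ state)    # gate: input -> memory
    output = W["W_o"] @ state + g_in_out * x_t + g_mem_out * mem_t
    new_mem = W["W_m"] @ state + g_in_mem * x_t
    return output, new_mem

rng = np.random.default_rng(0)
d, k = 16, 8  # controller-state and symbol-embedding sizes (arbitrary)
W = {"g_in_out": rng.standard_normal(d),
     "g_mem_out": rng.standard_normal(d),
     "g_in_mem": rng.standard_normal(d),
     "W_o": rng.standard_normal((k, d)),
     "W_m": rng.standard_normal((k, d))}
out, mem = direct_access_step(rng.standard_normal(d),
                              rng.standard_normal(k),
                              rng.standard_normal(k), W)
```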
1505.00521#31
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]
1505.00521
32
# 9 EXPERIMENTS

We present the results of training the RL–NTM on all the aforementioned tasks. The main drawback of our experiments is the lack of comparison to other models. However, the tasks that we consider have to be considered in conjunction with the available Interfaces, and other models have not been evaluated with the same set of Interfaces. The statement "this model solves addition" is difficult to assess on its own, as the way that digits are delivered defines the task's difficulty. The closest model to ours is the NTM, and the shared task that both consider is copying. We are able to generalize on copying to arbitrary lengths; however, our Interfaces make this task very simple. Table 4 summarizes the results.
1505.00521#32
Reinforcement Learning Neural Turing Machines - Revised
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
http://arxiv.org/pdf/1505.00521
Wojciech Zaremba, Ilya Sutskever
cs.LG
null
null
cs.LG
20150504
20160112
[ { "id": "1503.01007" }, { "id": "1506.02516" }, { "id": "1504.00702" }, { "id": "1503.08895" } ]