doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1512.00567 | 26 |
type | patch size/stride or remarks | input size
---|---|---
conv | 3×3/2 | 299×299×3
conv | 3×3/1 | 149×149×32
conv padded | 3×3/1 | 147×147×32
pool | 3×3/2 | 147×147×64
conv | 3×3/1 | 73×73×64
conv | 3×3/2 | 71×71×80
conv | 3×3/1 | 35×35×192
3×Inception | As in figure 5 | 35×35×288
5×Inception | As in figure 6 | 17×17×768
2×Inception | As in figure 7 | 8×8×1280
pool | 8×8 | 8×8×2048
linear | logits | 1×1×2048
softmax | classifier | 1×1×1000
Table 1. The outline of the proposed network architecture. The output size of each module is the input size of the next one. We use variations of the reduction technique depicted in Figure 10 to reduce the grid sizes between the Inception blocks whenever applicable. We have marked with "padded" the convolution that uses 0-padding to maintain the grid size; 0-padding is also used inside those Inception modules that do not reduce the grid size. All other layers do not use padding. The various filter bank sizes are chosen to observe principle 4 from Section 2. | 1512.00567#26 | Rethinking the Inception Architecture for Computer Vision | Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim at utilizing the added computation as efficiently as possible, by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and while using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set. | http://arxiv.org/pdf/1512.00567 | Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna | cs.CV | null | null | cs.CV | 20151202 | 20151211 | [ {"id": "1502.01852"}, {"id": "1503.03832"}, {"id": "1509.09308"} ] |
1512.00567 | 27 | However, we have observed that the quality of the network is relatively stable under such variations as long as the principles from Section 2 are observed. Although our network is 42 layers deep, our computational cost is only about 2.5× higher than that of GoogLeNet, and it is still much more efficient than VGGNet.

# 7. Model Regularization via Label Smoothing

Here we propose a mechanism to regularize the classifier layer by estimating the marginalized effect of label-dropout during training. | 1512.00567#27 |
1512.00567 | 28 | For each training example $x$, our model computes the probability of each label $k \in \{1 \ldots K\}$: $p(k|x) = \frac{\exp(z_k)}{\sum_{i=1}^{K} \exp(z_i)}$. Here, $z_i$ are the logits or unnormalized log-probabilities. Consider the ground-truth distribution over labels $q(k|x)$ for this training example, normalized so that $\sum_k q(k|x) = 1$. For brevity, let us omit the dependence of $p$ and $q$ on example $x$. We define the loss for the example as the cross entropy: $\ell = -\sum_{k=1}^{K} \log(p(k))\, q(k)$. Minimizing this is equivalent to maximizing the expected log-likelihood of a label, where the label is selected according to its ground-truth distribution $q(k)$. Cross-entropy loss is differentiable with respect to the logits $z_k$, and thus can be used for gradient training of deep models. The gradient has a rather simple form: $\frac{\partial \ell}{\partial z_k} = p(k) - q(k)$, which is bounded between $-1$ and $1$. | 1512.00567#28 |
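This gradient identity is easy to verify numerically. Below is a minimal NumPy sketch, not from the paper; the class count, the random logits, and the finite-difference tolerance are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector of logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(z, q):
    # loss = -sum_k q(k) * log p(k), with p = softmax(z).
    return -np.sum(q * np.log(softmax(z)))

K = 5
rng = np.random.default_rng(0)
z = rng.normal(size=K)                    # logits z_k
q = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # one-hot ground truth, y = 2

# Analytic gradient: d loss / d z_k = p(k) - q(k), bounded in [-1, 1].
grad = softmax(z) - q

# Finite-difference check of the analytic gradient.
eps = 1e-6
for k in range(K):
    dz = np.zeros(K)
    dz[k] = eps
    num = (cross_entropy(z + dz, q) - cross_entropy(z - dz, q)) / (2 * eps)
    assert abs(num - grad[k]) < 1e-5
```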
1512.00567 | 29 | Consider the case of a single ground-truth label $y$, so that $q(y) = 1$ and $q(k) = 0$ for all $k \neq y$. In this case, minimizing the cross entropy is equivalent to maximizing the log-likelihood of the correct label. For a particular example $x$ with label $y$, the log-likelihood is maximized for $q(k) = \delta_{k,y}$, where $\delta_{k,y}$ is the Dirac delta, which equals 1 for $k = y$ and 0 otherwise. This maximum is not achievable for finite $z_k$, but is approached if $z_y \gg z_k$ for all $k \neq y$; that is, if the logit corresponding to the ground-truth label is much greater than all other logits. This, however, can cause two problems. First, it may result in over-fitting: if the model learns to assign full probability to the ground-truth label for each training example, it is not guaranteed to generalize. Second, it encourages the differences between the largest logit and all others to become large, and this, combined with the bounded gradient $\frac{\partial \ell}{\partial z_k}$, reduces the ability of the model to adapt. Intuitively, this happens because the model becomes too confident about its predictions. | 1512.00567#29 |
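To make the adaptability argument concrete, one can watch the gradient magnitude shrink as the ground-truth logit pulls away from the rest; a small illustrative sketch (the three-class setup and the gap values are our own assumptions, not the paper's):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

q = np.array([1.0, 0.0, 0.0])  # single ground-truth label y = 0
for gap in [0.0, 2.0, 5.0, 10.0]:
    z = np.array([gap, 0.0, 0.0])   # z_y exceeds the other logits by `gap`
    grad = softmax(z) - q           # cross-entropy gradient w.r.t. logits
    print(gap, np.round(np.abs(grad).max(), 6))
# As the gap grows, p(y) -> 1 and the gradient -> 0: the overconfident
# model receives almost no signal with which to adapt.
```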
1512.00567 | 30 | We propose a mechanism for encouraging the model to be less confident. While this may not be desired if the goal is to maximize the log-likelihood of training labels, it does regularize the model and makes it more adaptable. The method is very simple. Consider a distribution over labels $u(k)$, independent of the training example $x$, and a smoothing parameter $\epsilon$. For a training example with ground-truth label $y$, we replace the label distribution $q(k|x) = \delta_{k,y}$ with

$q'(k|x) = (1 - \epsilon)\,\delta_{k,y} + \epsilon\, u(k)$

which is a mixture of the original ground-truth distribution $q(k|x)$ and the fixed distribution $u(k)$, with weights $1-\epsilon$ and $\epsilon$, respectively. This can be seen as the distribution of the label $k$ obtained as follows: first, set it to the ground-truth label $k = y$; then, with probability $\epsilon$, replace $k$ with a sample drawn from the distribution $u(k)$. We propose to use the prior distribution over labels as $u(k)$. In our experiments, we used the uniform distribution $u(k) = 1/K$, so that

$q'(k) = (1 - \epsilon)\,\delta_{k,y} + \frac{\epsilon}{K}$

We refer to this change in ground-truth label distribution as label-smoothing regularization, or LSR. | 1512.00567#30 |
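A minimal sketch of constructing the LSR targets with the uniform prior $u(k) = 1/K$; the function name and the vectorized form are ours, not from the paper:

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """q'(k) = (1 - eps) * delta_{k,y} + eps / K for each example in y."""
    q = np.full((len(y), num_classes), eps / num_classes)
    q[np.arange(len(y)), y] += 1.0 - eps
    return q

y = np.array([3, 0])                          # ground-truth class indices
targets = smooth_labels(y, num_classes=1000)  # eps = 0.1 as in the paper
assert np.allclose(targets.sum(axis=1), 1.0)  # each row is a distribution
# True class weight: 1 - eps + eps/K = 0.9001; every other class: 0.0001.
```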
1512.00567 | 31 | Note that LSR achieves the desired goal of preventing the largest logit from becoming much larger than all others. Indeed, if this were to happen, then a single $p(k)$ would approach 1 while all others would approach 0. This would result in a large cross-entropy with $q'(k)$ because, unlike $q(k) = \delta_{k,y}$, all $q'(k)$ have a positive lower bound.

Another interpretation of LSR can be obtained by considering the cross entropy:

$H(q', p) = -\sum_{k=1}^{K} \log p(k)\, q'(k) = (1-\epsilon)\, H(q, p) + \epsilon\, H(u, p)$

Thus, LSR is equivalent to replacing a single cross-entropy loss $H(q, p)$ with a pair of such losses $H(q, p)$ and $H(u, p)$. | 1512.00567#31 |
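Spelling out the step the text compresses, the equivalence follows directly by expanding $q'(k)$ inside the cross entropy:

```latex
\begin{aligned}
H(q', p) &= -\sum_{k=1}^{K} q'(k)\,\log p(k) \\
         &= -\sum_{k=1}^{K} \bigl[(1-\epsilon)\,\delta_{k,y} + \epsilon\,u(k)\bigr]\log p(k) \\
         &= (1-\epsilon)\,H(q, p) + \epsilon\,H(u, p).
\end{aligned}
```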
1512.00567 | 32 | The second loss penalizes the deviation of the predicted label distribution $p$ from the prior $u$, with the relative weight $\frac{\epsilon}{1-\epsilon}$. Note that this deviation could be equivalently captured by the KL divergence, since $H(u, p) = D_{KL}(u \| p) + H(u)$ and $H(u)$ is fixed. When $u$ is the uniform distribution, $H(u, p)$ is a measure of how dissimilar the predicted distribution $p$ is to uniform, which could also be measured (but not equivalently) by negative entropy $-H(p)$; we have not experimented with this approach.
In our ImageNet experiments with $K = 1000$ classes, we used $u(k) = 1/1000$ and $\epsilon = 0.1$; with these settings the ground-truth label receives weight $1 - \epsilon + \epsilon/K = 0.9001$ and every other label $\epsilon/K = 0.0001$. For ILSVRC 2012, we have found a consistent improvement of about 0.2% absolute both for top-1 error and top-5 error (cf. Table 3). | 1512.00567#32 |
1512.00567 | 33 | # 8. Training Methodology

We have trained our networks with stochastic gradient descent, utilizing the TensorFlow [1] distributed machine learning system with 50 replicas, each running on an NVidia Kepler GPU, with batch size 32, for 100 epochs. Our earlier experiments used momentum [19] with a decay of 0.9, while our best models were achieved using RMSProp [21] with a decay of 0.9 and $\epsilon = 1.0$. We used a learning rate of 0.045, decayed every two epochs using an exponential rate of 0.94. In addition, gradient clipping [14] with threshold 2.0 was found to be useful to stabilize the training. Model evaluations are performed using a running average of the parameters computed over time.
# 9. Performance on Lower Resolution Input

A typical use-case of vision networks is the post-classification of detection results, for example in the Multibox [4] context. This involves the analysis of a relatively small patch of the image containing a single object with some context. The task is to decide whether the center part of the patch corresponds to some object, and to determine the class of that object if it does. The challenge is that objects tend to be relatively small and low-resolution. This raises the question of how to properly deal with lower resolution input. | 1512.00567#33 |
1512.00567 | 34 | The common wisdom is that models employing higher resolution receptive fields tend to result in significantly improved recognition performance. However, it is important to distinguish between the effect of the increased resolution of the first layer receptive field and the effect of larger model capacity and computation. If we just change the resolution of the input without further adjustment to the model, then we end up using computationally much cheaper models to solve more difficult tasks. Of course, it is natural that these solutions lose out already because of the reduced computational effort. In order to make an accurate assessment, the model needs to analyze vague hints in order to be able to "hallucinate" the fine details. This is computationally costly.

Receptive Field Size | Top-1 Accuracy (single frame)
---|---
79×79 | 75.2%
151×151 | 76.4%
299×299 | 76.6%

Table 2. Comparison of recognition performance when the size of the receptive field varies, but the computational cost is constant.

The question therefore remains: how much | 1512.00567#34 |
1512.00567 | 35 | does higher input resolution help if the computational effort is kept constant? One simple way to ensure constant effort is to reduce the strides of the first two layers in the case of lower resolution input, or to simply remove the first pooling layer of the network.

For this purpose we have performed the following three experiments:

1. 299×299 receptive field with stride 2 and maximum pooling after the first layer.

2. 151×151 receptive field with stride 1 and maximum pooling after the first layer.

3. 79×79 receptive field with stride 1 and without pooling after the first layer (see the size arithmetic sketched below). | 1512.00567#35 |
1512.00567 | 36 | All three networks have almost identical computational cost. Although the third network is slightly cheaper, the cost of the pooling layer is marginal (within 1% of the total cost of the network). In each case, the networks were trained until convergence and their quality was measured on the validation set of the ImageNet ILSVRC 2012 classification benchmark. The results can be seen in Table 2. Although the lower-resolution networks take longer to train, the quality of the final result is quite close to that of their higher resolution counterparts.

However, if one would just naively reduce the network size according to the input resolution, then the network would perform much more poorly. However, this would be an unfair comparison, as we would be comparing a 16 times cheaper model on a more difficult task.

These results of Table 2 also suggest that one might consider using dedicated high-cost low-resolution networks for smaller objects in the R-CNN [5] context. | 1512.00567#36 |
1512.00567 | 37 | # 10. Experimental Results and Comparisons

Table 3 shows the experimental results on the recognition performance of our proposed architecture (Inception-v2) as described in Section 6. Each Inception-v2 line shows the result of the cumulative changes, including the highlighted new modification plus all the earlier ones. Label Smoothing refers to the method described in Section 7. Factorized 7×7 includes a change that factorizes the first 7×7 convolutional layer into a sequence of 3×3 convolutional layers. BN-auxiliary refers to the version in which

Network | Top-1 Error | Top-5 Error | Cost Bn Ops
---|---|---|---
GoogLeNet [20] | 29% | 9.2% | 1.5
BN-GoogLeNet | 26.8% | - | 1.5
BN-Inception [7] | 25.2% | 7.8% | 2.0
Inception-v2 | 23.4% | - | 3.8
Inception-v2 RMSProp | 23.1% | 6.3% | 3.8
Inception-v2 Label Smoothing | 22.8% | 6.1% | 3.8
Inception-v2 Factorized 7×7 | 21.6% | 5.8% | 4.8
Inception-v2 BN-auxiliary | 21.2% | 5.6% | 4.8

| 1512.00567#37 |
1512.00567 | 38 | Table 3. Single crop experimental results comparing the cumulative effects of the various contributing factors. We compare our numbers with the best published single-crop inference for Ioffe et al [7]. For the "Inception-v2" lines, the changes are cumulative and each subsequent line includes the new change in addition to the previous ones. The last line, including all the changes, is what we refer to as "Inception-v3" below. Unfortunately, He et al [6] report only 10-crop evaluation results, but not single-crop results, which are reported in Table 4 below.

Network | Crops Evaluated | Top-1 Error | Top-5 Error
---|---|---|---
GoogLeNet [20] | 10 | - | 9.15%
GoogLeNet [20] | 144 | - | 7.89%
VGG [18] | - | 24.4% | 6.8%
BN-Inception [7] | 144 | 22% | 5.82%
PReLU [6] | 10 | 24.27% | 7.38%
PReLU [6] | - | 21.59% | 5.71%
Inception-v3 | 12 | 19.47% | 4.48%
Inception-v3 | 144 | 18.77% | 4.2%

| 1512.00567#38 |
1512.00567 | 39 | Table 4. Single-model, multi-crop experimental results comparing the cumulative effects of the various contributing factors. We compare our numbers with the best published single-model inference results on the ILSVRC 2012 classification benchmark.

the fully connected layer of the auxiliary classifier is also batch-normalized, not just the convolutions. We refer to the model in the last row of Table 3 as Inception-v3 and evaluate its performance in the multi-crop and ensemble settings.

All our evaluations are done on the 48238 non-blacklisted examples of the ILSVRC-2012 validation set, as suggested by [16]. We have evaluated all 50000 examples as well, and the results were roughly 0.1% worse in top-5 error and around 0.2% in top-1 error. In the upcoming version of this paper, we will verify our ensemble result on the test set, but our last evaluation of BN-Inception in spring [7] indicates that test and validation set error tend to correlate very well. | 1512.00567#39 |
1512.00567 | 40 |
Network | Models Evaluated | Crops Evaluated | Top-1 Error | Top-5 Error
---|---|---|---|---
VGGNet [18] | 2 | - | 23.7% | 6.8%
GoogLeNet [20] | 7 | 144 | - | 6.67%
PReLU [6] | - | - | - | 4.94%
BN-Inception [7] | 6 | 144 | 20.1% | 4.9%
Inception-v3 | 4 | 144 | 17.2% | 3.58%*

Table 5. Ensemble evaluation results comparing multi-model, multi-crop reported results. Our numbers are compared with the best published ensemble inference results on the ILSVRC 2012 classification benchmark. *All results but the top-5 ensemble result are reported on the validation set. The ensemble yielded 3.46% top-5 error on the validation set. | 1512.00567#40 |
1512.00567 | 41 | # 11. Conclusions

We have provided several design principles to scale up convolutional networks and studied them in the context of the Inception architecture. This guidance can lead to high performance vision networks that have a relatively modest computation cost compared to simpler, more monolithic architectures. Our highest quality version of Inception-v3 reaches 21.2% top-1 and 5.6% top-5 error for single crop evaluation on the ILSVRC 2012 classification benchmark, setting a new state of the art. This is achieved with a relatively modest (2.5×) increase in computational cost compared to the network described in Ioffe et al [7]. Still, our solution uses much less computation than the best published results based on denser networks: our model outperforms the results of He et al [6], cutting the top-5 (top-1) error by 25% (14%) relative, respectively, while being six times cheaper computationally and using at least five times fewer parameters (estimated). Our ensemble of four Inception-v3 models reaches 3.5% top-5 error with multi-crop evaluation, which represents an over 25% reduction to the best published results and is almost half the error of the ILSVRC 2014 winning GoogLeNet ensemble. | 1512.00567#41 |
1512.00567 | 42 | We have also demonstrated that high quality results can be reached with receptive field resolution as low as 79×79. This might prove to be helpful in systems for detecting relatively small objects. We have studied how factorizing convolutions and aggressive dimension reductions inside neural networks can result in networks with relatively low computational cost while maintaining high quality. The combination of lower parameter count and additional regularization with batch-normalized auxiliary classifiers and label-smoothing allows for training high quality networks on relatively modest sized training sets.

# References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, | 1512.00567#42 |
1512.00567 | 43 | R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

[2] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In Proceedings of The 32nd International Conference on Machine Learning, 2015. | 1512.00567#43 |
1512.00567 | 45 | [5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.

[6] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. arXiv preprint arXiv:1502.01852, 2015.

[7] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448-456, 2015.

[8] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1725-1732. IEEE, 2014. | 1512.00567#45 |
1512.00567 | 46 | [9] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012.

[10] A. Lavin. Fast algorithms for convolutional neural networks. arXiv preprint arXiv:1509.09308, 2015.

[11] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. arXiv preprint arXiv:1409.5185, 2014.

[12] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.

[13] Y. Movshovitz-Attias, Q. Yu, M. C. Stumpe, V. Shet, S. Arnoud, and L. Yatziv. Ontological supervision for fine grained classification of street view storefronts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1693-1702, 2015. | 1512.00567#46 |
1512.00567 | 47 | [14] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063, 2012.

[15] D. C. Psichogios and L. H. Ungar. SVD-NET: an algorithm that automatically selects network structure. IEEE Transactions on Neural Networks / a publication of the IEEE Neural Networks Council, 5(3):513-515, 1993.

[16] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. 2014.

[17] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. arXiv preprint arXiv:1503.03832, 2015.

[18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. | 1512.00567#47 |
1512.00567 | 48 | [19] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1139-1147. JMLR Workshop and Conference Proceedings, May 2013.

[20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015.

[21] T. Tieleman and G. Hinton. Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Accessed: 2015-11-05.

[22] A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1653-1660. IEEE, 2014. | 1512.00567#48 |
1511.09249 | 0 | arXiv:1511.09249v1 [cs.AI] 30 Nov 2015

# On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models

Technical Report

Jürgen Schmidhuber, The Swiss AI Lab, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Università della Svizzera italiana (USI), Scuola universitaria professionale della Svizzera italiana (SUPSI), Galleria 2, 6928 Manno-Lugano, Switzerland

30 November 2015
# Abstract | 1511.09249#0 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 1 | 30 November 2015
# Abstract
This paper addresses the general problem of reinforcement learning (RL) in partially observable environments. In 2013, our large RL recurrent neural networks (RNNs) learned from scratch to drive simulated cars from high-dimensional video input. However, real brains are more powerful in many ways. In particular, they learn a predictive model of their initially unknown environment, and somehow use it for abstract (e.g., hierarchical) planning and reasoning. Guided by algorithmic information theory, we describe RNN-based AIs (RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending sequences of tasks, some of them provided by the user, others invented by the RNNAI itself in a curious, playful fashion, to improve its RNN-based world model. Unlike our previous model-building RNN-based RL machines dating back to 1990, the RNNAI learns to actively query its model for abstract reasoning and planning and decision making, essentially "learning to think." The basic ideas of this report can be applied to many other cases where one RNN-like system exploits the algorithmic information content of another. They are taken from a grant proposal submitted in Fall 2014, and also explain concepts such as "mirror neurons." Experimental results will be described in separate papers.
# Contents | 1511.09249#1 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 2 | 1 Introduction to Reinforcement Learning (RL) with Recurrent Neural Networks (RNNs) in Partially Observable Environments
1.1 RL through Direct and Indirect Search in RNN Program Space
1.2 Deep Learning in NNs: Supervised & Unsupervised Learning (SL & UL)
1.3 Gradient Descent-Based NNs for RL
1.3.1 Early RNN Controllers with Predictive RNN World Models
1.3.2 Early Predictive RNN World Models Combined with Traditional RL
1.4 Hierarchical & Multitask RL and Algorithmic Transfer Learning
2.1 Basic AIT Argument
2.2 One RNN-Like System Actively Learns to Exploit Algorithmic Information of Another
2.3 Consequences of the AIT Argument for Model-Building Controllers
3.1 Standard Activation Spreading in Typical RNNs | 1511.09249#2 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 3 | 3.1 Standard Activation Spreading in Typical RNNs
3.2 Alternating Training Phases for Controller C and World Model M
4.1 M's Compression Performance on the History so far
4.2 M's Training
4.3 M may have a Built-In FNN Preprocessor
5.1 C as a Standard RL Machine whose States are M's Activations
5.2 C as an Evolutionary RL (R)NN whose Inputs are M's Activations
5.3 C Learns to Think with M: High-Level Plans and Abstractions
5.4 Incremental / | 1511.09249#3 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 5 | # 1 Introduction to Reinforcement Learning (RL) with Recurrent Neural Networks (RNNs) in Partially Observable Environments¹
General Reinforcement Learning (RL) agents must discover, without the aid of a teacher, how to interact with a dynamic, initially unknown, partially observable environment in order to maximize their expected cumulative reward signals, e.g., [123, 272, 310]. There may be arbitrary, a priori unknown delays between actions and perceivable consequences. The RL problem is as hard as any problem of computer science, since any task with a computable description can be formulated in the RL framework, e.g., [109].
To become a general problem solver that is able to run arbitrary problem-solving programs, the controller of a robot or an artificial agent must be a general-purpose computer [67, 35, 282, 194].
¹ Parts of this introduction are similar to parts of a much more extensive recent Deep Learning overview [245], which has many additional references.
| 1511.09249#5 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 6 | ¹ Parts of this introduction are similar to parts of a much more extensive recent Deep Learning overview [245], which has many additional references.
Artificial recurrent neural networks (RNNs) fit this bill. A typical RNN consists of many simple, connected processors called neurons, each producing a sequence of real-valued activations. Input neurons get activated through sensors perceiving the environment, other neurons get activated through weighted connections or wires from previously active neurons, and some neurons may affect the environment by triggering actions. Learning or credit assignment is about finding real-valued weights that make the NN exhibit desired behavior, such as driving a car. Depending on the problem and how the neurons are connected, such behavior may require long causal chains of computational stages, where each stage transforms the aggregate activation of the network, often in a non-linear manner.
Unlike feedforward NNs (FNNs; [95, 23]) and Support Vector Machines (SVMs; [287, 253]), RNNs can in principle interact with a dynamic partially observable environment in arbitrary, computable ways, creating and processing memories of sequences of input patterns [258]. The weight matrix of an RNN is its program. Without a teacher, reward-maximizing programs of an RNN must be learned through repeated trial and error.
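To make the "weight matrix as program" view concrete, here is a minimal sketch (not from the paper) of a single recurrent update step; the NumPy setup, sizes, and tanh nonlinearity are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 2

# The RNN's "program": its weight matrices.
W_in  = rng.normal(0, 0.5, (n_hidden, n_in))      # input -> hidden
W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))  # hidden -> hidden (recurrence)
W_out = rng.normal(0, 0.5, (n_out, n_hidden))     # hidden -> output (actions)

def rnn_step(h, x):
    """One time step: new hidden state and output (action) vector."""
    h_new = np.tanh(W_in @ x + W_rec @ h)
    y = W_out @ h_new
    return h_new, y

h = np.zeros(n_hidden)
for t in range(5):                  # process a short input sequence
    x = rng.normal(size=n_in)       # stand-in for a sensory observation
    h, y = rnn_step(h, x)
    print(t, y)
```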
# 1.1 RL through Direct and Indirect Search in RNN Program Space | 1511.09249#6 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 7 | # 1.1 RL through Direct and Indirect Search in RNN Program Space
It is possible to train small RNNs with a few hundred or a few thousand weights using evolutionary algorithms [200, 255, 105, 56, 68] to search the space of NN weights [165, 307, 44, 321, 180, 259, 320, 164, 173, 69, 71, 187, 121, 313, 66, 270, 269, 305], or through policy gradients (PGs) [314, 315, 316, 274, 18, 1, 63, 128, 313, 210, 192, 191, 256, 85, 312, 190, 82, 93][245, Sec. 6.6]. For example, our evolutionary algorithms outperformed traditional, Dynamic Programming [20]-based RL methods [272][245, Sec. 6.2] in partially observable environments, e.g., [72]. However, these techniques by themselves are insufficient for solving complex control problems involving high-dimensional sensory inputs such as video, from scratch. The program search space for networks of the size required for these tasks is simply too large. | 1511.09249#7 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 8 | However, the search space can often be reduced dramatically by evolving compact encodings of neural networks (NNs), e.g., through Lindenmayer Systems [115], graph rewriting [127], Cellular Encoding [83], HyperNEAT [268], and other techniques [245, Sec. 6.7]. In very general early work, we used universal assembler-like languages to encode NNs [235], later coefficients of a Discrete Cosine Transform (DCT) [132]. The latter method, Compressed RNN Search [132], was used to successfully evolve RNN controllers with over a million weights (the largest ever evolved) to drive a simulated car in a video game, based solely on a high-dimensional video stream [132], learning both control and visual processing from scratch, without unsupervised pre-training of a vision system. This was the first published Deep Learner to learn control policies directly from high-dimensional sensory input using RL.
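The compressed-encoding idea can be sketched as follows (my toy illustration, not the Compressed RNN Search code): evolution searches a handful of DCT coefficients, and a decoder expands them into a far larger weight matrix. The 3x3 coefficient budget and the orthonormal cosine basis are assumptions chosen for brevity:

```python
import numpy as np

def dct_basis(n, k):
    """k-th orthonormal DCT-II basis vector of length n."""
    i = np.arange(n)
    v = np.cos(np.pi * (i + 0.5) * k / n)
    return v / np.linalg.norm(v)

def decode_weights(coeffs, rows, cols):
    """Expand a small grid of DCT coefficients into a rows x cols weight matrix."""
    kr, kc = coeffs.shape
    W = np.zeros((rows, cols))
    for a in range(kr):
        for b in range(kc):
            W += coeffs[a, b] * np.outer(dct_basis(rows, a), dct_basis(cols, b))
    return W

rng = np.random.default_rng(1)
genome = rng.normal(size=(3, 3))      # only 9 evolvable numbers ...
W = decode_weights(genome, 50, 200)   # ... decode to a 10,000-weight matrix
print(W.shape)                        # (50, 200): search space shrunk ~1000x
```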
One can further facilitate the learning task of controllers through certain types of supervised learning (SL) and unsupervised learning (UL) based on gradient descent techniques. In particular, UL/SL can be used to compress the search space, and to build predictive world models to accelerate RL, as will be discussed later. But first let us review the relevant NN algorithms for SL and UL. | 1511.09249#8 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 9 | # 1.2 Deep Learning in NNs: Supervised & Unsupervised Learning (SL & UL)
The term Deep Learning was first introduced to Machine Learning in 1986 [49] and to NNs in 2000 [3, 244]. The first deep learning NNs, however, date back to the 1960s [113, 245] (certain more recent developments are covered in a survey [139]).
To maximize differentiable objective functions of SL and UL, NN researchers almost invariably use backpropagation (BP) [125, 30, 52] in discrete graphs of nodes with differentiable activation functions [151, 265][245, Sec. 5.5]. Typical applications include BP in FNNs [297], or BP through time (BPTT) and similar methods in RNNs, e.g., [299, 317, 208][245]. BP and BPTT suffer from the
| 1511.09249#9 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 10 |
Fundamental Deep Learning Problem first discovered and analyzed in my lab in 1991: with standard activation functions, cumulative backpropagated error signals decay exponentially in the number of layers, or they explode [98, 99]. Hence most early FNNs [297, 211] had few layers. Similarly, early RNNs [245, Sec. 5.6.1] could not generalize well under both short and long time lags between relevant events. Over the years, several ways of overcoming the Fundamental Deep Learning Problem have been explored. For example, deep stacks of unsupervised RNNs [228] or FNNs [13, 96, 139] help to accelerate subsequent supervised learning through BPTT [228, 230] or BP [96]. One can also "distill" or compress the knowledge of a teacher RNN into a student RNN by forcing the student to predict the hidden units of the teacher [228, 230]. | 1511.09249#10 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
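A tiny numerical illustration of the decay behavior described in the chunk above (my sketch; the shared-weight tanh chain is an assumption, not the original 1991 analysis): the backpropagated signal is a product of per-layer factors w * f'(z), so it shrinks geometrically when those factors stay below 1 in magnitude, and explodes when they stay above 1.

```python
import numpy as np

def grad_through_chain(depth, w, x0=0.5):
    """|d out / d x0| through a chain of `depth` tanh units sharing weight w."""
    xs, x = [], x0
    for _ in range(depth):           # forward pass, remember pre-layer inputs
        xs.append(x)
        x = np.tanh(w * x)
    g = 1.0                          # backward pass: chain rule
    for x_prev in reversed(xs):
        a = np.tanh(w * x_prev)
        g *= w * (1.0 - a * a)       # d tanh(w x)/d x = w (1 - tanh(w x)^2)
    return abs(g)

for depth in (5, 20, 50):
    print(depth, f"{grad_through_chain(depth, w=0.9):.3e}")
# The signal shrinks roughly geometrically with depth; if |w (1 - a^2)| > 1
# at every layer, the same product explodes instead.
```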
1511.09249 | 11 | Long Short-Term Memory (LSTM; [101, 61, 77]) alleviates the Fundamental Deep Learning Problem, and was the first RNN architecture to win international contests (in connected handwriting), e.g., [79, 247][245]. Connectionist Temporal Classification (CTC) [76] is a widely used gradient-based method for finding RNN weights that maximize the probability of teacher-provided label sequences, given (typically much longer and more high-dimensional) streams of real-valued input vectors. For example, CTC was used by Baidu to break an important speech recognition record [88]. Many recent state-of-the-art results in sequence processing are based on LSTM, which learned to control robots [159], and was used to set benchmark records in prosody contour prediction [55] (IBM), text-to-speech synthesis [54] (Microsoft), large vocabulary speech recognition [213] (Google), and machine translation [271] (Google). CTC-trained LSTM greatly improved Google Voice [214] and is now available to over a billion smartphone users. Nevertheless, at least in some applications, other RNNs may sometimes | 1511.09249#11 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
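For reference alongside the LSTM results above, a minimal single-step LSTM cell (my sketch; the stacked gate layout, shared bias vector, and random initialization are illustrative assumptions, and production implementations differ in many details):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, X), U: (4H, H), b: (4H,).
    Gate order in the stacked matrices: input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0 * H:1 * H])        # input gate
    f = sigmoid(z[1 * H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])        # output gate
    g = np.tanh(z[3 * H:4 * H])        # candidate cell update
    c_new = f * c + i * g              # additive cell path eases gradient flow
    h_new = o * np.tanh(c_new)
    return h_new, c_new

X, H = 3, 5
rng = np.random.default_rng(0)
W = rng.normal(0, 0.3, (4 * H, X))
U = rng.normal(0, 0.3, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(4, X)):      # a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```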
1511.09249 | 13 | Today's faster computers, such as GPUs, mitigate the Fundamental Deep Learning Problem for FNNs [181, 34, 198, 38, 40]. In particular, many recent computer vision contests were won by fully supervised Max-Pooling Convolutional NNs (MPCNNs), which consist of alternating convolutional [58, 19] and max-pooling [296] layers topped off by standard fully connected output layers. All weights are trained by backpropagation [140, 199, 220, 245]. Ensembles [218, 28] of GPU-based MPCNNs [40, 41] achieved dramatic improvements of long-standing benchmark records, e.g., MNIST (2011), won numerous competitions [247, 38, 41, 39, 161, 42, 36, 134, 322, 37, 245], and achieved the first human-competitive or even superhuman results on well-known benchmarks, e.g., [247, 42, 245]. There are many recent variations and improvements [64, 74, 124, 75, 277, 266, 245]. Supervised Transfer Learning from one dataset to another [32, 43] can speed up learning. A combination of Convolutional NNs (CNNs) and LSTM led to best results in automatic image caption generation [288].
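A minimal sketch of the alternating convolution / max-pooling pattern just described (mine, not any of the cited architectures; single channel, 'valid' convolution, 2x2 pooling, and a random readout are simplifying assumptions):

```python
import numpy as np

def conv2d_valid(x, k):
    """Single-channel 'valid' convolution (really cross-correlation)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2x2(x):
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]              # crop odd edges
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.normal(size=(28, 28))
k1, k2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

h = np.maximum(0.0, conv2d_valid(img, k1))     # conv + rectifier nonlinearity
h = maxpool2x2(h)                              # max-pooling
h = np.maximum(0.0, conv2d_valid(h, k2))       # second conv stage
h = maxpool2x2(h)
logits = rng.normal(size=(10, h.size)) @ h.ravel()  # fully connected readout
print(h.shape, logits.shape)
```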
# 1.3 Gradient Descent-Based NNs for RL | 1511.09249#13 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 14 | # 1.3 Gradient Descent-Based NNs for RL
Perhaps the most well-known RL application is Tesauro's backgammon player [280] from 1994, which learned to achieve the level of human world champions by playing against itself. It uses a reactive (memory-free) policy based on the simplifying assumption of Markov Decision Processes: the current input of the RL agent conveys all information necessary to compute an optimal next output event or decision. The policy is implemented as a gradient-based FNN trained by the method of temporal differences [272][245, Sec. 6.2]. During play, the FNN learns to map board states to predictions of expected cumulative reward, and selects actions leading to states with maximal predicted reward. A very similar approach (also based on over 20-year-old methods) employed a CNN (see Sec. 1.2) to play several Atari video games directly from 84×84 pixel 60 Hz video input [167], using Neural Fitted Q-Learning (NFQ) [201] based on experience replay (1991) [149]. Even better results were
achieved by using (slow) Monte Carlo tree planning to train comparatively fast deep NNs [86]. | 1511.09249#14 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
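The temporal-difference update at the heart of the backgammon and NFQ work discussed above can be sketched compactly (my toy; a tabular value function on a deterministic 5-state chain stands in for the FNN/CNN function approximators):

```python
import numpy as np

# Toy 5-state chain: always step right; reaching the end pays reward 1.
n_states, alpha, gamma = 5, 0.1, 0.99
V = np.zeros(n_states + 1)          # V[n_states] = value of terminal state

for episode in range(1000):
    s = 0
    while s < n_states:
        s_next = s + 1
        r = 1.0 if s_next == n_states else 0.0
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print(V[:n_states])  # converges toward [gamma**4, gamma**3, gamma**2, gamma, 1]
```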
1511.09249 | 15 |
achieved by using (slow) Monte Carlo tree planning to train comparatively fast deep NNs [86].
Such FNN approaches cannot work in realistic partially observable environments where memories of previous inputs have to be stored for a priori unknown time intervals. This triggered work on partially observable Markov decision problems (POMDPs) [223, 222, 227, 204, 205, 206, 316, 148, 278, 122, 152, 25, 114, 160, 126, 308, 309, 183]. Traditional RL techniques [272][245, Sec. 6.2] based on Dynamic Programming [20] can be combined with gradient descent methods to train an RNN as a value-function approximator that maps entire event histories to predictions of expected cumulative reward [227, 148]. LSTM [101, 61, 189, 78, 77] (see Sec. 1.2) was used in this way for RL robots [12]. | 1511.09249#15 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 16 | Gradient-based UL may be used to reduce an RL controller's search space by feeding it only compact codes of high-dimensional inputs [118, 142, 46][245, Sec. 6.4]. For example, NFQ [201] was applied to real-world control tasks [138, 202] where purely visual inputs were compactly encoded in hidden layers of deep autoencoders [245, Sec. 5.7 and 5.15]. RL combined with unsupervised learning based on Slow Feature Analysis [318, 131] enabled a humanoid robot to learn skills from raw video streams [154]. A RAAM RNN [193] was employed as a deep unsupervised sequence encoder for RL [65].
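A minimal sketch of feeding an RL controller compact codes instead of raw observations (mine; a tied-weight linear autoencoder trained by plain gradient descent on random data is a stand-in for the deep autoencoders cited above):

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, N = 64, 4, 256               # 64-dim observations -> 4-dim codes
X = rng.normal(size=(N, D))        # stand-in for high-dimensional sensor data

W = rng.normal(0, 0.1, (K, D))     # tied-weight linear autoencoder
lr = 0.01
for step in range(300):
    Z = X @ W.T                    # compact codes: these would feed C
    E = Z @ W - X                  # reconstruction error
    # gradient of (1/N) * ||X W^T W - X||^2 with respect to W
    grad = 2.0 * W @ (E.T @ X + X.T @ E) / N
    W -= lr * grad

# Random isotropic data compresses poorly; structured sensory streams
# (the interesting case) would yield far better low-dimensional codes.
print(f"reconstruction MSE: {np.mean(E ** 2):.3f}")
```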
# 1.3.1 Early RNN Controllers with Predictive RNN World Models | 1511.09249#16 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 17 | One important application of gradient-based UL is to obtain a predictive world model, M, that a controller, C, may use to achieve its goals more efficiently, e.g., through cheap, "mental" M-based trials, as opposed to expensive trials in the real world [301, 273]. The first combination of an RL RNN C and a UL RNN M was ours and dates back to 1990 [223, 222, 226, 227], generalizing earlier similar controller/model systems (CM systems) based on FNNs [298, 179]; compare related work [177, 119, 301, 300, 209, 120, 178, 302, 73, 45, 144, 166, 153, 196, 60][245, Sec. 6.1]. M tries to learn to predict C's inputs (including reward signals) from previous inputs and actions. M is also temporarily used as a surrogate for the environment: M and C form a coupled RNN where M's outputs become inputs of C, whose outputs (actions) in turn become inputs of M. Now a gradient descent technique [299, 317, 208] (see Sec. 1.2) can be used to learn and plan ahead by training C in | 1511.09249#17 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 18 | become inputs of M. Now a gradient descent technique [299, 317, 208] (see Sec. 1.2) can be used to learn and plan ahead by training C in a series of M-simulated trials to produce output action sequences achieving desired input events, such as high real-valued reward signals (while the weights of M remain fixed). An RL active vision system, from 1991 [249], used this basic principle to learn sequential shifts (saccades) of a fovea to detect targets in a visual scene, thus learning a rudimentary version of selective attention. | 1511.09249#18 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
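To make the coupled C/M training loop of the last two chunks concrete, here is a small sketch (mine): C is improved purely inside a frozen model M by ascending the predicted cumulative reward. Finite differences replace gradient descent through the coupled RNN, and the random frozen M and tiny sizes are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 3, 2                                    # toy state / action sizes

# Frozen world model M (here just a fixed random net): maps (state, action)
# to a predicted next state plus a predicted reward in its last output unit.
M = rng.normal(0, 0.8, (S + 1, S + A))

def mental_return(Wc, steps=10):
    """Total predicted reward of controller C (weights Wc) inside frozen M."""
    s, total = np.full(S, 0.1), 0.0
    for _ in range(steps):
        a = np.tanh(Wc @ s)                    # C: state -> action
        out = np.tanh(M @ np.concatenate([s, a]))
        s, total = out[:S], total + out[S]     # M: next state and reward
    return total

Wc, eps, lr = rng.normal(0, 0.5, (A, S)), 1e-4, 0.05
for _ in range(100):                           # cheap "mental" trials only
    g = np.zeros_like(Wc)                      # finite-difference gradient
    base = mental_return(Wc)
    for i in range(A):
        for j in range(S):
            Wp = Wc.copy()
            Wp[i, j] += eps
            g[i, j] = (mental_return(Wp) - base) / eps
    Wc += lr * g                               # ascend predicted reward
print(f"predicted return after training: {mental_return(Wc):.3f}")
```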
1511.09249 | 19 | Those early CM systems, however, did not yet use powerful RNNs such as LSTM. A more fundamental problem is that if the environment is too noisy, M will usually only learn to approximate the conditional expectations of predicted values, given parts of the history. In certain noisy environments, Monte Carlo Tree Sampling (MCTS; [29]) and similar techniques may be applied to M to plan successful future action sequences for C. All such methods, however, are about simulating possible futures time step by time step, without profiting from human-like hierarchical planning or abstract reasoning, which often ignores irrelevant details.
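In the same step-by-step spirit, though far simpler than MCTS, planning with a model can be sketched as random shooting (mine; model_step is a hypothetical stand-in for a trained M):

```python
import numpy as np

rng = np.random.default_rng(0)

def model_step(s, a):
    """Hypothetical stand-in for a trained world model M."""
    return 0.9 * s + 0.1 * a, -float(abs(s))   # next state, reward

def plan(s0, horizon=8, n_candidates=64):
    """Random shooting: simulate futures step by step inside M and
    return the first action of the best-scoring action sequence."""
    best_ret, best_first = -np.inf, 0.0
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, horizon)
        s, ret = s0, 0.0
        for a in seq:
            s, r = model_step(s, a)
            ret += r
        if ret > best_ret:
            best_ret, best_first = ret, seq[0]
    return best_first

print(plan(s0=2.0))   # should push the state toward 0 (reward = -|s|)
```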
# 1.3.2 Early Predictive RNN World Models Combined with Traditional RL
In the early 1990s, an RNN M as in Sec. 1.3.1 was also combined [227, 150] with traditional temporal difference methods [122, 272][245, Sec. 6.2] based on the Markov assumption (Sec. 1.3). While M is processing the history of actions and observations to predict future inputs and rewards, the internal states of M are used as inputs to a temporal difference-based predictor of cumulative predicted reward, to be maximized through appropriate action sequences. One of our systems described in 1991 [227] actually collapsed the cumulative reward predictor into the predictive world model, M.
| 1511.09249#19 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 21 | Work on NN-based Hierarchical RL (HRL) without predictive world models has been published since the early 1990s. In particular, gradient-based subgoal discovery with RNNs decomposes RL tasks into subtasks for submodules [225]. Numerous alternative HRL techniques have been proposed [204, 206, 117, 279, 295, 171, 195, 50, 162, 51, 15, 215, 11, 260]. While HRL frameworks such as Feudal RL [48] and options [275, 16, 261] do not directly address the problem of automatic subgoal discovery, HQ-Learning [309] automatically decomposes problems in partially observable environments into sequences of simpler subtasks that can be solved by memoryless policies learnable by reactive sub-agents. Related methods include incremental NN evolution [70], hierarchical evolution of NNs [306, 285], and hierarchical Policy Gradient algorithms [63]. Recent HRL organizes potentially deep NN-based RL sub-modules into self-organizing, 2-dimensional motor control maps [203] inspired by neurophysiological findings [81]. The methods above, however, assign credit in hierarchical fashion by limited fixed schemes that are not themselves improved or adapted in problem-specific ways. The next sections will describe novel CM systems that overcome such drawbacks of the above-mentioned methods. | 1511.09249#21 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 22 | General methods for incremental multitask RL and algorithmic transfer learning that are not NN-specific include the evolutionary ADATE system [182], the Success-Story Algorithm for Self-Modifying Policies running on general-purpose computers [233, 252, 251], and the Optimal Ordered Problem Solver [238], which learns algorithmic solutions to new problems by inspecting and exploiting (in arbitrary computable fashion) solutions to old problems, in a way that is asymptotically time-optimal. And POWERPLAY [243, 267] incrementally learns to become a more and more general algorithmic problem solver, by continually searching the space of possible pairs of new tasks and modifications of the current solver, until it finds a more powerful solver that, unlike the unmodified solver, solves all previously learned tasks plus the new one, or at least simplifies/compresses/speeds up previous solutions, without forgetting any.
# 2 Algorithmic Information Theory (AIT) for RNN-based AIs | 1511.09249#22 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 23 | # 2 Algorithmic Information Theory (AIT) for RNN-based AIs
Our early RNN-based CM systems (1990) mentioned in Sec. 1.3.1 learn a predictive model of their initially unknown environment. Real brains seem to do so too, but are still far superior to present artificial systems in many ways. They seem to exploit the model in smarter ways, e.g., to plan action sequences in hierarchical fashion, or through other types of abstract reasoning, continually building on earlier acquired skills, becoming increasingly general problem solvers able to deal with a large number of diverse and complex tasks. Here we describe RNN-based Artificial Intelligences (RNNAIs) designed to do the same by "learning to think."² | 1511.09249#23 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 24 | While FNNs are traditionally linked [23] to concepts of statistical mechanics and information theory [24, 257, 136], the programs of general computers such as RNNs call for the framework of Algorithmic Information Theory (AIT) [263, 130, 33, 145, 264, 147] (own AIT work: [234, 235, 236, 237, 238]). Given some universal programming language [67, 35, 282, 194] for a universal computer, the algorithmic information content or Kolmogorov complexity of some computable object is the length of the shortest program that computes it. Since any program for one computer can be translated into a functionally equivalent program for a different computer by a compiler program of constant size, the Kolmogorov complexity of most objects hardly depends on the particular computer used. Most computable objects of a given size, however, are hardly compressible, since there are only relatively few programs that are much shorter. Similar observations hold for practical variants
² The terminology is partially inspired by our RNNAISSANCE workshop at NIPS 2003 [246].
of Kolmogorov complexity that explicitly take into account program runtime [146, 6, 291, 147, 235, 237]. Our RNNAIs are inspired by the following argument.
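The incompressibility remark above follows from a standard counting argument, sketched here for completeness:

```latex
% At most a fraction 2^{-c} of all n-bit strings can be compressed by c bits:
\[
\#\{\text{programs of length} < n-c\}
  \;\le\; \sum_{k=0}^{n-c-1} 2^{k}
  \;=\; 2^{\,n-c}-1
  \;<\; 2^{-c}\cdot 2^{n},
\]
% so fewer than a 2^{-c} fraction of the 2^n strings x of length n
% can satisfy K(x) < n - c.
```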
# 2.1 Basic AIT Argument | 1511.09249#24 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
According to AIT, given some universal computer, U, whose programs are encoded as bit strings, the mutual information between two programs p and q is expressed as K(q | p), the length of the shortest program w̄ that computes q, given p, ignoring an additive constant of O(1) depending on U (in practical applications the computation will be time-bounded [147]). That is, if p is a solution to problem P, and q is a fast (say, linear time) solution to problem Q, and if K(q | p) is small, and w̄ is both fast and much shorter than q, then asymptotically optimal universal search [146, 238] for a solution to Q, given p, will generally find w̄ first (to compute q and solve Q), and thus solve Q much faster than a search for q from scratch [238].
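The asymmetry can be made tangible with a toy exhaustive search over bit strings, enumerated shortest-first. This is only a minimal sketch; the choice of problems and the suffix encoding of w̄ are illustrative assumptions, not from the paper:

```python
from itertools import product

def shortest_first_search(test, max_len):
    """Enumerate bit strings in order of increasing length and return the
    first accepted one, together with the number of candidates tried."""
    trials = 0
    for length in range(max_len + 1):
        for bits in product("01", repeat=length):
            trials += 1
            candidate = "".join(bits)
            if test(candidate):
                return candidate, trials
    raise RuntimeError("not found")

p = "101100111000"   # known solution to problem P
q = p + "0110"       # solution to Q: K(q | p) is tiny although K(q) is not

# From scratch, roughly 2^|q| candidates are inspected before q appears.
_, n_scratch = shortest_first_search(lambda s: s == q, max_len=len(q))
# Given p, only the short suffix program w̄ must be found.
_, n_given_p = shortest_first_search(lambda w: p + w == q, max_len=len(q))
print(n_scratch, n_given_p)  # the search given p needs orders of magnitude fewer trials
```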
# 2.2 One RNN-Like System Actively Learns to Exploit Algorithmic Information of Another
The AIT argument of Sec. 2.1 has broad applicability. Let both C and M be RNNs or similar general parallel-sequential computers [229, 47, 175, 232, 231, 103, 80, 303]. M's vector of learnable real-valued parameters wM is trained by any SL or UL or RL algorithm to perform a certain well-defined task in some environment. Then wM is frozen. Now the goal is to train C's parameters wC by some learning algorithm to perform another well-defined task whose solution may share mutual algorithmic information with the solution to M's task. To facilitate this, we simply allow C to learn to actively inspect and reuse (in essentially arbitrary computable fashion) the algorithmic information conveyed by M and wM.
Let us consider a trial during which C makes an attempt to solve its given task within a series of discrete time steps t = ta, ta + 1, ..., tb. C's learning algorithm may use the experience gathered during the trial to modify wC in order to improve C's performance in later trials. During the trial, we give C an opportunity to explore and exploit or ignore M by interacting with it. In what follows, C(t), M(t), sense(t), act(t), query(t), answer(t), wM, wC denote vectors of real values; fC, fM denote computable [67, 35, 282, 194] functions.
At any time t, C(t) and M(t) denote C's and M's current states, respectively. They may represent current neural activations or fast weights [229, 232, 231] or other dynamic variables that may change during information processing. sense(t) is the current input from the environment (including reward signals, if any); a part of C(t) encodes the current output act(t) to the environment, another a memory of previous events (if any). Parts of C(t) and M(t) intersect in the sense that both C(t) and M(t) also encode C's current query(t) to M, and M's current answer(t) to C (in response to previous queries), thus representing an interface between C and M.

M(ta) and C(ta) are initialized by default values. For ta ≤ t < tb,

C(t + 1) = fC(wC, C(t), M(t), sense(t), wM)
with learnable parameters wC; act(t) is a computable function of C(t) and may influence in(t + 1), and M(t + 1) = fM(C(t), M(t), wM) with fixed parameters wM. So both M(t + 1) and C(t + 1) are computable functions of previous events, including queries and answers transmitted through the learnable fC.

According to the AIT argument, provided that M conveys substantial algorithmic information about C's task, and the trainable interface fC between C and M allows C to address, extract, and exploit this information quickly, and wC is small compared to the fixed wM, the search space of C's learning algorithm (trying to find a good wC through a series of trials) should be much smaller than that of a similar competing system C' that has no opportunity to query M but has to learn the task from scratch.
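A minimal sketch of this interface in Python follows; the dimensions, the tanh dynamics, and the use of fixed state slices as query(t) and act(t) are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sense, n_act, n_query, n_c, n_m = 8, 3, 4, 32, 64

wM = rng.normal(scale=0.1, size=(n_m, n_m + n_query))        # frozen
wC = rng.normal(scale=0.1, size=(n_c, n_c + n_m + n_sense))  # learnable

def f_M(C_state, M_state):
    """M's fixed dynamics; it reads C's query, a designated slice of C(t)."""
    query = C_state[:n_query]
    return np.tanh(wM @ np.concatenate([M_state, query]))

def f_C(C_state, M_state, sense):
    """C's learnable dynamics; M_state includes M's answer to earlier queries."""
    return np.tanh(wC @ np.concatenate([C_state, M_state, sense]))

C_state, M_state = np.zeros(n_c), np.zeros(n_m)  # default initialization
for t in range(100):
    sense = rng.normal(size=n_sense)             # stand-in for the environment
    C_state = f_C(C_state, M_state, sense)       # C(t+1)
    M_state = f_M(C_state, M_state)              # M(t+1)
    act = C_state[-n_act:]                       # act(t+1): slice of C's state
```

Only wC would be modified by C's learning algorithm; wM stays fixed, so C must learn to extract whatever algorithmic information M's program contains.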
For example, suppose that M has learned to represent (e.g., through predictive coding [228, 248]) videos of people placing toys in boxes, or to summarize such videos through textual outputs. Now suppose C's task is to learn to control a robot that places toys in boxes. Although the robot's actuators may be quite different from human arms and hands, and although videos and video-describing texts are quite different from desirable trajectories of robot movements, M is expected to convey algorithmic information about C's task, perhaps in the form of connected high-level spatio-temporal feature detectors representing typical movements of hands and elbows independent of arm size. Learning a wC that addresses and extracts this information from M and partially reuses it to solve the robot's task may be much faster than learning to solve the task from scratch without access to M. The setups of Sec. 5.3 are special cases of the general scheme in the present Sec. 2.2.

# 2.3 Consequences of the AIT Argument for Model-Building Controllers
The simple AIT insight above suggests that in many partially observable environments it should be possible to greatly speed up the program search of an RL RNN, C, by letting it learn to access, query, and exploit in arbitrary computable ways the program of a typically much bigger gradient-based UL RNN, M, used to model and compress the RL agent's entire growing interaction history of all failed and successful trials.

Note that the w̄ of Sec. 2.1 may implement all kinds of well-known, computable types of reasoning, e.g., by hierarchical reuse of subprograms of p [238], by analogy, etc. That is, we may perhaps even expect C to learn to exploit M for human-like abstract thought.

Such novel CM systems will be a central topic of Sec. 5. Sec. 6 will also discuss exploration based on efficiently improving M through C-generated experiments.

# 3 The RNNAI and its Holy Data
At the beginning of a given time step, t, there is a 'normal' sensory input vector, in(t) ∈ R^m, and a reward input vector, r(t) ∈ R^n. For example, parts of in(t) may represent the pixel intensities of an incoming video frame, while components of r(t) may reflect external positive rewards, or negative values produced by pain sensors whenever they measure excessive temperature or pressure. Let sense(t) ∈ R^{m+n} denote the concatenation of the vectors in(t) and r(t). The total reward at time t is R(t) = Σ_{i=1}^{n} r_i(t). The total cumulative reward up to time t is CR(t) = Σ_{τ=1}^{t} R(τ). During time step t, the RNNAI produces an output action vector, out(t) ∈ R^o, which may influence the environment and thus future sense(τ) for τ > t. At any given time, the RNNAI's goal is to maximize CR(t_death).

Let all(t) ∈ R^{m+n+o} denote the concatenation of sense(t) and out(t). Let H(t) denote the sequence (all(1), all(2), ..., all(t)) up to time t.
To be able to retrain its components on all observations ever made, the RNNAI stores its entire, growing, lifelong sensory-motor interaction history H(·), including all inputs and actions and reward signals observed during all successful and failed trials [239, 240], including what initially looks like noise but later may turn out to be regular. This is normally not done, but it is feasible today.

That is, all data is 'holy' and never discarded, in line with what mathematically optimal general problem solvers should do [109, 237]. Remarkably, even human brains may have enough storage capacity to store 100 years of sensory input at a reasonable resolution [240].
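A storage scheme along these lines is straightforward. The following sketch (data layout and 1-indexing chosen only to mirror the notation above) keeps every all(t) forever and recovers R(t) and CR(t) from the stored vectors:

```python
import numpy as np

class LifelongHistory:
    """Append-only store of the RNNAI's entire interaction history H(t).
    Nothing is ever discarded ("all data is holy")."""
    def __init__(self, m, n, o):
        self.m, self.n, self.o = m, n, o   # dims of in(t), r(t), out(t)
        self.records = []                  # all(1), all(2), ...

    def append(self, inp, reward, out):
        self.records.append(np.concatenate([inp, reward, out]))

    def R(self, t):
        """Total reward at step t (1-indexed, as in the text)."""
        rec = self.records[t - 1]
        return rec[self.m:self.m + self.n].sum()

    def CR(self, t):
        """Cumulative reward CR(t) = sum of R(tau) for tau = 1..t."""
        return sum(self.R(tau) for tau in range(1, t + 1))
```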
# 3.1 Standard Activation Spreading in Typical RNNs
Many RNN-like models can be used to build general computers, e.g., neural pushdown automata [47, 175], NNs with quickly modifiable, differentiable external memory based on fast weights [229], or closely related RNN-based meta-learners [232, 231, 103, 219]. Using sloppy but convenient terminology, we refer to all of them as RNNs. A typical implementation of M uses an LSTM network (see Sec. 1.2). If there are large 2-dimensional inputs such as video images, then they can first be filtered through a CNN (compare Sec. 1.2 and 4.3) before being fed into the LSTM. Such a CNN-LSTM combination is still an RNN.

Here we briefly summarize information processing in standard RNNs. Using notation similar to that of a previous survey [245, Sec. 2], let i, k, s denote positive integer variables assuming ranges implicit in the given contexts. Let n_u, n_w, T also denote positive integers.
The RNN's behavior or program is determined by n_w real-valued, possibly modifiable, parameters or weights, w_i (i = 1, ..., n_w). During an episode of information processing (e.g., during a trial), there is a partially causal sequence x_s (s = 1, ..., T) of real values called events. The index s is used in a way that is much more fine-grained than that of the index t in Sec. 3: a single time step may involve numerous events. Each x_s is either an input set by the environment, or the activation of a unit that may directly depend on other x_k (k < s) through a current NN topology-dependent set, in_s, of indices k representing incoming causal connections or links. Let the function v encode topology information and map such event index pairs, (k, s), to weight indices. For example, in the non-input case we may have x_s = f_s(net_s) with real-valued net_s = Σ_{k ∈ in_s} x_k w_{v(k,s)} (additive case) or net_s = Π_{k ∈ in_s} x_k w_{v(k,s)}
(multiplicative case), where f_s is a typically nonlinear real-valued activation function such as tanh. Other net functions combine additions and multiplications [113, 112]; many other activation functions are possible. The sequence x_s may directly affect certain x_k (k > s) through outgoing connections or links represented through a current set, out_s, of indices k with s ∈ in_k. Some of the non-input events are called output events. Many of the x_s may refer to different, time-varying activations of the same unit, e.g., in RNNs. During the episode, the same weight may get reused over and over again in topology-dependent ways. Such weight sharing across space and/or time may greatly reduce the NN's descriptive complexity, which is the number of bits of information required to describe the NN (Sec. 4). Training algorithms for the RNNs of our RNNAIs will be discussed later.
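The event-based formulation above translates directly into code. A small sketch, with a dictionary-based topology and tanh for f_s as illustrative choices only:

```python
import numpy as np

def spread_activations(T, inputs, incoming, v, w, mode="additive"):
    """Compute the event sequence x_1..x_T of Sec. 3.1.
    inputs:   {s: value} for events set by the environment
    incoming: {s: set of k < s} (the sets in_s)
    v:        {(k, s): weight index} (topology -> possibly shared weights)
    w:        sequence of weights w_i"""
    x = {}
    for s in range(1, T + 1):
        if s in inputs:
            x[s] = inputs[s]                      # input event
        else:
            terms = [x[k] * w[v[(k, s)]] for k in incoming[s]]
            net = sum(terms) if mode == "additive" else float(np.prod(terms))
            x[s] = np.tanh(net)                   # f_s, e.g., tanh
    return x

# Tiny example: two input events feeding one unit through a shared weight w_0.
x = spread_activations(
    T=3, inputs={1: 0.5, 2: -1.0},
    incoming={3: {1, 2}}, v={(1, 3): 0, (2, 3): 0}, w=[0.8])
```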
# 3.2 Alternating Training Phases for Controller C and World Model M
Several novel implementations of C are described in Sec. 5. All of them make use of a variable-size RNN called the world model, M, which learns to compactly encode the growing history, for example, through predictive coding, trying to predict (the expected value of) each input component, given the history of actions and observations. M's goal is to discover algorithmic regularities in the data so far by learning a program that compresses the data better in a lossless manner. Example details are specified in Sec. 4.

Algorithm 1: Train C and M in Alternating Fashion
1. Initialize C and M and their weights.
2. Freeze M's weights such that they cannot change while C learns.
3. Execute a new trial by generating a finite action sequence that prolongs the history of actions and observations. Actions may be due to C, which may exploit M in various ways (see Sec. 5). Train C's weights on the prolonged (and recorded) history to generate action sequences with higher expected reward, using methods of Sec. 5.
4. Unfreeze M's weights, and re-train M in a 'sleep phase' to better predict/compress the prolonged history; see Sec. 4.
5. If no stopping criterion is met, go to step 2.
Both C and M have real-valued parameters or weights that can be modified to improve performance. To avoid instabilities, C and M are trained in alternating fashion, as in Algorithm 1.
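In code, the alternation might look as follows; C, M, env, run_trial, stop, and the train_* methods are placeholder interfaces for concrete implementations, not APIs defined in this report:

```python
def train_rnnai(C, M, env, run_trial, stop, num_cycles=1000):
    """Sketch of Algorithm 1: alternate between improving C (with M frozen)
    and re-training M in a 'sleep phase' to compress the prolonged history."""
    history = []
    C.init_weights(); M.init_weights()        # step 1
    for _ in range(num_cycles):
        M.freeze()                            # step 2: M fixed while C learns
        history.extend(run_trial(C, M, env))  # step 3: prolong the history ...
        C.train_for_reward(history)           # ... and train C on it
        M.unfreeze()                          # step 4: 'sleep phase'
        M.train_to_compress(history)          # better predict/compress H
        if stop(history):                     # step 5
            break
    return C, M, history
```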
# 4 The Gradient-Based World Model M
A central objective of unsupervised learning is to compress the observed data [14, 228]. M's goal is to compress the RL agent's entire growing interaction history of all failed and successful trials [239, 241], e.g., through predictive coding [228, 248]. M has m + n + o input units to receive all(t) at time t < t_death, and m + n output units to produce a prediction pred(t + 1) ∈ R^{m+n} of sense(t + 1) [223, 226, 222, 227].

# 4.1 M's Compression Performance on the History so far
Let us address details of training M in a 'sleep phase' of step 4 in Algorithm 1. (The training of C will be discussed in Sec. 5.) Consider some M with given (typically suboptimal) weights and a default initialization of all unit activations. One example way of making M compress the history (but not the only one) is the following. Given H(t), we can train M by replaying H(t) in semi-offline training, sequentially feeding all(1), all(2), ..., all(t) into M's input units in standard RNN fashion (Sec. 3.1). Given H(τ) (τ < t), M calculates pred(τ + 1), a prediction of sense(τ + 1). A standard error function to be minimized by gradient descent in M's weights (Sec. 1.2) is E(t) = Σ_{τ<t} ||pred(τ + 1) − sense(τ + 1)||², the sum of the deviations of the predictions from the observations so far.
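For concreteness, the replay computation of E(t) might be sketched as follows; M.reset and M.step are assumed interfaces for resetting activations and performing one RNN step, and each history entry is all(τ) with sense occupying the first m + n dimensions:

```python
import numpy as np

def prediction_error(M, history, m, n):
    """Replay H(t) through M and accumulate
    E(t) = sum over tau of ||pred(tau+1) - sense(tau+1)||^2."""
    M.reset()
    E = 0.0
    for tau in range(len(history) - 1):
        pred_next = M.step(history[tau])           # pred(tau+1)
        sense_next = history[tau + 1][:m + n]      # sense(tau+1)
        E += float(np.sum((pred_next - sense_next) ** 2))
    return E
```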
However, M's goal is not only to minimize the total prediction error, E. Instead, to avoid the erroneous 'discovery' of 'regular patterns' in irregular noise, we use AIT's sound way of dealing with overfitting [263, 130, 289, 207, 147, 84], and measure M's compression performance by the number of bits required to specify M, plus the bits needed to encode the observed deviations from M's predictions [239, 241]. For example, whenever M incorrectly predicts certain input pixels of a perceived video frame, those pixel values will have to be encoded separately, which will cost storage space. (In typical applications, M can only execute a fixed number of elementary computations per time step to compress and decompress data, which usually has to be done online. That is, in general M will not reflect the data's true Kolmogorov complexity [263, 130], but at best a time-bounded variant thereof [147].)
Let integer variables, bitsM and bitsH, denote estimates of the number of bits required to encode (by a fixed algorithmic scheme) the current M, and the deviations of M's predictions from the observations on the current history, respectively. For example, to obtain bitsH, we may naively assume some simple, bell-shaped, zero-centered probability distribution Pe on the finite number of possible
real-valued prediction errors e_{i,τ} = (pred_i(τ) − sense_i(τ))² (in practical applications the errors will be given with limited precision), and encode each e_{i,τ} by −log Pe(e_{i,τ}) bits [108, 257]. That is, large errors are considered unlikely and cost more bits than small ones. To obtain bitsM, we may naively multiply the current number of M's non-zero modifiable weights by a small integer constant reflecting the weight precision. Alternatively, we may assume some simple, bell-shaped, zero-centered probability distribution, Pw, on the finite number of possible weight values (given with limited precision), and encode each w_i by −log Pw(w_i) bits. That is, large absolute weight values are considered unlikely and cost more bits than small ones [91, 294, 135, 97]. Both alternatives ignore the possibility that M's entire weight matrix might be computable by a short computer program [235, 132], but have the advantage of being easy to calculate. Moreover, since M is a general computer itself, at least in principle it has a chance of learning equivalents of such short programs.
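Under the naive Gaussian choices for Pe and Pw, the two-part code length is easy to compute. A sketch, in which continuous Gaussian densities stand in for the discretized, limited-precision distributions of the text:

```python
import numpy as np

def description_length(weights, errors, sigma_w=1.0, sigma_e=1.0):
    """Two-part code sketch: bitsM + bitsH under zero-centered Gaussian
    priors Pw (weights) and Pe (prediction errors)."""
    def gaussian_bits(x, sigma):
        # -log2 N(x; 0, sigma^2): larger |x| costs more bits
        return (0.5 * np.log2(2 * np.pi * sigma**2)
                + (x**2) / (2 * sigma**2 * np.log(2)))
    bits_M = gaussian_bits(np.asarray(weights, dtype=float), sigma_w).sum()
    bits_H = gaussian_bits(np.asarray(errors, dtype=float), sigma_e).sum()
    return bits_M + bits_H
```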
# 4.2 M's Training
To decrease bitsM + bitsH, we add a regularizing term to E, to punish excessive complexity [4, 5, 91, 294, 155, 135, 97, 170, 169, 104, 290, 7, 87, 286, 319, 100, 102].

Step 1 of Algorithm 1 starts with a small M. As the history grows, to find an M with small bitsM + bitsH, step 4 uses sequential network construction: it regularly changes M's size by adding or pruning units and connections [111, 112, 8, 168, 59, 107, 304, 176, 141, 92, 143, 204, 53, 296, 106, 31, 57, 185, 283]. Whenever this helps (after additional training with BPTT of M; see Sec. 1.2) to improve bitsM + bitsH on the history so far, the changes are kept; otherwise they are discarded. (Note that even animal brains grow and prune neurons.)
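A minimal accept-if-improved loop for this grow-and-prune scheme might look as follows; M.add_unit, M.prune_unit, M.train, and the score function (returning bitsM + bitsH on the history) are assumed interfaces, not part of the original setup:

```python
import copy
import random

def sequential_construction(M, history, score, num_rounds=50):
    """Grow/prune sketch for step 4: propose a structural change, keep it
    only if it lowers bitsM + bitsH on the history so far."""
    best = score(M, history)                 # current bitsM + bitsH
    for _ in range(num_rounds):
        candidate = copy.deepcopy(M)
        if random.random() < 0.5:
            candidate.add_unit()             # grow
        else:
            candidate.prune_unit()           # shrink
        candidate.train(history)             # additional training (BPTT)
        s = score(candidate, history)
        if s < best:                         # keep only improvements
            M, best = candidate, s
    return M
```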
Given history H(t), instead of re-training M in a sleep phase (step 4 of Algorithm 1) on all of H(t), we may re-train it on parts thereof, by selecting trials randomly or otherwise from H(t), and replaying them to retrain M in standard fashion (Sec. 1.2). To do this, however, all of M's unit activations need to be stored at the beginning of each trial. (M's hidden unit activations, however, do not have to be stored if they are reset to zero at the beginning of each trial.)

# 4.3 M may have a Built-In FNN Preprocessor

To facilitate M's task in certain environments, each frame of the sensory input stream (video, etc.) can first be separately compressed through autoencoders [211] or autoencoder hierarchies [13, 21] based on CNNs or other FNNs (see Sec. 1.2) [42], used as sensory preprocessors to create less redundant sensory codes [118, 138, 142, 46]. The compressed codes are then fed into an RNN trained to predict not the raw inputs but their compressed codes. Those predictions have to be decompressed again by the FNN, to evaluate the total compression performance, bitsM + bitsH, of the FNN-RNN combination representing M.
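The resulting pipeline can be sketched in a few lines; encode, decode, rnn_predict_codes, and bits are assumed interfaces (the FNN autoencoder, the RNN predictor, and an error-coding scheme such as the one of Sec. 4.1):

```python
def fnn_rnn_compression_bits(frames, encode, decode, rnn_predict_codes, bits):
    """Sketch of Sec. 4.3: compress frames with an FNN autoencoder, let the
    RNN predict the codes, decode the predicted codes, and charge bits for
    the residual errors back in the original input space."""
    codes = [encode(f) for f in frames]              # less redundant codes
    predicted_codes = rnn_predict_codes(codes[:-1])  # predicts codes[1:]
    errors = [decode(c_hat) - f
              for c_hat, f in zip(predicted_codes, frames[1:])]
    return sum(bits(e) for e in errors)              # contribution to bitsH
```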
# 5 The Controller C Learning to Exploit RNN World Model M
Here we describe ways of using the world model, M, of Sec. 4 to facilitate the task of the RL controller, C. Especially the systems of Sec. 5.3 overcome drawbacks of early CM systems mentioned in Sec. 1.3.1 and 1.3.2. Some of the setups of the present Sec. 5 can be viewed as special cases of the general scheme in Sec. 2.2.

# 5.1 C as a Standard RL Machine whose States are M's Activations

We start with details of an approach whose principles date back to the early 1990s [227, 150] (Sec. 1.3.2). Given an RNN or RNN-like M as in Sec. 4, we implement C as a traditional RL machine [272][245, Sec. 6.2] based on the Markov assumption (Sec. 1.3). While M is processing the history of actions and observations to predict future inputs, the internal states of M are used as inputs to a predictor of cumulative expected future reward.
1511.09249 | 49 | More speciï¬cally, in step 3 of algorithm 1, consider a trial lasting from time ta ⥠1 to tb ⤠tdeath. M is used as a preprocessor for C as follows. At the beginning of a given time step, t, of the trial (ta ⤠t < tb), let hidden(t) â Rh denote the vector of M âs current hidden unit activations (those units that are neither input nor output units). Let state(t) â R2m+2n+h denote the concatenation of sense(t), hidden(t) and pred(t). (In cases where M âs activations are reset after each trial, hidden(ta) and pred(ta) are initialized by default values, e.g., zero vectors.)
C is an RL machine with 2m + 2n + h-dimensional inputs and o-dimensional outputs. At time t, state(t) is fed into C, which then computes action out(t). Then M computes from sense(t), hidden(t) and out(t) the values hidden(t + 1) and pred(t + 1). Then out(t) is executed in the environment, to obtain the next input sense(t + 1).
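To make this data flow concrete, here is a minimal toy sketch in Python/NumPy (an assumption; the paper specifies no implementation). All names, dimensions, and the plain-RNN form of M are illustrative placeholders, with M standing in for a pretrained, frozen world model:

```python
import numpy as np

rng = np.random.default_rng(0)
S, O, H = 8, 4, 32                      # toy dims for sense(t), out(t), hidden(t)

# Frozen toy world model M: maps (sense(t), out(t)) to hidden(t+1), pred(t+1).
W_in  = rng.normal(0, 0.1, (H, S + O))
W_rec = rng.normal(0, 0.1, (H, H))
W_out = rng.normal(0, 0.1, (S, H))      # pred(t+1): M's estimate of sense(t+1)

def m_step(hidden, sense, out):
    x = np.concatenate([sense, out])
    hidden = np.tanh(W_in @ x + W_rec @ hidden)
    return hidden, W_out @ hidden       # hidden(t+1), pred(t+1)

def state(sense, hidden, pred):
    # state(t): the Markovian input to C, concatenating sense(t), hidden(t), pred(t).
    return np.concatenate([sense, hidden, pred])
```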
The parameters or weights of C are trained to maximize reward by a standard RL method such as Q-learning or similar methods [17, 292, 293, 172, 254, 212, 262, 10, 122, 188, 157, 281, 26, 216, 197, 272, 311, 9, 163, 174, 22, 27, 2, 137, 276, 156, 284]. Note that most of these methods evaluate not only input events but pairs of input and output (action) events.
In one of the simplest cases, C is just a linear perceptron FNN (instead of an RNN like in the early system [227]). The fact that C has no built-in memory in this case is not a fundamental restriction since M is recurrent, and has been trained to predict not only normal sensory inputs, but also reward signals. That is, the state of M must contain all the historic information relevant to maximize future expected reward, provided the data history so far already contains the relevant experience, and M has learned to compactly extract and represent its regular aspects.
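Continuing the toy sketch above, a linear C trained by Q-learning on M's states could look as follows. This is illustrative only; `env` is a hypothetical episodic environment returning `(sense, reward, done)` and is not part of the paper:

```python
def q_learn(env, episodes=100, alpha=0.01, gamma=0.99, eps=0.1):
    W = np.zeros((O, S + H + S))                  # one linear Q-row per action
    for _ in range(episodes):
        sense, hidden, pred = env.reset(), np.zeros(H), np.zeros(S)
        done = False
        while not done:
            s = state(sense, hidden, pred)
            a = rng.integers(O) if rng.random() < eps else int(np.argmax(W @ s))
            out = np.eye(O)[a]                    # one-hot out(t), fed to both M and env
            hidden, pred = m_step(hidden, sense, out)
            sense, r, done = env.step(a)
            s_next = state(sense, hidden, pred)
            target = r + (0.0 if done else gamma * np.max(W @ s_next))
            W[a] += alpha * (target - W[a] @ s) * s   # TD(0) on the linear Q-function
    return W
```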
This approach is different from other, previous combinations of traditional RL [272][245, Sec. 6.2] and RNNs [227, 148, 12], which use RNNs only as value function approximators that directly predict cumulative expected reward, instead of trying to predict all sensations time step by time step. The CM system in the present section separates the hard task of prediction in partially observable environments from the comparatively simple task of RL under the Markovian assumption that the current input to C (which is M's state) contains all information relevant for achieving the goal.
# 5.2 C as an Evolutionary RL (R)NN whose Inputs are M's Activations
This approach is essentially the same as the one of Sec. 5.1, except that C is now an FNN or RNN trained by evolutionary algorithms [200, 255, 105, 56, 68] applied to NNs [165, 321, 180, 259, 72, 90, 89, 110, 94], or by policy gradient methods [314, 315, 316, 274, 18, 1, 63, 128, 313, 210, 192, 191, 256, 85, 312, 190, 82, 93][245, Sec. 6.6], or by Compressed NN Search; see Sec. 1. C has 2m + 2n + h input units and o output units. At time t, state(t) is fed into C, which computes out(t); then M computes hidden(t + 1) and pred(t + 1); then out(t) is executed to obtain sense(t + 1).
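As a hedged illustration of this variant, the following (1+1)-style evolutionary hill climber searches C's weight space directly while M stays frozen. It reuses the toy dims from the earlier sketch; `rollout` is an assumed helper that runs the Sec. 5.1 interaction loop with the given C and returns episodic reward:

```python
def evolve_c(rollout, shape=(O, S + H + S), generations=300, sigma=0.1):
    """Minimal (1+1)-ES over C's weights; M's weights never change."""
    theta = np.zeros(shape)
    best = rollout(theta)
    for _ in range(generations):
        cand = theta + sigma * rng.normal(size=shape)
        f = rollout(cand)
        if f >= best:                 # keep the mutant if it is at least as fit
            theta, best = cand, f
    return theta, best
```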
# 5.3 C Learns to Think with M: High-Level Plans and Abstractions
Our RNN-based CM systems of the early 1990s [223, 226] (Sec. 1.3.1) could in principle plan ahead by performing numerous fast mental experiments on a predictive RNN world model, M, instead of time-consuming real experiments, extending earlier work on reactive systems without memory [301, 273]. However, this can work well only in (near-)deterministic environments, and, even there, M would have to simulate many entire alternative futures, time step by time step, to find an action sequence for C that maximizes reward. This method seems very different from the much smarter hierarchical planning methods of humans, who apparently can learn to identify and exploit a few relevant problem-specific abstractions of possible future events; reasoning abstractly, and efficiently ignoring irrelevant spatio-temporal details.
We now describe a CM system that can in principle learn to plan and reason like this as well, according to the AIT argument (Sec. 2.1). This should be viewed as a main contribution of the present paper. See Figure 1.
Consider an RNN C (with typically rather small feasible search space) as in Sec. 5.2. We add standard and/or multiplicative learnable connections (Sec. 3.1) from some of the units of C to some of the units of the typically huge unsupervised M, and from some of the units of M to some of the units of C. The new connections are said to belong to C. C and M now collectively form a new RNN called CM, with standard activation spreading as in Sec. 3.1. The activations of M are initialized to default values at the beginning of each trial. Now CM is trained on RL tasks in line with step 3 of algorithm 1, using search methods such as those of Sec. 5.2 (compare Sec. 1). The (typically many) connections of M, however, do not change; only the (typically relatively few) connections of C do. What does that mean? It means that now C's relatively small candidate programs are given time to "think" by feeding sequences of activations into M, and reading activations out of M, before and while interacting with the environment. Since C and M are general computers, C's programs may query, edit or invoke subprograms of M in arbitrary, computable ways through the new connections.
Given some RL problem, according to the AIT argument (Sec. 2.1), this can greatly accelerate C's search for a problem-solving weight vector ŵ, provided the (time-bounded [147]) mutual algorithmic information between ŵ and M's program is high, as is to be expected in many cases, since M's environment-modeling program should reflect many regularities useful not only for prediction and coding, but also for decision making.³
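A toy rendering of such a CM wiring, again building on the earlier NumPy sketch and purely illustrative: only the C-owned matrices below enter the search space, while M's frozen weights are reused as a library of subprograms.

```python
K = 6                                          # how many M units C may write into
inject = rng.choice(H, size=K, replace=False)  # addresses of those units (fixed)

# C-owned, trainable parameters; M's W_in, W_rec, W_out stay frozen.
W_c   = rng.normal(0, 0.1, (O, S + H))         # C reads sense(t) and M's hidden units
W_c2m = rng.normal(0, 0.1, (K, O))             # C writes activation patterns into M

def cm_step(hidden, sense):
    out = np.tanh(W_c @ np.concatenate([sense, hidden]))  # C queries M's state
    hidden = hidden.copy()
    hidden[inject] += W_c2m @ out              # C edits/invokes parts of M's program
    hidden, pred = m_step(hidden, sense, out)
    return hidden, pred, out
```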
This simple but novel approach is much more general than previous computable, but restricted, ways of letting a feedforward C use a model M (Sec. 1.3.1) [301, 273][245, Sec. 6.1], by simulating entire possible futures step by step, then propagating error signals or temporal difference errors backwards (see Section 1.3.1). Instead, we give C's program search an opportunity to discover sophisticated computable ways of exploiting M's code, such as abstract hierarchical planning and analogy-based reasoning. For example, to represent previous observations, an M implemented as an LSTM network (Sec. 1.2) will develop high-level, abstract, spatio-temporal feature detectors that may be active for thousands of time steps, as long as those memories are useful to predict (and thus compress) future observations [62, 61, 189, 79]. However, C may learn to directly invoke the corresponding "abstract" units in M by inserting appropriate pattern sequences into M. C might then short-cut from there to typical subsequent abstract representations, ignoring the long input sequences normally required to invoke them in M, thus quickly anticipating a few possible positive outcomes to be pursued (plus computable ways of achieving them), or negative outcomes to be avoided.
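A speculative sketch of such a short-cut, under the same toy assumptions: C clamps a pattern onto a few "abstract" units of M and then rolls M forward on its own predictions, without consuming any real input sequence.

```python
def mental_rollout(hidden, pattern, steps=5):
    """Imagine ahead: clamp a pattern onto M's 'abstract' units, then feed
    M its own predictions instead of real observations."""
    hidden = hidden.copy()
    hidden[inject] = pattern                   # directly invoke abstract features
    sense = W_out @ hidden                     # imagined current observation
    imagined = []
    for _ in range(steps):
        out = np.zeros(O)                      # passive imagination; C could also act
        hidden, sense = m_step(hidden, sense, out)  # pred(t+1) becomes next 'sense'
        imagined.append(sense)
    return imagined
```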
Note that M (and by extension CM) does not at all have to be a perfect predictor. For example, it won't be able to predict noise. Instead M will have learned to approximate conditional expectations of future inputs, given the history so far. A naive way of exploiting M's probabilistic knowledge would be to plan ahead through naive step-by-step Monte-Carlo simulations of possible M-predicted futures, to find and execute action sequences that maximize expected reward predicted by those simulations. However, we won't limit the system to this naive approach. Instead it will be the task of C to learn to address useful problem-specific parts of the current M, and reuse them for problem solving.
³ An alternative way of letting C learn to access the program of M is to add C-owned connections from the weights of M to units of C, treating the current weights of M as additional real-valued inputs to C. This, however, will typically result in a much larger search space for C. There are many other variants of the general scheme described in Sec. 2.2.
Sure, C will have to intelligently exploit M, which will cost bits of information (and thus search time for appropriate weight changes of C), but this is often still much cheaper in the AIT sense than learning a good C program from scratch, as in our previous non-RNN AIT-based work on algorithmic transfer learning [238], where self-invented recursive code for previous solutions sped up the search for code for more complex tasks by a factor of 1000.
Numerous topologies are possible for the adaptive connections from C to M, and back. Although in some applications C may find it hard to exploit M, and might prefer to ignore M (by setting connections to and from M to zero), in some environments under certain CM topologies, C can greatly profit from M.
While M's weights are frozen in step 3 of algorithm 1, the weights of C can learn when to make C attend to history information represented by M's state, and when to ignore such information and instead use M's innards in other computable ways. This can be further facilitated by introducing a special unit, û, to C, where û(t)·all(t) instead of all(t) is fed into M at time t, such that C can easily (by setting û(t) = 0) force M to completely ignore environmental inputs, to use M for "thinking" in other ways.
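In the toy notation used above, the gate is one extra output of C (an illustrative fragment; `w_u` and `c_state` are hypothetical names):

```python
def gated_m_input(all_t, w_u, c_state):
    """û(t) in (0, 1) is computed by C; M receives û(t)·all(t) instead of all(t).
    With û(t) ≈ 0, C blanks out the environment and uses M purely for 'thinking'."""
    u = 1.0 / (1.0 + np.exp(-(w_u @ c_state)))
    return u * all_t
```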
Should M later grow (or shrink) in step 4 of algorithm 1, in line with Sec. 4.2, C may in turn grow additional connections to and from M (or lose some) in the next incarnation of step 3.
# 5.4 Incremental / Hierarchical / Multitask Learning of C with M
A variant of the approach in Sec. 5.3 incrementally trains C on a never-ending series of tasks, continually building on solutions to previous problems, instead of learning each new problem from scratch. In principle, this can be done through incremental NN evolution [70], hierarchical NN evolution [306, 285], hierarchical Policy Gradient algorithms [63], or asymptotically optimal ways of algorithmic transfer learning [238].
Given a new task and a C trained on several previous tasks, such hierarchical/incremental methods may freeze the current weights of C, then enlarge C by adding new units and connections which are trained on the new task. This process reduces the size of the search space for the new task, giving the new weights the opportunity to learn to use the frozen parts of C as subprograms.
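A minimal sketch of this freeze-and-grow scheme, reusing `rng` and NumPy from the sketches above (illustrative; `rollout` evaluates the full parameter vector on the new task):

```python
def grow_for_new_task(theta_frozen, n_new, rollout, generations=300, sigma=0.1):
    """Search only the newly added block; the frozen block remains available
    as a subprogram, which shrinks the effective search space for the new task."""
    theta_new = np.zeros(n_new)
    best = rollout(np.concatenate([theta_frozen, theta_new]))
    for _ in range(generations):
        cand = theta_new + sigma * rng.normal(size=n_new)
        f = rollout(np.concatenate([theta_frozen, cand]))
        if f >= best:
            theta_new, best = cand, f
    return np.concatenate([theta_frozen, theta_new])
```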
Incremental variants of Compressed RNN Search [132] (Sec. 1) do not directly search in C's potentially large weight space, but in the frequency domain, by representing the weight matrix as a small set of Fourier-type coefficients. By searching for new coefficients to be added to the already learned set responsible for solving previous problems, C's weight matrix is fine-tuned incrementally and indirectly (through superpositions). Given a current problem, in AIT-based OOPS style [238], we may impose growing run time limits on programs tested on C, until a solution is found.
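The following self-contained sketch illustrates the indirect, frequency-domain encoding; a DCT-style cosine basis is one possible choice, and the exact transform in [132] may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

def decode_weights(coeffs, n_weights):
    """Superimpose cosine basis functions: few coefficients -> full weight vector."""
    k = np.arange(len(coeffs))[:, None]
    t = np.arange(n_weights)[None, :]
    basis = np.cos(np.pi * k * (2 * t + 1) / (2 * n_weights))
    return coeffs @ basis

old_coeffs = np.array([0.7, -0.3, 0.1])   # low frequencies that solved earlier tasks
new_coeffs = rng.normal(0, 0.05, 4)       # extra resolution searched for the new task
w = decode_weights(np.concatenate([old_coeffs, new_coeffs]), n_weights=256)
```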
# 6 Exploration: Rewarding C for Experiments that Improve M
Humans, even as infants, invent their own tasks in a curious and creative fashion, continually increasing their problem solving repertoire even without an external reward or teacher. They seem to get intrinsic reward for creating experiments leading to observations that obey a previously unknown law that allows for better compression of the observations, corresponding to the discovery of a temporarily interesting, subjectively novel regularity [224, 239, 241] (compare also [261, 184]).
For example, a video of 100 falling apples can be greatly compressed via predictive coding once the law of gravity is discovered. Likewise, the video-like image sequence perceived while moving through an office can be greatly compressed by constructing an internal 3D model of the office space [243]. The 3D model allows for re-computing the entire high-resolution video from a compact sequence of very low-dimensional eye coordinates and eye directions. The model itself can be specified by far fewer bits of information than needed to store the raw pixel data of a long video. Even if the 3D model is not precise, only relatively few extra bits will be required to encode the observed deviations from the predictions of the model.
Even mirror neurons [129] are easily explained as by-products of history compression as in Sec. 3 and 4. They fire both when an animal acts and when the animal observes the same action performed by another. Due to mutual algorithmic information shared by perceptions of similar actions performed by various animals, efficient RNN-based predictive coding (Sec. 3, 4) profits from using the same feature detectors (neurons) to encode the shared information, thus saving storage space.
Given the C-M combinations of Sec. 5, we motivate C to become an efficient explorer and an artificial scientist, by adding to its standard external reward (or fitness) for solving user-given tasks another intrinsic reward for generating novel action sequences (= experiments) that allow M to improve its compression performance on the resulting data [239, 241].
At first glance, repeatedly evaluating M's compression performance on the entire history seems impractical. A heuristic to overcome this is to focus on M's improvements on the most recent trial, while regularly re-training M on randomly selected previous trials, to avoid catastrophic forgetting.
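A hedged sketch of that heuristic; all function arguments are assumed helpers (`m_loss` measures M's prediction loss on a sequence, `train_step` performs one training pass):

```python
import numpy as np

rng = np.random.default_rng(2)

def intrinsic_reward(trial, replay, m_loss, train_step, k=8):
    """Curiosity reward = M's compression progress on the most recent trial;
    random old trials are replayed to counter catastrophic forgetting."""
    before = m_loss(trial)
    batch = [trial] + [replay[i] for i in rng.integers(len(replay), size=k)]
    for seq in batch:
        train_step(seq)
    return before - m_loss(trial)      # positive iff M compresses the trial better now
```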
A related problem is that C's incremental program search may find it difficult to identify (and assign credit to) those parts of C responsible for improvements of a huge, black box-like, monolithic M. But we can implement M as a self-modularizing, computation cost-minimizing, winner-take-all RNN [221, 242, 267]. Then it is possible to keep track of which parts of M are used to encode which parts of the history. That is, to evaluate weight changes of M, only the affected parts of the stored history have to be re-tested [243]. Then C's search can be facilitated by tracking which parts of C affected those parts of M. By penalizing C's programs for the time consumed by such tests, the search for C is biased to prefer programs that conduct experiments causing data yielding quickly verifiable compression progress of M. That is, the program search will prefer to change weights of M that are not used to compress large parts of the history that are expensive to verify [242, 243].
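A tiny bookkeeping sketch of that idea (illustrative data structures only):

```python
usage = {}   # module id of winner-take-all M -> history segments it helps encode

def record_winner(module_id, segment_id):
    usage.setdefault(module_id, set()).add(segment_id)

def segments_to_retest(changed_modules):
    """After a weight change, only history segments touched by the changed
    modules must be re-checked for compression performance."""
    affected = set()
    for m in changed_modules:
        affected |= usage.get(m, set())
    return affected
```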
The first implementations of this simple principle were described in our work on the POWERPLAY framework [243, 267], which incrementally searches the space of possible pairs of new tasks and modifications of the current program, until it finds a more powerful program that, unlike the unmodified program, solves all previously learned tasks plus the new one, or simplifies/compresses/speeds up previous solutions, without forgetting any. Under certain conditions this can accelerate the acquisition of external reward specified by user-defined tasks.
# 7 Conclusion
We introduced novel combinations of a reinforcement learning (RL) controller, C, and an RNN-based predictive world model, M. The most general CM systems implement principles of algorithmic [263, 130, 147] as opposed to traditional [24, 257] information theory. Here both M and C are RNNs or RNN-like systems. M is actively exploited in arbitrary computable ways by C, whose program search space is typically much smaller, and which may learn to selectively probe and reuse M's internal programs to plan and reason. The basic principles are not limited to RL, but apply to all kinds of active algorithmic transfer learning from one RNN to another. By combining gradient-based RNNs and RL RNNs, we create a qualitatively new type of self-improving, general purpose, connectionist control architecture. This RNNAI may continually build upon previously acquired problem solving procedures, some of them self-invented in a way that resembles a scientist's search for novel data with unknown regularities, preferring still-unsolved but quickly learnable tasks over others.
# References
[1] D. Aberdeen. Policy-Gradient Algorithms for Partially Observable Markov Decision Processes. PhD thesis, Australian National University, 2003.
[2] J. Abounadi, D. Bertsekas, and V. S. Borkar. Learning algorithms for Markov decision processes with average cost. SIAM Journal on Control and Optimization, 40(3):681–698, 2002.
[3] I. Aizenberg, N. N. Aizenberg, and J. Vandewalle. Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. Springer Science & Business Media, 2000.
[4] H. Akaike. Statistical predictor identification. Ann. Inst. Statist. Math., 22:203–217, 1970.
[5] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716–723, 1974.
[6] A. Allender. Application of time-bounded Kolmogorov complexity in complexity theory. In O. Watanabe, editor, Kolmogorov complexity and computational complexity, pages 6–22. EATCS Monographs on Theoretical Computer Science, Springer, 1992.
[7] S. Amari and N. Murata. Statistical theory of learning curves under entropic loss criterion. Neural Computation, 5(1):140–153, 1993.
[8] T. Ash. Dynamic node creation in backpropagation neural networks. Connection Science, 1(4):365–375, 1989.
[9] L. Baird and A. W. Moore. Gradient descent for general reinforcement learning. In Advances in neural information processing systems 12 (NIPS), pages 968–974. MIT Press, 1999.
[10] L. C. Baird. Residual algorithms: Reinforcement learning with function approximation. In International Conference on Machine Learning, pages 30–37, 1995.
[11] B. Bakker and J. Schmidhuber. Hierarchical reinforcement learning based on subgoal discovery and subpolicy specialization. In F. G. et al., editor, Proc. 8th Conference on Intelligent Autonomous Systems IAS-8, pages 438–445, Amsterdam, NL, 2004. IOS Press.
[12] B. Bakker, V. Zhumatiy, G. Gruener, and J. Schmidhuber. A robot that reinforcement-learns to identify and memorize important previous observations. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2003, pages 430–435, 2003.
[13] D. H. Ballard. Modular learning in neural networks. In Proc. AAAI, pages 279–284, 1987.
[14] H. B. Barlow, T. P. Kaushal, and G. J. Mitchison. Finding minimum entropy codes. Neural Computation, 1(3):412–423, 1989.
[15] A. G. Barto and S. Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341–379, 2003.
[16] A. G. Barto, S. Singh, and N. Chentanez. Intrinsically motivated learning of hierarchical collections of skills. In Proceedings of International Conference on Developmental Learning (ICDL), pages 112–119. MIT Press, Cambridge, MA, 2004.
[17] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13:834–846, 1983.
[18] J. Baxter and P. L. Bartlett. Infinite-horizon policy-gradient estimation. J. Artif. Int. Res., 15(1):319–350, 2001.
[19] S. Behnke. Hierarchical Neural Networks for Image Interpretation, volume LNCS 2766 of Lecture Notes in Computer Science. Springer, 2003.
[20] R. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, USA, 1st edition, 1957.
[21] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1798–1828, 2013.
[22] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 2001.
[23] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[24] L. Boltzmann. In F. Hasenöhrl, editor, Wissenschaftliche Abhandlungen (collection of Boltzmann's articles in scientific journals). Barth, Leipzig, 1909.
[25] C. Boutilier and D. Poole. Computing optimal policies for partially observable Markov decision processes using compact representations. In Proceedings of the AAAI, Portland, OR, 1996.
1511.09249 | 71 | [26] S. J. Bradtke, A. G. Barto, and L. P. Kaelbling. Linear least-squares algorithms for temporal difference learning. In Machine Learning, pages 22–33, 1996.
[27] R. I. Brafman and M. Tennenholtz. R-MAX – a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213–231, 2002.
[28] L. Breiman. Bagging predictors. Machine Learning, 24:123–140, 1996.
[29] C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.
[30] A. E. Bryson. A gradient method for optimizing multi-stage allocation processes. In Proc. Harvard Univ. Symposium on digital computers and their applications, 1961.
[31] N. Burgess. A constructive algorithm that converges for real-valued input patterns. International Journal of Neural Systems, 5(1):59–66, 1994. | 1511.09249#71 |
1511.09249 | 72 | [31] N. Burgess. A constructive algorithm that converges for real-valued input patterns. International Journal of Neural Systems, 5(1):59–66, 1994.
[32] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[33] G. J. Chaitin. On the length of programs for computing finite binary sequences. Journal of the ACM, 13:547–569, 1966.
[34] K. Chellapilla, S. Puri, and P. Simard. High performance convolutional neural networks for document processing. In International Workshop on Frontiers in Handwriting Recognition, 2006.
[35] A. Church. An unsolvable problem of elementary number theory. American Journal of Mathematics, 58:345–363, 1936.
[36] D. C. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber. Deep neural networks segment neuronal membranes in electron microscopy images. In Advances in Neural Information Processing Systems (NIPS), pages 2852–2860, 2012.
| 1511.09249#72 |
1511.09249 | 73 |
[37] D. C. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber. Mitosis detection in breast cancer histology images with deep neural networks. In Proc. MICCAI, volume 2, pages 411–418, 2013.
[38] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber. Deep big simple neural nets for handwritten digit recognition. Neural Computation, 22(12):3207–3220, 2010.
[39] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber. Convolutional neural network committees for handwritten character classification. In 11th International Conference on Document Analysis and Recognition (ICDAR), pages 1250–1254, 2011.
[40] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. Flexible, high performance convolutional neural networks for image classification. In Intl. Joint Conference on Artificial Intelligence IJCAI, pages 1237–1242, 2011. | 1511.09249#73 |
1511.09249 | 74 | [41] D. C. Ciresan, U. Meier, and J. Schmidhuber. A committee of neural networks for traffic sign classification. In International Joint Conference on Neural Networks (IJCNN), pages 1918–1921, 2011.
[42] D. C. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In IEEE Conference on Computer Vision and Pattern Recognition CVPR 2012, 2012. Long preprint arXiv:1202.2745v1 [cs.CV].
[43] D. C. Ciresan, U. Meier, and J. Schmidhuber. Transfer learning for Latin and Chinese characters with deep neural networks. In International Joint Conference on Neural Networks (IJCNN), pages 1301–1306, 2012.
[44] D. T. Cliff, P. Husbands, and I. Harvey. Evolving recurrent dynamical networks for robot control. In Artificial Neural Nets and Genetic Algorithms, pages 428–435. Springer, 1993.
[45] A. Cochocki and R. Unbehauen. Neural networks for optimization and signal processing. John Wiley & Sons, Inc., 1993. | 1511.09249#74 |
1511.09249 | 75 | [45] A. Cochocki and R. Unbehauen. Neural networks for optimization and signal processing. John Wiley & Sons, Inc., 1993.
[46] Intrinsically motivated evolutionary search for vision-based reinforcement learning. In Proceedings of the 2011 IEEE Conference on Development and Learning and Epigenetic Robotics IEEE-ICDL-EPIROB, volume 2, pages 1–7. IEEE, 2011.
[47] S. Das, C. Giles, and G. Sun. Learning context-free grammars: Capabilities and limitations of a neural network with an external stack memory. In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, Bloomington, 1992.
[48] P. Dayan and G. Hinton. Feudal reinforcement learning. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems (NIPS) 5, pages 271–278. Morgan Kaufmann, 1993.
[49] R. Dechter. Learning while searching in constraint-satisfaction problems. In Proceedings of AAAI-86, 1986.
[50] T. G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res. (JAIR), 13:227–303, 2000. | 1511.09249#75 |
1511.09249 | 76 | [51] K. Doya, K. Samejima, K.-I. Katagiri, and M. Kawato. Multiple model-based reinforcement learning. Neural Computation, 14(6):1347–1369, 2002.
[52] S. E. Dreyfus. The numerical solution of variational problems. Journal of Mathematical Analysis and Applications, 5(1):30–45, 1962.
[53] In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems (NIPS) 3, pages 190–196. Morgan Kaufmann, 1991.
[54] Y. Fan, Y. Qian, F. Xie, and F. K. Soong. TTS synthesis with bidirectional LSTM based recurrent neural networks. In Proc. Interspeech, 2014.
[55] R. Fernandez, A. Rendel, B. Ramabhadran, and R. Hoory. Prosody contour prediction with Long Short-Term Memory, bi-directional, deep recurrent neural networks. In Proc. Interspeech, 2014.
[56] L. Fogel, A. Owens, and M. Walsh. Artificial Intelligence through Simulated Evolution. Wiley, New York, 1966. | 1511.09249#76 |
1511.09249 | 77 | [56] L. Fogel, A. Owens, and M. Walsh. Artificial Intelligence through Simulated Evolution. Wiley, New York, 1966.
[57] B. Fritzke. A growing neural gas network learns topologies. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, NIPS, pages 625–632. MIT Press, 1994.
[58] K. Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position - Neocognitron. Trans. IECE, J62-A(10):658–665, 1979.
[59] S. I. Gallant. Connectionist expert systems. Communications of the ACM, 31(2):152–169, 1988.
[60] S. Ge, C. C. Hang, T. H. Lee, and T. Zhang. Stable adaptive neural network control. Springer, 2010.
[61] F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451–2471, 2000. | 1511.09249#77 |
1511.09249 | 78 | [62] F. A. Gers, N. Schraudolph, and J. Schmidhuber. Learning precise timing with LSTM recurrent networks. Journal of Machine Learning Research, 3:115–143, 2002.
[63] M. Ghavamzadeh and S. Mahadevan. Hierarchical policy gradient algorithms. In Proceedings of the Twentieth Conference on Machine Learning (ICML-2003), pages 226–233, 2003.
[64] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. Technical Report arxiv.org/abs/1311.2524, UC Berkeley and ICSI, 2013.
[65] L. Gisslen, M. Luciw, V. Graziano, and J. Schmidhuber. Sequential constant size compressor for reinforcement learning. In Proc. Fourth Conference on Artificial General Intelligence (AGI), Google, Mountain View, CA, pages 31–40. Springer, 2011.
[66] T. Glasmachers, T. Schaul, Y. Sun, D. Wierstra, and J. Schmidhuber. Exponential natural evolution strategies. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pages 393–400. ACM, 2010. | 1511.09249#78 |
1511.09249 | 79 | [67] K. Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38:173–198, 1931.
[68] D. E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA, 1989.
[69] F. J. Gomez. Robust Nonlinear Control through Neuroevolution. PhD thesis, Department of Computer Sciences, University of Texas at Austin, 2003.
[70] F. J. Gomez and R. Miikkulainen. Incremental evolution of complex general behavior. Adaptive Behavior, 5:317–342, 1997.
[71] F. J. Gomez and R. Miikkulainen. Active guidance for a finless rocket using neuroevolution. In Proc. GECCO 2003, Chicago, 2003.
[72] F. J. Gomez, J. Schmidhuber, and R. Miikkulainen. Accelerated neural evolution through cooperatively coevolved synapses. Journal of Machine Learning Research, 9(May):937–965, 2008. | 1511.09249#79 |
1511.09249 | 80 | [73] H. Gomi and M. Kawato. Neural network control for a closed-loop system using feedback-error-learning. Neural Networks, 6(7):933–946, 1993.
[74] I. J. Goodfellow, Y. Bulatov, J. Ibarz, S. Arnoud, and V. Shet. Multi-digit number recognition from street view imagery using deep convolutional neural networks. arXiv preprint arXiv:1312.6082 v4, 2014.
[75] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In International Conference on Machine Learning (ICML), 2013.
[76] A. Graves, S. Fernandez, F. J. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural nets. In ICML'06: Proceedings of the 23rd International Conference on Machine Learning, pages 369–376, 2006. | 1511.09249#80 |
1511.09249 | 81 | [77] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber. A novel connectionist system for improved unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5), 2009.
[78] A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6):602–610, 2005.
[79] A. Graves and J. Schmidhuber. Offline handwriting recognition with multidimensional recurrent neural networks. In Advances in Neural Information Processing Systems (NIPS) 21, pages 545–552. MIT Press, Cambridge, MA, 2009.
[80] A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. Preprint arXiv:1410.5401, 2014.
[81] M. Graziano. The Intelligent Movement Machine: An Ethological Perspective on the Primate Motor System. Oxford University Press, USA, 2009. | 1511.09249#81 |
1511.09249 | 82 | [81] M. Graziano. The Intelligent Movement Machine: An Ethological Perspective on the Primate Motor System. Oxford University Press, USA, 2009.
[82] I. Grondman, L. Busoniu, G. A. D. Lopes, and R. Babuska. A survey of actor-critic reinforcement learning: Standard and natural policy gradients. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 42(6):1291–1307, Nov 2012.
[83] F. Gruau, D. Whitley, and L. Pyeatt. A comparison between cellular encoding and direct encoding for genetic neural networks. NeuroCOLT Technical Report NC-TR-96-048, ESPRIT Working Group in Neural and Computational Learning, NeuroCOLT 8556, 1996.
[84] P. D. Grünwald, I. J. Myung, and M. A. Pitt. Advances in minimum description length: Theory and applications. MIT Press, 2005.
| 1511.09249#82 |