$$\frac{\partial h_{t_1}}{\partial h_{t_0}} = \prod_{t_0 < t \le t_1} \frac{\partial h_t}{\partial h_{t-1}} = \prod_{t_0 < t \le t_1} \operatorname{diag}[f'(z_t)]\, W \qquad (23)$$
Let us assume that the singular values of a matrix M are ordered as σ1(M) ≥ σ2(M) ≥ · · · ≥ σn(M). Let α be an upper bound on the singular values of W, i.e. α ≥ σ1(W); then the norm of the Jacobian will satisfy (Zilly et al., 2016),
$$\left\lVert \frac{\partial h_t}{\partial h_{t-1}} \right\rVert \le \lVert W \rVert\, \lVert \operatorname{diag}[f'(z_t)] \rVert \le \alpha\, \sigma_1(\operatorname{diag}[f'(z_t)]), \qquad (24)$$
Pascanu et al. (2013b) showed that if ||∂ht/∂ht−1|| ≤ σ1(∂ht/∂ht−1) ≤ η < 1, the following inequality holds:
$$\left\lVert \prod_{t_0 < t \le t_1} \frac{\partial h_t}{\partial h_{t-1}} \right\rVert \le \prod_{t_0 < t \le t_1} \eta \le \eta^{\,t_1 - t_0}. \qquad (25)$$
Since η < 1, this bound on the norm of the product of Jacobians decays exponentially in t1 − t0, and the norm of the gradients will vanish exponentially fast.
Now consider the MANN where the contents of the memory are linear projections of the previous hidden states, as described in Equation 2. Let us assume that both reading and writing operations use discrete addressing. Let the content read from the memory at time step t correspond to some memory location i:
$$r_t = M_t[i] = A h_{i_t}, \qquad (26)$$
where h_{i_t} corresponds to the hidden state of the controller at some previous timestep i_t. Now the hidden state of the controller in the external memory model can be written as,
$$z_t = W h_{t-1} + V r_t + U x_t, \qquad h_t = f(z_t). \qquad (27)$$
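For concreteness, here is a minimal NumPy sketch of this controller step with discrete (one-hot) memory addressing. The dimensions, the tanh nonlinearity, and the argmax-based reader are illustrative assumptions for this sketch, not the exact TARDIS parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d_h, d_x, d_m, k = 8, 4, 8, 5           # hidden, input, memory-cell and memory sizes (illustrative)
W = rng.normal(0, 0.1, (d_h, d_h))
U = rng.normal(0, 0.1, (d_h, d_x))
V = rng.normal(0, 0.1, (d_h, d_m))      # maps the read vector r_t into the controller (Eq. 27)
A = rng.normal(0, 0.1, (d_m, d_h))      # projects hidden states into memory cells (Eq. 26)

M = np.zeros((k, d_m))                  # external memory with k cells
h = np.zeros(d_h)

def controller_step(h_prev, x_t, M, read_scores):
    """One MANN step: discrete read (Eq. 26) followed by the update in Eq. 27."""
    i = int(np.argmax(read_scores))     # discrete addressing: a single cell is selected
    r_t = M[i]                          # r_t = M_t[i], which stores A h_{i_t} for some earlier step
    z_t = W @ h_prev + V @ r_t + U @ x_t
    return np.tanh(z_t), i              # h_t = f(z_t) with f = tanh

for t in range(10):
    x_t = rng.normal(size=d_x)
    scores = rng.normal(size=k)         # stand-in for a learned reading mechanism
    h, i = controller_step(h, x_t, M, scores)
    M[t % k] = A @ h                    # store A h_t back into the memory (sequential writes here)
```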
If the controller reads M_t[i] at time step t and its memory content is A h_{i_t} as described above, then the Jacobians associated with Equation 27 can be computed as follows:
$$
\begin{aligned}
\frac{\partial h_{t_1}}{\partial h_{t_0}}
&= \prod_{t_0 < t \le t_1} \frac{\partial h_t}{\partial h_{t-1}} \qquad (28) \\
&= \prod_{t_0 < t \le t_1} \operatorname{diag}[f'(z_t)]\, W
 + \sum_{k=t_0}^{t_1-1} \Big( \prod_{k < t^* \le t_1} \operatorname{diag}[f'(z_{t^*})]\, W \Big) \operatorname{diag}[f'(z_k)]\, V A \, \frac{\partial h_{i_k}}{\partial h_{t_0}}
 + \operatorname{diag}[f'(z_{t_1})]\, V A \, \frac{\partial h_{i_{t_1}}}{\partial h_{t_0}} \\
&= Q_{t_1 t_0} + R_{t_1 t_0}, \qquad (29)
\end{aligned}
$$
where Q_{t1t0} and R_{t1t0} are defined as below,
$$Q_{t_1 t_0} = \prod_{t_0 < t \le t_1} \operatorname{diag}[f'(z_t)]\, W, \qquad (30)$$

$$R_{t_1 t_0} = \sum_{k=t_0}^{t_1-1} \Big( \prod_{k < t^* \le t_1} \operatorname{diag}[f'(z_{t^*})]\, W \Big) \operatorname{diag}[f'(z_k)]\, V A \, \frac{\partial h_{i_k}}{\partial h_{t_0}} + \operatorname{diag}[f'(z_{t_1})]\, V A \, \frac{\partial h_{i_{t_1}}}{\partial h_{t_0}}. \qquad (31)$$
As shown in Equation 29, the Jacobian of the MANN can be rewritten as the sum of two matrices, Q_{t1t0} and R_{t1t0}. The gradients flowing through R_{t1t0} do not necessarily vanish through time, because R_{t1t0} is a sum of Jacobians computed over shorter paths.
The norm of the Jacobian can be lower-bounded as follows by using the Minkowski inequality:
$$\left\lVert \frac{\partial h_{t_1}}{\partial h_{t_0}} \right\rVert = \left\lVert \prod_{t_0 < t \le t_1} \frac{\partial h_t}{\partial h_{t-1}} \right\rVert \qquad (32)$$

$$= \lVert Q_{t_1 t_0} + R_{t_1 t_0} \rVert \ge \lVert R_{t_1 t_0} \rVert - \lVert Q_{t_1 t_0} \rVert. \qquad (33)$$
Assuming that the length of the dependency is very long, ||Q_{t1t0}|| would vanish to 0. Then we will have,
$$\lVert Q_{t_1 t_0} + R_{t_1 t_0} \rVert \ge \lVert R_{t_1 t_0} \rVert. \qquad (34)$$
As one can see, the rate at which the gradients vanish through time depends on the length of the path that passes through R_{t1t0}. This is typically shorter than the path passing through Q_{t1t0}, so the gradients vanish at a slower rate than in an RNN. In particular, the rate strictly depends on the length of the shortest paths from t1 to t0, because for long enough dependencies the gradients through the longer paths would still vanish.
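As a rough numerical illustration (ours, not the paper's), the sketch below compares the spectral norm of a product of contractive Jacobians over a long path with the same kind of product over a much shorter path, mimicking the behaviour of Q_{t1t0} versus R_{t1t0}; the random matrices merely stand in for diag[f'(z_t)] W.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
W = rng.normal(0, 1.0 / np.sqrt(d), (d, d))
W *= 0.9 / np.linalg.norm(W, 2)                  # force sigma_1(W) = 0.9 < 1 (contractive)

def product_norm(path_length):
    """sigma_1 of a product of `path_length` Jacobians diag[f'(z_t)] W."""
    J = np.eye(d)
    for _ in range(path_length):
        gain = np.diag(rng.uniform(0.5, 1.0, d)) # stand-in for diag[f'(z_t)]
        J = gain @ W @ J
    return np.linalg.norm(J, 2)

T, T_mem = 100, 5                                # full path vs. a path through a wormhole connection
print("||Q|| ~", product_norm(T))                # vanishes for long dependencies
print("||R|| ~", product_norm(T_mem))            # stays many orders of magnitude larger
```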
We can also derive an upper bound for the norm of the Jacobian as follows:
$$\left\lVert \frac{\partial h_{t_1}}{\partial h_{t_0}} \right\rVert = \left\lVert \prod_{t_0 < t \le t_1} \frac{\partial h_t}{\partial h_{t-1}} \right\rVert \qquad (35)$$

$$= \lVert Q_{t_1 t_0} + R_{t_1 t_0} \rVert \le \sigma_1(Q_{t_1 t_0} + R_{t_1 t_0}). \qquad (36)$$
Using the result from (Loyka, 2015), we can lower-bound σ1(Q_{t1t0} + R_{t1t0}) as follows:
$$\sigma_1(Q_{t_1 t_0} + R_{t_1 t_0}) \ge \lvert \sigma_1(Q_{t_1 t_0}) - \sigma_1(R_{t_1 t_0}) \rvert. \qquad (37)$$
For long sequences we know that σ1(Q_{t1t0}) will go to 0 (see Equation 25). Hence,
$$\sigma_1(Q_{t_1 t_0} + R_{t_1 t_0}) \ge \sigma_1(R_{t_1 t_0}). \qquad (38)$$
The rate at which σ1(R_{t1t0}) reaches zero is strictly smaller than the rate at which σ1(Q_{t1t0}) reaches zero, and with ideal memory access it will not reach zero at all. Hence, unlike for vanilla RNNs, Equation 38 states that the upper bound on the norm of the Jacobian will not go to zero for a MANN with ideal memory access.
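The two bounds in Equations 37 and 38 are easy to check numerically. The toy check below uses random matrices in which the long-path term has already vanished (an assumption of this sketch, not a derivation from the model).

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
Q = 1e-9 * rng.normal(size=(d, d))       # stand-in for the vanished long-path term Q_{t1t0}
R = rng.normal(size=(d, d))              # stand-in for the short-path term R_{t1t0}

sigma1 = lambda M: np.linalg.norm(M, 2)  # largest singular value
print(sigma1(Q + R) >= abs(sigma1(Q) - sigma1(R)))  # Eq. 37 (Loyka, 2015): True
print(sigma1(Q + R), sigma1(R))                      # nearly equal once sigma_1(Q) ~ 0 (Eq. 38)
```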
Theorem 1 Consider a memory augmented neural network with T memory cells for a sequence of length T, where each hidden state of the controller is stored in a different cell of the memory. If the prediction at time step t1 has only a long-term dependency to t0, the prediction at t1 is independent of the tokens that appear before t0, and the memory reading mechanism is perfect, then the model will not suffer from vanishing gradients when we back-propagate from t1 to t0.²
Proof: If the input sequence has its longest dependency from t1 to t0, we would only be interested in the gradients propagating from t1 to t0 and the Jacobian from t1 to t0, i.e. ∂h_{t1}/∂h_{t0}. If the controller has learned a perfect reading mechanism, at time step t1 it would read the memory cell where the hidden state of the RNN at time step t0 is stored. Thus, following the Jacobians defined in Equation 29, we can rewrite the Jacobian as,
$$\frac{\partial h_{t_1}}{\partial h_{t_0}} = \prod_{t_0 < t \le t_1} \operatorname{diag}[f'(z_t)]\, W
 + \sum_{k=t_0}^{t_1-1} \Big( \prod_{k < t^* \le t_1} \operatorname{diag}[f'(z_{t^*})]\, W \Big) \operatorname{diag}[f'(z_k)]\, V A \, \frac{\partial h_{i_k}}{\partial h_{t_0}}
 + \operatorname{diag}[f'(z_{t_1})]\, V A \, \frac{\partial h_{t_0}}{\partial h_{t_0}}. \qquad (39)$$
In Equation 39, the first two terms might vanish as t1 − t0 grows. However, the singular values of the third term do not change as t1 − t0 grows. As a result, the gradients propagated from t1 to t0 will not necessarily vanish through time. However, in order to obtain stable dynamics for the network, the initialization of the matrices V and A is important.
This analysis highlights the fact that an external memory model with an optimal read/write mechanism can handle long-range dependencies much better than an RNN. However, this is applicable only when we use discrete addressing for read/write operations. Both NTM and D-NTM still have to learn how to read and write from scratch, which is a challenging optimization problem. For TARDIS, tying the read and write operations makes learning much simpler for the model. In particular, the result of Theorem 1 points to the importance of coming up with better ways of designing attention mechanisms over the memory.
The controller of a MANN may not be able to learn to use the memory efficiently. For example, some cells of the memory may remain empty or may never be read. The controller can overwrite memory cells which have not been read, and as a result the information stored in those overwritten memory cells can be lost completely. However, TARDIS avoids most of these issues by the construction of the algorithm.
# 6. On the Length of the Paths Through the Wormhole Connections
As we have discussed in Section 5, the rate at which the gradients vanish for a MANN depends on the length of the paths passing along the wormhole connections. In this section we will analyse those lengths in depth for untrained models, such that the model assigns uniform probability to reading or writing every memory cell. This will give us a better idea of how each untrained model uses the memory at the beginning of training.

² Let us note that, unlike a Markovian n-gram assumption, here we assume that at each time step the n can be different.
A wormhole connection can be created by reading a memory cell and writing into the same cell in TARDIS. For example, in Figure 2, while the actual path from h4 to h0 is of length 4, memory cell a0 creates a shorter path of length 2 (h0 → h2 → h4). We call the length of the actual path T and the length of the shorter path created by the wormhole connection Tmem.
Consider a TARDIS model which has k cells in its memory. If TARDIS accesses each memory cell uniformly at random, then the probability of accessing any particular cell i is p[i] = 1/k. The expected length of the shorter path created by wormhole connections (Tmem) would be proportional to the number of reads and writes into a memory cell. For a TARDIS whose reader chooses a memory cell uniformly at random, this would be Tmem = Σ_t p[i] = T/k − 1 at the end of the sequence. We verify this result by simulating the read and write heads of TARDIS as in Figure 3 a).
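A small Monte Carlo sketch of such a simulation is given below (our own illustrative reconstruction, not the authors' code). It counts how many wormhole hops accumulate in a memory cell when the reader picks cells uniformly at random and the writer overwrites the cell that was just read, once the first k steps have filled the memory.

```python
import numpy as np

def tardis_expected_tmem(T=200, k=50, n_sim=100, seed=0):
    """Average wormhole-chain length per cell for TARDIS-style tied read/write."""
    rng = np.random.default_rng(seed)
    means = []
    for _ in range(n_sim):
        chain = np.zeros(k)              # wormhole chain length stored in each cell
        for t in range(T):
            if t < k:
                continue                 # the first k steps only fill the memory
            i = rng.integers(k)          # uniform read of cell i
            chain[i] += 1                # tied write back into cell i adds one hop
        means.append(chain.mean())
    return float(np.mean(means))

print(tardis_expected_tmem())            # close to T/k - 1 = 3 for T=200, k=50
```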
Figure 3: In these figures we visualize the expected path length in the memory cells for a sequence of length 200 and memory size 50, over 100 simulations. a) shows the results for TARDIS and b) shows the simulation for a MANN with uniformly random read and write heads.
Now consider a MANN with separate read and write heads, each accessing the memory in a discrete and uniformly random fashion. Let us call it uMANN. We will compute the expected length of the shorter path created by wormhole connections (Tmem) for uMANN. w^r_t and w^w_t are the read and write head weights, each sampled from a multinomial distribution with uniform probability over the memory cells. Let j_t be the index of the memory cell read at timestep t. For any memory cell i, len(·), defined below, is a recursive function that computes the length of the path created by wormhole connections in that cell.
$$\operatorname{len}(M_t[i], i, j_t) = \begin{cases} \operatorname{len}(M_{t-1}[j_t], i, j_t) + 1 & \text{if } w^w_t[i] = 1 \\ \operatorname{len}(M_{t-1}[i], i, j_t) & \text{if } w^w_t[i] = 0 \end{cases} \qquad (40)$$
E_{i,j_t}[len(M_t[i], i, j_t)] will be T/k − 1 by induction for every memory cell. However, the proof assumes that when t is less than or equal to k,
the length of all paths stored in the memory, len(M_t[i], i, j_t), should be 0. We have run simulations to compute the expected path length in a memory cell of uMANN, as in Figure 3 (b).
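The following sketch is an illustrative simulation of that recursion for uMANN, with separate, uniformly random read and write heads; the bookkeeping is our own reconstruction of Equation 40, not the authors' code.

```python
import numpy as np

def umann_expected_path_length(T=200, k=50, n_sim=100, seed=0):
    """Average wormhole path length per cell for independent uniform read/write heads."""
    rng = np.random.default_rng(seed)
    means = []
    for _ in range(n_sim):
        length = np.zeros(k)             # len(M_t[i]) for every cell i
        for t in range(T):
            if t < k:
                continue                 # while t <= k the memory is only being filled
            j = rng.integers(k)          # read head picks cell j_t
            i = rng.integers(k)          # write head picks cell i
            length[i] = length[j] + 1    # the newly written state extends cell j_t's chain
        means.append(length.mean())
    return float(np.mean(means))

print(umann_expected_path_length())      # close to T/k - 1 = 3 for T=200, k=50
```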
This analysis shows that while TARDIS with a uniform read head maintains the same expected length of the shorter path created by wormhole connections as uMANN, it completely avoids the reader/writer synchronization problem.
In expectation, σ1(R_{t1t0}) will decay proportionally to Tmem, whereas σ1(Q_{t1t0}) will decay proportionally³ to T. With ideal memory access, the rate at which σ1(R_{t1t0}) reaches zero would be strictly smaller than the rate at which σ1(Q_{t1t0}) reaches zero. Hence, as per Equation 38, the upper bound on the norm of the Jacobian will vanish at a much smaller rate. However, this result assumes that the dependencies on which the prediction relies are accessible through the memory cell that has been read by the controller.

³ Exponentially, when Equation 25 holds.
Figure 4: Assuming that the prediction at t1 depends on t0, a wormhole connection can shorten the path by creating a connection from t1 − m to t0 + n. A wormhole connection may not directly create a connection from t1 to t0, but it can create shorter paths through which the gradients can flow without vanishing. In this figure, we consider the case where a wormhole connection is created from t1 − m to t0 + n. This connection skips all the tokens in between t1 − m and t0 + n.
In the more general case, consider a MANN with k ≥ T. The writer just fills in the memory cells in a sequential manner and the reader chooses a memory cell uniformly at random. Let us call this model urMANN. Let us assume that there is a dependency between two timesteps t0 and t1, as shown in Figure 4. If t0 was taken uniformly between 0 and t1 − 1, then there is a probability 0.5 that the read address invoked at time t1 will be greater than or equal to t0 (proof by symmetry). In that case, the expected shortest path length through that wormhole connection would be (t1 − t0)/2, but this still would not scale well. If the reader is very well trained, it could pick exactly t0 and the path length would be 1. Let us consider all the paths of length less than or equal to k + 1 of the form in Figure 4. Also, let n ≤ k/2 and m ≤ k/2. Then, the shortest path from t0 to t1 has length n + m + 1 ≤ k + 1, using a wormhole connection that connects the state at t0 + n with the state at t1 − m.
There are O(k²) such paths that are realized, but we leave the distribution of the length of that shortest path as an open question. However, the probability of hitting a very short path (of length less than or equal to k + 1) increases exponentially with k. Let the probability of the read at t1 − m hitting the interval (t0, t0 + k/2) be p. Then the probability
that the shorter paths over the last k reads hit that interval is 1 − (1 − p)^{k/2}, where p is on the order of k/t1. On the other hand, the probability of not hitting that interval approaches 0 exponentially with k.
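A quick numerical check of this probability, using the rough estimate p ≈ k/t1 from the text (the concrete numbers below are purely illustrative):

```python
# Probability that at least one of the last k/2 uniform reads lands in (t0, t0 + k/2),
# with p ~ k / t1 as in the text; it approaches 1 as the memory size k grows.
t1 = 10_000
for k in (10, 50, 100, 200):
    p = k / t1
    print(k, round(1 - (1 - p) ** (k / 2), 3))
```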
Figure 4 illustrates how wormhole connections can create shorter paths. In Figure 5 (b), we show that the expected length of the path travelled outside the wormhole connections, obtained from the simulations, decreases as the size of the memory increases. In particular, for urMANN and TARDIS the trend is very close to exponential. As shown in Figure 5 (a), this also influences the total length of the paths travelled from timestep 50 to 5. Writing into the memory with weights sampled with uniform probability over all memory cells cannot use the memory as efficiently as the other approaches that we compare to; in particular, fixing the writing mechanism seems to be useful.
Even if the reader does not manage to learn where to read, there are many "short paths" which can considerably reduce the effect of vanishing gradients.
Figure 5: We have run simulations for TARDIS, a MANN with uniformly random read and write mechanisms (uMANN), and a MANN with a uniformly random read head and a write head fixed with a heuristic (urMANN). In our simulations, we assume that there is a dependency from timestep 50 to 5. We run 200 simulations for each of them, with different memory sizes for each model. In plot a) we show the expected length of the shortest path from timestep 50 to 5. In the plots, as the size of the memory gets larger for both models, the length of the shortest path decreases dramatically. In plot b), we show the expected length of the shortest path travelled outside the wormhole connections with respect to different memory sizes. TARDIS seems to use the memory more efficiently than the other models, in particular when the size of the memory is small, by creating shorter paths.
# 7. On Generalization over the Longer Sequences
Graves et al. (2014) have shown that LSTMs cannot generalize well to sequences longer than the ones seen during training, whereas a MANN such as an NTM or a D-NTM has been shown to generalize to sequences longer than the ones seen in the training set on a set of toy tasks.
We believe that the main reason why LSTMs typically do not generalize to sequences longer than the ones seen during training is that the hidden
state of an LSTM network utilizes an unbounded history of the input sequence, and as a result its parameters are optimized with the maximum likelihood criterion to fit the sequence lengths of the training examples. However, an n-gram language model or an HMM does not suffer from this issue: an n-gram LM uses an input context with a fixed window size, and an HMM has the Markov property in its latent space. As argued below, we claim that, while being trained, a MANN can also learn to generalize to sequences longer than the ones that appear in the training set by modifying the contents of the memory and reading from it.
A regular RNN minimizes the negative log-likelihood of the targets yt by using the unbounded history summarized in its hidden state: it models the parametrized conditional distribution p(yt|ht; θ), whereas a MANN learns p(yt|ht, rt; θ). If we assume that rt represents all the dependencies of yt on the input sequence, we have p(yt|ht, rt; θ) ≈ p(yt|rt, xt; θ), where rt covers a limited context window that only contains paths shorter than the sequences seen during training. Due to this property, we claim that MANNs such as NTM, D-NTM or TARDIS can generalize to longer sequences more easily. In our experiments on PennTreebank, we show that for a TARDIS language model trained to minimize the log-likelihood of p(yt|ht, rt; θ), evaluating p(yt|ht, rt; θ) and p(yt|rt, xt; θ) on the test set yields very close results for the same model. On the other hand, the fact that the best results on the bAbI dataset
obtained in (Gulcehre et al., 2016) were achieved with a feedforward controller, and that in (Graves et al., 2014) a feedforward controller was used to solve some of the toy tasks, also confirms our hypothesis. As a result, what has been written into the memory and what has been read become very important for generalizing to longer sequences.
# 8. Experiments
# 8.1 Character-level Language Modeling on PTB
As a preliminary study of the performance of our model, we consider character-level language modelling. We evaluated our models on the Penn TreeBank (PTB) corpus (Marcus et al., 1993), using the train, valid and test splits used in (Mikolov et al., 2012). On this task we use layer normalization (Ba et al., 2016) and recurrent dropout (Semeniuta et al., 2016), as those are also used by the SOTA results on this task. Using layer normalization and recurrent dropout improves the performance significantly and reduces the effects of overfitting. We train our models with Adam (Kingma and Ba, 2014) over sequences of length 150. We show our results in Table 1.
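For reference, a minimal sketch of the data pipeline implied above: the character stream is cut into length-150 windows with next-character targets, and models are scored in bits per character (BPC). The toy corpus and the uniform baseline below are placeholders, not the TARDIS implementation.

```python
import numpy as np

def make_batches(text, vocab, seq_len=150):
    """Encode characters as integer ids and cut the stream into length-150 windows."""
    char2id = {c: i for i, c in enumerate(vocab)}
    ids = np.array([char2id[c] for c in text], dtype=np.int64)
    n = (len(ids) - 1) // seq_len
    x = ids[: n * seq_len].reshape(n, seq_len)
    y = ids[1 : n * seq_len + 1].reshape(n, seq_len)   # next-character targets
    return x, y

def bits_per_character(neg_log_probs_nats):
    """Convert a mean negative log-likelihood (in nats) into bits per character."""
    return float(np.mean(neg_log_probs_nats)) / np.log(2.0)

corpus = "the quick brown fox jumps over the lazy dog " * 200   # placeholder corpus
vocab = sorted(set(corpus))
x, y = make_batches(corpus, vocab)

# A uniform predictor gives log2(|V|) BPC, the natural upper baseline.
uniform_nll = np.full(y.size, np.log(len(vocab)))
print(x.shape, y.shape, bits_per_character(uniform_nll))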
In addition to the regular char-LM experiments, we ran a test to confirm our hypothesis about the ability of MANNs to generalize to sequences longer than the ones seen during training. We trained a language model which learns p(yt|ht, rt; θ) by using a softmax layer as described in Equation 11. To measure the performance of p(yt|rt, xt; θ) on the test set, we used the softmax layer that enters the auxiliary cost defined for REINFORCE in Equation 17, for a model trained with REINFORCE and the auxiliary cost. As in Table 1, the model's performance using p(yt|ht, rt; θ) is 1.26, while using p(yt|rt, xt; θ) it becomes 1.28. This gap is small enough to support our assumption that p(yt|ht, rt; θ) ≈ p(yt|rt, xt; θ).
Models compared in Table 1: CW-RNN (Koutnik et al., 2014); HF-MRNN (Sutskever et al., 2011); ME n-gram (Mikolov et al., 2012); BatchNorm LSTM (Cooijmans et al., 2016); Zoneout RNN (Krueger et al., 2016); LayerNorm LSTM (Ha et al., 2016); LayerNorm HyperNetworks (Ha et al., 2016); LayerNorm HM-LSTM with step function and slope annealing (Chung et al., 2016); our LSTM + Layer Norm + Dropout; TARDIS + REINFORCE + R; TARDIS + REINFORCE + Auxiliary Cost; TARDIS + REINFORCE + Auxiliary Cost + R; and TARDIS + Gumbel Softmax + ST + R. (The numeric results column of the original table is not preserved in this extraction.)
Table 1: Character-level language modelling results on the Penn TreeBank dataset. TARDIS with Gumbel softmax and the straight-through (ST) estimator performs better than REINFORCE and is competitive with the SOTA on this task. "+ R" denotes the use of the RESET gates α and β.
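The discrete read/write addresses behind the "Gumbel Softmax + ST" rows are trained with a Gumbel-softmax relaxation and a straight-through estimator. Below is a minimal forward-pass sketch of that sampling step (plain NumPy; the temperature and slot count are illustrative assumptions). In the actual model a framework would backpropagate through the soft probabilities while the forward pass uses the hard one-hot address.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_st(logits, temperature=0.5):
    """Sample a one-hot memory address; return both the hard sample (forward pass)
    and the relaxed distribution (what the straight-through backward pass would use)."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + gumbel) / temperature
    soft = np.exp(y - y.max())
    soft = soft / soft.sum()                      # relaxed (differentiable) address
    hard = np.eye(len(logits))[soft.argmax()]     # discrete address used in the forward pass
    return hard, soft

address_logits = np.array([0.1, 1.5, -0.3, 0.7])  # scores over 4 memory slots (illustrative)
hard, soft = gumbel_softmax_st(address_logits)
print(hard, soft.round(3))
```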
# 8.2 Sequential Stroke Multi-digit MNIST task
# 8.2.1 Task and Dataset
Recently, de Jong (2016) introduced an MNIST pen stroke classification task and provided a dataset consisting of pen stroke sequences representing the skeletons of the digits in the MNIST dataset. Each MNIST digit image I is represented as a sequence of quadruples {dxi, dyi, eosi, eodi}, i = 1, . . . , T, where T is the number of pen strokes defining the digit, (dxi, dyi) denotes the pen offset from the previous to the current stroke (each can be 1, -1 or 0), eosi is a binary feature denoting the end of a stroke, and eodi is a binary feature denoting the end of the digit. In the original dataset, the first quadruple contains the absolute position (x, y) instead of offsets (dx, dy); without loss of generality, we set the starting position (x, y) to (0, 0) in our experiments. Each digit is represented by about 40 strokes on average, and the task is to predict the digit at the end of the stroke sequence.
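A small sketch of this representation is given below (a hypothetical helper, not the released preprocessing code): each digit is a list of (dx, dy, eos, eod) quadruples, and the absolute starting coordinate stored in the first quadruple is replaced by (0, 0).

```python
def normalize_digit(strokes):
    """strokes: list of [x_or_dx, y_or_dy, eos, eod] rows, where only the first row
    holds an absolute (x, y) position. Returns the sequence with the start set to (0, 0)."""
    out = [row[:] for row in strokes]
    out[0][0], out[0][1] = 0, 0            # drop the absolute starting position
    return out

# A tiny made-up digit: a few offsets, ending with end-of-stroke and end-of-digit flags.
digit = [
    [12, 7, 0, 0],   # absolute start (x=12, y=7)
    [1, 0, 0, 0],    # move right
    [0, -1, 1, 0],   # move down, end of stroke
    [1, 1, 0, 1],    # last offset, end of digit
]
print(normalize_digit(digit))
```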
While this dataset was proposed for incremental sequence learning in (de Jong, 2016), we consider a multi-digit version of it to benchmark models that must handle long-term dependencies. Specifically, given a sequence of pen-stroke sequences, the task is to predict the sequence of digits corresponding to the pen-stroke sequences in the given order. This is a challenging task, since it requires the model to learn to predict each digit from its pen-stroke sequence, count the number of digits, remember them, and emit them in the same order after seeing all the strokes. In our experiments we consider 3 versions of this task, with 5, 10, and 15 digit sequences respectively. We generated 200,000 training data
points by randomly sampling digits from the training set of the MNIST dataset. Similarly, we generated 20,000 validation and test data points by randomly sampling digits from the validation and test sets of the MNIST dataset, respectively. The average lengths of the stroke sequences in these tasks are 199, 399, and 599, respectively.
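The multi-digit examples can be assembled roughly as sketched below (our own illustrative generator; the placeholder stroke database stands in for the real MNIST stroke data): for each example, k digits are sampled, their stroke sequences are concatenated, and the label is the ordered digit sequence.

```python
import random

def make_example(stroke_db, k, rng):
    """stroke_db: dict digit -> list of stroke sequences (each a list of quadruples).
    Returns (concatenated stroke sequence, list of digit labels in order)."""
    labels, strokes = [], []
    for _ in range(k):
        d = rng.randrange(10)
        labels.append(d)
        strokes.extend(rng.choice(stroke_db[d]))
    return strokes, labels

rng = random.Random(0)
# Placeholder database: one dummy 3-step stroke sequence per digit.
stroke_db = {d: [[[0, 0, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1]]] for d in range(10)}
x, y = make_example(stroke_db, k=5, rng=rng)
print(len(x), y)
```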
Figure 6: An illustration of the sequential MNIST strokes task with multiple digits. The network is first given the stroke (location) information for each MNIST digit as input; during prediction, it must output the MNIST digits it has just seen. While predicting, the prediction from the previous time step is fed back into the network. At the first prediction step, the model instead receives a special <bos> token.
# 8.2.2 Results
We benchmark the performance of LSTM and TARDIS on this new task. Both models receive the sequence of pen strokes and, at the end of the sequence, are expected to generate the sequence of digits once a special <bos> token is fed in. The task is illustrated in Figure 6. We evaluate the models based on the per-digit error rate. We also compare the performance of TARDIS with REINFORCE to that of TARDIS with Gumbel softmax. All models were trained for the same number of updates, with early stopping based on the per-digit error rate on the validation set. Results for all 3 versions of the task are reported in Table 2. From the table, we can see that TARDIS performs better than LSTM in all three versions of the task. Also, TARDIS with Gumbel softmax performs slightly better than TARDIS with REINFORCE, which is consistent with our other experiments.
[Table 2 data: per-digit test error (%) for LSTM, TARDIS with REINFORCE, and TARDIS with Gumbel softmax on the 5-, 10-, and 15-digit tasks. The extracted values are 3.54, 2.56, 1.89, 2.23, 3.00, 2.09, 8.81, 3.67, and 3.09, but their assignment to individual cells is garbled in this extraction; the surrounding text summarizes the ordering.]
Table 2: Per-digit test error in the sequential stroke multi-digit MNIST task with 5, 10, and 15 digits.
We also compare the learning curves of all three models in Figure 7. From the figure, we can see that TARDIS learns to solve the task faster than LSTM by effectively utilizing
the given memory slots. Also, TARDIS with Gumbel softmax converges faster than TARDIS with REINFORCE.
Figure 7: Learning curves for LSTM and TARDIS for sequential stroke multi-digit MNIST task with 5, 10, and 15 digits respectively.
# 8.3 NTM Tasks
Graves et al. (2014) proposed the associative recall and copy tasks to evaluate a model's ability to learn simple algorithms and to generalize to sequences longer than the ones seen during training. We trained a TARDIS model with 4 features for the address and 32 features for the memory content part of the model. We used a model with a hidden state of size 120 and a memory of size 16. We train our model with Adam and use a learning rate of 3e-3. We show the results of our model in Table 3. TARDIS was able to solve both tasks, both with Gumbel softmax and with REINFORCE.

[Table 3 data: Copy Task and Associative Recall outcomes for D-NTM cont. (Gulcehre et al., 2016), D-NTM discrete (Gulcehre et al., 2016), NTM (Graves et al., 2014), TARDIS + Gumbel Softmax + ST, and TARDIS + REINFORCE + Auxiliary Cost. Nine of the ten entries are Success and one is Failure; the exact cell assignment is garbled in this extraction, though the text above confirms that both TARDIS variants solve both tasks.]
Table 3: We consider a model to be successful on copy or associative recall if its validation cost (binary cross-entropy) is lower than 0.02 over sequences of the maximum length seen during training. We set the threshold to 0.02, as in (Gulcehre et al., 2016).
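For completeness, a common way to generate copy-task instances is sketched below (our own illustrative generator, following the usual setup rather than any released code): random binary vectors followed by a delimiter channel, with the target being the same vectors repeated after the delimiter.

```python
import numpy as np

def copy_task_batch(rng, batch=8, max_len=16, width=8):
    """Inputs: [sequence, delimiter, zeros]; targets: zeros, then the sequence again."""
    T = rng.integers(1, max_len + 1)
    seq = rng.integers(0, 2, size=(batch, T, width)).astype(np.float32)
    delim = np.zeros((batch, 1, width + 1), dtype=np.float32)
    delim[:, 0, -1] = 1.0                                   # extra delimiter channel
    pad = np.zeros((batch, T, width + 1), dtype=np.float32)
    seq_in = np.concatenate([seq, np.zeros((batch, T, 1), np.float32)], axis=-1)
    inputs = np.concatenate([seq_in, delim, pad], axis=1)
    targets = np.concatenate([np.zeros((batch, T + 1, width), np.float32), seq], axis=1)
    return inputs, targets

rng = np.random.default_rng(0)
x, y = copy_task_batch(rng)
print(x.shape, y.shape)
```

Generalization is then tested by sampling sequences longer than max_len at evaluation time only.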
# 8.4 Stanford Natural Language Inference
Bowman et al. (2015) proposed a new task to test machine learning algorithms' ability to infer whether two given sentences entail each other, contradict each other, or are neutral (semantically independent). However, this task can also be considered a long-term dependency task if the premise and the hypothesis are presented to the model in sequential order, as also explored by Rocktäschel et al. (2015), because the model must learn the dependency relationship between the hypothesis and the premise. Our model first reads the premise, then the hypothesis, and at the end of the hypothesis the model predicts whether the premise
and the hypothesis contradict or entail each other. The model proposed by Rocktäschel et al. (2015) applies attention over its previous hidden states over the premise while it reads the hypothesis; in that sense, their model can still be considered to include a task-specific architectural design choice. TARDIS and our baseline LSTM models do not include any task-specific architectural design choices. In Table 4, we compare the results of the different models; our model performs significantly better than the others. However, it has recently been shown that with architectural tweaks it is possible to design a model specifically for this task and achieve 88.2% test accuracy (Chen et al., 2016).
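Concretely, the sequential presentation used here can be sketched as follows (hypothetical tokenization, vocabulary, and delimiter token; no task-specific wiring): premise tokens, a delimiter, then hypothesis tokens, with the 3-way prediction made at the final position.

```python
def encode_pair(premise, hypothesis, word2id, delim="<delim>"):
    """Encode 'premise <delim> hypothesis' as one token-id sequence;
    the classifier reads the model state at the last position."""
    tokens = premise.split() + [delim] + hypothesis.split()
    unk = word2id["<unk>"]
    return [word2id.get(t, unk) for t in tokens]

word2id = {"<unk>": 0, "<delim>": 1, "a": 2, "man": 3, "is": 4, "sleeping": 5, "awake": 6}
ids = encode_pair("a man is sleeping", "a man is awake", word2id)
print(ids)   # prediction target: entailment / contradiction / neutral at the last step
```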
Model | Test Accuracy
Word by Word Attention (Rocktäschel et al., 2015) | 83.5
Word by Word Attention two-way (Rocktäschel et al., 2015) | 83.2
LSTM + LayerNorm + Dropout | 81.7
TARDIS + REINFORCE + Auxiliary Cost | 82.4
TARDIS + Gumbel Softmax + ST | 84.3
Table 4: Comparison of different baselines on the SNLI task.
# 9. Conclusion
In this paper, we proposed a simple and efficient memory augmented neural network model which performs well both on algorithmic tasks and on more realistic tasks. Unlike previous approaches, we show better performance on real-world NLP tasks, such as language modelling and SNLI. We have also proposed a new task to measure how well models deal with long-term dependencies.
We provide a detailed analysis of the effects of using external memory on the gradients and justify why MANNs generalize better to sequences longer than the ones seen in the training set. We have also shown that the gradients will vanish at a much slower rate (if they vanish at all) when an external memory is used. Our theoretical results should encourage further studies in the direction of developing better attention mechanisms that can create wormhole connections efficiently.
# Acknowledgments
We thank Chinnadhurai Sankar for suggesting the phrase "wormhole connections" and for proof-reading the paper. We would like to thank Dzmitry Bahdanau for comments and feedback on an earlier version of this paper. We would also like to thank the developers of Theano (footnote 4) for developing such a powerful tool for scientific computing (Theano Development Team, 2016). We acknowledge the support of the following organizations for research funding and computing support: NSERC, Samsung, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. SC is supported by a FQRNT-PBEEE scholarship.
4. http://deeplearning.net/software/theano/
# References
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR 2015), 2015.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157-166, 1994.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015.
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. Enhancing and combining sequential and tree LSTM for natural language inference. arXiv preprint arXiv:1609.06038, 2016.
Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
Edwin D. de Jong. Incremental sequence learning. arXiv preprint arXiv:1611.03068, 2016.
Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426, 2016.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio G. Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià P. Badia, Karl M. Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. Hybrid computing using a neural network with dynamic external memory. Nature, advance online publication, October 2016. ISSN 0028-0836. doi: 10.1038/nature20101. URL http://dx.doi.org/10.1038/nature20101.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1819–1827, 2015. | 1701.08718#67 | Memory Augmented Neural Networks with Wormhole Connections | Recent empirical results on long-term dependency tasks have shown that neural
networks augmented with an external memory can learn the long-term dependency
tasks more easily and achieve better generalization than vanilla recurrent
neural networks (RNN). We suggest that memory augmented neural networks can
reduce the effects of vanishing gradients by creating shortcut (or wormhole)
connections. Based on this observation, we propose a novel memory augmented
neural network model called TARDIS (Temporal Automatic Relation Discovery in
Sequences). The controller of TARDIS can store a selective set of embeddings of
its own previous hidden states into an external memory and revisit them as and
when needed. For TARDIS, memory acts as a storage for wormhole connections to
the past to propagate the gradients more effectively and it helps to learn the
temporal dependencies. The memory structure of TARDIS has similarities to both
Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but
both read and write operations of TARDIS are simpler and more efficient. We use
discrete addressing for read/write operations which helps to substantially to
reduce the vanishing gradient problem with very long sequences. Read and write
operations in TARDIS are tied with a heuristic once the memory becomes full,
and this makes the learning problem simpler when compared to NTM or D-NTM type
of architectures. We provide a detailed analysis on the gradient propagation in
general for MANNs. We evaluate our models on different long-term dependency
tasks and report competitive results in all of them. | http://arxiv.org/pdf/1701.08718 | Caglar Gulcehre, Sarath Chandar, Yoshua Bengio | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20170130 | 20170130 | [
{
"id": "1609.01704"
},
{
"id": "1603.09025"
},
{
"id": "1606.01305"
},
{
"id": "1503.08895"
},
{
"id": "1607.06450"
},
{
"id": "1605.07427"
},
{
"id": "1607.00036"
},
{
"id": "1609.06038"
},
{
"id": "1511.08228"
},
{
"id": "1611.01144"
},
{
"id": "1507.06630"
},
{
"id": "1603.05118"
},
{
"id": "1601.06733"
},
{
"id": "1609.09106"
},
{
"id": "1509.06664"
},
{
"id": "1506.02075"
},
{
"id": "1612.04426"
},
{
"id": "1607.03474"
},
{
"id": "1605.06065"
},
{
"id": "1606.02270"
},
{
"id": "1611.03068"
},
{
"id": "1611.00712"
},
{
"id": "1508.05326"
}
] |
1701.08718 | 68 | Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic neural turing machine with soft and hard addressing schemes. arXiv preprint arXiv:1607.00036, 2016.
David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma, Technische Universität München, page 91, 1991.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190–198, 2015.
Åukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. arXiv preprint arXiv:1511.08228, 2015. | 1701.08718#68 | Memory Augmented Neural Networks with Wormhole Connections | Recent empirical results on long-term dependency tasks have shown that neural
networks augmented with an external memory can learn the long-term dependency
tasks more easily and achieve better generalization than vanilla recurrent
neural networks (RNN). We suggest that memory augmented neural networks can
reduce the effects of vanishing gradients by creating shortcut (or wormhole)
connections. Based on this observation, we propose a novel memory augmented
neural network model called TARDIS (Temporal Automatic Relation Discovery in
Sequences). The controller of TARDIS can store a selective set of embeddings of
its own previous hidden states into an external memory and revisit them as and
when needed. For TARDIS, memory acts as a storage for wormhole connections to
the past to propagate the gradients more effectively and it helps to learn the
temporal dependencies. The memory structure of TARDIS has similarities to both
Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but
both read and write operations of TARDIS are simpler and more efficient. We use
discrete addressing for read/write operations which helps to substantially to
reduce the vanishing gradient problem with very long sequences. Read and write
operations in TARDIS are tied with a heuristic once the memory becomes full,
and this makes the learning problem simpler when compared to NTM or D-NTM type
of architectures. We provide a detailed analysis on the gradient propagation in
general for MANNs. We evaluate our models on different long-term dependency
tasks and report competitive results in all of them. | http://arxiv.org/pdf/1701.08718 | Caglar Gulcehre, Sarath Chandar, Yoshua Bengio | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20170130 | 20170130 | [
{
"id": "1609.01704"
},
{
"id": "1603.09025"
},
{
"id": "1606.01305"
},
{
"id": "1503.08895"
},
{
"id": "1607.06450"
},
{
"id": "1605.07427"
},
{
"id": "1607.00036"
},
{
"id": "1609.06038"
},
{
"id": "1511.08228"
},
{
"id": "1611.01144"
},
{
"id": "1507.06630"
},
{
"id": "1603.05118"
},
{
"id": "1601.06733"
},
{
"id": "1609.09106"
},
{
"id": "1509.06664"
},
{
"id": "1506.02075"
},
{
"id": "1612.04426"
},
{
"id": "1607.03474"
},
{
"id": "1605.06065"
},
{
"id": "1606.02270"
},
{
"id": "1611.03068"
},
{
"id": "1611.00712"
},
{
"id": "1508.05326"
}
] |
1701.08718 | 69 | Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork RNN. arXiv preprint arXiv:1402.3511, 2014.
David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
Roland Kuhn and Renato De Mori. A cache-based natural language model for speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(6):570–583, 1990.
Gulcehre, Chandar, and Bengio | 1701.08718#69 | Memory Augmented Neural Networks with Wormhole Connections | Recent empirical results on long-term dependency tasks have shown that neural
networks augmented with an external memory can learn the long-term dependency
tasks more easily and achieve better generalization than vanilla recurrent
neural networks (RNN). We suggest that memory augmented neural networks can
reduce the effects of vanishing gradients by creating shortcut (or wormhole)
connections. Based on this observation, we propose a novel memory augmented
neural network model called TARDIS (Temporal Automatic Relation Discovery in
Sequences). The controller of TARDIS can store a selective set of embeddings of
its own previous hidden states into an external memory and revisit them as and
when needed. For TARDIS, memory acts as a storage for wormhole connections to
the past to propagate the gradients more effectively and it helps to learn the
temporal dependencies. The memory structure of TARDIS has similarities to both
Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but
both read and write operations of TARDIS are simpler and more efficient. We use
discrete addressing for read/write operations which helps to substantially to
reduce the vanishing gradient problem with very long sequences. Read and write
operations in TARDIS are tied with a heuristic once the memory becomes full,
and this makes the learning problem simpler when compared to NTM or D-NTM type
of architectures. We provide a detailed analysis on the gradient propagation in
general for MANNs. We evaluate our models on different long-term dependency
tasks and report competitive results in all of them. | http://arxiv.org/pdf/1701.08718 | Caglar Gulcehre, Sarath Chandar, Yoshua Bengio | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20170130 | 20170130 | [
{
"id": "1609.01704"
},
{
"id": "1603.09025"
},
{
"id": "1606.01305"
},
{
"id": "1503.08895"
},
{
"id": "1607.06450"
},
{
"id": "1605.07427"
},
{
"id": "1607.00036"
},
{
"id": "1609.06038"
},
{
"id": "1511.08228"
},
{
"id": "1611.01144"
},
{
"id": "1507.06630"
},
{
"id": "1603.05118"
},
{
"id": "1601.06733"
},
{
"id": "1609.09106"
},
{
"id": "1509.06664"
},
{
"id": "1506.02075"
},
{
"id": "1612.04426"
},
{
"id": "1607.03474"
},
{
"id": "1605.06065"
},
{
"id": "1606.02270"
},
{
"id": "1611.03068"
},
{
"id": "1611.00712"
},
{
"id": "1508.05326"
}
] |
1701.08718 | 70 | 25
Sergey Loyka. On singular value inequalities for the sum of two matrices. arXiv preprint arXiv:1507.06630, 2015.
Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J. Cernocky. Subword language modeling with neural networks. Preprint (http://www.fit.vutbr.cz/imikolov/rnnlm/char.pdf), 2012.
Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014. | 1701.08718#70 | Memory Augmented Neural Networks with Wormhole Connections | Recent empirical results on long-term dependency tasks have shown that neural
networks augmented with an external memory can learn the long-term dependency
tasks more easily and achieve better generalization than vanilla recurrent
neural networks (RNN). We suggest that memory augmented neural networks can
reduce the effects of vanishing gradients by creating shortcut (or wormhole)
connections. Based on this observation, we propose a novel memory augmented
neural network model called TARDIS (Temporal Automatic Relation Discovery in
Sequences). The controller of TARDIS can store a selective set of embeddings of
its own previous hidden states into an external memory and revisit them as and
when needed. For TARDIS, memory acts as a storage for wormhole connections to
the past to propagate the gradients more effectively and it helps to learn the
temporal dependencies. The memory structure of TARDIS has similarities to both
Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but
both read and write operations of TARDIS are simpler and more efficient. We use
discrete addressing for read/write operations which helps to substantially to
reduce the vanishing gradient problem with very long sequences. Read and write
operations in TARDIS are tied with a heuristic once the memory becomes full,
and this makes the learning problem simpler when compared to NTM or D-NTM type
of architectures. We provide a detailed analysis on the gradient propagation in
general for MANNs. We evaluate our models on different long-term dependency
tasks and report competitive results in all of them. | http://arxiv.org/pdf/1701.08718 | Caglar Gulcehre, Sarath Chandar, Yoshua Bengio | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20170130 | 20170130 | [
{
"id": "1609.01704"
},
{
"id": "1603.09025"
},
{
"id": "1606.01305"
},
{
"id": "1503.08895"
},
{
"id": "1607.06450"
},
{
"id": "1605.07427"
},
{
"id": "1607.00036"
},
{
"id": "1609.06038"
},
{
"id": "1511.08228"
},
{
"id": "1611.01144"
},
{
"id": "1507.06630"
},
{
"id": "1603.05118"
},
{
"id": "1601.06733"
},
{
"id": "1609.09106"
},
{
"id": "1509.06664"
},
{
"id": "1506.02075"
},
{
"id": "1612.04426"
},
{
"id": "1607.03474"
},
{
"id": "1605.06065"
},
{
"id": "1606.02270"
},
{
"id": "1611.03068"
},
{
"id": "1611.00712"
},
{
"id": "1508.05326"
}
] |
1701.08718 | 71 | Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013a.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310–1318, 2013b.
Jack W. Rae, Jonathan J. Hunt, Tim Harley, Ivo Danihelka, Andrew W. Senior, Greg Wayne, Alex Graves, and Timothy P. Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. CoRR, abs/1610.09027, 2016.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015. | 1701.08718#71 | Memory Augmented Neural Networks with Wormhole Connections | Recent empirical results on long-term dependency tasks have shown that neural
networks augmented with an external memory can learn the long-term dependency
tasks more easily and achieve better generalization than vanilla recurrent
neural networks (RNN). We suggest that memory augmented neural networks can
reduce the effects of vanishing gradients by creating shortcut (or wormhole)
connections. Based on this observation, we propose a novel memory augmented
neural network model called TARDIS (Temporal Automatic Relation Discovery in
Sequences). The controller of TARDIS can store a selective set of embeddings of
its own previous hidden states into an external memory and revisit them as and
when needed. For TARDIS, memory acts as a storage for wormhole connections to
the past to propagate the gradients more effectively and it helps to learn the
temporal dependencies. The memory structure of TARDIS has similarities to both
Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but
both read and write operations of TARDIS are simpler and more efficient. We use
discrete addressing for read/write operations which helps to substantially to
reduce the vanishing gradient problem with very long sequences. Read and write
operations in TARDIS are tied with a heuristic once the memory becomes full,
and this makes the learning problem simpler when compared to NTM or D-NTM type
of architectures. We provide a detailed analysis on the gradient propagation in
general for MANNs. We evaluate our models on different long-term dependency
tasks and report competitive results in all of them. | http://arxiv.org/pdf/1701.08718 | Caglar Gulcehre, Sarath Chandar, Yoshua Bengio | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20170130 | 20170130 | [
{
"id": "1609.01704"
},
{
"id": "1603.09025"
},
{
"id": "1606.01305"
},
{
"id": "1503.08895"
},
{
"id": "1607.06450"
},
{
"id": "1605.07427"
},
{
"id": "1607.00036"
},
{
"id": "1609.06038"
},
{
"id": "1511.08228"
},
{
"id": "1611.01144"
},
{
"id": "1507.06630"
},
{
"id": "1603.05118"
},
{
"id": "1601.06733"
},
{
"id": "1609.09106"
},
{
"id": "1509.06664"
},
{
"id": "1506.02075"
},
{
"id": "1612.04426"
},
{
"id": "1607.03474"
},
{
"id": "1605.06065"
},
{
"id": "1606.02270"
},
{
"id": "1611.03068"
},
{
"id": "1611.00712"
},
{
"id": "1508.05326"
}
] |
1701.08718 | 72 | Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.
Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.
Ilya Sutskever, James Martens, and Geoffrey E Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017–1024, 2011.
networks augmented with an external memory can learn the long-term dependency
tasks more easily and achieve better generalization than vanilla recurrent
neural networks (RNN). We suggest that memory augmented neural networks can
reduce the effects of vanishing gradients by creating shortcut (or wormhole)
connections. Based on this observation, we propose a novel memory augmented
neural network model called TARDIS (Temporal Automatic Relation Discovery in
Sequences). The controller of TARDIS can store a selective set of embeddings of
its own previous hidden states into an external memory and revisit them as and
when needed. For TARDIS, memory acts as a storage for wormhole connections to
the past to propagate the gradients more effectively and it helps to learn the
temporal dependencies. The memory structure of TARDIS has similarities to both
Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but
both read and write operations of TARDIS are simpler and more efficient. We use
discrete addressing for read/write operations which helps to substantially to
reduce the vanishing gradient problem with very long sequences. Read and write
operations in TARDIS are tied with a heuristic once the memory becomes full,
and this makes the learning problem simpler when compared to NTM or D-NTM type
of architectures. We provide a detailed analysis on the gradient propagation in
general for MANNs. We evaluate our models on different long-term dependency
tasks and report competitive results in all of them. | http://arxiv.org/pdf/1701.08718 | Caglar Gulcehre, Sarath Chandar, Yoshua Bengio | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20170130 | 20170130 | [
{
"id": "1609.01704"
},
{
"id": "1603.09025"
},
{
"id": "1606.01305"
},
{
"id": "1503.08895"
},
{
"id": "1607.06450"
},
{
"id": "1605.07427"
},
{
"id": "1607.00036"
},
{
"id": "1609.06038"
},
{
"id": "1511.08228"
},
{
"id": "1611.01144"
},
{
"id": "1507.06630"
},
{
"id": "1603.05118"
},
{
"id": "1601.06733"
},
{
"id": "1609.09106"
},
{
"id": "1509.06664"
},
{
"id": "1506.02075"
},
{
"id": "1612.04426"
},
{
"id": "1607.03474"
},
{
"id": "1605.06065"
},
{
"id": "1606.02270"
},
{
"id": "1611.03068"
},
{
"id": "1611.00712"
},
{
"id": "1508.05326"
}
] |
1701.08718 | 73 | 26
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.
Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. arXiv preprint arXiv:1606.02270, 2016.
Endel Tulving. Chronesthesia: Conscious awareness of subjective time. 2002.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings Of The International Conference on Representation Learning (ICLR 2015), 2015. In Press.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings Of The International Conference on Representation Learning (ICLR 2015), 2015. | 1701.08718#73 | Memory Augmented Neural Networks with Wormhole Connections | Recent empirical results on long-term dependency tasks have shown that neural
networks augmented with an external memory can learn the long-term dependency
tasks more easily and achieve better generalization than vanilla recurrent
neural networks (RNN). We suggest that memory augmented neural networks can
reduce the effects of vanishing gradients by creating shortcut (or wormhole)
connections. Based on this observation, we propose a novel memory augmented
neural network model called TARDIS (Temporal Automatic Relation Discovery in
Sequences). The controller of TARDIS can store a selective set of embeddings of
its own previous hidden states into an external memory and revisit them as and
when needed. For TARDIS, memory acts as a storage for wormhole connections to
the past to propagate the gradients more effectively and it helps to learn the
temporal dependencies. The memory structure of TARDIS has similarities to both
Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but
both read and write operations of TARDIS are simpler and more efficient. We use
discrete addressing for read/write operations which helps to substantially to
reduce the vanishing gradient problem with very long sequences. Read and write
operations in TARDIS are tied with a heuristic once the memory becomes full,
and this makes the learning problem simpler when compared to NTM or D-NTM type
of architectures. We provide a detailed analysis on the gradient propagation in
general for MANNs. We evaluate our models on different long-term dependency
tasks and report competitive results in all of them. | http://arxiv.org/pdf/1701.08718 | Caglar Gulcehre, Sarath Chandar, Yoshua Bengio | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20170130 | 20170130 | [
{
"id": "1609.01704"
},
{
"id": "1603.09025"
},
{
"id": "1606.01305"
},
{
"id": "1503.08895"
},
{
"id": "1607.06450"
},
{
"id": "1605.07427"
},
{
"id": "1607.00036"
},
{
"id": "1609.06038"
},
{
"id": "1511.08228"
},
{
"id": "1611.01144"
},
{
"id": "1507.06630"
},
{
"id": "1603.05118"
},
{
"id": "1601.06733"
},
{
"id": "1609.09106"
},
{
"id": "1509.06664"
},
{
"id": "1506.02075"
},
{
"id": "1612.04426"
},
{
"id": "1607.03474"
},
{
"id": "1605.06065"
},
{
"id": "1606.02270"
},
{
"id": "1611.03068"
},
{
"id": "1611.00712"
},
{
"id": "1508.05326"
}
] |
1701.08118 | 0 | arXiv:1701.08118v1 [cs.CL] 27 Jan 2017
# Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis
Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki
Research Training Group "User-Centred Social Media"
Department of Computer Science and Applied Cognitive Science
# University of Duisburg-Essen [email protected]
# Abstract | 1701.08118#0 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 1 | Department of Computer Science and Applied Cognitive Science
# University of Duisburg-Essen [email protected]
# Abstract
Some users of social media are spreading racist, sexist, and otherwise hateful content. For the purpose of training a hate speech detection system, the reliability of the annotations is crucial, but there is no universally agreed-upon definition. We collected potentially hateful messages and asked two groups of internet users to determine whether they were hate speech or not, whether they should be banned or not and to rate their degree of offensiveness. One of the groups was shown a definition prior to completing the survey. We aimed to assess whether hate speech can be annotated reliably, and the extent to which existing definitions are in accordance with subjective ratings. Our results indicate that showing users a definition caused them to partially align their own opinion with the definition but did not improve reliability, which was very low overall. We conclude that the presence of hate speech should perhaps not be considered a binary yes-or-no decision, and raters need more detailed instructions for the annotation.
and report it to the relevant authorities. It would also make it easier for researchers to examine the diffusion of hateful content through social media on a large scale. | 1701.08118#1 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 2 | and report it to the relevant authorities. It would also make it easier for researchers to examine the diffusion of hateful content through social media on a large scale.
From a natural language processing perspective, hate speech detection can be considered a classification task: given an utterance, determine whether or not it contains hate speech. Training a classifier requires a large amount of data that is unambiguously hate speech. This data is typically obtained by manually annotating a set of texts based on whether a certain element contains hate speech.
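To make the classification setup concrete, the following is an illustrative sketch (not the system or data from this study) of how such a classifier could be trained once reliably annotated data exist; the `texts` and `labels` placeholders are hypothetical.

```python
# Illustrative sketch only (not the authors' system). `texts` and `labels`
# are hypothetical placeholders for an annotated German tweet corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["placeholder tweet a", "placeholder tweet b",
         "placeholder tweet c", "placeholder tweet d"]
labels = [0, 1, 0, 1]  # 1 = annotated as hate speech, 0 = not

# Character n-grams are a common choice for the noisy spelling found in tweets.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["another placeholder tweet"]))
```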
The reliability of the human annotations is essential, both to ensure that the algorithm can accurately learn the characteristics of hate speech, and as an upper bound on the expected performance (Warner and Hirschberg, 2012; Waseem and Hovy, 2016). As a preliminary step, six annotators rated 469 tweets. We found that agreement was very low (see Section 3). We then carried out group discussions to find possible reasons. They revealed that there is considerable ambiguity in existing definitions. A given statement may be considered hate speech or not depending on someone's cultural background and personal sensibilities. The wording of the question may also play a role.
# Introduction | 1701.08118#2 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 3 | # Introduction
Social media are sometimes used to disseminate hateful messages. In Europe, the current surge in hate speech has been linked to the ongoing refugee crisis. Lawmakers and social media sites are increasingly aware of the problem and are developing approaches to deal with it, for example promising to remove illegal messages within 24 hours after they are reported (Titcomb, 2016).
We decided to investigate the issue of reliability further by conducting a more comprehensive study across a large number of annotators, which we present in this paper.
Our contribution in this paper is threefold:
⢠To the best of our knowledge, this paper presents the ï¬rst attempt at compiling a Ger- man hate speech corpus for the refugee crisis.1
⢠We provide an estimate of the reliability of hate speech annotations.
This raises the question of how hate speech can be detected automatically. Such an automatic detec- tion method could be used to scan the large amount of text generated on the internet for hateful content
⢠We investigate how the reliability of the anno- tations is affected by the exact question asked.
1Available at https://github.com/UCSM-DUE/ IWG_hatespeech_public
# 2 Hate Speech | 1701.08118#3 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 4 | 1 Available at https://github.com/UCSM-DUE/IWG_hatespeech_public
# 2 Hate Speech
For the purpose of building a classifier, Warner and Hirschberg (2012) define hate speech as "abusive speech targeting specific group characteristics, such as ethnic origin, religion, gender, or sexual orientation". More recent approaches rely on lists of guidelines such as a tweet being hate speech if it "uses a sexist or racial slur" (Waseem and Hovy, 2016). These approaches are similar in that they leave plenty of room for personal interpretation, since there may be differences in what is considered offensive. For instance, while the utterance "the refugees will live off our money" is clearly generalising and maybe unfair, it is unclear if this is already hate speech. More precise definitions from law are specific to certain jurisdictions and therefore do not capture all forms of offensive, hateful speech, see e.g. Matsuda (1993). In practice, social media services are using their own definitions which have been subject to adjustments over the years (Jeong, 2016). As of June 2016, Twitter bans hateful conduct2. | 1701.08118#4 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 5 | With the rise in popularity of social media, the presence of hate speech has grown on the internet. Posting a tweet takes little more than a working internet connection but may be seen by users all over the world.
Along with the presence of hate speech, its real-life consequences are also growing. It can be a precursor and incentive for hate crimes, and it can be so severe that it can even be a health issue (Burnap and Williams, 2014). It is also known that hate speech does not only mirror existing opinions in the reader but can also induce new negative feelings towards its targets (Martin et al., 2013). Hate speech has recently gained some interest as a research topic on the one hand, e.g. (Djuric et al., 2014; Burnap and Williams, 2014; Silva et al., 2016), but also as a problem to deal with in politics such as the No Hate Speech Movement by the Council of Europe.
The current refugee crisis has made it evident that governments, organisations and the public share an interest in controlling hate speech in social media. However, there seems to be little consensus on what hate speech actually is. | 1701.08118#5 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 6 | 2 "You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.", The Twitter Rules
# 3 Compiling A Hate Speech Corpus | 1701.08118#6 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 7 | As previously mentioned, there is no German hate speech corpus available for our needs, especially not for the very recent topic of the refugee crisis in Europe. We therefore had to compile our own corpus. We used Twitter as a source as it offers recent comments on current events. In our study we only considered the textual content of tweets that contain certain keywords, ignoring those that contain pictures or links. This section provides a detailed description of the approach we used to select the tweets and subsequently annotate them. To find a large amount of hate speech on the refugee crisis, we used 10 hashtags3 that can be used in an insulting or offensive way. Using these hashtags we gathered 13 766 tweets in total, roughly dating from February to March 2016. However, these tweets contained a lot of non-textual content which we filtered out automatically by removing tweets consisting solely of links or images. We also only considered original tweets, as retweets or replies to other tweets might only be clearly understandable when reading both tweets together. In addition, we removed duplicates and near-duplicates by discarding tweets that had a nor- | 1701.08118#7 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 8 | clearly understandable when reading both tweets together. In addition, we removed duplicates and near-duplicates by discarding tweets that had a normalised Levenshtein edit distance smaller than .85 to an aforementioned tweet. A first inspection of the remaining tweets indicated that not all search terms were equally suited for our needs. The search term #Pack (vermin or lowlife) found a potentially large amount of hate speech not directly linked to the refugee crisis. It was therefore discarded. As a last step, the remaining tweets were manually read to eliminate those which were difficult to understand or incomprehensible. After these filtering steps, our corpus consists of 541 tweets, none of which are duplicates, contain links or pictures, or are retweets or replies. | 1701.08118#8 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
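For illustration, the near-duplicate filtering step described in the corpus section above can be sketched as follows. This is not the authors' code: the normalisation (edit distance divided by the longer length) and the greedy keep-first strategy are assumptions, and the threshold direction simply follows the wording of the text.

```python
# Illustrative sketch (not the authors' code): greedy near-duplicate removal
# using a normalised Levenshtein distance.
def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalised_distance(a: str, b: str) -> float:
    # Assumed normalisation: edit distance divided by the longer string length.
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

def filter_near_duplicates(tweets, threshold=0.85):
    kept = []
    for t in tweets:
        # Discard t if its distance to any previously kept tweet is
        # "smaller than .85", following the wording in the text above.
        if all(normalised_distance(t, k) >= threshold for k in kept):
            kept.append(t)
    return kept
```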
1701.08118 | 9 | As a first measurement of the frequency of hate speech in our corpus, we personally annotated them based on our previous expertise. The 541 tweets were split into six parts and each part was annotated by two out of six annotators in order to determine if hate speech was present or not. The annotators were rotated so that each pair of annotators only evaluated one part. Additionally the offensiveness of a tweet was rated on a 6-point Likert scale, the same scale used later in the study.
3 #Pack, #Rapefugees, #Islamisierung, #AsylantenInvasion, #Scharia, #Aslyanten, #WehrDich, #Krimmigranten, #RefugeesNotWelcome, #Islamfaschisten
Even among researchers familiar with the definitions outlined above, there was still a low level of agreement (Krippendorff's α = .38). This supports our claim that a clearer definition is necessary in order to be able to train a reliable classifier. The low reliability could of course be explained by varying personal attitudes or backgrounds, but clearly needs more consideration.
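For illustration, an agreement figure of this kind can be computed with NLTK's AnnotationTask (assuming NLTK is available); the annotator/label triples below are hypothetical and do not reproduce the α = .38 reported above.

```python
# Illustrative sketch: Krippendorff's alpha for binary hate speech judgements.
# The (annotator, item, label) triples are hypothetical example data.
from nltk.metrics.agreement import AnnotationTask

data = [
    ("a1", "tweet_01", "hate"),     ("a2", "tweet_01", "not_hate"),
    ("a1", "tweet_02", "not_hate"), ("a2", "tweet_02", "not_hate"),
    ("a1", "tweet_03", "hate"),     ("a2", "tweet_03", "hate"),
]

task = AnnotationTask(data=data)  # default binary distance suits nominal labels
print("Krippendorff's alpha:", task.alpha())
```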
# 4 Methods | 1701.08118#9 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 10 | # 4 Methods
In order to assess the reliability of the hate speech definitions on social media more comprehensively, we developed two online surveys in a between-subjects design. They were completed by 56 participants in total (see Table 1). The main goal was to examine the extent to which non-experts agree upon their understanding of hate speech given a diversity of social media content. We used the Twitter definition of hateful conduct in the first survey. This definition was presented at the beginning, and again above every tweet. The second survey did not contain any definition. Participants were randomly assigned one of the two surveys. | 1701.08118#10 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 11 | The surveys consisted of 20 tweets presented in a random order. For each tweet, each participant was asked three questions. Depending on the survey, participants were asked (1) to answer (yes/no) if they considered the tweet hate speech, either based on the definition or based on their personal opinion. Afterwards they were asked (2) to answer (yes/no) if the tweet should be banned from Twitter. Participants were finally asked (3) to answer how offensive they thought the tweet was on a 6-point Likert scale from 1 (Not offensive at all) to 6 (Very offensive). If they answered 4 or higher, the participants had the option to state which particular words they found offensive.
After the annotation of the 20 tweets, participants were asked to voluntarily answer an open question regarding the definition of hate speech. In the survey with the definition, they were asked if the definition of Twitter was sufficient. In the survey without the definition, the participants were asked to suggest a definition themselves. Finally, sociodemographic data were collected, including age, gender and more specific information regarding the participant's political orientation, migration background, and personal position regarding the refugee situation in Europe. | 1701.08118#11 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 12 | The surveys were approved by the ethical committee of the Department of Computer Science and Applied Cognitive Science of the Faculty of Engineering at the University of Duisburg-Essen.
# 5 Preliminary Results and Discussion
Since the surveys were completed by 56 participants, they resulted in 1120 annotations. Table 1 shows some summary statistics.
                      Def.    No def.    p      r
Participants          25      31
Age (mean)            33.3    30.5
Gender (% female)     43.5    58.6
Hate Speech (% yes)   32.6    40.3      .26    .15
Ban (% yes)           32.6    17.6      .01   -.32
Offensive (mean)      3.49    3.42      .55   -.08
Table 1: Summary statistics with p values and effect size estimates from WMW tests. Not all participants chose to report their age or gender.
To assess whether the definition had any effect, we calculated, for each participant, the percentage of tweets they considered hate speech or suggested to ban and their mean offensiveness rating. This allowed us to compare the two samples for each of the three questions. Preliminary Shapiro-Wilk tests indicated that some of the data were not normally distributed. We therefore used the Wilcoxon-Mann-Whitney (WMW) test to compare the three pairs of series. The results are reported in Table 1. | 1701.08118#12 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
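The analysis described above can be illustrated with SciPy as follows; the per-participant arrays are hypothetical, and the rank-biserial correlation used as the effect size r is one common convention, since the paper does not state which estimator was used.

```python
# Illustrative sketch of the reported analysis, using SciPy.
# `def_group` and `nodef_group` are hypothetical per-participant scores
# (e.g. each participant's percentage of tweets labelled as hate speech).
import numpy as np
from scipy import stats

def_group = np.array([40.0, 35.0, 20.0, 45.0, 30.0])
nodef_group = np.array([55.0, 25.0, 40.0, 50.0, 60.0])

# Normality checks motivating a non-parametric test.
print(stats.shapiro(def_group))
print(stats.shapiro(nodef_group))

# Wilcoxon-Mann-Whitney (Mann-Whitney U) test.
u_stat, p_value = stats.mannwhitneyu(def_group, nodef_group, alternative="two-sided")

# Rank-biserial correlation as an effect size estimate (an assumption here,
# since the paper does not state which estimator was used).
n1, n2 = len(def_group), len(nodef_group)
r = 1 - (2 * u_stat) / (n1 * n2)
print(u_stat, p_value, r)
```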
1701.08118 | 13 | Participants who were shown the definition were more likely to suggest to ban the tweet. In fact, participants in group one very rarely gave different answers to questions one and two (18 of 500 instances or 3.6%). This suggests that participants in that group aligned their own opinion with the definition.
We chose Krippendorffâs α to assess reliabil- ity, a measure from content analysis, where human coders are required to be interchangeable. There- fore, it measures agreement instead of association, which leaves no room for the individual predilec- tions of coders. It can be applied to any number of coders and to interval as well as nominal data. (Krippendorff, 2004)
This allowed us to compare agreement between both groups for all three questions. Figure 1 visualises the results. Overall, agreement was very low, ranging from α = .18 to .29. In contrast, for the purpose of content analysis, Krippendorff recommends a minimum of α = .80, or a minimum of .66 for applications where some uncertainty is unproblematic.
Figure 1: Reliability (Krippendorff's α) for the different groups and questions | 1701.08118#13 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 14 | Figure 1: Reliability (Krippendorff's α) for the different groups and questions
problematic (Krippendorff, 2004). Reliability did not consistently increase when participants were shown a definition.
To measure the extent to which the annotations using the Twitter definition (question one in group one) were in accordance with participants' opinions (question one in group two), we calculated, for each tweet, the percentage of participants in each group who considered it hate speech, and then calculated Pearson's correlation coefficient. The two series correlate strongly (r = .895, p < .0001), indicating that they measure the same underlying construct.
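A minimal sketch of such a correlation check, assuming SciPy and using invented per-tweet percentages rather than the survey data:

```python
# Correlating the share of "hate speech" judgements per tweet between the two
# groups; the percentages are placeholders, not the real data.
from scipy.stats import pearsonr

pct_group_one = [80, 10, 55, 30, 95, 20, 60, 45]   # % labelling each tweet as hate speech
pct_group_two = [75, 15, 60, 25, 90, 30, 55, 50]

r, p = pearsonr(pct_group_one, pct_group_two)
print(f"r = {r:.3f}, p = {p:.4f}")
```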
# 6 Conclusion and Future Work
This paper describes the creation of our hate speech corpus and offers first insights into the low agreement among users when it comes to identifying hateful messages. Our results imply that hate speech is a vague concept that requires significantly better definitions and guidelines in order to be annotated reliably. Based on the present findings, we are planning to develop a new coding scheme which includes clear-cut criteria that let people distinguish hate speech from other content. | 1701.08118#14 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 15 | Researchers who are building a hate speech detection system might want to collect multiple labels for each tweet and average the results. Of course this approach does not make the original data any more reliable (Krippendorff, 2004). Yet, collecting the opinions of more users gives a more detailed picture of objective (or intersubjective) hatefulness. For the same reason, researchers might want to consider hate speech detection a regression problem, predicting, for example, the degree of hatefulness of a message, instead of a binary yes-or-no classification task.
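A minimal sketch of this regression framing, with invented ratings and features and scikit-learn assumed, could look like this:

```python
# Averaging several annotators' offensiveness ratings per message and fitting a
# regressor on them, rather than forcing a binary hate-speech label.
# The toy features/ratings here are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

# rows = messages, columns = annotators (1-6 offensiveness ratings)
ratings = np.array([[6, 5, 6],
                    [1, 2, 1],
                    [4, 3, 5],
                    [2, 2, 3]])
y = ratings.mean(axis=1)              # degree of hatefulness as a continuous target

X = np.array([[3, 1],                 # toy per-message features (e.g. slur count, length)
              [0, 4],
              [2, 2],
              [1, 3]])

model = Ridge(alpha=1.0).fit(X, y)
print(model.predict([[2, 2]]))        # predicted hatefulness for a new message
```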
In the future, finding the characteristics that make users consider content hateful will be useful for building a model that automatically detects hate speech and users who spread hateful content, and for determining what makes users disseminate hateful content.
# Acknowledgments
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant No. GRK 2167, Research Training Group "User-Centred Social Media".
# References
Peter Burnap and Matthew Leighton Williams. 2014. Hate Speech, Machine Classification and Statistical Modelling of Information Flows on Twitter: Interpretation and Communication for Policy Decision Making. In Proceedings of IPP 2014, pages 1–18. | 1701.08118#15 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 16 | Nemanja Djuric, Robin Morris, Jing Zhou, Mihajlo Grbovic, Vladan Radosavljevic, and Narayan Bhamidipati. 2014. Hate Speech Detection with Comment Embeddings. In ICML 2014, volume 32, pages 1188–1196.
Sarah Jeong. 2016. The History of Twitter's Rules. VICE Motherboard.
Klaus Krippendorff. 2004. Reliability in Content Analysis: Some Common Misconceptions and Recommendations. HCR, 30(3):411–433.
Ryan C Martin, Kelsey Ryan Coyier, Leah M VanSistine, and Kelly L Schroeder. 2013. Anger on the Internet: the Perceived Value of Rant-Sites. Cyberpsychology, behavior and social networking, 16(2):119–22.
Mari J Matsuda. 1993. Words that Wound - Critical Race Theory, Assaultive Speech, and the First Amendment. Westview Press, New York.
Leandro Silva, Mainack Mondal, Denzil Correa, Fabrício Benevenuto, and Ingmar Weber. 2016. Analyzing the Targets of Hate in Online Social Media. In Proceedings of ICWSM 2016, pages 687–90. | 1701.08118#16 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.08118 | 17 | James Titcomb. 2016. Facebook and Twitter promise to crack down on internet hate speech. The Telegraph.
William Warner and Julia Hirschberg. 2012. Detecting Hate Speech on the World Wide Web. In Proceedings of LSM 2012, pages 19–26. ACL.
Zeerak Waseem and Dirk Hovy. 2016. Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter. In Proceedings of NAACL-HLT, pages 88–93. | 1701.08118#17 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise
hateful content. For the purpose of training a hate speech detection system,
the reliability of the annotations is crucial, but there is no universally
agreed-upon definition. We collected potentially hateful messages and asked two
groups of internet users to determine whether they were hate speech or not,
whether they should be banned or not and to rate their degree of offensiveness.
One of the groups was shown a definition prior to completing the survey. We
aimed to assess whether hate speech can be annotated reliably, and the extent
to which existing definitions are in accordance with subjective ratings. Our
results indicate that showing users a definition caused them to partially align
their own opinion with the definition but did not improve reliability, which
was very low overall. We conclude that the presence of hate speech should
perhaps not be considered a binary yes-or-no decision, and raters need more
detailed instructions for the annotation. | http://arxiv.org/pdf/1701.08118 | Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki | cs.CL | null | Proceedings of NLP4CMC III: 3rd Workshop on Natural Language
Processing for Computer-Mediated Communication (Bochum), Bochumer
Linguistische Arbeitsberichte, vol. 17, sep 2016, pp. 6-9 | cs.CL | 20170127 | 20170127 | [] |
1701.07274 | 1 | # DEEP REINFORCEMENT LEARNING: AN OVERVIEW
# Yuxi Li ([email protected])
# ABSTRACT
We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model and planning, exploration, and knowledge. After that, we discuss important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, natural language processing, including dialogue systems, machine translation, and text generation, computer vision, business management, finance, healthcare, education, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet, and list a collection of RL resources. After presenting a brief summary, we close with discussions. This is the first overview about deep reinforcement learning publicly available online. It is comprehensive. Comments and criticisms are welcome. (This particular version is incomplete.) | 1701.07274#1 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 3 | 2.3.1 Problem Setup
2.3.2 Exploration vs Exploitation
2.3.3 Value Function
2.3.4 Dynamic Programming
2.3.5 Temporal Difference Learning
2.3.6 Multi-step Bootstrapping
2.3.7 Function | 1701.07274#3 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 4 | Multi-step Bootstrapping
2.3.7 Function Approximation
2.3.8 Policy Optimization
2.3.9 Deep Reinforcement Learning
2.3.10 RL Parlance
2.3.11 Brief Summary
3.1.1 Deep Q-Network (DQN) And Extensions | 1701.07274#4 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 5 | 3.1.1 Deep Q-Network (DQN) And Extensions
3.2.1 Actor-Critic
3.2.2 Policy Gradient
3.2.3 Combining Policy Gradient with Off-Policy RL | 1701.07274#5 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 7 | # 2 Background
# 2.1 Machine Learning
# 2.2 Deep Learning
2.3 Reinforcement Learning
# 3 Core Elements
# 3.1 Value Function
# 3.2 Policy
# 3.3 Reward
# 3.4 Model and Planning
# 3.5 Exploration
# 3.6 Knowledge
# 4 Important Mechanisms
# 4.1 Attention and Memory
# 4.2 Unsupervised Learning
4.2.1 Horde
4.2.2 Unsupervised Auxiliary Learning
# 4.2.3 Generative Adversarial Networks
# 4.3 Transfer Learning | 1701.07274#7 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 8 | # 4.3 Transfer Learning
4.4 Multi-Agent Reinforcement Learning
4.5 Hierarchical Reinforcement Learning
# 4.6 Learning to Learn
4.6.1 Learning to Learn/Optimize
4.6.2 Zero/One/Few-Shot Learning
4.6.3 Neural Architecture Design
# 5 Applications | 1701.07274#8 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 9 | 5.1.1 Perfect Information Board Games
5.1.2 Imperfect Information Board Games
5.1.3 Video Games
5.2.1 Guided Policy Search
5.2.2 Learn to Navigate
5.3.1 Dialogue Systems | 1701.07274#9 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 10 | 5.3.1 Dialogue Systems
5.3.2 Machine Translation
5.3.3 Text Generation
5.4.1 Background
5.4.2 Recognition
5.4.3 Motion Analysis
5.4.4 Scene Understanding | 1701.07274#10 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 11 | 5.4.4 Scene Understanding
5.4.5 Integration with NLP
5.4.6 Visual Control | 1701.07274#11 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 12 | Industry 4.0
5.12.1 Resource Allocation
5.12.2 Performance Optimization
5.12.3 Security & Privacy | 1701.07274#12 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 14 | 7.1 Books
7.2 More Books
7.3 Surveys and Reports
7.4 Courses
7.5 Tutorials
7.6 Conferences, Journals and Workshops
7.7 Blogs | 1701.07274#14 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 16 | # 7 Resources
# 8 Brief Summary
# 9 Discussions
# INTRODUCTION
Reinforcement learning (RL) is about an agent interacting with the environment, learning an optimal policy, by trial and error, for sequential decision making problems in a wide range of fields in both natural and social sciences, and engineering (Sutton and Barto, 1998; 2018; Bertsekas and Tsitsiklis, 1996; Bertsekas, 2012; Szepesvári, 2010; Powell, 2011).
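This trial-and-error interaction can be pictured as a simple loop in which the agent acts, the environment returns a reward and the next state, and the agent uses that feedback to improve its policy; the toy environment and random policy below are purely illustrative assumptions, not anything prescribed by the overview:

```python
# A bare-bones agent-environment interaction loop: at each step the agent picks
# an action, the environment returns a reward and the next state, and the agent
# could use that feedback to improve its policy. Everything here is a toy sketch.
import random

class ToyEnv:
    """Two-state environment in which matching the current state is rewarded."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        reward = 1.0 if action == self.state else 0.0
        self.state = random.choice([0, 1])          # next state
        done = False
        return self.state, reward, done

env = ToyEnv()
state = env.reset()
total_reward = 0.0
for t in range(100):
    action = random.choice([0, 1])   # a (bad) random policy; an RL agent would learn a better one
    state, reward, done = env.step(action)
    total_reward += reward
print(total_reward)
```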
The integration of reinforcement learning and neural networks has a long history (Sutton and Barto, 2018; Bertsekas and Tsitsiklis, 1996; Schmidhuber, 2015). With recent exciting achievements of deep learning (LeCun et al., 2015; Goodfellow et al., 2016), benefiting from big data, powerful computation, new algorithmic techniques, mature software packages and architectures, and strong financial support, we have been witnessing the renaissance of reinforcement learning (Krakovsky, 2016), especially, the combination of deep neural networks and reinforcement learning, i.e., deep reinforcement learning (deep RL). | 1701.07274#16 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 17 | Deep learning, or deep neural networks, has been prevailing in reinforcement learning in the last several years, in games, robotics, natural language processing, etc. We have been witnessing breakthroughs, like deep Q-network (Mnih et al., 2015) and AlphaGo (Silver et al., 2016a); and novel architectures and applications, like differentiable neural computer (Graves et al., 2016), asynchronous methods (Mnih et al., 2016), dueling network architectures (Wang et al., 2016b), value iteration networks (Tamar et al., 2016), unsupervised reinforcement and auxiliary learning (Jaderberg et al., 2017; Mirowski et al., 2017), neural architecture design (Zoph and Le, 2017), dual learning for machine translation (He et al., 2016a), spoken dialogue systems (Su et al., 2016b), information extraction (Narasimhan et al., 2016), guided policy search (Levine et al., 2016a), and generative adversarial imitation learning (Ho and Ermon, 2016), etc. Creativity would push the frontiers of deep RL further with respect to core elements, mechanisms, and applications. | 1701.07274#17 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 18 | Why has deep learning been helping reinforcement learning make so many and so enormous achievements? Representation learning with deep learning enables automatic feature engineering and end-to-end learning through gradient descent, so that reliance on domain knowledge is significantly reduced or even removed. Feature engineering used to be done manually and is usually time-consuming, over-specified, and incomplete. Deep, distributed representations exploit the hierarchical composition of factors in data to combat the exponential challenges of the curse of dimensionality. Generality, expressiveness and flexibility of deep neural networks make some tasks easier or possible, e.g., in the breakthroughs and novel architectures and applications discussed above.
Deep learning, as a specific class of machine learning, is not without limitations, e.g., it is a black box lacking interpretability, an "alchemy" without clear and sufficient scientific principles to work with, and, unlike human intelligence, it cannot yet compete with a baby in some tasks. However, there are lots of works to improve deep learning, machine learning, and AI in general. | 1701.07274#18 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 19 | Deep learning and reinforcement learning, being selected as one of the MIT Technology Review 10 Breakthrough Technologies in 2013 and 2017 respectively, will play their crucial role in achieving artificial general intelligence. David Silver, the major contributor of AlphaGo (Silver et al., 2016a; 2017), even made a formula: artificial intelligence = reinforcement learning + deep learning (Silver, 2016). | 1701.07274#19 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 20 | The outline of this overview follows. First we discuss background of machine learning, deep learning and reinforcement learning in Section 2. Next we discuss core RL elements, including value function in Section 3.1, policy in Section 3.2, reward in Section 3.3, model and planning in Section 3.4, exploration in Section 3.5, and knowledge in Section 3.6. Then we discuss important mechanisms for RL, including attention and memory in Section 4.1, unsupervised learning in Section 4.2, transfer learning in Section 4.3, multi-agent RL in Section 4.4, hierarchical RL in Section 4.5, and learning to learn in Section 4.6. After that, we discuss various RL applications, including games in Section 5.1, robotics in Section 5.2, natural language processing in Section 5.3, computer vision in Section 5.4, business management in Section 5.5, finance in Section 5.6, healthcare in Section 5.7, education in Section 5.8, Industry 4.0 in Section 5.9, smart grid in Section 5.10, intelligent transportation systems in Section 5.11, and computer systems in Section 5.12. We present a list of topics not reviewed yet in Section 6, give a brief summary in Section 8, and close with discussions in Section 9.
5 | 1701.07274#20 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 22 | Figure 1: Conceptual Organization of the Overview
In Section 7, we list a collection of RL resources including books, surveys, reports, online courses, tutorials, conferences, journals and workshops, blogs, and open sources. If picking a single RL resource, it is Sutton and Barto's RL book (Sutton and Barto, 2018), 2nd edition in preparation. It covers RL fundamentals and reflects new progress, e.g., in deep Q-network, AlphaGo, policy gradient methods, as well as in psychology and neuroscience. Deng and Dong (2014) and Goodfellow et al. (2016) are recent deep learning books. Bishop (2011), Hastie et al. (2009), and Murphy (2012) are popular machine learning textbooks; James et al. (2013) gives an introduction to machine learning; Provost and Fawcett (2013) and Kuhn and Johnson (2013) discuss practical issues in machine learning applications; and Simeone (2017) is a brief introduction to machine learning for engineers. | 1701.07274#22 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 23 | Figure 1 illustrates the conceptual organization of the overview. The agent-environment interaction sits in the center, around which are core elements: value function, policy, reward, model and planning, exploration, and knowledge. Next come important mechanisms: attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then come various applications: games, robotics, NLP (natural language processing), computer vision, business management, finance, healthcare, education, Industry 4.0, smart grid, ITS (intelligent transportation systems), and computer systems.
The main readers of this overview would be those who want to get more familiar with deep reinforcement learning. We endeavour to provide as much relevant information as possible. For reinforcement learning experts, as well as newcomers, we hope this overview would be helpful as a reference. In this overview, we mainly focus on contemporary work from the recent couple of years, by no means complete, and make slight effort for discussions of historical context, for which the best material to consult is Sutton and Barto (2018). | 1701.07274#23 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 24 | In this version, we endeavour to provide a wide coverage of fundamental and contemporary RL issues, about core elements, important mechanisms, and applications. In the future, besides further refinements for the width, we will also improve the depth by conducting deeper analysis of the issues involved and the papers discussed. Comments and criticisms are welcome.
# 2 BACKGROUND
In this section, we briefly introduce concepts and fundamentals in machine learning, deep learning (Goodfellow et al., 2016) and reinforcement learning (Sutton and Barto, 2018). We do not give detailed background introduction for machine learning and deep learning. Instead, we recommend the following recent Nature/Science survey papers: Jordan and Mitchell (2015) for machine learning, and LeCun et al. (2015) for deep learning. We cover some RL basics. However, we recommend the textbook, Sutton and Barto (2018), and the recent Nature survey paper, Littman (2015), for reinforcement learning. We also collect relevant resources in Section 7.
2.1 MACHINE LEARNING
Machine learning is about learning from data and making predictions and/or decisions. | 1701.07274#24 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | http://arxiv.org/pdf/1701.07274 | Yuxi Li | cs.LG | Please see Deep Reinforcement Learning, arXiv:1810.06339, for a
significant update | null | cs.LG | 20170125 | 20181126 | [] |
1701.07274 | 25 | 2.1 MACHINE LEARNING
Machine learning is about learning from data and making predictions and/or decisions.
Usually we categorize machine learning as supervised, unsupervised, and reinforcement learning. In supervised learning, there are labeled data; in unsupervised learning, there are no labeled data; and in reinforcement learning, there is evaluative feedback, but no supervised signal. Classification and regression are two types of supervised learning problems, with categorical and numerical outputs respectively.
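A toy sketch of these two supervised problem types, assuming scikit-learn and invented data (a categorical target for classification, a numerical target for regression):

```python
# The same inputs with a categorical output (classification) versus a
# numerical output (regression); data and model choices are illustrative only.
from sklearn.linear_model import LogisticRegression, LinearRegression

X = [[1.0], [2.0], [3.0], [4.0]]
y_class = ["neg", "neg", "pos", "pos"]   # categorical output -> classification
y_reg = [1.1, 1.9, 3.2, 3.9]             # numerical output -> regression

clf = LogisticRegression().fit(X, y_class)
reg = LinearRegression().fit(X, y_reg)
print(clf.predict([[2.5]]), reg.predict([[2.5]]))
```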
Unsupervised learning attempts to extract information from data without labels, e.g., clustering and density estimation. Representation learning is a classical type of unsupervised learning. However, training feedforward networks or convolutional neural networks with supervised learning is a kind of representation learning. Representation learning finds a representation to preserve as much information about the original data as possible, at the same time, to keep the representation simpler or more accessible than the original data, with low-dimensional, sparse, and independent representations. | 1701.07274#25 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
Deep learning, or deep neural networks, is a particular machine learning scheme, usually for supervised or unsupervised learning, and can be integrated with reinforcement learning, usually as a function approximator. Supervised and unsupervised learning are usually one-shot and myopic, considering instant reward, while reinforcement learning is sequential and far-sighted, considering long-term accumulative reward.
Machine learning is based on probability theory and statistics (Hastie et al., 2009) and optimization (Boyd and Vandenberghe, 2004). It is the basis for big data, data science (Blei and Smyth, 2017; Provost and Fawcett, 2013), predictive modeling (Kuhn and Johnson, 2013), data mining, information retrieval (Manning et al., 2008), etc., and is becoming a critical ingredient for computer vision, natural language processing, robotics, and other fields. Reinforcement learning is kin to optimal control (Bertsekas, 2012) and to operations research and management (Powell, 2011), and is also related to psychology and neuroscience (Sutton and Barto, 2018). Machine learning is a subset of artificial intelligence (AI), and is evolving to be critical for all fields of AI.
A machine learning algorithm is composed of a dataset, a cost/loss function, an optimization procedure, and a model (Goodfellow et al., 2016). A dataset is divided into non-overlapping training, validation, and testing subsets. A cost/loss function measures the model performance, e.g., with respect to accuracy, like mean square error in regression and classification error rate. Training error measures the error on the training data; minimizing it is an optimization problem. Generalization error, or test error, measures the error on new input data, which differentiates machine learning from pure optimization. A machine learning algorithm tries to make both the training error and the gap between training error and testing error small. A model is under-fitting if it cannot achieve a low training error; a model is over-fitting if the gap between training error and test error is large.
A model's capacity measures the range of functions it can fit. VC dimension measures the capacity of a binary classifier. Occam's Razor states that, with the same expressiveness, simple models are preferred. Training error and generalization error versus model capacity usually form a U-shape relationship; we look for the optimal capacity to achieve low training error and a small gap between training error and generalization error. Bias measures the expected deviation of the estimator from the true value, while variance measures the deviation of the estimator from the expected value, i.e., the variance of the estimator. As model capacity increases, bias tends to decrease, while variance tends to
1 Is reinforcement learning part of machine learning, or more than it, and somewhere close to artificial intelligence? We raise this question without elaboration.
increase, yielding another U-shape relationship between generalization error and model capacity. We try to find the optimal capacity point, to the left of which under-fitting occurs and to the right of which over-fitting occurs. Regularization adds a penalty term to the cost function to reduce the generalization error, but not the training error. The no free lunch theorem states that there is no universally best model, or best regularizer; an implication is that deep learning may not be the best model for some problems. Besides model parameters, there are hyperparameters for model capacity and regularization. Cross-validation is used to tune hyperparameters, to strike a balance between bias and variance, and to select the optimal model.
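For example (our own notation, not taken from the original text), a typical regularized objective adds a weighted penalty to the training loss: J(θ) = L(θ) + λ Ω(θ), with, e.g., Ω(θ) = ||θ||_2^2 for weight decay (L2) or Ω(θ) = ||θ||_1 for sparsity (L1); the coefficient λ is a hyperparameter that can be tuned by cross-validation.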
Maximum likelihood estimation (MLE) is a common approach to derive good estimates of parameters. To avoid issues like numerical underflow, the product in MLE is converted to a summation to obtain the negative log-likelihood (NLL). MLE is equivalent to minimizing the KL divergence, i.e., the dissimilarity between the empirical distribution defined by the training data and the model distribution. Minimizing the KL divergence between two distributions corresponds to minimizing the cross-entropy between the distributions. In short, maximization of likelihood becomes minimization of the negative log-likelihood (NLL), or equivalently, minimization of cross-entropy.
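As a minimal illustration (our own sketch with made-up numbers, assuming NumPy), the equivalence between maximizing likelihood and minimizing NLL/cross-entropy for a categorical model can be seen directly:

import numpy as np

# predicted class probabilities for 4 samples over 3 classes (made-up model outputs)
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6],
                  [0.3, 0.4, 0.3]])
labels = np.array([0, 1, 2, 1])  # observed classes

log_likelihood = np.sum(np.log(probs[np.arange(len(labels)), labels]))
nll = -log_likelihood                 # negative log-likelihood
cross_entropy = nll / len(labels)     # average cross-entropy with one-hot targets
# maximizing log_likelihood is exactly minimizing nll (and cross_entropy up to a constant factor)
print(nll, cross_entropy)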
Gradient descent is a common approach to solve optimization problems. Stochastic gradient descent extends gradient descent by working with a single sample each time, and usually with minibatches.
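A minimal minibatch SGD sketch (our own illustration; grad_fn and the data format are assumptions, not from the original text):

import numpy as np

def sgd(params, grad_fn, data, lr=0.1, batch_size=32, epochs=10):
    # grad_fn(params, batch) is assumed to return the average gradient over the batch
    n = len(data)
    for _ in range(epochs):
        np.random.shuffle(data)
        for i in range(0, n, batch_size):
            batch = data[i:i + batch_size]
            params = params - lr * grad_fn(params, batch)  # gradient step on a minibatch
    return params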
Importance sampling is a technique to estimate properties of a particular distribution, by samples from a different distribution, to lower the variance of the estimation, or when sampling from the distribution of interest is difficult.
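For instance (our own sketch; the target p and proposal q below are chosen only for illustration), an importance-sampling estimate of E_p[f(x)] from samples of q weights each sample by p(x)/q(x):

import numpy as np

rng = np.random.default_rng(0)

# target p = N(1, 1), proposal q = N(0, 2); estimate E_p[x^2], which equals 2
def p(x): return np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)
def q(x): return np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))

x = rng.normal(0.0, 2.0, size=100_000)  # samples from q
w = p(x) / q(x)                         # importance weights
estimate = np.mean(w * x ** 2)          # approximately E_p[x^2] = 2
print(estimate)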
Frequentist statistics estimates a single value, and characterizes variance by a confidence interval; Bayesian statistics considers the distribution of an estimate when making predictions and decisions.
Generative vs. discriminative: a generative model learns the joint distribution of inputs and labels, while a discriminative model learns the conditional distribution of labels given inputs directly.
2.2 DEEP LEARNING
Deep learning is in contrast to "shallow" learning. For many machine learning algorithms, e.g., linear regression, logistic regression, support vector machines (SVMs), decision trees, and boosting, we have an input layer and an output layer, and the inputs may be transformed with manual feature engineering before training. In deep learning, between the input and output layers, we have one or more hidden layers. At each layer except the input layer, we compute the input to each unit as the weighted sum of units from the previous layer; then we usually apply a nonlinear transformation, or activation function, such as the logistic function, tanh, or, more popular recently, the rectified linear unit (ReLU), to obtain a new representation of the input from the previous layer. We have weights on links between units from layer to layer. After computations flow forward from input to output, at the output layer and each hidden layer we can compute error derivatives backward and backpropagate gradients towards the input layer, so that weights can be updated to optimize some loss function.
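A minimal sketch of such a forward pass (our own illustration; the weights are random and the network is not trained):

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                     # input vector

# one hidden layer and one output layer with random weights
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

h = relu(W1 @ x + b1)                      # hidden representation
y = W2 @ h + b2                            # output, e.g., logits
# training would backpropagate the loss gradient through W2, then W1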
A feedforward deep neural network or multilayer perceptron (MLP) maps a set of input values to output values with a mathematical function formed by composing many simpler functions at each layer. A convolutional neural network (CNN) is a feedforward deep neural network with convolutional layers, pooling layers, and fully connected layers. CNNs are designed to process data that come in the form of multiple arrays, e.g., colour images, language, audio spectrograms, and video; they benefit from the properties of such signals: local connections, shared weights, pooling, and the use of many layers, and are inspired by simple cells and complex cells in visual neuroscience (LeCun et al., 2015). ResNets (He et al., 2016d) are designed to ease the training of very deep neural networks by adding shortcut connections to learn residual functions with reference to the layer inputs. A recurrent neural network (RNN) is often used to process sequential inputs like speech and language, element by element, with hidden units to store the history of past elements. An RNN can be seen as a
multilayer neural network with all layers sharing the same weights when unfolded in time during forward computation. It is hard for an RNN to store information for a very long time, and the gradient may vanish. Long short-term memory networks (LSTM) (Hochreiter and Schmidhuber, 1997) and the gated recurrent unit (GRU) (Chung et al., 2014) were proposed to address such issues, with gating mechanisms to manipulate information through recurrent cells. Gradient backpropagation or its variants can be used for training all deep neural networks mentioned above.
Dropout (Srivastava et al., 2014) is a regularization strategy that trains an ensemble of sub-networks by removing non-output units randomly from the original network. Batch normalization (Ioffe and Szegedy, 2015) performs normalization for each training mini-batch, to accelerate training by reducing internal covariate shift, i.e., the change of each layer's input distribution caused by the changing parameters of previous layers.
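As an illustration (our own sketch of the commonly used "inverted dropout" formulation, not code from the cited papers):

import numpy as np

_rng = np.random.default_rng(0)

def dropout(h, keep_prob=0.8, training=True):
    # randomly zero units during training and rescale ("inverted dropout"),
    # so no change is needed at test time
    if not training:
        return h
    mask = _rng.random(h.shape) < keep_prob
    return (h * mask) / keep_prob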
Deep neural networks learn representations automatically from raw inputs to recover the compositional hierarchies in many natural signals, i.e., higher-level features are composed of lower-level ones, e.g., in images, the hierarchy of objects, parts, motifs, and local combinations of edges. Distributed representation is a central idea in deep learning, which implies that many features may represent each input, and each feature may represent many inputs. The exponential advantages of deep, distributed representations combat the exponential challenges of the curse of dimensionality. The notion of end-to-end training refers to a learning model using raw inputs, without manual feature engineering, to generate outputs, e.g., AlexNet (Krizhevsky et al., 2012) with raw pixels for image classification, Seq2Seq (Sutskever et al., 2014) with raw sentences for machine translation, and DQN (Mnih et al., 2015) with raw pixels and score to play games.
2.3 REINFORCEMENT LEARNING
We provide a brief background of reinforcement learning in this section. After setting up the RL problem, we discuss value functions, temporal difference learning, function approximation, policy optimization, deep RL, and RL parlance, and close this section with a brief summary. To have a good understanding of deep reinforcement learning, it is essential to have a good understanding of reinforcement learning first.
2.3.1 PROBLEM SETUP
An RL agent interacts with an environment over time. At each time step t, the agent receives a state s_t in a state space S and selects an action a_t from an action space A, following a policy π(a_t|s_t), which is the agent's behavior, i.e., a mapping from state s_t to actions a_t. The agent then receives a scalar reward r_t and transitions to the next state s_{t+1}, according to the environment dynamics, or model, given by the reward function R(s, a) and the state transition probability P(s_{t+1}|s_t, a_t) respectively. In an episodic problem, this process continues until the agent reaches a terminal state and then it restarts. The return R_t = Σ_{k=0}^{∞} γ^k r_{t+k} is the discounted, accumulated reward with the discount factor γ ∈ (0, 1]. The agent aims to maximize the expectation of such a long-term return from each state. The problem is set up in discrete state and action spaces; it is not hard to extend it to continuous spaces.
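As a small illustration of the return (our own sketch; the reward sequence is made up):

def discounted_return(rewards, gamma=0.99):
    # R_t = sum_k gamma^k * r_{t+k}, computed here for t = 0 over a finite episode
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([0.0, 0.0, 1.0], gamma=0.9))  # 0.81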
2.3.2 EXPLORATION VS EXPLOITATION
Multi-armed bandits are the simplest setting in which the exploration vs. exploitation trade-off appears: the agent repeatedly chooses among arms with unknown reward distributions, and must balance gathering information about poorly understood arms against exploiting the arm that currently looks best.
Various exploration techniques address this trade-off, e.g., ε-greedy action selection, optimistic initialization, upper confidence bounds (UCB), and Thompson sampling.
2.3.3 VALUE FUNCTION
A value function is a prediction of the expected, accumulated, discounted future reward, measuring how good each state, or state-action pair, is. The state value v_π(s) = E[R_t | s_t = s] is the expected return for following policy π from state s. v_π(s) decomposes into the Bellman equation: v_π(s) = Σ_a π(a|s) Σ_{s',r} p(s', r|s, a) [r + γ v_π(s')]. The optimal state value v_*(s) = max_π v_π(s) is the maximum state value achievable by any policy for state s; it decomposes into the Bellman optimality equation: v_*(s) = max_a Σ_{s',r} p(s', r|s, a) [r + γ v_*(s')]. The action value q_π(s, a) = E[R_t | s_t = s, a_t = a] is the expected return for selecting action a in state s and then following policy π.
q_π(s, a) decomposes into the Bellman equation: q_π(s, a) = Σ_{s',r} p(s', r|s, a) [r + γ Σ_{a'} π(a'|s') q_π(s', a')]. The optimal action value function q_*(s, a) = max_π q_π(s, a) is the maximum action value achievable by any policy for state s and action a; it decomposes into the Bellman optimality equation: q_*(s, a) = Σ_{s',r} p(s', r|s, a) [r + γ max_{a'} q_*(s', a')]. We denote an optimal policy by π_*.
2.3.4 DYNAMIC PROGRAMMING
2.3.5 TEMPORAL DIFFERENCE LEARNING
When an RL problem satisfies the Markov property, i.e., the future depends only on the current state and action, not on the past, it is formulated as a Markov Decision Process (MDP), defined by the 5-tuple (S, A, P, R, γ). When the system model is available, we can use dynamic programming methods: policy evaluation to calculate the value/action value function for a policy, and value iteration or policy iteration to find an optimal policy. When there is no model, we resort to RL methods; RL methods also work when the model is available. Additionally, an RL environment can be a multi-armed bandit, an MDP, a POMDP, a game, etc.
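A minimal value-iteration sketch (our own illustration for a generic finite MDP; the format of the model P is an assumption, not from the original text):

import numpy as np

def value_iteration(P, n_states, n_actions, gamma=0.99, tol=1e-8):
    # P[s][a] is assumed to be a list of (prob, next_state, reward) transitions
    V = np.zeros(n_states)
    while True:
        V_new = np.array([
            max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in range(n_actions))
            for s in range(n_states)
        ])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new  # approximately the optimal state values v_*
        V = V_new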
Temporal difference (TD) learning is central in RL. TD learning usually refers to the learning methods for value function evaluation in Sutton (1988); SARSA (Sutton and Barto, 2018) and Q-learning (Watkins and Dayan, 1992) are also regarded as temporal difference learning.
TD learning (Sutton, 1988) learns the value function V(s) directly from experience with the TD error, with bootstrapping, in a model-free, online, and fully incremental way. TD learning is a prediction problem. The update rule is V(s) ← V(s) + α[r + γV(s') − V(s)], where α is a learning rate and r + γV(s') − V(s) is called the TD error. Algorithm 1 presents the pseudo code for tabular TD learning. Precisely, it is tabular TD(0) learning, where "0" indicates it is based on the one-step return.
Bootstrapping, like the TD update rule, estimates a state or action value based on subsequent estimates, and is common in RL, e.g., in TD learning, Q-learning, and actor-critic methods. Bootstrapping methods are usually faster to learn, and enable learning to be online and continual. Bootstrapping methods are not instances of true gradient descent, since the target depends on the weights to be estimated; the concept of semi-gradient descent is then introduced (Sutton and Barto, 2018).
Input: the policy π to be evaluated
Output: value function V
initialize V arbitrarily, e.g., to 0 for all states
for each episode do
    initialize state s
    for each step of episode, while state s is not terminal do
        a ← action given by π for s
        take action a, observe r, s'
        V(s) ← V(s) + α[r + γV(s') − V(s)]
        s ← s'
    end
end
Algorithm 1: TD learning, adapted from Sutton and Barto (2018)
Output: action value function Q
initialize Q arbitrarily, e.g., to 0 for all states; set the action value of terminal states to 0
for each episode do
    initialize state s
    for each step of episode, while state s is not terminal do
        a ← action for s derived from Q, e.g., ε-greedy
        take action a, observe r, s'
        a' ← action for s' derived from Q, e.g., ε-greedy
        Q(s, a) ← Q(s, a) + α[r + γQ(s', a') − Q(s, a)]
        s ← s'; a ← a'
    end
end
Algorithm 2: SARSA, adapted from Sutton and Barto (2018)
Output: action value function Q
initialize Q arbitrarily, e.g., to 0 for all states; set the action value of terminal states to 0
for each episode do
    initialize state s
    for each step of episode, while state s is not terminal do
        a ← action for s derived from Q, e.g., ε-greedy
        take action a, observe r, s'
        Q(s, a) ← Q(s, a) + α[r + γ max_{a'} Q(s', a') − Q(s, a)]
        s ← s'
    end
end
Algorithm 3: Q learning, adapted from Sutton and Barto (2018)
SARSA, representing state, action, reward, (next) state, (next) action, is an on-policy control method to find the optimal policy, with the update rule Q(s, a) ← Q(s, a) + α[r + γQ(s', a') − Q(s, a)]. Algorithm 2 presents the pseudo code for tabular SARSA, precisely tabular SARSA(0).
Q-learning is an off-policy control method to find the optimal policy. Q-learning learns the action value function, with the update rule Q(s, a) ← Q(s, a) + α[r + γ max_{a'} Q(s', a') − Q(s, a)]. Q-learning refines the policy greedily with respect to action values by the max operator. Algorithm 3 presents the pseudo code for Q-learning, precisely, tabular Q(0) learning.
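A compact tabular Q-learning sketch (our own illustration; the env interface, with reset() returning a state and step(a) returning (next_state, reward, done), is an assumption):

import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy behavior policy
            a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            target = r if done else r + gamma * np.max(Q[s_next])
            Q[s, a] += alpha * (target - Q[s, a])  # off-policy TD update
            s = s_next
    return Q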
TD-learning, Q-learning and SARSA converge under certain conditions. From an optimal action value function, we can derive an optimal policy.
2.3.6 MULTI-STEP BOOTSTRAPPING
The above algorithms are referred to as TD(0) and Q(0), learning with the one-step return. We have TD learning and Q-learning variants and the Monte Carlo approach with multi-step returns in the forward view. The eligibility trace from the backward view provides an online, incremental implementation, resulting in TD(λ) and Q(λ) algorithms, where λ ∈ [0, 1]. TD(1) is the same as the Monte Carlo approach.
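For reference (our own summary in standard notation, not equations from the original text), the n-step return and the λ-return behind these methods are: G_t^{(n)} = r_t + γ r_{t+1} + ... + γ^{n-1} r_{t+n-1} + γ^n V(s_{t+n}), and G_t^{λ} = (1 − λ) Σ_{n=1}^{∞} λ^{n-1} G_t^{(n)}; λ = 0 recovers the one-step TD target and λ = 1 recovers the Monte Carlo return.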
An eligibility trace is a short-term memory, usually lasting within an episode, that assists the learning process by affecting the weight vector. The weight vector is a long-term memory, lasting the whole duration of the system, that determines the estimated value. Eligibility traces help with the issues of long-delayed rewards and non-Markov tasks (Sutton and Barto, 2018).
TD(λ) unifies one-step TD prediction, TD(0), with Monte Carlo methods, TD(1), using eligibility traces and the decay parameter λ, for prediction algorithms. De Asis et al. (2018) provide a similar unification for multi-step TD control algorithms.
2.3.7 FUNCTION APPROXIMATION
We discuss the tabular cases above, where a value function or a policy is stored in tabular form. Function approximation is a way to achieve generalization when the state and/or action spaces are large or continuous. Function approximation aims to generalize from examples of a function to construct an approximation of the entire function; it is usually a concept in supervised learning, studied in the fields of machine learning, pattern recognition, and statistical curve fitting. Function approximation in reinforcement learning usually treats each backup as a training example, and encounters new issues like nonstationarity, bootstrapping, and delayed targets (Sutton and Barto, 2018). Linear function approximation is a popular choice, partially due to its desirable theoretical properties, especially before the work of Deep Q-Network (Mnih et al., 2015). However, the integration of reinforcement learning and neural networks dates back a long time (Sutton and Barto, 2018; Bertsekas and Tsitsiklis, 1996; Schmidhuber, 2015).
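A minimal sketch of semi-gradient TD(0) with linear function approximation (our own illustration; the feature map features(s) and the trajectory format are assumptions):

import numpy as np

def semi_gradient_td0(episodes, features, n_features, alpha=0.01, gamma=0.99):
    # episodes: iterable of trajectories [(s, r, s_next, done), ...]
    # features(s): assumed to return a length-n_features vector for state s
    w = np.zeros(n_features)
    v = lambda s: float(np.dot(w, features(s)))
    for trajectory in episodes:
        for s, r, s_next, done in trajectory:
            target = r if done else r + gamma * v(s_next)
            td_error = target - v(s)
            w += alpha * td_error * features(s)  # semi-gradient: the target is treated as a constant
    return w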