doi (string) | chunk-id (int64) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1604.06174 | 23 | reduces the memory cost to be sub-linear. Since the backward operation is nearly twice as time consuming as the forward one, it only slows down the computation by a small amount.
In the most general case, the memory cost of each layer is not the same, so we cannot simply set k = √n. However, the trade-off between the intermediate outputs and the cost of each stage still holds. In this case, we use Alg. 3 to do a greedy allocation, with the memory budget of each segment given as a single parameter B. Varying B gives various allocation plans that assign more memory either to the intermediate outputs or to computation within each stage. When we do static memory allocation, we can get the exact memory cost of each allocation plan. We can use this information to run a heuristic search over B to find an optimal memory plan that balances the two costs. The details of the search step are presented in the supplementary material. We find this approach works well in practice. We can also generalize the algorithm by considering the cost of running each operation, so that the results of time-consuming operations are kept when possible.
Algorithm 3: Memory Planning with Budget | 1604.06174#23 | Training Deep Nets with Sublinear Memory Cost | We propose a systematic approach to reduce the memory consumption of deep
neural network training. Specifically, we design an algorithm that costs
O(sqrt(n)) memory to train a n layer network, with only the computational cost
of an extra forward pass per mini-batch. As many of the state-of-the-art models
hit the upper bound of the GPU memory, our algorithm allows deeper and more
complex models to be explored, and helps advance the innovations in deep
learning research. We focus on reducing the memory cost to store the
intermediate feature maps and gradients during training. Computation graph
analysis is used for automatic in-place operation and memory sharing
optimizations. We show that it is possible to trade computation for memory -
giving a more memory efficient training algorithm with a little extra
computation cost. In the extreme case, our analysis also shows that the memory
consumption can be reduced to O(log n) with as little as O(n log n) extra cost
for forward computation. Our experiments show that we can reduce the memory
cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent
additional running time cost on ImageNet problems. Similarly, significant
memory cost reduction is observed in training complex recurrent neural networks
on very long sequences. | http://arxiv.org/pdf/1604.06174 | Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin | cs.LG | null | null | cs.LG | 20160421 | 20160422 | [
{
"id": "1512.03385"
},
{
"id": "1507.06228"
},
{
"id": "1603.05027"
},
{
"id": "1510.08983"
},
{
"id": "1602.08124"
}
] |
1604.06174 | 24 | Algorithm 3: Memory Planning with Budget
Input: G = (V, pred), input computation graph.
Input: C ⊆ V, candidate stage splitting points; we will search splitting points over v ∈ C.
Input: B, approximate memory budget. We can search over B to optimize the memory allocation.
temp ← 0, x ← 0, y ← 0
for v in topological-order(V) do
    temp ← temp + size-of-output(v)
    if v ∈ C and temp > B then
        x ← x + size-of-output(v), y ← max(y, temp)
        m(v) = 0, temp ← 0
    else
        m(v) = 1
    end
end
Output: x, approximate cost to store inter-stage feature maps
Output: y, approximate memory cost for each sub-stage
Output: m, the mirror plan to feed to Alg. 2
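A minimal Python sketch of this greedy pass (illustrative only: the topologically ordered node list and `size_of_output` are assumed helpers, not MXNet API):

```python
def plan_with_budget(nodes, candidates, size_of_output, budget):
    """Greedy memory planning with a per-stage budget (sketch of Alg. 3).

    nodes: graph nodes in topological order.
    candidates: set of nodes allowed to act as stage splitting points.
    size_of_output: callable returning the output size of a node.
    budget: approximate per-stage memory budget B.

    Returns (x, y, mirror) where
      x      ~ cost of storing inter-stage feature maps,
      y      ~ peak memory within a single stage,
      mirror ~ dict node -> 0 (keep output) or 1 (drop and recompute).
    """
    temp, x, y = 0, 0, 0
    mirror = {}
    for v in nodes:
        temp += size_of_output(v)
        if v in candidates and temp > budget:
            # Close the current stage: keep v's output as a checkpoint.
            x += size_of_output(v)
            y = max(y, temp)
            mirror[v] = 0
            temp = 0
        else:
            # Inside a stage: mark the output as droppable (recomputed later).
            mirror[v] = 1
    return x, y, mirror
```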
[Figure 4 graphic: forward and backward passes over conv-bn-relu segments with data dependencies; memory allocation for each op output, where the same color indicates shared memory.]
1604.06174 | 25 | Figure 4: Recursion view of the memory optimized allocations. The segment can be viewed as a single operator that combines all the operators within the segment. Inside each operator, a sub-graph is executed to calculate the gradient.
# 4.4 More General View: Recursion and Subroutine
In this section, we provide an alternative view of the memory optimization scheme described above. Specifically, we can view each segment as a bulk operator that combines all the operations inside the segment together. The idea is illustrated in Fig. 4. The combined operator calculates the gradient by executing over the sub-graph that describes its internal computation. This view allows us to treat a series of operations as subroutines. The optimization within the sub-graph does not affect the external world. As a result, we can recursively apply our memory optimization scheme to each sub-graph.
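To make the recursive view concrete, here is a toy sketch (a scalar chain of layer functions, not the paper's graph-based implementation) that checkpoints k segment inputs per level and recurses into each segment during the backward pass:

```python
import math

def backward_checkpointed(layers, x0, grad_out, k=2):
    """Backward pass over a chain of (f, dfdx) scalar layers, keeping only
    ~k segment inputs per recursion level (toy illustration of the idea)."""
    n = len(layers)
    if n == 0:
        return grad_out
    if n == 1:
        f, dfdx = layers[0]
        return grad_out * dfdx(x0)            # local chain-rule step
    # Split the chain into k+1 segments and checkpoint each segment's input.
    bounds = [i * n // (k + 1) for i in range(k + 1)] + [n]
    inputs, x = [], x0
    for s in range(k + 1):
        inputs.append(x)
        for f, _ in layers[bounds[s]:bounds[s + 1]]:
            x = f(x)                           # forward, discarding inner results
    grad = grad_out
    for s in reversed(range(k + 1)):
        # Recompute (recursively) inside each segment from its checkpoint.
        grad = backward_checkpointed(layers[bounds[s]:bounds[s + 1]],
                                     inputs[s], grad, k)
    return grad

# Example: derivative of sin applied 8 times, starting at x = 0.3.
layers = [(math.sin, math.cos)] * 8
print(backward_checkpointed(layers, 0.3, 1.0, k=2))
```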
Pay Even Less Memory with Recursion Let g(n) be the memory cost of a forward and backward pass on an n-layer neural network. Assume that we store k intermediate results in the graph and apply the same strategy recursively when doing the forward and backward pass on each sub-path. We then have the following recursion formula.
g(n) = k + g (n/(k + 1)) (2)
Solving this recursion formula gives us
g(n) = k log_{k+1}(n) (3)
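Unrolling Eq. (2) makes the closed form transparent: each recursion level stores k intermediate results and shrinks the remaining sub-problem by a factor of k + 1, so

```latex
g(n) = k + g\!\left(\tfrac{n}{k+1}\right)
     = 2k + g\!\left(\tfrac{n}{(k+1)^2}\right)
     = \cdots
     = k \log_{k+1}(n) + g(1)
     \approx k \log_{k+1}(n).
```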
1604.06174 | 26 | As a special case, if we set k = 1, we get g(n) = log_2(n). This is an interesting conclusion, as all existing implementations take O(n) memory in feature maps to train an n-layer neural network. Reaching this bound requires O(n log_2(n)) forward-pass cost, so it may not be used commonly, but it demonstrates how we can trade memory even further by using recursion.
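As a rough worked example using the two formulas above (n = 1000 layers; numbers are approximate):

```latex
\min_k \Big(k + \tfrac{n}{k+1}\Big) \approx 2\sqrt{1000} \approx 63
\quad \text{(one-level plan, } k \approx \sqrt{n}\text{: one extra forward pass)}
\qquad \text{vs.} \qquad
\log_2 1000 \approx 10
\quad \text{(}k = 1\text{ recursion: } O(n \log_2 n)\text{ forward cost).}
```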
# 4.5 Guideline for Deep Learning Frameworks
In this section, we have shown that it is possible to trade computation for memory and combine it with the system optimizations proposed in Sec 3. It is helpful for deep learning frameworks to
• Enable an option to drop the results of low-cost operations.
• Provide planning algorithms to give an efficient memory plan.
• Enable users to set the mirror attribute in the computation graph for memory optimization.
While the last option is not strictly necessary, providing such an interface enables users to build their own memory optimizers and encourages future research in related directions. In this spirit, we support the customization of graph mirror plans and will make the source code publicly available.
1604.06174 | 27 | # 5 Experiments
# 5.1 Experiment Setup
We evaluate the memory cost of storing intermediate feature maps using the methods described in this paper. We implement our method on top of MXNet [6], which statically allocates all the intermediate feature maps before computation. This enables us to report the exact memory cost spent on feature maps. Note that the memory cost of parameters and temporary workspace memory (e.g., required by convolution) is not part of the reported memory cost. We also record the total runtime memory cost by running training steps on a Titan X GPU. Note that all the memory optimizations proposed in this paper give equivalent weight gradients for training and can always be safely applied. We compare the following memory allocation algorithms; a rough cost-model sketch contrasting these plans follows the list.
• no optimization: directly allocate memory to each node in the graph without any optimization.
• inplace: enable in-place optimization when possible.
• sharing: enable in-place optimization as well as memory sharing. This represents all the system optimizations presented in Sec. 3.
• drop bn-relu: apply all system optimizations and drop the results of batch norm and ReLU; this is only shown in the convolutional net benchmark.
• sublinear plan: apply all system optimizations and use the plan search of Alg. 3 to trade computation for memory.
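The sketch below is a purely illustrative cost model of the trends these plans produce; it assumes every layer stores one equally sized feature map and ignores parameters and convolution workspace:

```python
import math

def feature_map_cost(n_layers, map_mb=25.0, strategy="linear"):
    """Rough feature-map memory (MB) for an n-layer chain.

    map_mb is an assumed average size of one stored feature map; only the
    trend with respect to depth matters here.
    """
    if strategy == "linear":            # no optimization: keep every output
        return n_layers * map_mb
    if strategy == "sublinear":         # sqrt(n) checkpoints + one live segment
        k = max(1, int(math.sqrt(n_layers)))
        return (k + math.ceil(n_layers / (k + 1))) * map_mb
    raise ValueError(f"unknown strategy: {strategy}")

for n in (100, 200, 500, 1000):
    print(n, feature_map_cost(n, strategy="linear"),
          feature_map_cost(n, strategy="sublinear"))
```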
1604.06174 | 28 | # 5.2 Deep Convolutional Network
We first evaluate the proposed method on convolutional neural networks for image classification. We use the deep residual network architecture [11] (ResNet), which gives the state-of-the-art result on this task. Specifically, we use a batch size of 32 and set the input image shape to (3, 224, 224). We generate different depth configurations of ResNet1 by increasing the depth of each residual stage.
We show the results in Fig. 5. We find that the system optimizations introduced in Sec. 3 help to reduce the memory cost by a factor of two to three. However, the memory cost after optimization still exhibits a linear trend with respect to the number of layers. Even with all the system optimizations, it is only possible to train a 200-layer ResNet with the best GPU we can get. On the other hand, the proposed algorithm gives a sub-linear trend in the number of layers. By trading computation for memory, we can train a 1000-layer ResNet using less than 7GB of GPU memory.
1We count a conv-bn-relu as one layer.
1604.06174 | 29 | (a) Feature map memory cost estimation (b) Runtime total memory cost
Figure 5: The memory cost of different allocation strategies on deep residual net configurations. The feature map memory cost is generated from the static memory allocation plan. We also use nvidia-smi to measure the total memory cost during runtime (the missing points are due to out of memory). The figures are in log scale, so y = αx^β translates to log(y) = β log(x) + log(α). We find that the graph-based allocation strategy indeed helps to reduce the memory cost by a factor of two to three. More importantly, the sub-linear planning algorithm indeed gives a sub-linear memory trend with respect to the workload. The real runtime results also confirm that we can use our method to greatly reduce the memory cost of deep net training.
(a) Feature map memory cost estimation (b) Runtime total memory cost
Figure 6: The memory cost of different memory allocation strategies on LSTM configurations. System optimization gives a lot of memory saving on the LSTM graph, which contains many fine-grained operations. The sub-linear plan gives more than a 4x reduction over the optimized plan that does not trade computation for memory.
1604.06174 | 30 | # 5.3 LSTM for Long Sequences
We also evaluate the algorithms on an LSTM under a long-sequence unrolling setting. We unroll a four-layer LSTM with 1024 hidden states over time, with the batch size set to 64. The input at each timestep is a continuous 50-dimensional vector and the output is a softmax over 5000 classes. This is a typical setting for speech recognition [17], but our result can also be generalized to other recurrent networks. Using a long unrolling step can potentially help a recurrent model to learn long-
(a) ResNet (b) LSTM
Figure 7: The runtime speed of different allocation strategies in the two settings. The speed is measured by running 20 batches on a Titan X GPU. We can see that using the sub-linear memory plan incurs roughly 30% additional runtime cost compared to the linear memory allocation. The general trend of speed vs. workload remains linear for both strategies.
term dependencies over time. We show the results in Fig. 6. We find that inplace helps a lot here. This is because the inplace optimization in our experiment enables direct addition of the weight gradient into a single memory cell, avoiding the allocation of space for the gradient at each timestep. The sub-linear plan gives more than a 4x reduction over the optimized memory plan.
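The in-place saving described here amounts to accumulating each timestep's weight-gradient contribution into one shared buffer instead of materialising a separate gradient per timestep; a toy NumPy sketch (shapes are illustrative, not the MXNet kernel):

```python
import numpy as np

T, H = 100, 1024                            # unrolled timesteps, hidden size
dW = np.zeros((H, H), dtype=np.float32)     # one shared weight-gradient buffer

for t in range(T):
    # Stand-in for the gradient contribution of timestep t, d(loss_t)/dW.
    grad_t = np.random.randn(H, H).astype(np.float32)
    dW += grad_t                            # accumulate in place: O(H*H) memory
                                            # instead of O(T*H*H) per-step copies
```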
1604.06174 | 31 | # Impact on Training Speed
We also measure the runtime cost of each strategy. The speed is benchmarked on a single Titan X GPU. The results are shown in Fig. 7. Because of the doubled forward cost in gradient calculation, the sublinear allocation strategy costs 30% additional runtime compared to the normal strategy. By paying this small price, we are now able to train a much wider range of deep learning models.
# 6 Conclusion
In this paper, we proposed a systematic approach to reduce the memory consumption of the intermediate feature maps when training deep neural networks. Computation graph liveness analysis is used to enable memory sharing between feature maps. We also showed that we can trade computation for memory. By combining the techniques, we can train an n-layer deep neural network with only O(√n) memory cost, paying nothing more than one extra forward computation per mini-batch.
1604.06174 | 32 | # Acknowledgement
We thank the MXNet community and developers for their helpful feedback. We thank Ian Goodfellow and Yu Zhang for helpful discussions on computation-memory tradeoffs. We would like to thank David Warde-Farley for pointing out the relation to gradient checkpointing. We would like to thank Nvidia for the hardware support. This work was supported in part by ONR (PECASE) N000141010672, NSF IIS 1258741 and the TerraSwarm Research Center sponsored by MARCO and DARPA. Chiyuan Zhang acknowledges the support of a Nuance Foundation Grant.
1604.06174 | 33 | # References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
1604.06174 | 34 | [2] Amit Agarwal, Eldar Akchurin, Chris Basoglu, Guoguo Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, Ryan Hoens, Xuedong Huang, Zhiheng Huang, Vladimir Ivanov, Alexey Kamenev, Philipp Kranen, Oleksii Kuchaiev, Wolfgang Manousek, Avner May, Bhaskar Mitra, Olivier Nano, Gaizka Navarro, Alexey Orlov, Marko Padmilac, Hari Parthasarathi, Baolin Peng, Alexey Reznichenko, Frank Seide, Michael L. Seltzer, Malcolm Slaney, Andreas Stolcke, Yongqiang Wang, Huaming Wang, Kaisheng Yao, Dong Yu, Yu Zhang, and Geoffrey Zweig. An introduction to computational networks and the computational network toolkit. Technical Report MSR-TR-2014-112, August 2014.
[3] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1986.
1604.06174 | 35 | [4] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[5] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.
[6] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems (LearningSys'15), 2015.
1604.06174 | 36 | [7] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012.
[8] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press, 2016.
[9] Andreas Griewank and Andrea Walther. Algorithm 799: Revolve: An implementation of checkpointing for the reverse or adjoint mode of computational differentiation. ACM Trans. Math. Softw., 26(1):19–45, March 2000.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
[12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, November 1997.
1604.06174 | 37 | [13] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML'15), 2015.
[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097–1105, 2012.
[15] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In S. Haykin and B. Kosko, editors, Intelligent Signal Processing, pages 306–351. IEEE Press, 2001.
[16] Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, and Stephen W. Keckler. Virtualizing deep neural networks for memory-efficient neural network design. arXiv preprint arXiv:1602.08124, 2016.
1604.06174 | 38 | [17] Hasim Sak, Andrew W. Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association, Singapore, September 14-18, 2014, pages 338–342, 2014.
[18] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. arXiv preprint arXiv:1507.06228, 2015.
[19] Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yao, Sanjeev Khudanpur, and James Glass. Highway long short-term memory RNNs for distant speech recognition. arXiv preprint arXiv:1510.08983, 2015.
# A Search over Budget B
Alg. 3 allows us to generate an optimized memory plan given a single parameter B. This algorithm relies on approximate memory estimation for faster speed. After we get the plan, we can use the static allocation algorithm to calculate the exact memory cost. We can then do a grid search over B to find a good memory plan.
1604.06174 | 39 | To get the setting of the grid, we first run the allocation algorithm with B = 0, then run the allocation algorithm again with B = √(xy). Here x and y are the outputs from Alg. 3 in the first run: x is the approximate cost to store inter-stage feature maps and y is the approximate memory cost of each stage, so B = √(xy) gives an estimate of each stage's memory cost. This can already give a good memory plan. We then set a grid around B = √(xy) to further refine the solution. We find that a small grid around this value (up to √2·B) can already give good memory plans in the experiments. We implemented the allocation algorithm in Python without any attempt to optimize for speed; our code takes a few seconds to produce the plans needed in the experiments.
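A possible shape for this search, reusing the budget planner sketched after Alg. 3 (the `exact_cost` callback stands in for the static allocator, and the grid factors are illustrative defaults, not the paper's exact settings):

```python
import math

def search_budget(nodes, candidates, size_of_output, exact_cost,
                  factors=(1 / math.sqrt(2), 1.0, math.sqrt(2))):
    """Heuristic search over the stage budget B (sketch of Appendix A)."""
    # First run with B = 0 to obtain the rough statistics x and y.
    x, y, _ = plan_with_budget(nodes, candidates, size_of_output, budget=0)
    b0 = math.sqrt(x * y)                   # initial guess B = sqrt(x * y)
    best = None
    for f in factors:                       # small grid around the initial guess
        _, _, mirror = plan_with_budget(nodes, candidates, size_of_output,
                                        budget=f * b0)
        cost = exact_cost(mirror)           # exact cost via static allocation
        if best is None or cost < best[0]:
            best = (cost, f * b0, mirror)
    return best                             # (best cost, chosen B, mirror plan)
```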
1604.04562 | 0 | arXiv:1604.04562v3 [cs.CL] 24 Apr 2017
# A Network-based End-to-End Trainable Task-oriented Dialogue System
# Tsung-Hsien Wen1, David Vandyke1, Nikola Mrkšić1, Milica Gašić1, Lina M. Rojas-Barahona1, Pei-Hao Su1, Stefan Ultes1, and Steve Young1
# 1Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ, UK {thw28,djv27,nm480,mg436,lmr46,phs26,su259,sjy11}@cam.ac.uk
# Abstract | 1604.04562#0 | A Network-based End-to-End Trainable Task-oriented Dialogue System | Teaching machines to accomplish tasks by conversing naturally with humans is
challenging. Currently, developing task-oriented dialogue systems requires
creating multiple components and typically this involves either a large amount
of handcrafting, or acquiring costly labelled datasets to solve a statistical
learning problem for each component. In this work we introduce a neural
network-based text-in, text-out end-to-end trainable goal-oriented dialogue
system along with a new way of collecting dialogue data based on a novel
pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue
systems easily and without making too many assumptions about the task at hand.
The results show that the model can converse with human subjects naturally
whilst helping them to accomplish tasks in a restaurant search domain. | http://arxiv.org/pdf/1604.04562 | Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, Steve Young | cs.CL, cs.AI, cs.NE, stat.ML | published at EACL 2017 | null | cs.CL | 20160415 | 20170424 | [] |
1604.04562 | 1 | # Abstract
Teaching machines to accomplish tasks by conversing naturally with humans is challenging. Currently, developing task-oriented dialogue systems requires creating multiple components and typically this involves either a large amount of handcrafting, or acquiring costly labelled datasets to solve a statistical learning problem for each component. In this work we introduce a neural network-based text-in, text-out end-to-end trainable goal-oriented dialogue system along with a new way of collecting dialogue data based on a novel pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue systems easily and without making too many assumptions about the task at hand. The results show that the model can converse with human subjects naturally whilst helping them to accomplish tasks in a restaurant search domain.
# Introduction
need corpora to train on. Furthermore, to make RL tractable, the state and action space must be carefully designed (Young et al., 2013; Young et al., 2010), which may restrict the expressive power and learnability of the model. Also, the reward functions needed to train such models are difficult to design and hard to measure at run-time (Su et al., 2015; Su et al., 2016).
1604.04562 | 2 | At the other end of the spectrum, sequence to sequence learning (Sutskever et al., 2014) has inspired several efforts to build end-to-end trainable, non-task-oriented conversational systems (Vinyals and Le, 2015; Shang et al., 2015; Serban et al., 2015b). This family of approaches treats dialogue as a source to target sequence transduction problem, applying an encoder network (Cho et al., 2014) to encode a user query into a distributed vector representing its semantics, which then conditions a decoder network to generate each system response. These models typically require a large amount of data to train. They allow the creation of effective chatbot type systems but they lack any capability for supporting domain specific tasks, for example, being able to interact with databases (Sukhbaatar et al., 2015; Yin et al., 2015) and aggregate useful information into their responses.
1604.04562 | 3 | Building a task-oriented dialogue system such as a hotel booking or a technical support service is difficult because it is application-specific and there is usually limited availability of training data. To mitigate this problem, recent machine learning approaches to task-oriented dialogue system design have cast the problem as a partially observable Markov Decision Process (POMDP) (Young et al., 2013) with the aim of using reinforcement learning (RL) to train dialogue policies online through interactions with real users (Gašić et al., 2013). However, the language understanding (Henderson et al., 2014; Yao et al., 2014) and language generation (Wen et al., 2015b; Wen et al., 2016) modules still rely on supervised learning and therefore
1604.04562 | 4 | In this work, we propose a neural network-based model for task-oriented dialogue systems by balancing the strengths and the weaknesses of the two research communities: the model is end-to-end trainable1 but still modularly connected; it does not directly model the user goal, but nevertheless, it still learns to accomplish the required task by providing relevant and appropriate responses at each turn; it has an explicit representation of database (DB) attributes (slot-value pairs) which it uses to achieve a high task success rate, but has a distributed representation of user intent (dialogue act)
1We define end-to-end trainable as meaning that each system module is trainable from data, except for the database operator.
[Figure 1 graphic: intent network over a delexicalised input, a belief distribution over food slot values (British, French, ...), a MySQL query "Select * where food=Korean", and the database.]
Figure 1: The proposed end-to-end trainable dialogue system framework
to allow ambiguous inputs; and it uses delexicalisation2 and a weight tying strategy (Henderson et al., 2014) to reduce the data required to train the model, but still maintains a high degree of freedom should larger amounts of data become available. We show that the proposed model performs a given task very competitively across several metrics when trained on only a few hundred dialogues.
1604.04562 | 5 | In order to train the model for the target application, we introduce a novel pipe-lined data collection mechanism inspired by the Wizard-of-Oz paradigm (Kelley, 1984) to collect human-human dialogue corpora via crowd-sourcing. We found that this process is simple and enables fast data collection online with very low development costs.
most probable values in the belief state to form a query to the DB, and the search result, along with the intent representation and belief state, are transformed and combined by a policy network to form a single vector representing the next system action. This system action vector is then used to condition a response generation network (Wen et al., 2015a; Wen et al., 2015b) which generates the required system output token by token in skeletal form. The final system response is then formed by substituting the actual values of the database entries into the skeletal sentence structure. A more detailed description of each component is given below.
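A minimal sketch of the database-operator step described here (the confidence threshold and query formatting are illustrative assumptions, not the authors' exact implementation):

```python
def build_db_query(belief_state, threshold=0.5):
    """belief_state: {slot: {value: probability}} from the belief trackers."""
    constraints = {}
    for slot, dist in belief_state.items():
        value, prob = max(dist.items(), key=lambda kv: kv[1])
        if prob >= threshold:               # keep only confident slot values
            constraints[slot] = value
    where = " and ".join(f"{s}={v}" for s, v in constraints.items())
    return f"Select * where {where}" if where else "Select *"

# e.g. a belief state like the one sketched in Figure 1:
print(build_db_query({"food": {"Korean": 0.7, "British": 0.2, "French": 0.1}}))
# -> Select * where food=Korean
```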
# 2 Model
We treat dialogue as a sequence-to-sequence mapping problem (modelled by a sequence-to-sequence architecture (Sutskever et al., 2014)) augmented with the dialogue history (modelled by a set of belief trackers (Henderson et al., 2014)) and the current database search outcome (modelled by a database operator), as shown in Figure 1. At each turn, the system takes a sequence of tokens2 from the user as input and converts it into two internal representations: a distributed representation generated by an intent network and a probability distribution over slot-value pairs called the belief state (Young et al., 2013) generated by a set of belief trackers.

# 2.1 Intent Network
The intent network can be viewed as the encoder in the sequence-to-sequence learning framework (Sutskever et al., 2014) whose job is to encode a sequence of input tokens w^t_0, w^t_1, ..., w^t_N into a distributed vector representation z_t at every turn t. Typically, a Long Short-term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997) is used and the last time step hidden layer z^N_t is taken as the representation,

z_t = z^N_t   (1)
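To make the encoding step concrete, a minimal sketch in PyTorch might look as follows (class and variable names are ours, not the authors' released code; the hidden size of 50 follows the training details reported in Section 4):

```python
import torch
import torch.nn as nn

class IntentNetwork(nn.Module):
    """Encode a delexicalised user utterance w^t_0 ... w^t_N into a single vector z_t (Eq. 1)."""
    def __init__(self, vocab_size, emb_size=50, hidden_size=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.lstm = nn.LSTM(emb_size, hidden_size, batch_first=True)

    def forward(self, token_ids):            # token_ids: (batch, N) integer tensor
        emb = self.embed(token_ids)           # (batch, N, emb_size)
        outputs, (h_n, _) = self.lstm(emb)    # h_n: (1, batch, hidden_size)
        z_t = h_n[-1]                         # last time-step hidden layer, Eq. (1)
        return z_t
```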
Alternatively, a convolutional neural network (CNN) can be used in place of the LSTM as the encoder (Kalchbrenner et al., 2014; Kim, 2014),
z_t = CNN(w^t_0, w^t_1, ..., w^t_N)   (2)
2Delexicalisation: we replaced slots and values by generic tokens (e.g. keywords like Chinese or Indian are replaced by <v.food> in Figure 1) to allow weight sharing.
and here we investigate both.
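A matching sketch of the CNN alternative in Equation (2), again under our own naming assumptions (three convolutional layers with filter size 3, and pooling only after the final layer, as described in Section 4):

```python
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    """Convolutional sentence encoder producing z_t = CNN(w^t_0 ... w^t_N), Eq. (2)."""
    def __init__(self, vocab_size, emb_size=50, hidden_size=50, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        convs, in_ch = [], emb_size
        for _ in range(num_layers):
            convs.append(nn.Conv1d(in_ch, hidden_size, kernel_size=3, padding=1))
            in_ch = hidden_size
        self.convs = nn.ModuleList(convs)

    def forward(self, token_ids):                 # (batch, N)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, emb_size, N)
        for conv in self.convs:
            x = torch.tanh(conv(x))
        z_t = x.max(dim=2).values                  # pool only after the final convolution
        return z_t
```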
Figure 2: Tied Jordan-type RNN belief tracker with delexicalised CNN feature extractor. The output of the CNN feature extractor is a concatenation of the top-level sentence (green) embedding and several levels of intermediate ngram-like embeddings (red and blue). However, if a value cannot be delexicalised in the input, its ngram-like embeddings will all be padded with zeros. We pad zero vectors (in gray) before each convolution operation to make sure the representation at each layer has the same length. The output of each tracker p^t_s is a distribution over the values of slot s.
Since all the slot-value specific information is delexicalised, the encoded vector can be viewed as a distributed intent representation which replaces the hand-coded dialogue act representation (Traum, 1999) in traditional task-oriented dialogue systems.
# 2.2 Belief Trackers
Belief tracking (also called Dialogue State tracking) provides the core of a task-oriented spoken dialogue system (SDS) (Henderson, 2015). Current state-of-the-art belief trackers use discriminative models such as recurrent neural networks (RNN) (Mikolov et al., 2010; Wen et al., 2013) to directly map ASR hypotheses to belief states (Henderson et al., 2014; Mrkšić et al., 2016). Although in this work we focus on text-based dialogue systems, we retain belief tracking at the core of our system because: (1) it enables a sequence of free-form natural language sentences to be mapped into a fixed set of slot-value pairs, which can then be used to query a DB. This can be viewed as a simple version of a semantic parser (Berant et al., 2013); (2) by keeping track of the dialogue state, it avoids learning unnecessarily complicated long-term dependencies from raw inputs; (3) it uses a smart weight tying strategy that can greatly reduce the data required to train the model; and (4) it provides an inherent robustness which simplifies future extension to spoken systems.
Using each user input as new evidence, the task of a belief tracker is to maintain a multinomial distribution p over values v ∈ V_s for each informable slot s, and a binary distribution for each requestable slot3. Each slot in the ontology G4 has its own specialised tracker, and each tracker is a Jordan-type (recurrence from output to hidden layer) (Jordan, 1989) RNN5 with a CNN feature extractor, as shown in Figure 2. Like Mrkšić et al. (2015), we tie the RNN weights together for each value v but vary the features f^t_v when updating each pre-softmax activation g^t_v. The update equations for a given slot s are,

f^t_v = f^t_{v,cnn} ⊕ p^{t-1}_v ⊕ p^{t-1}_∅   (3)

g^t_v = w_s · sigmoid(W_s f^t_v + b_s) + b'_s   (4)

p^t_v = exp(g^t_v) / ( exp(g_{∅,s}) + Σ_{v'∈V_s} exp(g^t_{v'}) )   (5)
where vector w_s, matrix W_s, bias terms b_s and b'_s, and scalar g_{∅,s} are parameters. p^t_∅ is the probability that the user has not mentioned that slot up to turn t and can be calculated by substituting g_{∅,s} for g^t_v in the numerator of Equation (5). In order to model the discourse context at each turn, the feature vector
3Informable slots are slots that users can use to constrain the search, such as food type or price range; Requestable slots are slots that users can ask a value for, such as address.
4A small knowledge graph defining the slot-value pairs the system can talk about for a particular task.
5We donât use the recurrent connection for requestable slots since they donât need to be tracked.
f^t_{v,cnn} is the concatenation of two CNN-derived features, one from processing the user input u_t at turn t and the other from processing the machine response m_{t−1} at turn t − 1,

f^t_{v,cnn} = CNN^{(u)}_{s,v}(u_t) ⊕ CNN^{(m)}_{s,v}(m_{t−1})   (6)
where every token in u_t and m_{t−1} is represented by an embedding of size N derived from a 1-hot input vector. In order to make the tracker aware when delexicalisation is applied to a slot or value, the slot-value specialised CNN operator CNN^(·)_{s,v}(·) extracts not only the top-level sentence representation but also intermediate n-gram-like embeddings determined by the position of the delexicalised token in each utterance. If multiple matches are observed, the corresponding embeddings are summed. On the other hand, if there is no match for a particular slot or value, the empty n-gram embeddings are padded with zeros. In order to keep track of the position of delexicalised tokens, both sides of the sentence are padded with zeros before each convolution operation. The number of vectors is determined by the filter size at each layer. The overall process of extracting several layers of position-specific features is visualised in Figure 2.
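Read together, Equations (3)-(5) amount to the following per-slot update, sketched here in plain NumPy with the CNN features of Equation (6) assumed to be precomputed (function and variable names are illustrative only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def track_slot(f_cnn, p_prev, p_none_prev, w_s, W_s, b_s, b_s2, g_none):
    """One Jordan-RNN tracker update for one slot s (Eqs. 3-5).

    f_cnn:        dict value -> CNN feature vector f^t_{v,cnn} for that value
    p_prev:       dict value -> previous belief p^{t-1}_v
    p_none_prev:  previous probability that the slot was not mentioned
    """
    g = {}
    for v, feat in f_cnn.items():
        f_v = np.concatenate([feat, [p_prev[v]], [p_none_prev]])   # Eq. (3)
        g[v] = float(w_s @ sigmoid(W_s @ f_v + b_s) + b_s2)        # Eq. (4), weights tied over v
    denom = np.exp(g_none) + sum(np.exp(gv) for gv in g.values())
    p = {v: np.exp(gv) / denom for v, gv in g.items()}             # Eq. (5)
    p_none = np.exp(g_none) / denom
    return p, p_none
```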
The belief tracker described above is based on Henderson et al. (2014) with some modifications: (1) only probabilities over informable and requestable slots and values are output, (2) the recurrent memory block is removed, since it appears to offer no benefit in this task, and (3) the n-gram feature extractor is replaced by the CNN extractor described above. By introducing slot-based belief trackers, we essentially add a set of intermediate labels into the system as compared to training a pure end-to-end system. Later in the paper we will show that these tracker components are critical for achieving task success. We will also show that the additional annotation requirement that they introduce can be successfully mitigated using a novel pipe-lined Wizard-of-Oz data collection framework.
# 2.3 Policy Network and Database Operator

Database Operator   Based on the output p^t_s of the belief trackers, the DB query q_t is formed by,

q_t = ⋃_{s'∈S_I} { argmax_v p^t_{s'} }   (7)

where S_I is the set of informable slots. This query is then applied to the DB to create a binary truth
value vector x_t over DB entities where a 1 indicates that the corresponding entity is consistent with the query (and hence it is consistent with the most likely belief state). In addition, if x is not entirely null, an associated entity pointer is maintained which identifies one of the matching entities selected at random. The entity pointer is updated if the current entity no longer matches the search criteria; otherwise it stays the same. The entity referenced by the entity pointer is used to form the final system response as described in Section 2.4.

Policy network   The policy network can be viewed as the glue which binds the system modules together. Its output is a single vector o_t representing the system action, and its inputs are comprised of z_t from the intent network, the belief state p^t_s, and the DB truth value vector x_t. Since the generation network only generates appropriate sentence forms, the individual probabilities of the categorical values in the informable belief state are immaterial and are summed together to form a summary belief vector p̂^t_s for each slot,
represented by three components: the summed value probabilities, the probability that the user said they "don't care" about this slot, and the probability that the slot has not been mentioned. Similarly, for the truth value vector x_t, the number of matching entities matters but not their identity. This vector is therefore compressed to a 6-bin 1-hot encoding x̂_t, which represents different degrees of matching in the DB (no match, 1 match, ... or more than 5 matches). Finally, the policy network output is generated by a three-way matrix transformation,
o_t = tanh(W_zo z_t + W_po p̂_t + W_xo x̂_t)   (8)
where matrices W_zo, W_po, and W_xo are parameters and p̂_t = ⊕_{s∈G} p̂^t_s is a concatenation of all summary belief vectors.
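A rough sketch of how the policy inputs are assembled and combined as in Equation (8); the bin layout of x̂_t and all names are our own illustration:

```python
import numpy as np

def db_degree(match_count):
    """Compress the DB truth vector into a 6-bin one-hot x̂_t (0, 1, ..., more than 5 matches)."""
    one_hot = np.zeros(6)
    one_hot[min(match_count, 5)] = 1.0
    return one_hot

def policy_vector(z_t, summary_beliefs, match_count, W_zo, W_po, W_xo):
    """o_t = tanh(W_zo z_t + W_po p̂_t + W_xo x̂_t), Eq. (8).

    summary_beliefs: list of 3-dim summary vectors p̂^t_s, one per slot
    (summed value probs, p("don't care"), p(not mentioned)).
    """
    p_hat = np.concatenate(summary_beliefs)      # p̂_t = ⊕_s p̂^t_s
    x_hat = db_degree(match_count)
    return np.tanh(W_zo @ z_t + W_po @ p_hat + W_xo @ x_hat)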
# 2.4 Generation Network
The generation network uses the action vector o_t to condition a language generator (Wen et al., 2015b). This generates template-like sentences token by token based on the language model probabilities,

P(w^t_{j+1} | w^t_j, h^t_{j-1}, o_t) = LSTM_j(w^t_j, h^t_{j-1}, o_t)   (9)

where LSTM_j(·) is a conditional LSTM operator for one output step j, w^t_j is the last output token (i.e. a word, a delexicalised slot name or a delexicalised
slot value), and h^t_{j-1} is the hidden layer. Once the output token sequence has been generated, the generic tokens are replaced by their actual values: (1) replacing delexicalised slots by random sampling from a list of surface forms, e.g. <s.food> to food or type of food, and (2) replacing delexicalised values by the actual attribute values of the entity currently selected by the DB pointer. This is similar in spirit to the Latent Predictor Network (Ling et al., 2016) where the token generation process is augmented by a set of pointer networks (Vinyals et al., 2015) to transfer entity-specific information into the response.
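The lexicalisation step can be illustrated roughly as follows (the surface-form list and the example tokens are hypothetical, not taken from the released system):

```python
import random

def lexicalise(skeleton_tokens, entity, surface_forms):
    """Replace generic tokens in the generated skeleton with actual values.

    entity:        the DB row currently selected by the entity pointer
    surface_forms: e.g. {"<s.food>": ["food", "type of food"]}  (illustrative)
    """
    words = []
    for tok in skeleton_tokens:
        if tok in surface_forms:                       # delexicalised slot name
            words.append(random.choice(surface_forms[tok]))
        elif tok.startswith("<v.") and tok.endswith(">"):
            slot = tok[3:-1]                           # delexicalised value, e.g. <v.address>
            words.append(str(entity[slot]))
        else:
            words.append(tok)
    return " ".join(words)

# Hypothetical usage:
# lexicalise("<v.name> serves <v.food> food in the <v.area> .".split(),
#            {"name": "ask", "food": "british", "area": "centre"}, {})
```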
Attentive Generation Network   Instead of decoding responses directly from a static action vector o_t, an attention-based mechanism (Bahdanau et al., 2014; Hermann et al., 2015) can be used to dynamically aggregate source embeddings at each output step j. In this work we explore the use of an attention mechanism to combine the tracker belief states, i.e. o_t is computed at each output step j by,
o^(j)_t = tanh(W_zo z_t + p̂^(j)_t + W_xo x̂_t)   (10)

where for a given ontology G,
p̂^(j)_t = Σ_{s∈G} α^(j)_s tanh(W_po · p̂^t_s)   (11)

and where the attention weights α^(j)_s are calculated by a scoring function,

α^(j)_s = softmax( r^⊺ tanh( W_r · u_t ) )   (12)

where u_t = z_t ⊕ x̂_t ⊕ p̂^t_s ⊕ w^t_j, and matrix W_r and vector r are parameters to learn, with w^t_j the embedding of token w^t_j.
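A compact sketch of the attentive aggregation in Equations (10)-(12), with our own variable names and the per-slot loop written out explicitly:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attentive_action(z_t, x_hat, slot_beliefs, w_j, W_zo, W_po, W_xo, W_r, r):
    """Compute o^(j)_t with attention over per-slot belief summaries (Eqs. 10-12).

    slot_beliefs: dict slot -> summary belief vector p̂^t_s
    w_j:          embedding of the last generated token w^t_j
    """
    slots = list(slot_beliefs)
    scores = []
    for s in slots:
        u_t = np.concatenate([z_t, x_hat, slot_beliefs[s], w_j])   # u_t = z_t ⊕ x̂_t ⊕ p̂^t_s ⊕ w^t_j
        scores.append(float(r @ np.tanh(W_r @ u_t)))                # pre-softmax score, Eq. (12)
    alpha = softmax(np.array(scores))
    p_hat_j = sum(a * np.tanh(W_po @ slot_beliefs[s])               # Eq. (11)
                  for a, s in zip(alpha, slots))
    return np.tanh(W_zo @ z_t + p_hat_j + W_xo @ x_hat)             # Eq. (10)
```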
# 3 Wizard-of-Oz Data Collection
Arguably the greatest bottleneck for statistical approaches to dialogue system development is the collection of appropriate training data, and this is especially true for task-oriented dialogue systems. Serban et al. (Serban et al., 2015a) have catalogued existing corpora for developing conversational agents. Such corpora may be useful for bootstrapping, but, for task-oriented dialogue systems, in-domain data is essential6. To mitigate this problem, we propose a novel crowdsourcing version of the Wizard-of-Oz (WOZ) paradigm (Kelley, 1984) for collecting domain-specific corpora.
6E.g. technical support for Apple computers may differ completely from that for Windows, due to the many differences in software and hardware.
Based on the given ontology, we designed two webpages on Amazon Mechanical Turk, one for wizards and the other for users (see Figure 4 and 5 for the designs). The users are given a task specifying the characteristics of a particular entity that they must find (e.g. a Chinese restaurant in the north) and asked to type in natural language sentences to fulfil the task. The wizards are given a form to record the information conveyed in the last user turn (e.g. food=Chinese, area=north) and a search table showing all the available matching entities in the database. Note these forms contain all the labels needed to train the slot-based belief trackers. The table is automatically updated every time the wizard submits new information. Based on the updated table, the wizard types an appropriate system response and the dialogue continues.
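For illustration, a single collected turn could be stored as a record like the one below; the field names and the example utterances are invented for this sketch and are not the actual annotation schema:

```python
turn_record = {
    "user": "I want a gastropub in the north, what is the address?",
    "wizard_form": {"food": "gastropub", "area": "north", "request": ["address"]},
    "system": "The <v.name> is a gastropub in the north; its address is <v.address> .",
}
```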
In order to enable large-scale parallel data collection and avoid the distracting latencies inherent in conventional WOZ scenarios (Bohus and Rudnicky, 2008), users and wizards are asked to contribute just a single turn to each dialogue. To ensure coherence and consistency, users and wizards must review all previous turns in that dialogue before they contribute their turns. Thus dialogues progress in a pipe-line. Many dialogues can be active in parallel and no worker ever has to wait for a response from the other party in the dialogue. Despite the fact that multiple workers contribute to each dialogue, we observe that dialogues are generally coherent yet diverse. Furthermore, this turn-level data collection strategy seems to encourage workers to learn and correct each other based on previous turns.
In this paper, the system was designed to assist users to find a restaurant in the Cambridge, UK area. There are three informable slots (food, pricerange, area) that users can use to constrain the search and six requestable slots (address, phone, postcode plus the three informable slots) that the user can ask a value for once a restaurant has been offered. There are 99 restaurants in the DB. Based on this domain, we ran 3000 HITs (Human Intelligence Tasks) in total for roughly 3 days and collected 1500 dialogue turns. After cleaning the data, we have approximately 680 dialogues in total (some of them are unfinished). The total cost for collecting the dataset was ~400 USD.
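For reference, the ontology for this domain can be written down as a small structure such as the following (the value lists are partly illustrative; only the slot names and counts are taken from the text):

```python
ontology = {
    "informable": {
        "food":       ["british", "chinese", "french", "gastropub"],  # illustrative subset of values
        "pricerange": ["cheap", "moderate", "expensive"],             # assumed values
        "area":       ["north", "south", "east", "west", "centre"],   # assumed values
    },
    "requestable": ["address", "phone", "postcode", "food", "pricerange", "area"],
}
```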
# 4 Empirical Experiments
Training   Training is divided into two phases: first the belief tracker parameters θ_b are trained, and then the remaining parts of the model.
Table 1: Tracker performance in terms of Precision, Recall, and F-1 score.
Tracker type | Informable Prec. / Recall / F-1 | Requestable Prec. / Recall / F-1
cnn | 99.77% / 96.09% / 97.89% | 98.66% / 93.79% / 96.16%
ngram | 99.34% / 94.42% / 96.82% | 98.56% / 90.14% / 94.16%
Having fixed the tracker parameters, the remaining parts of the model Θ\θ_b are trained using the cross entropy errors from the generation network language model, L_2(Θ\θ_b) = −Σ_t Σ_j (y^t_j)^⊺ log p^t_j, where y^t_j and p^t_j are output token targets and predictions respectively, at turn t of output step j. We treated each dialogue as a batch and used stochastic gradient descent with a small l2 regularisation term to train the model. The collected corpus was partitioned into training, validation, and testing sets in the ratio 3:1:1. Early stopping was implemented based on the validation set for regularisation and gradient clipping was set to 1. All the hidden layer sizes were set to 50, and all the weights were randomly initialised between -0.3 and 0.3 including word embeddings. The vocabulary size is around 500 for both input and output, in which rare words and words that can be delexicalised are removed. We used three convolutional layers for all the CNNs in the work and all the filter sizes were set to 3. Pooling operations were only applied after the final convolution layer.

Decoding   In order to decode without length bias, we decoded each system response m_t based on the average log probability of tokens,
m*_t = argmax_{m_t} { log p(m_t | θ, u_t) / J_t }   (13)
where θ are the model parameters, u_t is the user input, and J_t is the length of the machine response. As a contrast, we also investigated the MMI criterion (Li et al., 2016) to increase diversity and put additional scores on delexicalised tokens to encourage task completion. This weighted decoding strategy has the following objective function,

m*_t = argmax_{m_t} { log p(m_t | θ, u_t)/J_t − λ log p(m_t)/J_t + γ R_t }   (14)
where λ and γ are weights selected on the validation set and log p(m_t) can be modelled by a standalone LSTM language model. We used a simple heuristic for the scoring function R_t designed to reward giving appropriate information and penalise spuriously providing unsolicited information7. We applied beam search with a beamwidth equal to 10; the search stops when an end-of-sentence token is generated. In order to obtain language variability from the deployed model we ran decoding until we obtained 5 candidates and randomly sampled one as the system response.
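The decoding objectives in Equations (13) and (14) can be sketched as a simple rescoring function over beam-search candidates (λ and γ were tuned on the validation set in the paper; the values below are placeholders):

```python
def weighted_score(log_probs, lm_log_probs, task_reward, lam=0.0, gamma=0.0):
    """Score one candidate response m_t under Eqs. (13)/(14).

    log_probs:     per-token log p(w | theta, u_t) for the candidate
    lm_log_probs:  per-token log p(w) under a standalone LM (only used when lam > 0)
    task_reward:   heuristic R_t rewarding requested slots / penalising spurious ones
    """
    J = len(log_probs)
    score = sum(log_probs) / J                   # Eq. (13): average log-probability
    if lam:
        score -= lam * sum(lm_log_probs) / J     # MMI-style anti-LM term
    score += gamma * task_reward                 # task-specific reward
    return score

def pick_response(candidates):
    """candidates: list of (tokens, log_probs, lm_log_probs, task_reward) from beam search."""
    best = max(candidates,
               key=lambda c: weighted_score(c[1], c[2], c[3], lam=0.1, gamma=0.1))  # placeholder weights
    return best[0]
```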
Tracker performance   Table 1 shows the evaluation of the trackers' performance. Due to delexicalisation, both CNN type trackers and N-gram type trackers (Henderson et al., 2014) achieve high precision, but the N-gram tracker has worse recall. This result suggests that compared to simple N-grams, CNN type trackers can better generalise to sentences with long-distance dependencies and more complex syntactic structures.
Corpus-based evaluation   We evaluated the end-to-end system by first performing a corpus-based evaluation in which the model is used to predict each system response in the held-out test set. Three evaluation metrics were used: BLEU score (on top-1 and top-5 candidates) (Papineni et al., 2002), entity matching rate and objective task success rate (Su et al., 2015). We calculated the entity matching rate by determining whether the actual selected entity at the end of each dialogue matches the task that was specified to the user. The dialogue is then marked as successful if both (1) the offered entity matches, and (2) the system answered all the associated information requests (e.g. what is the address?) from the user. We computed the BLEU scores on the template-like output sentences before lexicalising with the entity value substitution.
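Under this reading of the metrics, the per-dialogue evaluation can be sketched as (names and data structures are ours):

```python
def evaluate_dialogue(offered_entity, task_goal, requested_slots, answered_slots):
    """Corpus-based metrics for one dialogue.

    offered_entity:  DB row offered by the system at the end of the dialogue
    task_goal:       constraints specified to the user, e.g. {"food": "gastropub"}
    requested_slots: set of slots the user asked for, e.g. {"address"}
    answered_slots:  set of slots whose values the system actually provided
    """
    match = all(offered_entity.get(s) == v for s, v in task_goal.items())
    success = match and requested_slots.issubset(answered_slots)
    return match, success
```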
7We give an additional reward if a requestable slot (e.g. address) is requested and its corresponding delexicalised slot or value token (e.g. <v.address> and <s.address>) is generated. We give an additional penalty if an informable slot is never mentioned (e.g. food=none) but its corresponding delexicalised value token is generated (e.g. <v.food>). For more details on scoring, please see Table 5.
Table 2: Performance comparison of different model architectures based on a corpus-based evaluation.
Encoder | Tracker | Decoder | Match(%) | Success(%) | T5-BLEU | T1-BLEU
Baseline:
lstm | - | lstm | - | - | 0.1650 | 0.1718
lstm | turn recurrence | lstm | - | - | 0.1813 | 0.1861
Variant:
lstm | rnn-cnn, w/o req. | lstm | 89.70 | 30.60 | 0.1769 | 0.1799
cnn | rnn-cnn | lstm | 88.82 | 58.52 | 0.2354 | 0.2429
Full model w/ different decoding strategy:
lstm | rnn-cnn | lstm | 86.34 | 75.16 | 0.2184 | 0.2313
lstm | rnn-cnn | + weighted | 86.04 | 78.40 | 0.2222 | 0.2280
lstm | rnn-cnn | + att. | 90.88 | 80.02 | 0.2286 | 0.2388
lstm | rnn-cnn | + att. + weighted | 90.88 | 83.82 | 0.2304 | 0.2369
Table 2 shows the result of the corpus-based evaluation averaging over 5 randomly initialised networks. The Baseline block shows two baseline models: the first is a simple turn-level sequence-to-sequence model (Sutskever et al., 2014) while the second one introduces an additional recurrence to model the dependency on the dialogue history following Serban et al. (Serban et al., 2015b). As can be seen, incorporation of the recurrence improves the BLEU score. However, baseline task success and matching rates cannot be computed since the models do not make any provision for a database. The Variant block of Table 2 shows two variants of the proposed end-to-end model. For the first one, no requestable trackers were used, only informable trackers. Hence, the burden of modelling user requests falls on the intent network alone. We found that without explicitly modelling user requests, the model performs very poorly on task completion (~30%), even though it can offer the correct entity most of the time (~90%).
More data may help here; however, we found that the incorporation of an explicit internal semantic representation in the full model (shown below) is more efficient and extremely effective. For the second variant, the LSTM intent network is replaced by a CNN. This achieves a very competitive BLEU score but task success is still quite poor (~58% success). We think this is because the CNN encodes the intent by capturing several local features but lacks the global view of the sentence, which may easily result in an unexpected overfit.
The weighted decoding strategy does not provide a significant improvement in BLEU score but it does greatly improve task success rate (~3%). The R_t term contributes the most to this improvement because it injects additional task-specific information during decoding. Despite this, the most effective and elegant way to improve the performance is to use the attention-based mechanism (+ att.) to dynamically aggregate the tracker beliefs (Section 2.4). It gives a slight improvement in BLEU score (~0.01) and a big gain on task success (~5%). Finally, we can improve further by incorporating weighted decoding with the attention models (+ att. + weighted). As an aside, we used t-SNE (van der Maaten and Hinton, 2008) to produce a reduced-dimension view of the action embeddings o_t, plotted and labelled by the first three generated output words (full model w/o attention). The figure is shown as Figure 3. We can see clear clusters based on the system intent types, even though we did not explicitly model them using dialogue acts.

Human evaluation
In order to assess operational performance, we tested our model using paid subjects recruited via Amazon Mechanical Turk. Each judge was asked to follow a given task and to rate the model's performance. We assessed the subjective success rate, and the perceived comprehension ability and naturalness of response on a scale of 1 to 5. The full model with attention and weighted decoding was used and the system was tested on a total of 245 dialogues. As can be seen in Table 3, the average subjective success rate was 98%, which means the system was able to complete the majority of tasks. Moreover, the comprehension ability and naturalness scores both averaged more than 4 out of 5. (See Appendix for some sample dialogues in this trial.)
We also ran comparisons between the NN model
Figure 3: The action vector embedding o_t generated by the NN model w/o attention. Each cluster is labelled with the first three words the embedding generated.
Table 3: Human assessment of the NN system. The ratings for comprehension and naturalness are both out of 5.

Metric | NN
Success | 98%
Comprehension | 4.11
Naturalness | 4.05
# of dialogues: 245
Table 4: A comparison of the NN system with a rule-based modular system (HDC).

Metric | NDM | HDC | Tie
Subj. Success | 96.95% | 95.12% | -
Avg. # of Turn | 3.95 | 4.54 | -
Comparisons(%):
Naturalness | 46.95* | 25.61 | 27.44
Comprehension | 45.12* | 21.95 | 32.93
Preference | 50.00* | 24.39 | 25.61
Performance | 43.90* | 25.61 | 30.49
* p < 0.005; # of comparisons: 164
and a handcrafted, modular baseline system (HDC) consisting of a handcrafted semantic parser, rule-based policy and belief tracker, and a template-based generator. The result can be seen in Table 4. The HDC system achieved ~95% task success rate, which suggests that it is a strong baseline even though most of the components were hand-engineered. Over the 164 dialogues tested, the NN system was considered better than the handcrafted system (HDC) on all the metrics compared. Although both systems achieved similar success rates, the NN system was more efficient and provided a more engaging conversation (lower turn number and higher preference). Moreover, the comprehension ability and naturalness of the NN system were also rated higher, which suggests that the learned system was perceived as being more natural than the hand-designed system.
# 5 Conclusions and Future Work
This paper has presented a novel neural network-based framework for task-oriented dialogue systems. The model is end-to-end trainable using two
Metric            NDM      HDC      Tie
Subj. Success     96.95%   95.12%   -
Avg. # of Turn    3.95     4.54     -
Comparisons (%)
  Naturalness     46.95*   25.61    27.44
  Comprehension   45.12*   21.95    32.93
  Preference      50.00*   24.39    25.61
  Performance     43.90*   25.61    30.49
* p < 0.005, # of comparisons: 164
The paper has also presented a novel crowd-sourced data collection framework inspired by the Wizard-of-Oz paradigm. We demonstrated that the pipe-lined parallel organisation of this collection framework enables good quality task-oriented dialogue data to be collected quickly at modest cost. The experimental assessment of the NN dialogue system showed that the learned model can interact efficiently and naturally with human subjects to complete an application-specific task. To the best of our knowledge, this is the first end-to-end NN-based model that can conduct meaningful dialogues in a task-oriented application.
However, there is still much work left to do. Our current model is a text-based dialogue system, which cannot directly handle noisy speech recognition inputs nor can it ask the user for confirmation when it is uncertain. Indeed, the extent to which this type of model can be scaled to much larger and wider domains remains an open question which we hope to pursue in our further work.
# Wizard-of-Oz data collection websites
[Screenshot of the user webpage: an example task (find a restaurant serving gastropub food, any price range, ask for the address), the dialogue history so far, and a text box for the worker's next response before submitting the HIT.]
Figure 4: The user webpage. The worker who plays a user is given a task to follow. For each mturk HIT, he/she needs to type in an appropriate sentence to carry on the dialogue by looking at both the task description and the dialogue history.
[Screenshot of the wizard webpage: the dialogue history, a form with informable and requestable slots to be filled from the user's input, the database search results, and a text box for the wizard's response.]
Figure 5: The wizard page. The wizard's job is slightly more complex: the worker needs to go through the dialogue history, fill in the form (top green) by interpreting the user input at this turn, and type in an appropriate response based on the history and the DB result (bottom green). The DB search result is updated when the form is submitted. The form can be divided into informable slots (top) and requestable slots (bottom), which contain all the labels we need to train the trackers.
# Scoring Table
Table 5: Additional Rt term for delexicalised tokens when using weighted decoding (Equation 14). Not observed means the corresponding tracker puts its highest probability on either the not mentioned or dontcare value, while observed means the highest probability is on one of the categorical values. A positive score encourages the generation of that token while a negative score discourages it.
Delexicalised token        Examples                       Rt (observed)   Rt (not observed)
informable slot token      <s.food>, <s.area>, ...        0.0             0.0
informable value token     <v.food>, <v.area>, ...        +0.05           -0.5
requestable slot token     <s.phone>, <s.address>, ...    +0.2            0.0
requestable value token    <v.phone>, <v.address>, ...    +0.2            0.0
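To make the use of this table concrete, the short sketch below shows one way the Rt adjustments could be folded into decoding. It is schematic only: the additive scoring, the function names, and the example numbers are assumptions for illustration, not the exact form of Equation 14.

```python
# Schematic sketch of weighted decoding with the Rt table above.
# The additive form and all names here are illustrative assumptions.

R_TABLE = {
    # token class: (Rt if the slot is observed, Rt if not observed)
    "informable_slot":   (0.0,  0.0),
    "informable_value":  (0.05, -0.5),
    "requestable_slot":  (0.2,  0.0),
    "requestable_value": (0.2,  0.0),
}

def rescore(candidates, slot_observed):
    """Add the Rt bonus/penalty to each candidate token's base decoder score.

    candidates: list of (token, token_class_or_None, base_score)
    slot_observed: dict mapping a slot name (e.g. 'food') to True when the
                   belief tracker puts most mass on a categorical value.
    """
    rescored = []
    for token, token_class, base_score in candidates:
        bonus = 0.0
        if token_class is not None:
            slot = token.strip("<>").split(".")[-1]
            observed_r, not_observed_r = R_TABLE[token_class]
            bonus = observed_r if slot_observed.get(slot, False) else not_observed_r
        rescored.append((token, base_score + bonus))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# Example: the tracker has observed a food value, so <v.food> gets a small
# bonus; with no observed value it would instead be penalised by -0.5.
print(rescore([("<v.food>", "informable_value", -1.2), ("food", None, -1.0)],
              {"food": True}))
```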
# Acknowledgements
Tsung-Hsien Wen and David Vandyke are supported by Toshiba Research Europe Ltd, Cambridge. The authors would like to thank Ryan Lowe and Lukáš Žilka for their valuable comments.
# Appendix: Sample dialogues
Table 6: Some samples of real conversational logs between online judges and the end-to-end system.
Human and machine turns alternate in each of the dialogues below; each dialogue begins with the human.
: Restaurant in any area that serves Vietnamese food : thanh binh serves vietnamese food type , is there anything else i can help you with ? : What is the phone number and post code : the number is 01223 362456 and the area code is c.b 3, 0 a.f . is there anything else i can do for you ? : Good bye : thank you for using the cambridge restaurant system .
: i am looking for moderately priced halal food : i am sorry , there are no halal restaurants in the moderate price range . would you like something else ? : how about european food : galleria serves european food type and is in the moderate price range . : address and postcode : the address is 33 bridge street and the area code is c.b 2, 1 u.w . is there anything else i can help you with today ? : thanks : galleria is a european restaurant in the moderate price range . would you like their phone ? : thank you : thank you for using the cambridge restaurant system . good bye

: Hi, I'm hungry for some Indonesian food : i am sorry , there are no indonesian restaurants . would you like a different food ? : Oh no. How about Indian? : there are 9 restaurants serving indian types of food . do you have a preference on side of town ? : East side please : sitar tandoori is a indian restaurant in the east side of town . would you like their information ? : Yes, the address and phone number please : sitar tandoori is located at 43 high street cherry hinton cherry hinton and their phone number is 01223 249955 . : Okay, thank you : thank you , goodbye .
: hello : what cuisine would you like ? : afghan : i am sorry , there are no afghan restaurants . would you like something else ? : korean : little seoul serves korean food , it is located at 108 regent street city centre . : is it expensive? : little seoul serves korean food and is in the expensive price range . : what area is that : little seoul is located in the centre area of town . would you like their number ? : yes : the telephone number is 01223 308681 . is there anything else i can help you with today ? : no goodbye : thank you for using the cambridge restaurant system , goodbye .
: I want to find a restaurant in the east part of town that serves creative food : i am sorry , there are no creative restaurants in the east side of town . would you like a different restaurant ? : how about indian food? : curry prince is a indian restaurant in the east of town . would you like their phone number ? : yes, may i have the address and phone number? : curry prince is located at 451 newmarket road fen ditton and their phone is 01223 566388 . is there anything else i can help you with today ? : no, thank you goodbye : thank you for using the cambridge restaurant system . good bye

: what food are you interested ? : english : i am sorry , there are no english restaurants . would you like to try a different food type ? : chinese : there are 8 restaurants serving chinese type of cuisine . do you have a preference of area ? : south part : the good luck chinese food takeaway is a chinese restaurant in the south of town . would you like the phone number ? : no. just address and price range : the good luck chinese food takeaway is located at 82 cherry hinton road cherry hinton . is there anything else i can help you with ? : what is their price range? : the good luck chinese food takeaway is expensively priced chinese
: Is there a gastropub anywhere around here? : there are 3 gastropub restaurants . would you like more information ? : Is there one on the west side? : i am sorry , there are no gastropub restaurants in the west side of town . would you like something else ? : How about east? : royal standard is a gastropub restaurant in the east of town . : What's the address? : the address is 290 mill road city centre . : Thanks, bye : thank you for using the cambridge restaurant system . good bye
# Hardware-oriented Approximation of Convolutional Neural Networks

# ABSTRACT
High computational complexity hinders the widespread usage of Convolutional Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are arguably the most promising approach for reducing both execution time and power consumption. One of the most important steps in accelerator development is hardware-oriented model approximation. In this paper we present Ristretto, a model approximation framework that analyzes a given CNN with respect to numerical resolution used in representing weights and outputs of convolutional and fully connected layers. Ristretto can condense models by using fixed point arithmetic and representation instead of floating point. Moreover, Ristretto fine-tunes the resulting fixed point network. Given a maximum error tolerance of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit. The code for Ristretto is available.
# INTRODUCTION
The annually held ILSVRC competition has seen state-of-the-art classification accuracies by deep networks such as AlexNet by Krizhevsky et al. (2012), VGG by Simonyan & Zisserman (2015), GoogleNet (Szegedy et al., 2015) and ResNet (He et al., 2015). These networks contain millions of parameters and require billions of arithmetic operations.
Various solutions have been offered to reduce the resource-requirement of CNNs. Fixed point arithmetic is less resource hungry compared to floating point. Moreover, it has been shown that fixed point arithmetic is adequate for neural network computation (Hammerstrom, 1990). This observation has been leveraged recently to condense deep CNNs. Gupta et al. (2015) show that networks on datasets like CIFAR-10 (10 image classes) can be trained in 16-bit. Further trimming of the same network uses as low as 7-bit multipliers (Courbariaux et al., 2014). Another approach by Courbariaux et al. (2016) uses binary weights and activations, again on the same network.
The complexity of deep CNNs can be split into two parts. First, the convolutional layers contain more than 90% of the required arithmetic operations. By turning these floating point operations into operations with small fixed point numbers, both the chip area and energy consumption can be significantly reduced. The second resource-intense layer type is fully connected layers, which contain over 90% of the network parameters. As a nice by-product of using bit-width reduced fixed point numbers, the data transfer to off-chip memory is reduced for fully connected layers. In this paper, we concentrate on approximating convolutional and fully connected layers only. Using fixed point arithmetic is a hardware-friendly way of approximating CNNs. It allows the use of smaller processing elements and reduces the memory requirements without adding any computational overhead such as decompression.
Even though it has been shown that CNNs perform well with small fixed point numbers, there exists no thorough investigation of the delicate trade-off between bit-width reduction and accuracy loss. In this paper we present Ristretto, which automatically finds a perfect balance between the bit-width reduction and the given maximum error tolerance. Ristretto performs a fast and fully automated trimming analysis of any given network. This post-training tool can be used for application-specific trimming of neural networks.
# 2 MIXED FIXED POINT PRECISION
In the next two sections we discuss quantization of a floating point CNN to fixed point. Moreover, we explain dynamic fixed point, and show how it can be used to further decrease network size while maintaining the classification accuracy.
[Figure 1 graphic: layer activations (m bits) and weights (n bits) feed an array of multipliers; the products are accumulated in an adder tree whose width grows from m+n+1 bits towards m+n+lg(x) bits, the bias is added in a 32-bit adder, and the result is truncated back to m bits at the layer output.]

Figure 1: Data path of quantized convolutional and fully connected layers.
The data path of fully connected and convolutional layers consists of a series of MAC operations (multiplication and accumulation), as shown in Figure 1. The layer activations are multiplied with the network weights, and the results are accumulated to form the output. As shown by Qiu et al. (2016), it is a good approach to use mixed precision, i.e., different parts of a CNN use different bit-widths.
In Figure 1, m and n refer to the number of bits for layer outputs and layer weights, respectively. Multiplication results are accumulated using an adder tree which gets thicker towards the end. The adder outputs in the first level are m + n + 2 bits wide, and the bit-width grows by 1 bit in each level. In the last level, the bit-width is m + n + lg2(x), where x is the number of multiplication operations per output value. In the last stage, the bias is added to form the layer output. For each network layer, we need to find the right balance between reducing the bit-widths (m and n) and maintaining a good classification accuracy.
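A minimal sketch of this bit-width bookkeeping follows; the helper is an assumed illustration derived from the description above, not Ristretto code.

```python
import math

def accumulator_bits(m, n, x):
    """Approximate bit-width needed at the end of the adder tree.

    m: bit-width of the layer activations, n: bit-width of the weights,
    x: number of multiplication results accumulated per output value.
    Each product takes roughly m + n bits, and accumulating x of them
    grows the width by about log2(x) bits.
    """
    return m + n + math.ceil(math.log2(x))

# Example: 8-bit activations and weights, a 3x3 kernel over 64 input channels.
print(accumulator_bits(m=8, n=8, x=3 * 3 * 64))  # 26 bits
```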
# 3 DYNAMIC FIXED POINT
The different parts of a CNN have a significant dynamic range. In large layers, the outputs are the result of thousands of accumulations, thus the network parameters are much smaller than the layer outputs. Fixed point has only limited capability to cover a wide dynamic range. Dynamic fixed point (Williamson, 1991; Courbariaux et al., 2014) is a solution to this problem. In dynamic fixed point, each number is represented as follows: (-1)^s * 2^(-fl) * sum_{i=0}^{B-2} 2^i * x_i. Here B denotes the bit-width, s the sign bit, fl is the fractional length, and x the mantissa bits. The intermediate values in a network have different ranges. Therefore it is desirable to assign fixed point numbers into groups with constant fl, such that the number of bits allocated to the fractional part is constant within that group. Each network layer is split into two groups: one for the layer outputs, one for the layer weights. This allows to better cover the dynamic range of both layer outputs and weights, as weights are normally significantly smaller. On the hardware side, it is possible to realize dynamic fixed point arithmetic using bit shifters.
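A small sketch of this representation is given below; the helper function is an assumption for illustration, not Ristretto's implementation. It quantizes a floating point value to a B-bit dynamic fixed point number with fractional length fl by scaling, rounding, and saturating.

```python
def to_dynamic_fixed_point(value, bit_width, frac_len):
    """Quantize a float to a bit_width-bit fixed point value with fractional
    length frac_len; representable values are k * 2**(-frac_len) with k in
    the signed range of bit_width bits."""
    scale = 2 ** frac_len
    k = round(value * scale)             # round to nearest step
    k_max = 2 ** (bit_width - 1) - 1     # saturate instead of wrapping around
    k_min = -2 ** (bit_width - 1)
    k = max(k_min, min(k_max, k))
    return k / scale

# Weights and layer outputs would use different fractional lengths: small
# weights keep more fractional bits, large activations more integer bits.
print(to_dynamic_fixed_point(0.0423, bit_width=8, frac_len=10))  # weight-like value
print(to_dynamic_fixed_point(57.81, bit_width=8, frac_len=0))    # activation-like value
```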
Different hardware accelerators for deployment of neural networks have been proposed (Motamedi et al., 2016; Qiu et al., 2016; Han et al., 2016a). The first important step in accelerator design is the compression of the network in question. In the next section we present Ristretto, a tool which can condense any neural network in a fast and automated fashion.
# 4 RISTRETTO: APPROXIMATION FRAMEWORK IN CAFFE
From Caffe to Ristretto According to Wikipedia, Ristretto is "a short shot of espresso coffee made with the normal amount of ground coffee but extracted with about half the amount of water". Similarly, our compressor removes the unnecessary parts of a CNN, while making sure the essence, the ability to predict image classes, is preserved. With its strong community and fast training for deep CNNs, Caffe (Jia et al., 2014) is an excellent framework to build on.
Ristretto takes a trained model as input, and automatically brews a condensed network version. Input and output of Ristretto are a network description file (prototxt) and the network parameters. Optionally, the quantized network can be fine-tuned with Ristretto. The resulting fixed point model in Caffe-format can then be used for a hardware accelerator.
[Figure 2 flowchart: Weight Analysis (determine statistical parameters for effective quantization), Activation Analysis (determine statistical parameters for effective quantization), Bit-Width Reduction (determine the required bit-width for different layers), Fine-tuning (retrain fixed point network parameters), followed by testing the accuracy on the training set to review the effect.]
Figure 2: Network approximation flow with Ristretto.
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
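The weight and activation analysis stages described in the chunk above boil down to measuring dynamic ranges and choosing how many integer bits a dynamic fixed point format needs to avoid saturation. A minimal sketch of that analysis, assuming NumPy and made-up layer data; the helper name and heuristic are ours, not the Ristretto API:

```python
import numpy as np

def integer_length(values):
    """Bits needed for the integer part of a dynamic fixed point format so
    that `values` do not saturate (a negative result means purely fractional)."""
    max_abs = float(np.max(np.abs(values)))
    if max_abs == 0.0:
        return 0
    return int(np.ceil(np.log2(max_abs))) + 1   # +1 keeps room for the sign

rng = np.random.default_rng(0)
conv_weights = rng.normal(scale=0.05, size=(64, 3, 3, 3))     # small dynamic range
layer_outputs = rng.normal(scale=20.0, size=(8, 64, 32, 32))  # large dynamic range

print("integer bits for weights:", integer_length(conv_weights))
print("integer bits for outputs:", integer_length(layer_outputs))
```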
1604.03168 | 9 | Quantization flow Ristretto's quantization flow has five stages (Figure 2) to compress a floating point network into fixed point. In the first step, the dynamic range of the weights is analyzed to find a good fixed point representation. For the quantization from floating point to fixed point, we use round-nearest. The second step runs several thousand images in forward path. The generated layer activations are analyzed to generate statistical parameters. Ristretto uses enough bits in the integer part of fixed point numbers to avoid saturation of layer activations. Next Ristretto performs a binary search to find the optimal number of bits for convolutional weights, fully connected weights, and layer outputs. In this step, a certain network part is quantized, while the rest remains in floating point. Since there are three network parts that should use independent bit-widths (weights of convolutional and fully connected layers as well as layer outputs), iteratively quantizing one network part allows us to find the optimal bit-width for each part. Once a good trade-off between small number representation and classification accuracy is found, the resulting fixed point network is retrained. | 1604.03168#9 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
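The round-nearest quantization used in the first stage of the flow above can be sketched as follows. The total `bit_width` and the integer length `il` are the two knobs of a dynamic fixed point format; the helper itself is an assumed illustration, not code from the framework:

```python
import numpy as np

def quantize_fixed_point(x, bit_width, il):
    fl = bit_width - il                     # fractional length
    step = 2.0 ** (-fl)                     # smallest representable step
    hi = (2.0 ** (bit_width - 1) - 1) * step
    lo = -(2.0 ** (bit_width - 1)) * step
    q = np.round(x / step) * step           # round-to-nearest
    return np.clip(q, lo, hi)               # saturate instead of wrapping

w = np.array([0.731, -0.052, 1.999, -2.5])
print(quantize_fixed_point(w, bit_width=8, il=2))   # an 8-bit format with 2 integer bits
```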
1604.03168 | 10 | Fine-tuning In order to make up for the accuracy drop incurred by quantization, the fixed point network is fine-tuned in Ristretto. During this retraining procedure, the network learns how to classify images with fixed point parameters. Since the network weights can only have discrete values, the main challenge consists in the weight update. We adopt the idea of previous work (Courbariaux et al., 2015) which uses full precision shadow weights. Small weight updates Δw are applied to the full precision weights w, whereas the discrete weights w' are sampled from the full precision weights. The sampling during fine-tuning is done with stochastic rounding. This rounding scheme was successfully used by Gupta et al. (2015) for weight updates of 16-bit fixed point networks.
Ristretto uses the fine-tuning procedure illustrated in Figure 3. For each batch, the full precision weights are quantized to fixed point. During forward propagation, these discrete weights are used to compute the layer outputs y. Each layer l turns its input batch x_l into output y_l, according to its function f_l : (x_l, w') → y_l. Assuming the last layer computes the loss, we denote f as the overall CNN function.
Accepted as a workshop contribution at ICLR 2016 | 1604.03168#10 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
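A hedged sketch of the stochastic rounding used to sample the discrete weights w' from the full precision shadow weights during fine-tuning; the helper and its parameters are illustrative, and the key property is that the rounding is unbiased in expectation:

```python
import numpy as np

def stochastic_round_fixed_point(w, bit_width, il, rng):
    fl = bit_width - il
    step = 2.0 ** (-fl)
    scaled = w / step
    floor = np.floor(scaled)
    # Round up with probability equal to the fractional remainder, so the
    # sampled fixed point weights equal the shadow weights in expectation.
    up = rng.random(w.shape) < (scaled - floor)
    q = (floor + up) * step
    hi = (2.0 ** (bit_width - 1) - 1) * step
    return np.clip(q, -hi - step, hi)

rng = np.random.default_rng(0)
w = np.full(10000, 0.013)                  # sits between two 8-bit quantization levels
wq = stochastic_round_fixed_point(w, bit_width=8, il=2, rng=rng)
print(wq.mean())                           # close to 0.013 on average
```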
1604.03168 | 11 | 3
[Figure 3 diagram: the full precision shadow weights w are sampled to discrete weights w' (stochastic rounding during training, round-nearest for validation); fprop and bprop run with w', the parameter update Δw is applied to w, and the fine-tuned network is benchmarked on the validation data for accuracy.]
Figure 3: Fine-tuning with shadow weights. The left side shows the training process with full-precision shadow weights. On the right side the fine-tuned network is benchmarked on the validation data set. Fixed point values are represented in orange.
The goal of back propagation is to compute the error gradient δf/δw with respect to each fixed point parameter. For parameter updates we use the Adam rule by Kingma & Ba (2015). As an important observation, we do not quantize layer outputs to fixed point during fine-tuning. We use floating point layer outputs instead, which enables Ristretto to analytically compute the error gradient with respect to each parameter. In contrast, the validation of the network is done with fixed point layer outputs. | 1604.03168#11 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
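Putting the pieces together, one fine-tuning step with shadow weights might look like the toy sketch below: the forward pass sees quantized weights, layer outputs stay in floating point, and the gradient update is applied to the float shadow copy. The toy one-layer "network", its loss, and the plain SGD update (instead of the Adam rule) are simplifications of ours:

```python
import numpy as np

def quantize(w, step=2.0 ** -6):
    return np.clip(np.round(w / step) * step, -2.0, 2.0 - step)

def forward(x, w):                 # toy network: loss = mean((x @ w)^2)
    y = x @ w
    return y, np.mean(y ** 2)

def backward(x, y):                # gradient of the toy loss w.r.t. w
    return x.T @ (2.0 * y) / y.size

rng = np.random.default_rng(0)
shadow_w = rng.normal(scale=0.1, size=(4, 1))   # full precision shadow weights
x = rng.normal(size=(32, 4))

for _ in range(10):
    w_q = quantize(shadow_w)                    # discrete weights w'
    y, loss = forward(x, w_q)                   # forward pass uses fixed point weights
    grad = backward(x, y)                       # float gradients
    shadow_w -= 0.05 * grad                     # update only the shadow copy
print("loss after a few steps:", loss)
```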
1604.03168 | 12 | To achieve the best fine-tuning results, we used a learning rate that is an order of magnitude lower than the last full precision training iteration. Since the choice of hyper parameters for retraining is crucial (Bergstra & Bengio, 2012), Ristretto relies on minimal human intervention in this step.
Fast fine-tuning with fixed point parameters Ristretto brews a condensed network with fixed point weights and fixed point layer activations. For simulation of the forward propagation in hardware, Ristretto uses full floating point for accumulation. This follows the thought of Gupta et al. (2015) and conforms to our description of the forward data path in hardware (Figure 2). During fine-tuning, the full precision weights need to be converted to fixed point for each batch, but after that all computation can be done in floating point (Figure 3). Therefore Ristretto can fully leverage optimized matrix-matrix multiplication routines for both forward and backward propagation. Thanks to its fast implementation on the GPU, a fixed point CaffeNet can be tested on the ILSVRC 2014 validation dataset (50k images) in less than 2 minutes (using one Tesla K-40 GPU).
# 5 RESULTS | 1604.03168#12 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
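The fast fixed point simulation described above, quantizing weights and inputs but accumulating in floating point so that optimized GEMM routines can be reused, can be sketched for a single fully connected layer as follows; layer sizes and integer lengths are arbitrary choices, not values from the paper:

```python
import numpy as np

def quantize(x, bit_width, il):
    fl = bit_width - il
    step = 2.0 ** (-fl)
    hi = (2.0 ** (bit_width - 1) - 1) * step
    return np.clip(np.round(x / step) * step, -hi - step, hi)

def fixed_point_fc_layer(x, w, b, bits_in=8, bits_w=8, bits_out=8):
    xq = quantize(x, bits_in, il=4)         # quantized layer input
    wq = quantize(w, bits_w, il=1)          # quantized weights
    acc = xq @ wq + b                       # float accumulation (fast GEMM)
    return quantize(acc, bits_out, il=5)    # only the layer output is re-quantized

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 16))
w = rng.normal(scale=0.1, size=(16, 8))
b = np.zeros(8)
print(fixed_point_fc_layer(x, w, b).shape)  # (2, 8)
```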
1604.03168 | 13 | # 5 RESULTS
In this section we present the results of approximating 32-bit floating point networks by condensed fixed point models. All classification accuracies were obtained running the respective network on the whole validation dataset. We present approximation results of Ristretto for five different networks. First, we consider LeNet (LeCun et al., 1998) which can classify handwritten digits (MNIST dataset). Second, the CIFAR-10 Full model provided by Caffe is used to classify images into 10 different classes. Third, we condense CaffeNet, which is the Caffe version of AlexNet and classifies images into the 1000 ImageNet categories. Fourth, we use the BVLC version of GoogLeNet (Szegedy et al., 2015) to classify images of the same data set. Finally, we approximate SqueezeNet (Iandola et al., 2016), a recently proposed architecture with the classification accuracy of AlexNet, but >50X fewer parameters. | 1604.03168#13 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.03168 | 14 | Impact of dynamic fixed point We used Ristretto to quantize CaffeNet (AlexNet) into fixed point, and compare traditional fixed point with dynamic fixed point. To allow a simpler comparison, all layer outputs and network parameters share the same bit-width. Results show a good performance of static fixed point for as low as 18-bit (Figure 4). However, when reducing the bit-width further, the accuracy starts to drop significantly, while dynamic fixed point has a stable accuracy.
[Figure 4 plot legend: dynamic fixed point versus static fixed point with integer lengths of 9, 10 and 11 bits; axes are classification accuracy (%) over bit-width.]
Figure 4: Impact of dynamic fixed point: The figure shows top-1 accuracy for CaffeNet on the ILSVRC 2014 validation dataset. Integer length refers to the number of bits assigned to the integer part of fixed point numbers. | 1604.03168#14 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
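A small numerical illustration of the static versus dynamic fixed point comparison above: with one shared 8-bit format, small weights and large activations cannot both be represented well, while per-group integer lengths keep both errors low. The data and the formats below are made up purely for illustration:

```python
import numpy as np

def quantize(x, bit_width, il):
    fl = bit_width - il
    step = 2.0 ** (-fl)
    hi = (2.0 ** (bit_width - 1) - 1) * step
    return np.clip(np.round(x / step) * step, -hi - step, hi)

def rel_err(x, q):
    return np.linalg.norm(x - q) / np.linalg.norm(x)

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.02, size=10000)       # tiny values
activations = rng.normal(scale=10.0, size=10000)   # much larger values

# One shared 8-bit format chosen so the activations do not saturate (il=6):
print("shared format - weights error:", rel_err(weights, quantize(weights, 8, 6)))
print("shared format - activations error:", rel_err(activations, quantize(activations, 8, 6)))
# Dynamic fixed point: each group gets its own integer length:
print("per-group il  - weights error:", rel_err(weights, quantize(weights, 8, -2)))
print("per-group il  - activations error:", rel_err(activations, quantize(activations, 8, 6)))
```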
1604.03168 | 15 | We can conclude that dynamic fixed point performs significantly better for such a large network. With dynamic fixed point, we can adapt the number of bits allocated to integer and fractional part, according to the dynamic range of different parts of the network. We will therefore concentrate on dynamic fixed point for the subsequent experiments.
Quantization of individual network parts In this section, we analyze the impact of quantization on different parts of a floating point CNN. Table 1 shows the classification accuracy when the layer outputs, the convolution kernels or the parameters of fully connected layers are quantized to dynamic fixed point.
In all three nets, the convolution kernels and layer activations can be trimmed to 8-bit with an absolute accuracy change of only 0.3%. Fully connected layers are more affected by trimming to 8-bit weights; the absolute change is maximally 0.9%. Interestingly, LeNet weights can be trimmed to as low as 2-bit, with absolute accuracy change below 0.4%.
Table 1: Quantization results for different parts of three networks. Only one number category is cast to fixed point, and the remaining numbers are in floating point format.
Fixed point bit-width
# 16-bit
# 8-bit
# 4-bit
# 2-bit | 1604.03168#15 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
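The per-part bit-width search described earlier, quantize one network part, keep the rest in floating point, and find the smallest bit-width whose accuracy stays within tolerance, can be sketched as a simple binary search. Here `score_with_bits` stands in for a full validation run and is assumed to degrade monotonically as bits are removed:

```python
def smallest_acceptable_bits(score_with_bits, baseline, tol=0.01, lo=2, hi=16):
    best = hi
    while lo <= hi:
        mid = (lo + hi) // 2
        if baseline - score_with_bits(mid) <= tol:
            best, hi = mid, mid - 1      # still accurate enough: try fewer bits
        else:
            lo = mid + 1                 # too lossy: need more bits
    return best

# Toy stand-in: accuracy degrades smoothly below 8 bits.
fake_score = lambda bits: 0.569 if bits >= 8 else 0.569 - 0.05 * (8 - bits)
print(smallest_acceptable_bits(fake_score, baseline=0.569))   # -> 8
```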
1604.03168 | 16 | Fixed point bit-width
# 16-bit
# 8-bit
# 4-bit
# 2-bit
LeNet, 32-bit floating point accuracy: 99.1%
Layer output: 99.1% (16-bit), 99.1% (8-bit), 98.9% (4-bit), 85.9% (2-bit); CONV parameters: 99.1%, 99.1%, 99.1%, 98.9%; FC parameters: 99.1%, 99.1%, 98.9%, 98.7%
Full CIFAR-10, 32-bit floating point accuracy: 81.7%
Layer output: 81.6% (16-bit), 81.6% (8-bit), 79.6% (4-bit), 48.0% (2-bit); CONV parameters: 81.7%, 81.4%, 75.9%, 19.1%; FC parameters: 81.7%, 80.8%, 79.9%, 77.5%
CaffeNet top-1, 32-bit floating point accuracy: 56.9%
Layer output: 56.8% (16-bit), 56.7% (8-bit), 06.0% (4-bit), 00.1% (2-bit); CONV parameters: 56.9%, 56.7%, 00.1%, 00.1%; FC parameters: 56.9%, 56.3%, 00.1%, 00.1% | 1604.03168#16 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.03168 | 17 | Fine-tuning of all considered network parts Here we report the accuracy of five networks that were condensed and fine-tuned with Ristretto. All networks use dynamic fixed point parameters as well as dynamic fixed point layer outputs for convolutional and fully connected layers. LeNet performs well in 2/4-bit, while CIFAR-10 and
the three ImageNet CNNs can be trimmed to 8-bit (see Table 2). Surprisingly, these compressed networks still perform nearly as well as their floating point baseline. The relative accuracy drops of LeNet, CIFAR-10 and SqueezeNet are very small (<0.6%), whereas the approximation of the larger CaffeNet and GoogLeNet incurs a slightly higher cost (0.9% and 2.3% respectively). We hope we will further improve the fine-tuning results of these larger networks in the future.
The SqueezeNet architecture was developed by Iandola et al. (2016) with the goal of a small CNN that performs well on the ImageNet data set. Ristretto can make the already small network even smaller, so that its parameter size is less than 2 MB. This condensed network is well-suited for deployment in smart mobile systems. | 1604.03168#17 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.03168 | 18 | All five 32-bit floating point networks can be approximated well in 8-bit and 4-bit fixed point. For a hardware implementation, this reduces the size of multiplication units by about one order of magnitude. Moreover, the required memory bandwidth is reduced by 4-8X. Finally, it helps to hold 4-8X more parameters in on-chip buffers. The code for reproducing the quantization and fine-tuning results is available1.
Table 2: Fine-tuned networks with dynamic fixed point parameters and outputs for convolutional and fully connected layers. The numbers in brackets indicate accuracy without fine-tuning. | 1604.03168#18 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
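A quick sanity check of the size and bandwidth arguments above, using commonly cited parameter counts (roughly 61M for CaffeNet/AlexNet and 1.25M for SqueezeNet); these counts are outside assumptions, not figures taken from this paper:

```python
def model_size_mb(num_params, bits_per_param):
    return num_params * bits_per_param / 8 / 1e6

for name, params in [("CaffeNet", 61_000_000), ("SqueezeNet", 1_250_000)]:
    print(name,
          "32-bit: %.1f MB" % model_size_mb(params, 32),
          "8-bit: %.1f MB" % model_size_mb(params, 8))
# SqueezeNet at 8 bits comes out around 1.3 MB, consistent with the
# "less than 2 MB" figure mentioned earlier.
```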
1604.03168 | 19 | LeNet (Exp 1): layer outputs 4-bit, CONV parameters 4-bit, FC parameters 4-bit, 32-bit floating point baseline 99.1%, fixed point accuracy 99.0% (98.7%). LeNet (Exp 2): 4-bit, 2-bit, 2-bit, baseline 99.1%, fixed point 98.8% (98.0%). Full CIFAR-10: 8-bit, 8-bit, 8-bit, baseline 81.7%, fixed point 81.4% (80.6%). SqueezeNet top-1: 8-bit, 8-bit, 8-bit, baseline 57.7%, fixed point 57.1% (55.2%). CaffeNet top-1: 8-bit, 8-bit, 8-bit, baseline 56.9%, fixed point 56.0% (55.8%). GoogLeNet top-1: 8-bit, 8-bit, 8-bit, baseline 68.9%, fixed point 66.6% (66.1%). | 1604.03168#19 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.03168 | 20 | A previous work by Courbariaux et al. (2014) concentrates on training with limited numerical precision. They can train a dynamic fixed point network on the MNIST data set using just 7 bits to represent activations and weights. Ristretto doesn't reduce the resource requirements for training, but concentrates on inference instead. Ristretto can produce a LeNet network with 2-bit parameters and 4-bit activations. Our approach is different in that we train with high numerical precision, then quantize to fixed point, and finally fine-tune the fixed point network.
Other works (Courbariaux et al., 2016; Rastegari et al., 2016) can reduce the bit-width even further to as low as 1-bit, using more advanced number encodings than dynamic fixed point. Ristretto's strength lies in its capability to approximate a large number of existing floating point models on challenging data sets. For the five considered networks, Ristretto can quantize activations and weights to 8-bit or lower, at an accuracy drop below 2.3%, compared to the floating point baseline. | 1604.03168#20 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.03168 | 21 | While more sophisticated data compression schemes could be used to achieve higher network size reduction, our approach is very hardware friendly and imposes no additional overhead such as decompression.
# 6 CONCLUSION AND FUTURE WORK
In this work we presented Ristretto, a Caffe-based approximation framework for deep convolutional neural networks. The framework reduces the memory requirements, area for processing elements and overall power consumption for hardware accelerators. A large net like CaffeNet can be quantized to 8-bit for both weights and layer outputs while keeping the network's accuracy change below 1% compared to its 32-bit floating point counterpart. Ristretto is both fast and automated, and we release the code as an open source project.
Ristretto is in its first development stage. We consider adding new features in the future: 1. Shared weights: Fetching codebook indices from off-chip memory, instead of real values (Han et al.,
# 1https://github.com/pmgysel/caffe
Accepted as a workshop contribution at ICLR 2016 | 1604.03168#21 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.03168 | 22 | # 1https://github.com/pmgysel/caffe
2016b). 2. Network pruning as shown by the same authors. 3. Network binarization as shown by Courbariaux et al. (2016) and Rastegari et al. (2016). These additional features will help to reduce the bit-width even further, and to reduce the computational complexity of trimmed networks.
# REFERENCES
Bergstra, J. and Bengio, Y. Random Search for Hyper-Parameter Optimization. The Journal of Machine Learning Research, 13(1):281-305, 2012.
Courbariaux, M., David, J.-P., and Bengio, Y. Training Deep Neural Networks with Low Precision Multiplications. arXiv preprint arXiv:1412.7024, 2014.
Courbariaux, M., Bengio, Y., and David, J.-P. BinaryConnect: Training Deep Neural Networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3105â3113, 2015. | 1604.03168#22 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.03168 | 23 | Courbariaux, M., Hubara, I., Soudry, D., El-Yaniv, R., and Bengio, Y. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
Gupta, S., Agrawal, A., Gopalakrishnan, K., and Narayanan, P. Deep Learning with Limited Numerical Precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1737-1746, 2015.
Hammerstrom, D. A VLSI Architecture for High-Performance, Low-Cost, On-chip Learning. In IJCNN International Joint Conference on Neural Networks, 1990, pp. 537-544. IEEE, 1990.
Han, S., Liu, X., Mao, H., Pu, J., Pedram, A., Horowitz, M. A., and Dally, W. J. EIE: Efficient Inference Engine on Compressed Deep Neural Network. arXiv preprint arXiv:1602.01528, 2016a. | 1604.03168#23 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.03168 | 24 | Han, S., Mao, H., and Dally, W. J. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In International Conference on Learning Representations, 2016b.
He, K., Zhang, X., Ren, S., and Sun, J. Deep Residual Learning for Image Recognition. arXiv preprint arXiv:1512.03385, 2015.
Iandola, F. N., Moskewicz, M. W., Ashraf, K., Han, S., Dally, W. J., and Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv:1602.07360, 2016.
Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., and Darrell, T. Caffe: Convolutional Architecture for Fast Feature Embedding. In Proceedings of the ACM International Conference on Multimedia, pp. 675-678. ACM, 2014.
Kingma, D. and Ba, J. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015. | 1604.03168#24 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.03168 | 25 | Kingma, D. and Ba, J. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Motamedi, M., Gysel, P., Akella, V., and Ghiasi, S. Design Space Exploration of FPGA-Based Deep Convolutional Neural Networks. In 2016 21st Asia and South Paciï¬c Design Automation Conference (ASP-DAC), pp. 575â580. IEEE, 2016. | 1604.03168#25 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.03168 | 26 | Qiu, J., Wang, J., Yao, S., Guo, K., Li, B., Zhou, E., Yu, J., Tang, T., Xu, N., Song, S., Wang, Y., and Yang, H. Going Deeper with Embedded FPGA Platform for Convolutional Neural Network. In Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pp. 26â35, 2016.
Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. arXiv preprint arXiv:1603.05279, 2016.
Simonyan, K. and Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations, 2015.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1â9, 2015. | 1604.03168#26 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 | [
{
"id": "1602.07360"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1602.01528"
}
] |
1604.01696 | 0 | arXiv:1604.01696v1 [cs.CL] 6 Apr 2016
# A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories
# Nasrin Mostafazadeh1, Nathanael Chambers2, Xiaodong He3, Devi Parikh4, Dhruv Batra4, Lucy Vanderwende3, Pushmeet Kohli3, James Allen1,5
1 University of Rochester, 2 United States Naval Academy, 3 Microsoft Research, 4 Virginia Tech, 5 The Institute for Human & Machine Cognition
{nasrinm, james}@cs.rochester.edu, [email protected], {parikh, dbatra}@vt.edu, {xiaohe, lucyv, pkohli}@microsoft.com
# Abstract | 1604.01696#0 | A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories | Representation and learning of commonsense knowledge is one of the
foundational problems in the quest to enable deep language understanding. This
issue is particularly challenging for understanding casual and correlational
relationships between events. While this topic has received a lot of interest
in the NLP community, research has been hindered by the lack of a proper
evaluation framework. This paper attempts to address this problem with a new
framework for evaluating story understanding and script learning: the 'Story
Cloze Test'. This test requires a system to choose the correct ending to a
four-sentence story. We created a new corpus of ~50k five-sentence commonsense
stories, ROCStories, to enable this evaluation. This corpus is unique in two
ways: (1) it captures a rich set of causal and temporal commonsense relations
between daily events, and (2) it is a high quality collection of everyday life
stories that can also be used for story generation. Experimental evaluation
shows that a host of baselines and state-of-the-art models based on shallow
language understanding struggle to achieve a high score on the Story Cloze
Test. We discuss these implications for script and story learning, and offer
suggestions for deeper language understanding. | http://arxiv.org/pdf/1604.01696 | Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen | cs.CL, cs.AI | In Proceedings of the 2016 North American Chapter of the ACL (NAACL
HLT), 2016 | null | cs.CL | 20160406 | 20160406 | [] |
1604.01696 | 1 | # Abstract
Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding causal and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the "Story Cloze Test". This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.
# 1 Introduction | 1604.01696#1 | A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories | Representation and learning of commonsense knowledge is one of the
foundational problems in the quest to enable deep language understanding. This
issue is particularly challenging for understanding casual and correlational
relationships between events. While this topic has received a lot of interest
in the NLP community, research has been hindered by the lack of a proper
evaluation framework. This paper attempts to address this problem with a new
framework for evaluating story understanding and script learning: the 'Story
Cloze Test'. This test requires a system to choose the correct ending to a
four-sentence story. We created a new corpus of ~50k five-sentence commonsense
stories, ROCStories, to enable this evaluation. This corpus is unique in two
ways: (1) it captures a rich set of causal and temporal commonsense relations
between daily events, and (2) it is a high quality collection of everyday life
stories that can also be used for story generation. Experimental evaluation
shows that a host of baselines and state-of-the-art models based on shallow
language understanding struggle to achieve a high score on the Story Cloze
Test. We discuss these implications for script and story learning, and offer
suggestions for deeper language understanding. | http://arxiv.org/pdf/1604.01696 | Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen | cs.CL, cs.AI | In Proceedings of the 2016 North American Chapter of the ACL (NAACL
HLT), 2016 | null | cs.CL | 20160406 | 20160406 | [] |
1604.01696 | 2 | # 1 Introduction
Story understanding is an extremely challenging task in natural language understanding with a long-running history in AI (Charniak, 1972; Winograd, 1972; Turner, 1994; Schubert and Hwang, 2000).
Recently, there has been a renewed interest in story and narrative understanding based on progress made in core NLP tasks. This ranges from generic story telling models to building systems which can compose meaningful stories in collaboration with humans (Swanson and Gordon, 2008). Perhaps the biggest challenge of story understanding (and story generation) is having commonsense knowledge for the interpretation of narrative events. The question is how to provide commonsense knowledge regarding daily events to machines. | 1604.01696#2 | A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories | Representation and learning of commonsense knowledge is one of the
foundational problems in the quest to enable deep language understanding. This
issue is particularly challenging for understanding casual and correlational
relationships between events. While this topic has received a lot of interest
in the NLP community, research has been hindered by the lack of a proper
evaluation framework. This paper attempts to address this problem with a new
framework for evaluating story understanding and script learning: the 'Story
Cloze Test'. This test requires a system to choose the correct ending to a
four-sentence story. We created a new corpus of ~50k five-sentence commonsense
stories, ROCStories, to enable this evaluation. This corpus is unique in two
ways: (1) it captures a rich set of causal and temporal commonsense relations
between daily events, and (2) it is a high quality collection of everyday life
stories that can also be used for story generation. Experimental evaluation
shows that a host of baselines and state-of-the-art models based on shallow
language understanding struggle to achieve a high score on the Story Cloze
Test. We discuss these implications for script and story learning, and offer
suggestions for deeper language understanding. | http://arxiv.org/pdf/1604.01696 | Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen | cs.CL, cs.AI | In Proceedings of the 2016 North American Chapter of the ACL (NAACL
HLT), 2016 | null | cs.CL | 20160406 | 20160406 | [] |
1604.01696 | 3 | A large body of work in story understanding has focused on learning scripts (Schank and Abelson, 1977). Scripts represent structured knowledge about stereotypical event sequences together with their participants. It is evident that various NLP applications (text summarization, co-reference resolution, question answering, etc.) can hugely benefit from the rich inferential capabilities that structured knowledge about events can provide. Given that developing hand-built scripts is extremely time-consuming, there is a serious need for automatically induced scripts. Most relevant to this issue is work on unsupervised learning of "narrative chains" (Chambers and Jurafsky, 2008) and event schemas (Chambers and Jurafsky, 2009; Balasubramanian et al., 2013; Cheung et al., 2013; Nguyen et al., 2015). The first requirement of any learner is to decide on a corpus to drive the learning process. We are foremost interested in a resource that is full of temporal and causal relations between events because causality is a central component of coherency. Personal stories from daily weblogs are good sources of commonsense causal information
# (Gordon | 1604.01696#3 | A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories | Representation and learning of commonsense knowledge is one of the
foundational problems in the quest to enable deep language understanding. This
issue is particularly challenging for understanding casual and correlational
relationships between events. While this topic has received a lot of interest
in the NLP community, research has been hindered by the lack of a proper
evaluation framework. This paper attempts to address this problem with a new
framework for evaluating story understanding and script learning: the 'Story
Cloze Test'. This test requires a system to choose the correct ending to a
four-sentence story. We created a new corpus of ~50k five-sentence commonsense
stories, ROCStories, to enable this evaluation. This corpus is unique in two
ways: (1) it captures a rich set of causal and temporal commonsense relations
between daily events, and (2) it is a high quality collection of everyday life
stories that can also be used for story generation. Experimental evaluation
shows that a host of baselines and state-of-the-art models based on shallow
language understanding struggle to achieve a high score on the Story Cloze
Test. We discuss these implications for script and story learning, and offer
suggestions for deeper language understanding. | http://arxiv.org/pdf/1604.01696 | Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen | cs.CL, cs.AI | In Proceedings of the 2016 North American Chapter of the ACL (NAACL
HLT), 2016 | null | cs.CL | 20160406 | 20160406 | [] |
1604.01696 | 4 | # (Gordon
and Swanson, 2009; Manshadi et al., 2008), but teasing out useful information from noisy blog entries is a problem of its own. Consider the following snippet from the ICWSM 2011 Spinn3r Dataset of weblog entries (Burton et al., 2009):
"I had an interesting day in the studio today. It was so interesting that I took pictures along the way to describe it to you. Sometimes I like to read an autobiography/biography to discover how someone got from there to here.....how they started, how they traveled in mind and spirit, what made them who they are now. Well, today, my work was a little like that."
This text is full of discourse complexities. A host of challenging language understanding tasks are required to get at the commonsense knowledge embedded within such text snippets. What is needed is a simplified version of these narratives. This paper introduces a new corpus of such short commonsense stories. With careful prompt design and multiple phases of quality control, we collected 50k high quality five-sentence stories that are full of stereotypical causal and temporal relations between events. The corpus not only serves as a resource for learning commonsense narrative schemas, but is also suitable for training story generation models. We describe this corpus in detail in Section 3. | 1604.01696#4 | A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories | Representation and learning of commonsense knowledge is one of the
foundational problems in the quest to enable deep language understanding. This
issue is particularly challenging for understanding casual and correlational
relationships between events. While this topic has received a lot of interest
in the NLP community, research has been hindered by the lack of a proper
evaluation framework. This paper attempts to address this problem with a new
framework for evaluating story understanding and script learning: the 'Story
Cloze Test'. This test requires a system to choose the correct ending to a
four-sentence story. We created a new corpus of ~50k five-sentence commonsense
stories, ROCStories, to enable this evaluation. This corpus is unique in two
ways: (1) it captures a rich set of causal and temporal commonsense relations
between daily events, and (2) it is a high quality collection of everyday life
stories that can also be used for story generation. Experimental evaluation
shows that a host of baselines and state-of-the-art models based on shallow
language understanding struggle to achieve a high score on the Story Cloze
Test. We discuss these implications for script and story learning, and offer
suggestions for deeper language understanding. | http://arxiv.org/pdf/1604.01696 | Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen | cs.CL, cs.AI | In Proceedings of the 2016 North American Chapter of the ACL (NAACL
HLT), 2016 | null | cs.CL | 20160406 | 20160406 | [] |
1604.01696 | 5 | This new corpus also addresses a problem facing script learning over the past few years. Despite the attention scripts have received, progress has been inhibited by the lack of a systematic evaluation framework. A commonly used evaluation is the "Narrative Cloze Test" (Chambers and Jurafsky, 2008) in which a system predicts a held-out event (a verb and its arguments) given a set of observed events. For example, the following is one such test with a missing event: {X threw, pulled X, told X, ???, X completed}1. As is often the case, several works now optimize to this specific test, achieving higher scores with shallow techniques. This is problematic because the models often are not learning commonsense knowledge, but rather how to beat the shallow test.
This paper thus introduces a new evaluation framework called the Story Cloze Test. Instead of predicting an event, the system is tasked with choosing an entire sentence to complete the given story.
1Narrative cloze tests were not meant to be human solvable.
We collected 3,742 doubly verified Story Cloze Test cases. The test is described in detail in Section 4. | 1604.01696#5 | A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories | Representation and learning of commonsense knowledge is one of the
foundational problems in the quest to enable deep language understanding. This
issue is particularly challenging for understanding casual and correlational
relationships between events. While this topic has received a lot of interest
in the NLP community, research has been hindered by the lack of a proper
evaluation framework. This paper attempts to address this problem with a new
framework for evaluating story understanding and script learning: the 'Story
Cloze Test'. This test requires a system to choose the correct ending to a
four-sentence story. We created a new corpus of ~50k five-sentence commonsense
stories, ROCStories, to enable this evaluation. This corpus is unique in two
ways: (1) it captures a rich set of causal and temporal commonsense relations
between daily events, and (2) it is a high quality collection of everyday life
stories that can also be used for story generation. Experimental evaluation
shows that a host of baselines and state-of-the-art models based on shallow
language understanding struggle to achieve a high score on the Story Cloze
Test. We discuss these implications for script and story learning, and offer
suggestions for deeper language understanding. | http://arxiv.org/pdf/1604.01696 | Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen | cs.CL, cs.AI | In Proceedings of the 2016 North American Chapter of the ACL (NAACL
HLT), 2016 | null | cs.CL | 20160406 | 20160406 | [] |
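A hypothetical sketch of what one Story Cloze Test item might look like as a data structure: a four-sentence context, candidate fifth sentences, and the index of the right ending. The field names, the assumption of exactly two candidates, and the example story are illustrative, not the released file format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StoryClozeItem:
    context: List[str]          # four-sentence story beginning
    endings: List[str]          # candidate fifth sentences (assumed: one right, one wrong)
    label: int                  # index of the correct ending

item = StoryClozeItem(
    context=["Karen was assigned a roommate her first year of college.",
             "Her roommate asked her to go to a nearby city for a concert.",
             "Karen agreed happily.",
             "The show was absolutely exhilarating."],
    endings=["Karen became good friends with her roommate.",
             "Karen hated her roommate."],
    label=0,
)
assert item.endings[item.label].startswith("Karen became")
```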
1604.01696 | 6 | We collected 3,742 doubly verified Story Cloze Test cases. The test is described in detail in Section 4.
Finally, this paper proposes several models, including the most recent state-of-the-art approaches for the narrative cloze test, for tackling the Story Cloze Test. The results strongly suggest that achieving better than random or constant-choose performance requires richer semantic representation of events together with deeper levels of modeling the semantic space of stories. We believe that switching to the Story Cloze Test as the empirical evaluation framework for story understanding and script learning can help direct the field to a new direction of deeper language understanding.
# 2 Related Work | 1604.01696#6 | A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories | Representation and learning of commonsense knowledge is one of the
foundational problems in the quest to enable deep language understanding. This
issue is particularly challenging for understanding casual and correlational
relationships between events. While this topic has received a lot of interest
in the NLP community, research has been hindered by the lack of a proper
evaluation framework. This paper attempts to address this problem with a new
framework for evaluating story understanding and script learning: the 'Story
Cloze Test'. This test requires a system to choose the correct ending to a
four-sentence story. We created a new corpus of ~50k five-sentence commonsense
stories, ROCStories, to enable this evaluation. This corpus is unique in two
ways: (1) it captures a rich set of causal and temporal commonsense relations
between daily events, and (2) it is a high quality collection of everyday life
stories that can also be used for story generation. Experimental evaluation
shows that a host of baselines and state-of-the-art models based on shallow
language understanding struggle to achieve a high score on the Story Cloze
Test. We discuss these implications for script and story learning, and offer
suggestions for deeper language understanding. | http://arxiv.org/pdf/1604.01696 | Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen | cs.CL, cs.AI | In Proceedings of the 2016 North American Chapter of the ACL (NAACL
HLT), 2016 | null | cs.CL | 20160406 | 20160406 | [] |