∆θ ∝ c^d_{t,trunc.} ∇_θ log π_θ(a^d_t | h^d_t) r^d_t,   where d ∼ Uniform(1, D), t ∼ Uniform(1, T^d),   (14)
Off-policy Evaluation: To evaluate the policy, we estimate the expected return (Precup 2000):
E_{π_θ}[R] ≈ Σ_{d,t} c^d_{t,trunc.} r^d_t.   (15)
Furthermore, by substituting r^d_t with a constant reward of 1.0 for each time step, we can compute the estimated number of time steps per episode under the policy. As will be discussed later, this is an orthogonal metric based on which we can analyse and evaluate each policy. However, this estimate does not include the number of priority responses, since there are no actions for the agent to take when there is a priority response.
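To make the estimators above concrete, here is a minimal Python sketch of the off-policy evaluation in eq. (15). It assumes that c^d_{t,trunc.} is the product of per-step likelihood ratios π_θ(a|h)/q(a|h) up to time t, truncated at a constant, and that the final estimate is averaged over dialogues; the truncation constant, the normalization, and the function names are assumptions rather than the paper's exact implementation. Passing a constant reward of 1.0 recovers the estimated number of time steps per episode.

```python
import numpy as np

def truncated_importance_weight(pi_probs, q_probs, c_max=1.0):
    """Cumulative importance weight c_{t,trunc.} for one dialogue prefix:
    product of per-step ratios pi_theta(a|h) / q(a|h), truncated at c_max
    (the truncation constant here is an illustrative assumption)."""
    ratio = float(np.prod(np.asarray(pi_probs) / np.asarray(q_probs)))
    return min(ratio, c_max)

def off_policy_estimate(dialogues, policy_prob, behaviour_prob, c_max=1.0):
    """Estimate the expected return under pi_theta as in eq. (15).

    `dialogues` is a list of episodes, each a list of (history, action, reward)
    tuples logged under the behaviour policy. Passing reward = 1.0 for every
    step instead estimates the expected number of time steps per episode."""
    total = 0.0
    for episode in dialogues:
        pi_probs, q_probs = [], []
        for history, action, reward in episode:
            pi_probs.append(policy_prob(history, action))
            q_probs.append(behaviour_prob(history, action))
            c_t = truncated_importance_weight(pi_probs, q_probs, c_max)
            total += c_t * reward
    return total / max(len(dialogues), 1)   # per-dialogue average (assumed normalization)
```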
Training: We initialize the policy model with the parameters of Supervised AMT, and then train the parameters w.r.t. eq. (14) with stochastic gradient descent using Adam. We use a set of a few thousand dialogues recorded between Alexa users and a preliminary version of the system. About 60% of these examples are used for training, and about 20% are used for development and testing. To reduce the risk of overfitting, we only train the weights related to the second last layer using off-policy REINFORCE. We use a random grid search with different hyper-parameters, which include the temperature parameter λ and the learning rate. We select the hyper-parameters with the highest expected return on the development set.
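The hyper-parameter search described above can be sketched as follows; the candidate grids and number of trials are illustrative assumptions, and `train_policy` / `dev_expected_return` stand in for the REINFORCE update of eq. (14) and the off-policy estimate of eq. (15).

```python
import random

def random_hyperparameter_search(train_policy, dev_expected_return, n_trials=10, seed=0):
    """Random grid search over the temperature lambda and the learning rate,
    keeping the policy with the highest expected return on the development set."""
    rng = random.Random(seed)
    temperature_grid = (0.5, 1.0, 2.0, 5.0)       # illustrative values
    learning_rate_grid = (1e-4, 1e-3, 1e-2)       # illustrative values
    best = None
    for _ in range(n_trials):
        lam = rng.choice(temperature_grid)
        lr = rng.choice(learning_rate_grid)
        policy = train_policy(temperature=lam, learning_rate=lr)
        score = dev_expected_return(policy)
        if best is None or score > best[0]:
            best = (score, {"temperature": lam, "learning_rate": lr}, policy)
    return best
```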
# 4.6 Off-policy REINFORCE with Learned Reward Function
Similar to the Supervised Learned Reward policy, we may use the reward model for training with the Off-policy REINFORCE algorithm. This section describes how we combine the two approaches.
Reward Shaping with Learned Reward Model: We use the reward model to compute a new estimate for the reward at each time step in each dialogue:
r^d_t := 0 if the user utterance at time t + 1 has negative sentiment, and g_φ(h_t, a_t) otherwise.   (16)
This is substituted into eq. (14) for training and into eq. (15) for evaluation.
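A minimal sketch of this shaping rule follows; the zero reward returned when the following user utterance has negative sentiment matches the reconstruction of eq. (16) above and should be treated as an assumption, as should the helper names.

```python
def shaped_reward(g_phi, sentiment_of, history, action, next_user_utterance):
    """Reward shaping with the learned reward model g_phi, following eq. (16):
    fall back to a fixed low reward (zero here, an assumption) when the user's
    next utterance carries negative sentiment, otherwise use g_phi(h_t, a_t)."""
    if sentiment_of(next_user_utterance) == "negative":
        return 0.0
    return g_phi(history, action)
```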
Training: As with Off-policy REINFORCE, we initialize the policy model with the parameters of the Supervised AMT model, and then train the parameters w.r.t. eq. (14) with mini-batch stochastic gradient descent using Adam. We use the same set of dialogues and split as Off-policy REINFORCE. As before, to reduce the risk of overfitting, we only train the weights related to the second last layer using this method. We use a random grid search with different hyper-parameters, which include the temperature parameter λ and the learning rate, and select the hyper-parameters with the highest expected return on the development set. In this case, the expected return is computed according to the learned reward model. As this policy uses the learned reward model, we call it Off-policy REINFORCE Learned Reward.
# 4.7 Q-learning with the Abstract Discourse Markov Decision Process
The approaches described so far each have their own advantages and disadvantages. One way to quantify their differences is through a decomposition known as the bias-variance trade-off. At one end of the spectrum, the Supervised AMT policy has low variance, because it was trained with hundreds of thousands of human annotations at the level of each model response. However, for the same reason, Supervised AMT incurs a substantial bias, because the human annotations do not reflect the real user satisfaction for an entire conversation. At the other end of the spectrum, Off-policy REINFORCE suffers from high variance, because it was trained with only a few thousand dialogues and corresponding user scores. To make matters worse, the user scores are affected by many external factors (e.g. user profile, user expectations, and so on) and occur at the granularity of an entire conversation. Nevertheless, this method incurs low bias because it directly optimizes the objective metric we care about (i.e. the user score).22
22 Due to truncated importance weights, however, the off-policy REINFORCE training procedure is still biased.
By utilizing a learned reward function, Supervised Learned Reward and Off-policy REINFORCE Learned Reward suffer less from bias, but since the learned reward function has its own variance component, they are both bound to have higher variance. In general, finding the optimal trade-off between bias and variance can be notoriously difficult. In this section we propose a novel method for trading off bias and variance by learning the policy from simulations in an approximate Markov decision process.
Motivation: A Markov decision process (MDP) is a framework for modeling sequential decision making (Sutton & Barto 1998). In the general setting, an MDP is a model consisting of a discrete set of states H, a discrete set of actions A, a transition distribution function P, a reward distribution function R, and a discount factor γ. As before, an agent aims to maximize its reward during each episode. Let t denote the time step of an episode with length T. At time step t, the agent is in state h_t ∈ H and takes action a_t ∈ A. Afterwards, the agent receives reward r_t ∼ R(h_t, a_t) and transitions to a new state h_{t+1} ∼ P(h | h_t, a_t).
Given an MDP model for open-domain conversations, there are dozens of algorithms we could apply to learn the agent's policy (Sutton & Barto 1998). Unfortunately, such an MDP is difficult to build or estimate. We could try to naively estimate one from the recorded dialogues, but this would require solving two extremely difficult problems. First, we would need to learn the transition distribution P, which outputs the next user utterance in the dialogue given the dialogue history. This problem is likely to be as difficult as our original problem of finding an appropriate response to the user! Second, we would need to learn the reward distribution R for each time step. However, as we have shown earlier, it is very difficult to learn to predict the user score for an entire dialogue. Given the data we have available, estimating the reward for a single turn is likely also going to be difficult. Instead, we propose to tackle the problem by splitting it into three smaller parts.
Figure 7: Probabilistic directed graphical model for the Abstract Discourse Markov Decision Process. For each time step t, z_t is a discrete random variable which represents the abstract state of the dialogue, h_t represents the dialogue history, a_t represents the action taken by the system (i.e. the selected response), y_t represents the sampled AMT label and r_t represents the sampled reward.
The Abstract Discourse Markov Decision Process: The model we propose to learn is called the Abstract Discourse MDP. As illustrated in Figure 7, the model follows a hierarchical structure at each time step. At time t, the agent is in state z_t ∈ Z, a discrete random variable representing the abstract discourse state. This variable only represents a few high-level properties related to the dialogue history. We define the set Z as the Cartesian product:
Z = Z_{Dialogue act} × Z_{User sentiment} × Z_{Generic user utterance},   (17)
where Z_{Dialogue act} = {Accept, Reject, Request, Politics, Generic Question, Personal Question, Statement, Greeting, Goodbye, Other} represents the dialogue act, i.e. the intention of the user's utterance (Stolcke et al. 2000). The second set consists of sentiment types: Z_{User sentiment} = {Negative, Neutral, Positive}. The third set represents a binary variable: Z_{Generic user utterance} = {True, False}. This variable is True only when the user utterance is generic and topic-independent (i.e. when the user utterance only contains stop-words). We build a hand-crafted deterministic classifier, which maps a dialogue history to the corresponding classes in Z_{Dialogue act}, Z_{User sentiment} and Z_{Generic user utterance}. We denote this mapping f_{h→z}. Although we only consider dialogue acts, sentiment and generic utterances, it is trivial to expand the abstract discourse state with other types of discrete or real-valued variables.
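As a sketch of the abstract discourse state, the set Z of eq. (17) and the deterministic mapping f_{h→z} could look as follows; the helper classifiers and the stop-word test stand in for the hand-crafted rules, which are not spelled out in this section.

```python
from dataclasses import dataclass
from itertools import product

DIALOGUE_ACTS = ("Accept", "Reject", "Request", "Politics", "Generic Question",
                 "Personal Question", "Statement", "Greeting", "Goodbye", "Other")
SENTIMENTS = ("Negative", "Neutral", "Positive")
GENERIC = (True, False)

# Z is the Cartesian product of the three sets (eq. 17): 10 x 3 x 2 = 60 abstract states.
ABSTRACT_STATES = list(product(DIALOGUE_ACTS, SENTIMENTS, GENERIC))

@dataclass(frozen=True)
class AbstractState:
    dialogue_act: str
    sentiment: str
    generic: bool

def f_h_to_z(dialogue_history, classify_act, classify_sentiment, stop_words):
    """Deterministic mapping f_{h->z}: classify the last user utterance into a
    dialogue act and a sentiment, and mark it generic if it contains only
    stop-words. The two classifiers are placeholders for hand-crafted rules."""
    last_utterance = dialogue_history[-1]
    tokens = last_utterance.lower().split()
    return AbstractState(
        dialogue_act=classify_act(last_utterance),
        sentiment=classify_sentiment(last_utterance),
        generic=bool(tokens) and all(token in stop_words for token in tokens),
    )
```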
Given a sample z_t, the Abstract Discourse MDP samples a dialogue history h_t from a finite set of dialogue histories H. In particular, h_t is sampled uniformly at random from the set of dialogue histories where the last utterance is mapped to z_t:
h_t ∼ P(h | H, f_{h→z}, z_t) := Uniform({h | h ∈ H and f_{h→z}(h) = z_t}).   (18)

In other words, h_t is a dialogue history whose dialogue act, user sentiment and generic property are identical to those of the discrete variable z_t.
For our purposes, H is the set of all recorded dialogues between Alexa users and a preliminary version of the system. This formally makes the Abstract Discourse MDP a non-parametric model, since sampling from the model requires access to the set of recorded dialogue histories H. This set grows over time when the system is deployed in practice. This is useful, because it allows us to continuously improve the policy as new data becomes available. Further, it should be noted that the set Z is small enough that every possible state is observed several times in the recorded dialogues.
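The uniform sampling rule above amounts to filtering the recorded histories by their abstract state; a minimal sketch, assuming `recorded_histories` plays the role of H and `f_h_to_z` maps a history to its abstract state:

```python
import random

def sample_dialogue_history(recorded_histories, f_h_to_z, z_t, rng=random):
    """Sample h_t uniformly at random from the recorded dialogue histories whose
    last utterance maps to the abstract state z_t (the non-parametric part of
    the Abstract Discourse MDP)."""
    matching = [h for h in recorded_histories if f_h_to_z(h) == z_t]
    if not matching:
        raise ValueError("no recorded dialogue history maps to this abstract state")
    return rng.choice(matching)
```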
Given a sample h_t, the agent chooses an action a_t according to its policy π_θ(a_t | h_t), with parameters θ. A reward r_t is then sampled such that r_t ∼ R(h_t, a_t), where R is a distribution function. In our case, we use the probability function P_θ̂, where the parameters θ̂ are estimated using supervised learning on AMT labels in eq. (6). We specify a reward of −2.0 for a "very poor" response class, a reward of −1.0 for a "poor" response class, a reward of 0.0 for an "acceptable" response class, a reward of 1.0 for a "good" response class and a reward of 2.0 for an "excellent" response class. To reduce the number of hyperparameters, we use the expected reward instead of a sample:23
r_t = P_θ̂(y | h_t, a_t)^T [−2.0, −1.0, 0.0, 1.0, 2.0].   (19)
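Concretely, eq. (19) reduces to a dot product between the scoring model's class probabilities and the per-class rewards listed above; a minimal sketch:

```python
import numpy as np

# Rewards assigned to the five appropriateness classes, as specified in the text.
CLASS_REWARDS = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # "very poor" ... "excellent"

def expected_reward(class_probs):
    """Expected reward of eq. (19): P_theta_hat(y | h_t, a_t)^T [-2, -1, 0, 1, 2]."""
    return float(np.asarray(class_probs, dtype=float) @ CLASS_REWARDS)

# A response judged mostly "good" receives a positive expected reward:
print(expected_reward([0.05, 0.10, 0.25, 0.45, 0.15]))   # 0.55
```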
Next, a variable y_t ∈ {"very poor", "poor", "acceptable", "good", "excellent"} is sampled:
y_t ∼ P_θ̂(y | h_t, a_t). This variable represents one appropriateness interpretation of the output. This variable helps predict the future state z_{t+1}, because the overall appropriateness of a response has a significant impact on the user's next utterance (e.g. very poor responses often cause users to respond with "What?" or "I don't understand."). Finally, a new state z_{t+1} is sampled according to P_ψ̂:
z_{t+1} ∼ P_ψ̂(z | z_t, h_t, a_t, y_t), where P_ψ̂ is the transition distribution with parameters ψ̂. The transition distribution is parametrized by three independent two-layer MLP models, which take as input the same features as the scoring function, as well as 1) a one-hot vector representing the sampled response class y_t, 2) a one-hot vector representing the dialogue act of the last user utterance, 3) a one-hot vector representing the sentiment of the last user utterance, 4) a binary variable indicating whether the last user utterance was generic, and 5) a binary variable indicating whether the last user utterance contained a wh-word (e.g. what, who). The first MLP predicts the next dialogue act, the second MLP predicts the next sentiment type and the third MLP predicts whether the next user utterance is generic. The dataset for training the MLPs consists of 499,757 transitions, of which 70% are used for training and 30% for evaluation. The MLPs are trained with maximum log-likelihood using mini-batch stochastic gradient descent. We use Adam and early-stop on a hold-out set. Due to the large number of examples, no
regularization is used. The three MLP models obtain a joint perplexity of 19.51. In comparison, a baseline model, which always assigns the average class frequency as the output probability, obtains a perplexity of 23.87. On average, this means that roughly 3–4 possible z_{t+1} states can be eliminated by conditioning on the previous variables z_t, h_t, a_t and y_t. In other words, the previous state z_t and h_t, together with the agent's action a_t, have a significant effect on the future state z_{t+1}. This means that an agent trained in the Abstract Discourse MDP has the potential to learn to take into account future states of the dialogue when selecting its action. This is in contrast to policies learned using supervised learning, which do not consider future dialogue states.
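Putting the pieces together, one simulated time step of the Abstract Discourse MDP can be sketched as below. The callables are placeholders for the components described above (the uniform history sampler, the scoring model P_θ̂, and the transition MLPs P_ψ̂); the sketch is illustrative rather than the paper's implementation.

```python
import numpy as np

APPROPRIATENESS_CLASSES = ["very poor", "poor", "acceptable", "good", "excellent"]
CLASS_REWARDS = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

def abstract_mdp_step(z_t, policy, sample_history, class_probs, sample_next_state, rng=None):
    """One simulated step: draw a concrete history for the abstract state, let the
    policy pick a response, compute the expected reward of eq. (19), sample an
    appropriateness label y_t, and sample the next abstract state z_{t+1}."""
    rng = rng or np.random.default_rng()
    h_t = sample_history(z_t)                               # uniform over matching recorded histories
    a_t = policy(h_t)                                       # response selected by the agent
    probs = np.asarray(class_probs(h_t, a_t), dtype=float)  # P_theta_hat(y | h_t, a_t)
    r_t = float(probs @ CLASS_REWARDS)
    y_t = rng.choice(APPROPRIATENESS_CLASSES, p=probs)      # sampled appropriateness label
    z_next = sample_next_state(z_t, h_t, a_t, y_t)          # transition MLPs P_psi_hat
    return h_t, a_t, r_t, y_t, z_next
```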
Table 3: AMT evaluation of the policies on the full test set and the difficult test set (score mean with 90% confidence interval; score standard deviation with 90% confidence interval).

| Policy | Full test set: Score mean | Full test set: Score std | Difficult test set: Score mean | Difficult test set: Score std |
|---|---|---|---|---|
| Alicebot | 2.19 ± 0.03 | 1.17 [1.15, 1.20] | 1.79 ± 0.03 | 0.88 [0.86, 0.90] |
| Evibot + Alicebot | 2.25 ± 0.04 | 1.22 [1.20, 1.25] | 1.79 ± 0.03 | 0.86 [0.84, 0.88] |
| Supervised AMT | 2.63 ± 0.04 | 1.34 [1.31, 1.37] | 2.34 ± 0.04 | 1.26 [1.23, 1.29] |
| Off-policy REINFORCE | 2.61 ± 0.04 | 1.33 [1.31, 1.36] | 2.30 ± 0.04 | 1.25 [1.22, 1.28] |
| Q-learning AMT | 2.64 ± 0.04 | 1.37 [1.34, 1.40] | 2.35 ± 0.04 | 1.31 [1.28, 1.34] |
The idea of modeling a high-level abstraction of the dialogue, z_t, is related to the dialogue state tracking challenge (Williams et al. 2013, 2016). In this challenge, the task is to map the dialogue history to a discrete state representing all salient information about the dialogue. Unlike the dialogue state tracking challenge, however, the variable z_t only includes limited salient information about the dialogue. For example, in our implementation, z_t does not include topical information. As such, z_t is only a partial representation of the dialogue history.
Training: Given the Abstract Discourse MDP, we are now able to learn policies directly from simulations. We use Q-learning with experience replay to learn the policy parametrized as an action-value function (Mnih et al. 2013). Q-learning is a simple off-policy reinforcement learning algorithm, which has been shown to be effective for training policies parametrized by neural networks. For experience replay, we use a memory buffer of size 1000. We use an ε-greedy exploration scheme with ε = 0.1. We experiment with discount factors γ ∈ {0.1, 0.2, 0.5}. As before, the parameters are updated using Adam. To reduce the risk of overfitting, we only train the weights related to the final output layer and the skip-connection (shown in dotted lines in Figure 2) using Q-learning.
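A compact sketch of this training loop: ε-greedy exploration with ε = 0.1, a replay buffer of 1000 transitions, and one-step Q-learning targets with a discount from the stated grid. `q_values`, `env_step`, and `adam_update` are placeholders, and the episode length and batch size are assumptions; in the paper only the final output layer and the skip-connection weights are updated.

```python
import random
from collections import deque

import numpy as np

def train_q_learning(q_values, env_step, initial_state, adam_update,
                     n_episodes=100, max_steps=20, gamma=0.2, epsilon=0.1,
                     buffer_size=1000, batch_size=32, seed=0):
    """Q-learning with experience replay in the Abstract Discourse MDP (sketch).
    q_values(state) -> vector of action values; env_step(state, action) ->
    (next_state, reward, done); adam_update(batch, targets) applies one
    gradient step on the trainable weights."""
    rng = random.Random(seed)
    replay = deque(maxlen=buffer_size)
    for _ in range(n_episodes):
        state = initial_state()
        for _ in range(max_steps):
            values = q_values(state)
            if rng.random() < epsilon:                       # epsilon-greedy exploration
                action = rng.randrange(len(values))
            else:
                action = int(np.argmax(values))
            next_state, reward, done = env_step(state, action)
            replay.append((state, action, reward, next_state, done))
            if len(replay) >= batch_size:                    # one-step TD targets on a replayed batch
                batch = rng.sample(list(replay), batch_size)
                targets = [r + (0.0 if d else gamma * float(np.max(q_values(s2))))
                           for (_, _, r, s2, d) in batch]
                adam_update(batch, targets)
            state = next_state
            if done:
                break
```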
Training is carried out in two alternating phases. We train the policy for 100 episodes. Then, we evaluate the policy for 100 episodes w.r.t. average return. Afterwards, we continue training the policy for another 100 episodes. During evaluation, each dialogue history is sampled from a separate set of dialogue histories, H_Eval, which is disjoint from the set of dialogue histories, H_Train, used at training time. This ensures that the policy is not overfitting our finite set of dialogue histories. For each hyper-parameter combination, we train the policy between 400 and 600 episodes. We select the policy which performs best w.r.t. average return. To keep notation brief, we call this policy Q-learning AMT.
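The alternating schedule with disjoint history sets can be sketched as a simple loop; `train_block` and `evaluate_block` wrap the Q-learning procedure above and the average-return evaluation, and the best-performing snapshot is kept.

```python
def alternating_training(policy, train_block, evaluate_block, h_train, h_eval,
                         max_episodes=600, block=100):
    """Alternate blocks of 100 training episodes (histories from H_Train) with 100
    evaluation episodes (histories from the disjoint H_Eval), returning the policy
    snapshot with the highest average return."""
    best_return, best_policy = float("-inf"), policy
    for _ in range(0, max_episodes, block):
        policy = train_block(policy, h_train, episodes=block)
        avg_return = evaluate_block(policy, h_eval, episodes=block)
        if avg_return > best_return:
            best_return, best_policy = avg_return, policy
    return best_policy, best_return
```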
# 4.8 Preliminary Evaluation
In this section, we carry out a preliminary evaluation of the response model selection policies.
AMT Evaluation: We first evaluate the learned policies w.r.t. the human scores in the AMT test set. We measure the average performance as a real-valued scalar, where the label "Very poor" is given a score of 1, the label "Poor" is given a score of 2 and so on. We also report standard deviations for the scores, which measure the variance or risk the policies are willing to take; higher standard deviations indicate that a policy is more likely to select responses which result in extreme labels (e.g. "Very poor" and "Excellent"). For both means and standard deviations we report 90% confidence intervals estimated under the assumption that the scores are Gaussian-distributed. In addition to measuring performance on the full test set, we also measure performance on a subset of the test set where neither Alicebot nor Evibot had responses labeled "Good" or "Excellent". These are test examples where an appropriate response is likely to come only from some of the other models. Determining an appropriate response for these examples is likely to be more difficult. We refer to this subset as the "Difficult test set".
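Concretely, the per-policy summary statistics can be computed as below; the label-to-score mapping and the 90% Gaussian confidence interval follow the description above, while the interval for the standard deviation (reported in Table 3) is omitted from this sketch.

```python
import numpy as np

LABEL_TO_SCORE = {"very poor": 1, "poor": 2, "acceptable": 3, "good": 4, "excellent": 5}

def summarize_amt_scores(labels):
    """Mean score with a 90% Gaussian confidence interval, plus the sample
    standard deviation, for one policy's AMT labels."""
    scores = np.array([LABEL_TO_SCORE[label] for label in labels], dtype=float)
    mean = scores.mean()
    std = scores.std(ddof=1)
    half_width = 1.645 * std / np.sqrt(len(scores))   # two-sided 90% Gaussian quantile
    return mean, half_width, std
```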
We evaluate the policies Supervised AMT, Off-policy REINFORCE and Q-learning AMT. In addition, we also evaluate two heuristic policies: 1) a policy selecting only Alicebot responses, called Alicebot, and 2) a policy selecting Evibot responses when possible and Alicebot responses otherwise, called Evibot + Alicebot.
The results are given in Table 3. The results show that the three learned policies are all significantly better w.r.t. mean score compared to both Alicebot and Evibot + Alicebot. Not surprisingly, this difference is amplified on the difficult test set. Q-learning AMT, Supervised AMT and Off-policy REINFORCE appear to perform overall equally well. This shows that machine learning has helped learn effective policies, able to select other model responses when neither the Alicebot nor Evibot responses are appropriate. Next, the results show that Q-learning AMT has higher standard deviations than the other policies on both the full test set and the difficult test set. Furthermore, since these standard deviations are evaluated at the level of a single response, we might expect this variability to compound throughout an entire conversation. This strongly indicates that Q-learning AMT is more risk tolerant than the other policies.
Table 4: Off-policy evaluation w.r.t. expected (average) Alexa user score and number of time steps (excluding priority responses) on the test set.
| Policy | Alexa user score | Time steps |
|---|---|---|
| Supervised AMT | 2.06 | 8.19 |
| Supervised Learned Reward | 0.94 | 3.66 |
| Off-policy REINFORCE | 2.45 | 10.08 |
| Off-policy REINFORCE Learned Reward | 1.29 | 5.02 |
| Q-learning AMT | 2.08 | 8.28 |
Off-policy Evaluation: One way to evaluate the selection policies is by using the off-policy evaluation given in eq. (15). This equation provides an estimate of the expected Alexa user score under each policy.24 As described earlier, the same equation can be used to estimate the expected number of time steps per episode (excluding priority responses).
The expected (average) Alexa user score and number of time steps per episode (excluding priority responses) are given in Table 4. Here we observe that Off-policy REINFORCE performs best, followed by Q-learning AMT and Supervised AMT w.r.t. expected Alexa user score. Off-policy REINFORCE reaches 2.45, which is a major 17.8% improvement over the second best performing model, Q-learning AMT. However, this advantage should be taken with a grain of salt. As discussed earlier, the off-policy evaluation in eq. (15) is a biased estimator since the importance weights have been truncated. Moreover, Off-policy REINFORCE has been trained specifically to maximize this biased estimator, while all other policies have been trained to maximize other objective functions. Similarly, w.r.t. expected number of time steps, Off-policy REINFORCE reaches the highest number of time steps, followed by Q-learning AMT and Supervised AMT. As before, we should take this result with a grain of salt, since this evaluation is also biased and does not take into account priority responses. Further, it's not clear that increasing the number of time steps will increase user scores. Nevertheless, Off-policy REINFORCE, Q-learning AMT and Supervised AMT appear to be our prime candidates for further experiments.
Response Model Selection Frequency: Figure 8 shows the frequency with which Supervised AMT, Off-policy REINFORCE and Q-learning AMT select different response models. We observe that the policy learned using Off-policy REINFORCE tends to strongly prefer Alicebot responses over other models. The Alicebot responses are among the safest and most topic-dependent, generic responses in the system, which suggests that Off-policy REINFORCE has learned a highly risk averse strategy. On the other hand, the Q-learning AMT policy selects Alicebot responses substantially less often than both Off-policy REINFORCE and Supervised AMT. Instead, Q-learning AMT tends to prefer responses retrieved from Washington Post and from Google search results. These responses are semantically richer and have the potential to engage the user more deeply in a particular topic, but they are also more risky (e.g. a bad choice could derail the entire conversation). This suggests that Q-learning AMT has learned a more risk tolerant strategy. One possible explanation for this difference is that Q-learning AMT was trained using simulations. By learning online from simulations, the policy has been able to explore new actions and discover high-level strategies
is that Q-learning AMT was trained using simulations. By learning online from simulations, the policy has been able to explore new actions and discover high-level strategies lasting multiple time steps. In particular, the policy has been allowed to experiment with riskier actions and to learn remediation or fall-back strategies, in order to handle cases where a risky action fails. This might also explain its stronger preference for BoWFactGenerator responses, which might be serving as a fall-back strategy by outputting factual statements on the current topic. This would have been difficult
to learn for Off-policy REINFORCE, since the sequences of actions for such high-level strategies are sparsely observed in the data and, when they are observed, the corresponding returns (Alexa user scores) have high variance.

(Figure 8 appears here: a bar chart of response selection frequency, in %, for each response model under the Supervised AMT, Off-policy REINFORCE and Q-learning AMT policies.)

Figure 8: Response model selection probabilities across response models for Supervised AMT, Off-policy REINFORCE and Q-learning AMT on the AMT label test dataset. 95% confidence intervals are shown based on the Wilson score interval for binomial distributions.

[24] For the policies parametrized as action-value functions, we transform eq. (2) to eq. (4) by setting fθ = Qθ and fitting the temperature parameter λ on the Off-policy REINFORCE development set.
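To make the transformation in footnote 24 concrete, the sketch below turns a vector of action-value estimates into selection probabilities with a temperature-scaled softmax. The variable names and scores are made up for illustration; this is not the system's actual implementation.

```python
import numpy as np

def q_values_to_policy(q_values, temperature=1.0):
    """Map action-value estimates Q(h, a) to selection probabilities.

    A softmax with temperature: higher temperatures flatten the distribution,
    lower temperatures concentrate probability mass on the greedy action.
    """
    q = np.asarray(q_values, dtype=np.float64) / temperature
    q -= q.max()  # subtract the maximum for numerical stability
    probs = np.exp(q)
    return probs / probs.sum()

# Hypothetical scores for four candidate responses, one per response model.
q_estimates = [0.3, -1.2, 0.9, 0.1]
print(q_values_to_policy(q_estimates, temperature=0.5))
```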
Table 5: Policy evaluation using the Abstract Discourse MDP w.r.t. average return, average reward per time step and average episode length on dev set (± standard deviations). The reward function is based on Supervised AMT.

Policy | Average return | Average reward per time step | Average dialogue length
Random | −32.18 ± 31.77 | −0.87 ± 0.24 | 34.29 ± 33.02
Alicebot | −15.56 ± 15.61 | −0.37 ± 0.16 | 42.01 ± 42.00
Evibot + Alicebot | −11.33 ± 12.43 | −0.29 ± 0.19 | 37.5 ± 38.69
Supervised AMT | −6.46 ± 8.01 | −0.15 ± 0.16 | 42.84 ± 42.92
Supervised Learned Reward | −24.19 ± 23.30 | −0.73 ± 0.27 | 31.91 ± 30.09
Off-policy REINFORCE | −7.30 ± 8.90 | −0.16 ± 0.16 | 43.24 ± 43.58
Off-policy REINFORCE Learned Reward | −10.19 ± 11.15 | −0.28 ± 0.19 | 35.51 ± 35.05
Q-learning AMT | −6.54 ± 8.02 | −0.15 ± 0.18 | 40.68 ± 39.13
A second observation is that Q-learning AMT has the strongest preference for Initiatorbot among the three policies. This could indicate that Q-learning AMT leans towards a system-initiative strategy (e.g. a strategy where the system tries to maintain control of the conversation by asking questions, changing topics and so on). Further analysis is needed to confirm this.

Abstract Discourse MDP Evaluation: Next, we evaluate the performance of each policy w.r.t. simulations in the Abstract Discourse MDP. We simulate 500 episodes under each policy and evaluate it w.r.t. average return, average reward per time step and dialogue length. In addition to evaluating the five policies described earlier, we also evaluate three heuristic policies: 1) a policy selecting responses at random, called Random, 2) the Alicebot policy, and 3) the Evibot + Alicebot policy. Evaluating these models will serve to validate the approximate MDP.
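The evaluation loop itself is simple; the sketch below assumes a hypothetical environment object exposing reset() and step(action) and a policy function mapping states to actions. It is an illustrative outline, not the actual Abstract Discourse MDP implementation.

```python
import numpy as np

def evaluate_policy(env, policy, num_episodes=500, max_steps=80):
    """Roll out episodes and report average return, per-step reward and length."""
    returns, per_step_rewards, lengths = [], [], []
    for _ in range(num_episodes):
        state = env.reset()
        episode_rewards = []
        for _ in range(max_steps):
            action = policy(state)
            state, reward, done = env.step(action)
            episode_rewards.append(reward)
            if done:
                break
        returns.append(sum(episode_rewards))
        per_step_rewards.append(np.mean(episode_rewards))
        lengths.append(len(episode_rewards))
    return np.mean(returns), np.mean(per_step_rewards), np.mean(lengths)
```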
The results are given in Table 5. We observe that Supervised AMT performs best w.r.t. average return and average reward per time step. However, this comes as no surprise. The reward function in the MDP is defined as Supervised AMT, so by construction this policy achieves the highest reward per time step. Next we observe that Q-learning AMT is on par with Supervised AMT, both achieving the same −0.15 average reward per time step. Second in line comes Off-policy REINFORCE, achieving an average reward per time step of −0.16. However, Off-policy REINFORCE also achieved the highest average dialogue length of 43.24. At the other end of the spectrum comes, as expected, the Random policy performing worst w.r.t. all metrics. In comparison, both Alicebot and Evibot + Alicebot perform better w.r.t. all metrics, with Evibot + Alicebot achieving the best average return and average reward per time step out of the three heuristic policies. This validates the utility of the Abstract Discourse MDP as an environment for training and evaluating policies. Overall, Off-policy REINFORCE, Q-learning AMT and Supervised AMT still appear to be the best performing models in the preliminary evaluation.
(Figure 9 appears here: a matrix plot of response model selections under the Supervised AMT and Q-learning AMT policies, with one row and one column per response model; see the caption below.)
Figure 9: Contingency table comparing selected response models between Supervised AMT and Q-learning AMT. The cells in the matrix show the number of times the Supervised AMT policy selected the row response model and the Q-learning AMT policy selected the column response model. The cell frequencies were computed by simulating 500 episodes under the Q-learning policy in the Abstract Discourse MDP. Note that all models retrieving responses from Reddit have been agglomerated into the class Reddit models.
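The counts behind such a contingency table can be tabulated directly from the per-state selections of the two policies; the snippet below is a minimal sketch with hypothetical inputs.

```python
from collections import Counter

def contingency_counts(row_selections, column_selections):
    """Count how often one policy picks model r while the other picks model c.

    Both arguments are lists of response-model names aligned by state, e.g. the
    models the Supervised AMT policy and the Q-learning AMT policy would have
    selected in the same simulated states.
    """
    assert len(row_selections) == len(column_selections)
    return Counter(zip(row_selections, column_selections))

# Toy example over three states (hypothetical selections).
supervised = ["Alicebot", "Alicebot", "VHREDSubtitles"]
q_learning = ["BoWFactGenerator", "Initiatorbot", "BoWFactGenerator"]
print(contingency_counts(supervised, q_learning))
```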
Finally, we compare Q-learning AMT with Supervised AMT w.r.t. the action taken in states from episodes simulated in the Abstract Discourse MDP. As shown in Figure 9, the two policies diverge w.r.t. several response models. When Supervised AMT would have selected topic-independent, generic Alicebot and Elizabot responses, Q-learning AMT often selects BoWFactGenerator, Initiatorbot and VHREDWashingtonPost responses. For example, there were 347 instances where Supervised AMT selected Alicebot, but where Q-learning AMT selected BoWFactGenerator. Similarly, where Supervised AMT would have preferred generic VHREDSubtitle responses, Q-learning AMT often selects responses from BoWFactGenerator, InitiatorBot and VHREDRedditSports. This supports our previous analysis showing that Q-learning AMT has learned a more risk tolerant strategy, which involves response models with semantically richer content.

In the next section, we evaluate these policies with real-world users.
# 5 A/B Testing Experiments

To evaluate the dialogue manager policies described in the previous section, we carry out A/B testing experiments. During each A/B testing experiment, we evaluate several policies for selecting the response model. When Alexa users start a conversation with the system, they are automatically assigned to a random policy and afterwards their dialogues and final scores are recorded.

A/B testing allows us to accurately compare different dialogue manager policies by keeping all other system factors constant (or almost constant). This is in contrast to evaluating the system performance over time, when the system is continuously being modified. In such a situation, it is often difficult to evaluate the improvement or degradation of performance w.r.t. particular system modifications.
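Operationally, the assignment step amounts to drawing a policy uniformly at random when a conversation starts and logging it together with the eventual user score. A minimal sketch with hypothetical policy labels and record format follows.

```python
import random

POLICIES = ["Evibot + Alicebot", "Supervised AMT", "Off-policy REINFORCE", "Q-learning AMT"]

def assign_policy(user_id, log):
    """Pick a policy uniformly at random for a new conversation and record it."""
    policy = random.choice(POLICIES)
    log.append({"user_id": user_id, "policy": policy, "score": None})
    return policy

conversation_log = []
chosen = assign_policy("user-123", conversation_log)
print(chosen, conversation_log)
```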
However, even during our A/B testing experiments, the distribution over Alexa users still changes through time. Different types of users will be using the system depending on the time of day, weekday and holiday season. In addition, the user expectations towards our system change over time as they interact with other socialbots in the competition. In other words, we must consider the Alexa user distribution as following a non-stationary stochastic process. Therefore, we take two steps to reduce confounding factors and correlations between users. First, during each A/B testing experiment, we evaluate all policies of interest simultaneously. This ensures that we have approximately the same number of users interacting with each policy w.r.t. time of day and weekday. This minimizes the effect of changes in the user distribution on the final user scores within that period. However, since the user distribution changes between the A/B testing experiments, we still cannot accurately compare policy performance across A/B testing experiments. Second, we discard scores from returning users (i.e. users who have already
evaluated the system once). Users who are returning to the system are likely to be influenced by their previous interactions with the system. For example, users who previously had a positive experience with the system may be biased towards giving high scores in their next interaction. Further, the users who return to the system are likely to belong to a particular subpopulation of users. This particular group of users may inherently have more free time and be more willing to engage with socialbots than other users. Discarding returning user scores ensures that the evaluation is not biased towards this subpopulation of users. By discarding scores from returning users, we also ensure that the evaluation counts every user exactly once. Finally, it should be noted that we ignore dialogues where the Alexa user did not give a score. This inevitably biases our evaluation, since users who do not provide a score are likely to have been dissatisfied with the system or to have been expecting different functionality (e.g. non-conversational activities, such as playing music, playing games or taking quizzes). One potential remedy is to have all dialogues evaluated by a third party (e.g. by asking human annotators on Amazon Mechanical Turk to evaluate
the dialogues afterwards), but that would be costly and is out of scope of this work.

# 5.1 A/B Testing Experiment #1

The first A/B testing experiment was carried out between July 29th, 2017 and August 6th, 2017. We tested six dialogue manager policies: Evibot + Alicebot, Supervised AMT, Supervised Learned Reward, Off-policy REINFORCE, Off-policy REINFORCE Learned Reward and Q-learning AMT. For Off-policy REINFORCE and Off-policy REINFORCE Learned Reward, we use the greedy variant defined in eq. (5).

This experiment occurred early in the Amazon Alexa Prize competition. This means that Alexa users have few expectations towards our system (e.g. expectations that the system can converse on a particular topic, or that the system can engage in non-conversational activities, such as playing word games or taking quizzes). Further, the period July 29th - August 6th overlaps with the summer holidays in the United States. This means that we might expect more children to interact with the system than during other seasons.

Policy Evaluation: The results are given in Table 6.[25] The table shows the average Alexa user scores, average dialogue length, average percentage of positive user utterances and average percentage of negative user utterances. In total, over a thousand user ratings were collected after discarding returning users.
[25] 95% confidence intervals are computed under the assumption that the Alexa user scores for each policy are drawn from a Gaussian distribution with its own mean and variance. This is an approximation, since the Alexa user scores only have support on the interval [1, 5].

Table 6: First A/B testing experiment with six different policies (± 95% confidence intervals). A star (*) indicates the policy is significantly better than the other policies at a 95% statistical significance level.
Policy | User score | Dialogue length | Pos. utterances | Neg. utterances
Evibot + Alicebot | 2.86 ± 0.22 | 31.84 ± 6.02 | 2.80% ± 0.79 | 5.63% ± 1.27
Supervised AMT | 2.80 ± 0.21 | 34.94 ± 8.07 | 4.00% ± 1.05 | 8.06% ± 1.38
Supervised Learned Reward | 2.74 ± 0.21 | 27.83 ± 5.05 | 2.56% ± 0.70 | 6.46% ± 1.29
Off-policy REINFORCE | 2.86 ± 0.21 | 37.51 ± 7.21 | 3.98% ± 0.80 | 6.25% ± 1.28
Off-policy REINFORCE Learned Reward | 2.84 ± 0.23 | 34.56 ± 11.55 | 2.79% ± 0.76 | 6.90% ± 1.45
Q-learning AMT* | 3.15 ± 0.20 | 30.26 ± 4.64 | 3.75% ± 0.93 | 5.41% ± 1.16

Table 7: Amazon Alexa Prize semi-finals average team statistics provided by Amazon.

Teams | User score | Dialogue length
All teams | 2.92 | 22
Non-finalist teams | 2.81 | 22
Finalist teams | 3.31 | 26

Ratings were collected after the end of the semi-finals competition, where all ratings had been transcribed by human annotators. Each policy was evaluated by about two hundred unique Alexa users.
As expected from our preliminary evaluation, we observe that Q-learning AMT and Off-policy REINFORCE perform best among all policies w.r.t. user scores. Q-learning AMT obtained an average user score of 3.15, which is significantly higher than all other policies at a 95% statistical significance level w.r.t. a one-tailed two-sample t-test. In comparison, the average user score for all the teams in the competition during the semi-finals was only 2.92. Interestingly, Off-policy REINFORCE achieved the longest dialogues with an average length of 37.51. This suggests Off-policy REINFORCE yields highly engaging conversations. In comparison, in the semi-finals, the average dialogue length of all teams was 22 and of the finalist teams was 26. We also observe that Off-policy REINFORCE had a slightly higher percentage of user utterances with negative sentiment compared
to Q-learning AMT. This potentially indicates that the longer dialogues also include some frustrated interactions (e.g. users who repeat the same questions or statements in the hope that the system will return a more interesting response next time). The remaining policies achieved average Alexa user scores between 2.74 and 2.86, with the heuristic policy Evibot + Alicebot obtaining 2.86. This suggests that the other policies have not learned to select responses more appropriately than the Evibot + Alicebot heuristic.
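The per-policy statistics and the significance test described above can be computed as in the following sketch. It assumes SciPy is available, uses the Gaussian approximation for the confidence interval (cf. footnote 25), and derives a one-tailed p-value from Welch's two-sample t-test; it is illustrative, not the exact analysis script.

```python
import numpy as np
from scipy import stats

def score_summary(scores):
    """Mean user score with an approximate 95% confidence interval."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    half_width = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
    return mean, half_width

def one_tailed_welch(scores_a, scores_b):
    """One-tailed Welch's t-test for H1: mean(scores_a) > mean(scores_b)."""
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    t_stat, p_two_sided = stats.ttest_ind(a, b, equal_var=False)
    p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
    return t_stat, p_one_sided
```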
In conclusion, the results indicate that the risk tolerant strategy learned by the Q-learning AMT policy performs best among all policies. This shows that learning a policy through simulations in an Abstract Discourse MDP may serve as a fruitful path towards developing open-domain socialbots. In addition, the performance of Off-policy REINFORCE indicates that optimizing the policy directly towards Alexa user scores could also potentially yield improvements. However, further investigation is required.

# Length Analysis

In an effort to further understand how the policies differ from each other, we carry out an analysis of the policies' performance as a function of dialogue length. Although we have recorded only a limited amount of data for dialogues of a particular length, this analysis could help illuminate directions for future experiments.
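This breakdown amounts to bucketing each rated dialogue into one of four length intervals and averaging the scores per bucket; a minimal sketch with a hypothetical record format follows.

```python
from collections import defaultdict

BINS = [(0, 9, "< 10"), (10, 19, "10 - 19"), (20, 39, "20 - 39"), (40, float("inf"), ">= 40")]

def scores_by_length(records):
    """Average user score within each dialogue-length interval.

    `records` is a list of (dialogue_length, user_score) pairs for one policy.
    """
    buckets = defaultdict(list)
    for length, score in records:
        for low, high, label in BINS:
            if low <= length <= high:
                buckets[label].append(score)
                break
    return {label: sum(scores) / len(scores) for label, scores in buckets.items()}
```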
Table 8 shows the average Alexa user scores w.r.t. four dialogue length intervals for the six policies. The estimates are based on between 30 and 70 Alexa user ratings for each policy and interval combination. First, we observe that Q-learning AMT performs better than all other policies for all intervals except the medium-short interval (10 - 19, or 5 - 10 back-and-forth turns). Further, its high performance for the long intervals (20 - 39 and ≥ 40) would suggest that Q-learning AMT performs excellently in long dialogues. The other learned policies, Supervised AMT, Off-policy REINFORCE and Off-policy REINFORCE Learned Reward, also appear to perform well in long dialogues. On the other
hand, the heuristic Evibot + Alicebot policy and the Supervised Learned Reward policy appear to perform poorly in long dialogues, but that is not surprising given their low overall performance. In particular, Supervised Learned Reward seems to be performing well only for very short dialogues. This potentially indicates that the policy fails to either maintain user engagement or memorize longer-term context. However, further investigation is required.

Table 8: First A/B testing experiment user scores with six different policies w.r.t. varying dialogue length (± one standard deviation).

Policy | < 10 | 10 - 19 | 20 - 39 | ≥ 40
Evibot + Alicebot | 2.88 ± 1.71 | 2.58 ± 1.33 | 2.93 ± 1.28 | 2.99 ± 1.37
Supervised AMT | 2.91 ± 1.59 | 2.64 ± 1.38 | 2.60 ± 1.40 | 3.13 ± 1.43
Supervised Learned Reward | 3.31 ± 1.43 | 2.45 ± 1.57 | 2.19 ± 1.38 | 2.90 ± 1.54
Off-policy REINFORCE | 2.99 ± 1.64 | 2.72 ± 1.57 | 2.56 ± 1.31 | 3.26 ± 1.45
Off-policy REINFORCE Learned Reward | 2.91 ± 1.64 | 2.53 ± 1.45 | 2.9 ± 1.56 | 3.14 ± 1.36
Q-learning AMT | 3.46 ± 1.40 | 2.60 ± 1.45 | 3.19 ± 1.39 | 3.31 ± 1.33
# Topical Specificity and Coherence

We carry out an analysis of the topical specificity and coherence of the different policies. This analysis aims to quantify how much each policy stays on topic (e.g. whether the policy selects responses on the current topic or on new topics) and how specific its content is (e.g. how frequently the policy selects generic, topic-independent responses). This analysis is carried out at the utterance level, where we are fortunate to have more recorded data.
The results are shown in Table 9. For topic specificity, we measure the average number of noun phrases per user utterance and the average number of noun phrases per system utterance.[26] The more topic specific the user is, the higher we would expect the first metric to be. Similarly, the more topic specific the system is, the higher we would expect the second metric to be. For topic coherence, we measure the word overlap between the user's utterance and the system's response, as well as word overlap between the user's utterance and the system's response at the next turn. The more the policy prefers to stay on topic, the higher we would expect these two metrics to be.
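These metrics can be computed with standard NLP tooling. The sketch below uses spaCy's noun chunks and a simple word-overlap count; the paper used spaCy 1.9.0 with the en_core_web_md package (see footnote 26), whereas this snippet uses the current spaCy API and is only illustrative.

```python
import spacy

nlp = spacy.load("en_core_web_md")  # assumes the model has been downloaded

def noun_phrase_count(utterance):
    """Number of noun phrases (noun chunks) in an utterance."""
    return sum(1 for _ in nlp(utterance).noun_chunks)

def word_overlap(user_utterance, system_response):
    """Number of word types shared by a user utterance and a system response."""
    user_words = {tok.lower_ for tok in nlp(user_utterance) if tok.is_alpha}
    system_words = {tok.lower_ for tok in nlp(system_response) if tok.is_alpha}
    return len(user_words & system_words)
```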
As shown in the table, Q-learning AMT has obtained significantly higher scores w.r.t. both word overlap metrics and the average number of noun phrases per system utterance. This indicates that the Q-learning AMT policy has the highest topical coherency among all six policies, and that it generates the most topic specific (semantically rich) responses. This is in line with our previous analysis, where we found that Q-learning AMT follows a highly risk tolerant strategy. Next in line comes Supervised AMT, which also appears to maintain high topic specificity and coherence. In fact, Supervised AMT obtained the highest metric w.r.t. the number of noun phrases per user utterance, which indicates that this policy is encouraging the user to give more topic specific responses. Afterwards come Off-policy REINFORCE and Off-policy REINFORCE Learned Reward, which tend to select responses with significantly fewer noun phrases and less word
This is also in line with our previous analysis, where we found that Off-policy REINFORCE follows a risk averse strategy. Finally, the heuristic policy Evibot + Alicebot selects responses with very few noun phrases and the least word overlap among all policies. This indicates that the heuristic policy might be the least topic coherent policy, and that it mainly selects generic, topic-independent responses.
26 We use https://spacy.io version 1.9.0 to detect noun phrases with the package "en_core_web_md-1.2.1".
Table 9: First A/B testing experiment: topical specificity and coherence of the six different policies. The columns are average number of noun phrases per user utterance (User NPs), average number of noun phrases per system utterance (System NPs), average number of overlapping words between the user's utterance and the system's response (Word overlap t → t + 1), and average number of overlapping words between the user's utterance and the system's response in the next turn (Word overlap t → t + 3). 95% confidence intervals are also shown. Stop words are excluded.
| Policy | User NPs | System NPs | Word overlap t → t + 1 | Word overlap t → t + 3 |
|---|---|---|---|---|
| Evibot + Alicebot | 0.55 ± 0.03 | 1.05 ± 0.05 | 7.33 ± 0.21 | 7.31 ± 0.22 |
| Supervised AMT | 0.62 ± 0.03 | 1.75 ± 0.07 | 10.48 ± 0.28 | 10.65 ± 0.29 |
| Supervised Learned Reward | 0.57 ± 0.03 | 1.50 ± 0.07 | 8.35 ± 0.29 | 8.36 ± 0.31 |
| Off-policy REINFORCE | 0.59 ± 0.02 | 1.45 ± 0.05 | 9.05 ± 0.21 | 9.14 ± 0.22 |
| Off-policy REINFORCE Learned Reward | 0.61 ± 0.03 | 1.04 ± 0.06 | 7.42 ± 0.25 | 7.42 ± 0.26 |
| Q-learning AMT | 0.58 ± 0.03 | 1.98 ± 0.08 | 11.28 ± 0.30 | 11.52 ± 0.32 |
Table 10: Second A/B testing experiment with two different policies (± 95% confidence intervals).
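The metrics reported in Table 9 can be computed roughly as follows. This is an illustrative sketch rather than the actual evaluation script: the exact tokenization, stop-word list and overlap counting are not specified beyond the footnote above, and the function names are ours.

```python
# Illustrative sketch (not the paper's evaluation code) of the Table 9 metrics:
# noun phrases per utterance and word overlap between turns, with stop words excluded.
import spacy

nlp = spacy.load("en_core_web_md")  # the footnote above used en_core_web_md-1.2.1 with spaCy 1.9.0

def content_words(text):
    """Lower-cased alphabetic tokens of an utterance, excluding stop words."""
    return {tok.lower_ for tok in nlp(text) if tok.is_alpha and not tok.is_stop}

def noun_phrase_count(text):
    """Number of noun phrases (spaCy noun chunks) detected in an utterance."""
    return sum(1 for _ in nlp(text).noun_chunks)

def dialogue_metrics(turns):
    """turns: utterances in order, alternating user / system and starting with the user.
    Returns per-dialogue averages analogous to the Table 9 columns."""
    user_nps = [noun_phrase_count(t) for t in turns[0::2]]
    system_nps = [noun_phrase_count(t) for t in turns[1::2]]
    # Overlap between a user utterance (t) and the system response that follows it (t + 1),
    # and between the same user utterance and the system response of the next exchange (t + 3).
    overlap_t1 = [len(content_words(turns[i]) & content_words(turns[i + 1]))
                  for i in range(0, len(turns) - 1, 2)]
    overlap_t3 = [len(content_words(turns[i]) & content_words(turns[i + 3]))
                  for i in range(0, len(turns) - 3, 2)]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {"User NPs": mean(user_nps), "System NPs": mean(system_nps),
            "Word overlap t->t+1": mean(overlap_t1), "Word overlap t->t+3": mean(overlap_t3)}
```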
Initiatorbot Evaluation: This experiment also allowed us to analyze the outcomes of different conversation starter phrases given by the Initiatorbot. We carried out this analysis by computing the average Alexa user score for each of the 40 possible phrases. We found that phrases related to news (e.g. "Do you follow the news?"), politics (e.g. "Do you want to talk about politics?") and travelling (e.g. "Tell me, where do you like to go on vacation?") performed poorly across all policies. On the other hand, phrases related to animals (e.g. "Do you have pets?" and "What is the cutest animal you can think of?"), movies (e.g. "Let's talk about movies. What's the last movie you watched?") and food (e.g. "Let's talk about food. What is your favorite food?") performed well across all policies. For example, conversations where the Initiatorbot asked questions related to news and politics had an average Alexa user score of only 2.91 for the top two systems (Off-policy REINFORCE and Q-learning AMT). Meanwhile, for conversations where the Initiatorbot asked questions about animals, food and movies, the corresponding average Alexa user score was 3.17. We expected the conversation topic to affect user engagement; however, it is surprising that these particular topics (animals, food and movies) were the most preferred ones. One possible explanation is that our system does not perform well on news, politics and travelling topics. However, the system already had several response models dedicated to discussing news and politics: six sequence-to-sequence models extracting responses from Reddit news and Reddit politics, two models extracting responses from Washington Post user comments, and the BoWTrump model extracting responses from Donald J. Trump's Twitter profile.
In addition, Evibot is capable of answering many factual questions about news and politics, and BoWFactGenerator contains hundreds of facts related to news and politics. As such, there may be another, more plausible explanation for users' preferences towards topics such as animals, movies and food. One likely explanation is the age group of the users. While inspecting our conversational transcripts, we observed that many users interacting with the system appeared to be children or teenagers. It would hardly come as a surprise if this user population preferred to talk about animals, movies and food rather than news, politics and travel.
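The per-phrase analysis above reduces to grouping rated dialogues by their conversation starter phrase and averaging the Alexa user scores. A minimal sketch, assuming a hypothetical record layout with starter_phrase and user_score fields:

```python
# Minimal sketch of the per-phrase aggregation; the record fields are hypothetical.
from collections import defaultdict

def average_score_per_phrase(dialogues):
    """dialogues: iterable of dicts with keys 'starter_phrase' and 'user_score'."""
    totals = defaultdict(lambda: [0.0, 0])
    for d in dialogues:
        totals[d["starter_phrase"]][0] += d["user_score"]
        totals[d["starter_phrase"]][1] += 1
    return {phrase: s / n for phrase, (s, n) in totals.items()}

# Usage: rank the 40 starter phrases from highest to lowest average user score.
# ranked = sorted(average_score_per_phrase(dialogues).items(), key=lambda kv: -kv[1])
```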
# 5.2 A/B Testing Experiment #2

The second A/B testing experiment was carried out between August 6th, 2017 and August 15th, 2017. We tested two dialogue manager policies: Off-policy REINFORCE and Q-learning AMT. As before, we use the greedy variant of Off-policy REINFORCE defined in eq. (5).

This experiment occurred at the end of the Amazon Alexa Prize competition semi-finals. This means that many Alexa users have already interacted with other socialbots in the competition, and therefore are likely to have developed expectations towards the systems. These expectations are likely to involve conversing on a particular topic or engaging in non-conversational activities, such as playing games. Further, the period August 6th - August 15th overlaps with the end of the summer holidays and the beginning of the school year in the United States. This means that we should expect fewer children to interact with the system than in the previous A/B testing experiment.
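Eq. (5) is not reproduced in this section. As a rough sketch of what the greedy variant of a stochastic response-selection policy amounts to, the dialogue manager scores every candidate response proposed by the ensemble and returns the highest-scoring one instead of sampling from the softmax; the function names and signatures below are illustrative assumptions, not the system's actual interface.

```python
# Rough sketch (not the paper's eq. (5)): greedy vs. sampled response selection,
# given per-candidate policy scores produced by the dialogue manager's scoring model.
import math
import random

def greedy_select(candidate_scores):
    """candidate_scores: dict mapping a candidate response to its policy score."""
    return max(candidate_scores, key=candidate_scores.get)

def sampled_select(candidate_scores, temperature=1.0):
    """Stochastic variant: sample a response from the softmax over the scores."""
    responses = list(candidate_scores)
    weights = [math.exp(candidate_scores[r] / temperature) for r in responses]
    return random.choices(responses, weights=weights, k=1)[0]
```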
Table 11: Third A/B testing experiment with two different policies (± 95% confidence intervals).

| Policy | User score | Dialogue length | Pos. utterances | Neg. utterances |
|---|---|---|---|---|
| Off-policy REINFORCE | 3.03 ± 0.18 | 30.93 ± 4.96 | 2.72 ± 0.59 | 7.36 ± 1.22 |
| Q-learning AMT | 3.06 ± 0.17 | 33.69 ± 5.84 | 3.63 ± 0.68 | 6.67 ± 0.98 |
Policy Evaluation: The results are given in Table 10. In total, about eight hundred user ratings were collected after discarding returning users. As such, each policy was evaluated by about six hundred unique Alexa users. As before, all ratings were transcribed by human annotators.

We observe that both Off-policy REINFORCE and Q-learning AMT perform better than the policies in the previous experiment. However, in this experiment, Off-policy REINFORCE achieved an average Alexa user score of 3.06 while Q-learning AMT achieved a lower score of only 2.92. Nonetheless, Off-policy REINFORCE is not statistically significantly better. In this experiment, there is also no significant difference between the two policies w.r.t. the percentage of positive and negative user utterances.
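The aggregates reported in Tables 10 and 11 (a mean with a 95% confidence interval per policy) and a two-policy comparison can be computed along the following lines. This is an illustrative sketch assuming a normal approximation and Welch's t statistic; the paper's exact statistical procedure is not specified in this section.

```python
# Illustrative sketch: mean user score with an approximate 95% confidence interval,
# and Welch's t statistic for comparing two policies (not the authors' analysis code).
import math
from statistics import mean, stdev

def mean_with_ci(scores, z=1.96):
    """Returns (mean, half_width); report as mean ± half_width."""
    half_width = z * stdev(scores) / math.sqrt(len(scores))
    return mean(scores), half_width

def welch_t(scores_a, scores_b):
    """Welch's t statistic for the difference in mean user score between two policies."""
    na, nb = len(scores_a), len(scores_b)
    va, vb = stdev(scores_a) ** 2, stdev(scores_b) ** 2
    return (mean(scores_a) - mean(scores_b)) / math.sqrt(va / na + vb / nb)
```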
As discussed earlier, the performance difference compared to the previous A/B testing experiment could be due to the change in user profiles and user expectations. At this point in time, more of the Alexa users have interacted with socialbots from other teams. Meanwhile, all socialbots have been evolving. Therefore, user expectations towards our system are likely to be higher now. Further, since the summer holidays have ended, fewer children and more adults are expected to interact with our system. It is plausible that these adults also have higher expectations towards the system, and even more likely that they are less playful and less tolerant towards mistakes. Given this change in user profiles and expectations, the risk tolerant strategy learned by the Q-learning AMT policy is likely to fare poorly compared to the risk averse strategy learned by Off-policy REINFORCE.

# 5.3 A/B Testing Experiment #3

The third A/B testing experiment was carried out between August 15th, 2017 and August 21st, 2017. Due to the surprising results in the previous A/B testing experiment, we decided to continue testing the two dialogue manager policies Off-policy REINFORCE and Q-learning AMT. As before, we use the greedy variant of Off-policy REINFORCE defined in eq. (5).
This experiment occurred after the end of the Amazon Alexa Prize competition semi-finals. As discussed before, this means that it is likely that many Alexa users have already developed expectations towards the systems. Further, the period August 15th - August 21st lies entirely within the beginning of the school year in the United States. This means that we should expect fewer children to interact with the system than in the previous A/B testing experiment.

Policy Evaluation: The results are given in Table 11. In total, about six hundred user ratings were collected after discarding returning users. As such, each policy was evaluated by about three hundred unique Alexa users. Unlike the previous two experiments, due to the semi-finals having ended, these ratings were not transcribed by human annotators.
We observe again that both Off-policy REINFORCE and Q-learning AMT perform better than the other policies evaluated in the first experiment. However, in this experiment, Off-policy REINFORCE only achieved an average Alexa user score of 3.03 while Q-learning AMT achieved the higher score of 3.06. As before, neither policy is statistically significantly better than the other. Nevertheless, as in the first experiment, Q-learning AMT achieved a higher percentage of positive utterances and a lower percentage of negative utterances than Off-policy REINFORCE. In this experiment, Q-learning AMT also obtains the longest dialogues on average. Overall, this experiment indicates that Q-learning AMT is the better policy.

As before, the difference in performance compared to the previous A/B testing experiments is likely due to the change in user profiles and user expectations. The fact that Q-learning AMT now performs slightly better than Off-policy REINFORCE might be explained by many different causes. First, despite the confidence intervals and statistical tests presented earlier, it is of course possible that the previous A/B testing experiments did not have enough statistical power to accurately discriminate whether Q-learning AMT or Off-policy REINFORCE obtains the highest average user score.
Second, it is possible that the topics users want to discuss now are simply better handled by Q-learning AMT. Third, it is possible that adult users might only have a weak preference toward the risk averse Off-policy REINFORCE policy, and that there is still a significant number of children and teenagers interacting with the system even though the summer holidays have ended. Finally, it is possible that the user population has grown tired of Off-policy REINFORCE, which follows a risk averse strategy by responding with less semantic content.

# 5.4 Discussion
The two dialogue manager policies Q-learning AMT and Off-policy REINFORCE have demonstrated substantial improvements over all other policies, including policies learned using supervised learning and heuristic policies. As discussed earlier, the Q-learning AMT policy achieved an average Alexa user score substantially above the average score of all teams in the Amazon Alexa Prize competition semi-finals, without relying on non-conversational activities. In addition, it also achieved a higher number of dialogue turns than both the average of all teams in the semi-finals and the average of all finalist teams in the semi-finals. The Off-policy REINFORCE policy similarly obtained a high number of dialogue turns, suggesting that the resulting conversations are far more engaging. The results demonstrate the advantages of the overall ensemble approach, where many different models generate natural language responses and the dialogue manager policy selects one response among them. The results also highlight the advantages of learning the policy using reinforcement learning techniques. By optimizing the policy to maximize either real-world user scores or rewards in the Abstract Discourse MDP (with a proxy reward function), we have demonstrated that significant gains can be achieved w.r.t. both real-world user scores and number of dialogue turns.

# 6 Related Work
Dialogue Manager Architecture: Any open-domain conversational agent will have to utilize many different types of modules, such as modules for looking up information, modules for daily chitchat discussions, modules for discussing movies, and so on. In this respect, our system architecture is related to some of the recent general-purpose dialogue system frameworks (Zhao et al. 2016, Miller et al. 2017, Truong et al. 2017). These systems abstract away the individual modules into black boxes sharing the same interface, similar to the response models in our ensemble. This, in turn, enables them to be controlled by an executive component (e.g. a dialogue manager).

Reinforcement Learning: Much work has applied reinforcement learning to training or improving dialogue systems. The idea that dialogue can be formulated as a sequential decision making problem based on a Markov decision process (MDP) appeared already in the 1990s for goal-oriented dialogue systems (Singh et al. 1999, 2002, Williams & Young 2007, Young et al. 2013, Paek 2006, Henderson et al. 2008, Pieraccini et al. 2009, Su et al. 2015).
One line of research in this area has focused on learning dialogue systems through simulations using abstract dialogue states and actions (Eckert et al. 1997, Levin et al. 2000, Chung 2004, Cuayáhuitl et al. 2005, Georgila et al. 2006, Schatzmann et al. 2007, Heeman 2009, Traum et al. 2008, Georgila & Traum 2011, Lee & Eskenazi 2012, Khouzaimi et al. 2017, López-Cózar 2016, Su et al. 2016, Fatemi et al. 2016, Asri et al. 2016). The approaches here differ based on how the simulator itself is created or estimated, and whether or not the simulator is also considered an agent, which is trying to optimize its own reward. For example, Levin et al. (2000) tackle the problem of building a flight booking dialogue system. They estimate a user simulator model by counting transition probabilities between dialogue states and user actions (similar to an n-gram model), which is then used to train a reinforcement learning policy. In their setting,
the states and actions are all abstract discrete variables, which minimizes the amount of natural language understanding and generation the policy has to learn. As another example, Georgila & Traum (2011) tackle the problem of learning dialogue policies for negotiation games, where each party in the dialogue is an agent with its own reward function. In their setting, each policy is in effect also a user simulator, and is trained by playing against other policies using model-free on-policy reinforcement learning. As a more recent example, Yu et al. (2016) build an open-domain, chitchat dialogue system using reinforcement learning. In particular, Yu et al. (2016) propose to learn a dialogue manager policy through model-free off-policy reinforcement learning based on simulations with the template-based system A.L.I.C.E. (Wallace 2009) with a reward function.
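The count-based user simulator attributed to Levin et al. (2000) can be illustrated with a short sketch: transitions from abstract dialogue states to user actions are counted, normalized into a conditional distribution, and sampled from during policy training. This is a generic illustration under assumed state and action types, not code from any of the cited papers.

```python
# Generic illustration (not code from the cited papers): an n-gram-style user simulator
# estimated by counting transitions from abstract dialogue states to user actions.
import random
from collections import Counter, defaultdict

class CountBasedUserSimulator:
    def __init__(self):
        self.counts = defaultdict(Counter)  # dialogue state -> Counter over user actions

    def fit(self, transitions):
        """transitions: iterable of (state, user_action) pairs from recorded dialogues."""
        for state, action in transitions:
            self.counts[state][action] += 1

    def sample_action(self, state):
        """Draw a simulated user action for a state observed during fitting."""
        counter = self.counts[state]
        actions, frequencies = zip(*counter.items())
        return random.choices(actions, weights=frequencies, k=1)[0]
```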
Researchers have also recently started to investigate learning generative neural network policies operating directly on raw text through user simulations (Li et al. 2016, Das et al. 2017, Lewis et al. 2017, Liu & Lane 2017). In contrast to earlier work, these policies require both a deeper understanding of natural language and an ability to generate natural language. For example, Li et al. (2016) propose to train a generative sequence-to-sequence neural network using maximum log-likelihood, and then fine-tune it with a multi-objective function. The multi-objective function includes, among other things, a reinforcement learning signal based on self-play Monte Carlo rollouts (i.e. simulated trajectories are generated by sampling from the model, similar to (Silver et al. 2016)) using a hand-crafted reward function. Lewis et al. (2017) apply model-free reinforcement learning for learning a system capable of negotiation in a toy domain from crowdsourced data. They demonstrate that it is feasible to learn
an effective policy by training a generative sequence-to-sequence neural network on crowdsourced data, and that the policy can be further improved using on-policy reinforcement learning through self-play and Monte Carlo rollouts. Both Li et al. (2016) and Lewis et al. (2017) use self-play. Self-play is a viable option for training their policies because their problems are symmetric in the policy space (e.g. any policy performing well on one side of the negotiation game will also perform well on the other side). In contrast, self-play is unlikely to be an effective training method in our case, because the interactions are highly asymmetric: human users speak differently to our system than they would to humans and, further, they expect different answers. Liu & Lane (2017) use model-free on-policy reinforcement learning to improve a system in a restaurant booking toy domain. For training the system policy, they employ a user simulator trained on real-world human-human dialogues. In particular, under the constraint that both the system and the user share
the exact same reward function, they demonstrate that reinforcement learning can be used to improve both the system policy and the user simulator. In a related vein, Zhao & Eskenazi (2016) learn an end-to-end neural network system for playing a quiz game using off-policy reinforcement learning, where the environment is a game simulator. They demonstrate that combining reinforcement learning with dialogue state tracking labels yields superior performance.
In all the work reviewed so far, user simulators have been defined as rule-based models (e.g. A.L.I.C.E.), parametric models (e.g. n-gram models, generative neural networks), or a combination of the two. In most cases, given a user simulator, the collected training data is discarded and the policy is learned directly from simulations with the user simulator. In contrast, the Abstract Discourse MDP that we propose is a non-parametric approach, which repeatedly uses the collected training data during policy training.

Reinforcement learning has also been applied to teaching agents to communicate with each other in multi-agent environments (Foerster et al. 2016, Sukhbaatar et al. 2016, Lazaridou, Pham & Baroni 2016, Lazaridou, Peysakhovich & Baroni 2016, Mordatch & Abbeel 2017).

# 7 Future Work

# 7.1 Personalization
One important direction for future research is personalization, i.e. building a model of each user's personality, opinions and interests. This will allow the system to provide a better user experience by adapting the response models to known attributes of the user. We are in the process of implementing a state machine that, given a user id, retrieves the relevant information attributes of the user from a database. If a particular user attribute is missing, then the state machine will ask the user for the relevant information and store it in the database. One important user attribute is the user's name. If no name is found in the database, the state machine may ask the user what they would like to be called and afterwards extracts the name from the user's response. If a personal name is detected, it is stored in the database to be available for other modules to insert into their responses. Name detection proceeds as follows. First, we match the response against a small collection of templates, such as "my name is ..." or "call me ...". Then we use part-of-speech (POS) tags of the resulting matches to detect the end boundary of the name. To avoid clipping the name too early due to wrong POS tags, we also match words against a list of common names in the 1990 US Census data.27
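A simplified sketch of the name-detection heuristic described above follows; the template list, the POS-based boundary rule and the common-name set are illustrative stand-ins rather than the system's actual implementation.

```python
# Simplified sketch of the name-detection heuristic; templates, POS rule and name list
# are stand-ins, not the system's actual implementation.
import re

NAME_TEMPLATES = [r"\bmy name is\s+(.*)", r"\bcall me\s+(.*)"]
COMMON_NAMES = {"james", "mary", "john", "linda"}  # stand-in for the 1990 US Census name list

def extract_name(utterance, pos_tags):
    """utterance: raw user response; pos_tags: dict of lower-cased token -> POS tag."""
    for pattern in NAME_TEMPLATES:
        match = re.search(pattern, utterance.lower())
        if not match:
            continue
        name_tokens = []
        for token in match.group(1).split():
            # Keep proper nouns, and keep tokens on the common-name list so that a
            # wrong POS tag does not clip the name too early.
            if pos_tags.get(token, "").startswith("NNP") or token in COMMON_NAMES:
                name_tokens.append(token)
            else:
                break  # end boundary of the name reached
        if name_tokens:
            return " ".join(t.capitalize() for t in name_tokens)
    return None
```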
In the future, we plan to explore learning user embeddings from previous interactions with each user, since we know from previous experiments that text information alone contains a significant amount of information about the speaker's identity (Serban & Pineau 2015). Learning an embedding for each user will allow the system to become more personalized, by providing our response models with additional context beyond the immediate dialogue history.
# 7.2 Text-based Evaluation
] |
1709.02349 | 150 | # 7.2 Text-based Evaluation
It is well known that speech recognition errors have a significant impact on the user experience in dialogue systems (Raux et al. 2006). Furthermore, speech recognition errors are likely to have a particularly adverse effect on our system, because our system encourages open-ended, unrestricted conversations. Unlike many goal-driven and rule-based systems, our system does not take control of the dialogue or direct the user to respond with a keyword from a set of canned responses.28 Because users are more likely to give open-ended responses, the system is also more likely to suffer from speech recognition errors. As discussed in Section 4, we did indeed observe a negative correlation between the confidences of the speech recognition system and the Alexa user scores. Moreover, it is likely that speech recognition errors have a stronger systematic effect on some of the policies evaluated in Section 5.
To mitigate the impact of speech recognition errors, we plan to evaluate the system with different policies through a text-based evaluation on Amazon Mechanical Turk. A text-based setting would also remove other problems, such as errors due to incorrect turn-taking (e.g. when the system barges in on the user, who is still speaking) (Ward et al. 2005).
# 8 Conclusion
We have proposed a new large-scale ensemble-based dialogue system framework for the Amazon Alexa Prize competition. Our system leverages a variety of machine learning techniques, including deep learning and reinforcement learning. We have developed a new set of deep learning models for natural language retrieval and generation, including recurrent neural networks, sequence-to-sequence models and latent variable models. In addition, we have developed a novel reinforcement learning procedure and evaluated it against existing reinforcement learning methods in A/B testing experiments with real-world users. These innovations have enabled us to make substantial improvements upon our baseline system. On a scale of 1-5, our best performing system reached an average user score of 3.15, with a minimal amount of hand-crafted states and rules and without engaging in non-conversational activities (such as playing games or quizzes). The performance is substantially above the average of all teams in the competition semi-finals, which was only 2.92. Furthermore, the same system averaged 14.5-16.0 turns per conversation, which is substantially above both the average of all teams and the average of finalist teams in the competition semi-finals, suggesting that our system is one of the most engaging systems in the competition. Since nearly all our system components are trainable machine learning models, the system is likely to improve greatly with more interactions and additional data.
# Acknowledgments
We thank Aaron Courville, Michael Noseworthy, Nicolas Angelard-Gontier, Ryan Lowe, Prasanna Parthasarathi and Peter Henderson for helpful advice related to the system architecture, crowdsourcing and reinforcement learning throughout the Alexa Prize competition. We thank Christian Droulers for building the graphical user interface for text-based chat. We thank Amazon for providing Tesla K80 GPUs through the Amazon Web Services platform. Some of the Titan X GPUs used for this research were donated by the NVIDIA Corporation. The authors acknowledge NSERC, Canada Research Chairs, CIFAR, IBM Research, Nuance Foundation, Microsoft Maluuba and Druide Informatique Inc. for funding.

27 Obtained from: https://deron.meranda.us/data/.

28 In contrast, one socialbot system in the Alexa semi-finals would start the conversation by asking the user a question such as "I am able to talk about news, sports and politics. Which would you like to talk about?", after which the user is expected to mention one of the keywords "news", "sports" or "politics". This type of system-initiative greatly reduces the number of speech recognition errors, because it is far easier to discriminate between a few keywords than to transcribe a complete open-ended utterance.
# References
Ameixa, D., Coheur, L., Fialho, P. & Quaresma, P. (2014), Luke, I am your father: dealing with out-of-domain requests by using movies subtitles, in 'Intelligent Virtual Agents', Springer.

Asri, L. E., He, J. & Suleman, K. (2016), A sequence-to-sequence model for user simulation in spoken dialogue systems, in 'InterSpeech'.

Aust, H., Oerder, M., Seide, F. & Steinbiss, V. (1995), 'The Philips automatic train timetable information system', Speech Communication 17(3).

Bird, S., Klein, E. & Loper, E. (2009), Natural Language Processing with Python, O'Reilly Media.
Blunsom, P., Grefenstette, E. & Kalchbrenner, N. (2014), A convolutional neural network for modelling sentences, in 'Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics'.
Bohus, D., Raux, A., Harris, T. K., Eskenazi, M. & Rudnicky, A. I. (2007), Olympus: an open-source framework for conversational spoken language interface research, in 'Proceedings of the workshop on bridging the gap: Academic and industrial research in dialog technologies', Association for Computational Linguistics, pp. 32-39.
Breiman, L. (1996), 'Bagging predictors', Machine Learning 24(2), 123-140.

Charras, F., Duplessis, G. D., Letard, V., Ligozat, A.-L. & Rosset, S. (2016), Comparing system-response retrieval models for open-domain and casual conversational agent, in 'Workshop on Chatbots and Conversational Agent Technologies'.

Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H. & Bengio, Y. (2014), Learning phrase representations using RNN encoder-decoder for statistical machine translation, in 'EMNLP'.
Chung, G. (2004), Developing a flexible spoken dialog system using simulation, in 'Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics', Association for Computational Linguistics, p. 63.
Colby, K. M. (1981), 'Modeling a paranoid mind', Behavioral and Brain Sciences 4.

Cuayáhuitl, H., Renals, S., Lemon, O. & Shimodaira, H. (2005), Human-computer dialogue simulation using hidden Markov models, in 'Automatic Speech Recognition and Understanding, 2005 IEEE Workshop on', IEEE, pp. 290-295.

Das, A., Kottur, S., Moura, J. M., Lee, S. & Batra, D. (2017), Learning cooperative visual dialog agents with deep reinforcement learning, in 'International Conference on Computer Vision'.

Eckert, W., Levin, E. & Pieraccini, R. (1997), User modeling for spoken dialogue system evaluation, in 'Automatic Speech Recognition and Understanding, 1997 IEEE Workshop on', IEEE, pp. 80-87.
Fatemi, M., Asri, L. E., Schulz, H., He, J. & Suleman, K. (2016), Policy networks with two-stage training for dialogue systems, in 'SIGDIAL'.
Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., Lally, A., Murdock, J. W., Nyberg, E., Prager, J. et al. (2010), 'Building Watson: An overview of the DeepQA project', AI Magazine 31(3).

Foerster, J., Assael, Y. M., de Freitas, N. & Whiteson, S. (2016), Learning to communicate with deep multi-agent reinforcement learning, in 'Advances in Neural Information Processing Systems', pp. 2137-2145.
Georgila, K., Henderson, J. & Lemon, O. (2006), User simulation for spoken dialogue systems: Learning and evaluation, in 'Ninth International Conference on Spoken Language Processing'.

Georgila, K. & Traum, D. (2011), Reinforcement learning of argumentation dialogue policies in negotiation, in 'Twelfth Annual Conference of the International Speech Communication Association'.
Glorot, X., Bordes, A. & Bengio, Y. (2011), Deep sparse rectifier neural networks, in 'Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics', pp. 315-323.
Heeman, P. A. (2009), Representing the reinforcement learning state in a negotiation dialogue, in 'Automatic Speech Recognition & Understanding, 2009. ASRU 2009. IEEE Workshop on', IEEE, pp. 450-455.

Henderson, J., Lemon, O. & Georgila, K. (2008), 'Hybrid reinforcement/supervised learning of dialogue policies from fixed data sets', Computational Linguistics 34(4), 487-511.

Im, J. (2017).
URL: http://search.aifounded.com/

Jurčíček, F., Dušek, O., Plátek, O. & Žilka, L. (2014), Alex: A statistical dialogue systems framework, in 'International Conference on Text, Speech, and Dialogue', Springer, pp. 587-594.
Khouzaimi, H., Laroche, R. & Lefevre, F. (2017), Incremental human-machine dialogue simulation, in 'Dialogues with Social Robots', Springer, pp. 53-66.
Kingma, D. & Ba, J. (2015), Adam: A method for stochastic optimization, in 'ICLR'.

Kingma, D. P. & Welling, M. (2014), 'Auto-encoding variational Bayes', ICLR.

Kiros, R., Zhu, Y., Salakhutdinov, R. R., Zemel, R., Urtasun, R., Torralba, A. & Fidler, S. (2015), Skip-thought vectors, in 'NIPS'.

Koren, Y., Bell, R. & Volinsky, C. (2009), 'Matrix factorization techniques for recommender systems', Computer 42(8).

Lazaridou, A., Peysakhovich, A. & Baroni, M. (2016), 'Multi-agent cooperation and the emergence of (natural) language', arXiv preprint arXiv:1612.07182.
Lazaridou, A., Pham, N. T. & Baroni, M. (2016), 'Towards multi-agent communication-based language learning', arXiv preprint arXiv:1605.07133.
Lee, S. & Eskenazi, M. (2012), POMDP-based Let's Go system for spoken dialog challenge, in 'Spoken Language Technology Workshop (SLT), 2012 IEEE', IEEE, pp. 61-66.

Levin, E., Pieraccini, R. & Eckert, W. (2000), 'A stochastic model of human-machine interaction for learning dialog strategies', IEEE Transactions on Speech and Audio Processing 8(1), 11-23.

Lewis, M., Yarats, D., Dauphin, Y. N., Parikh, D. & Batra, D. (2017), Deal or No Deal? End-to-End Learning for Negotiation Dialogues, in 'EMNLP'.

Li, J., Monroe, W., Ritter, A., Galley, M., Gao, J. & Jurafsky, D. (2016), 'Deep reinforcement learning for dialogue generation', arXiv preprint arXiv:1606.01541.
Lin, L.-J. (1993), Reinforcement learning for robots using neural networks, Technical report, Carnegie-Mellon University, Pittsburgh, PA, School of Computer Science.
Liu, B. & Lane, I. (2017), Iterative policy learning in end-to-end trainable task-oriented neural dialog models, in 'Proceedings of 2017 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)', Okinawa, Japan.

Liu, C.-W., Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L. & Pineau, J. (2016), How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation, in 'EMNLP'.

López-Cózar, R. (2016), 'Automatic creation of scenarios for evaluating spoken dialogue systems via user-simulation', Knowledge-Based Systems 106, 51-73.
Lowe, R., Noseworthy, M., Serban, I. V., Angelard-Gontier, N., Bengio, Y. & Pineau, J. (2017), Towards an automatic Turing test: Learning to evaluate dialogue responses, in 'ACL'.
Lowe, R., Pow, N., Serban, I. & Pineau, J. (2015), The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems, in 'SIGDIAL'.
Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L. & Pineau, J. (2016), 'On the evaluation of dialogue systems with next utterance classification', arXiv preprint arXiv:1605.05414.

Lowe, R. T., Pow, N., Serban, I. V., Charlin, L., Liu, C.-W. & Pineau, J. (2017), 'Training end-to-end dialogue systems with the Ubuntu Dialogue Corpus', Dialogue & Discourse 8(1).

Marelli, M., Bentivogli, L., Baroni, M., Bernardi, R., Menini, S. & Zamparelli, R. (2014), SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment, in 'SemEval Workshop, COLING'.
McGlashan, S., Fraser, N., Gilbert, N., Bilange, E., Heisterkamp, P. & Youd, N. (1992), Dialogue management for telephone information systems, in 'ANLC'.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S. & Dean, J. (2013), Distributed representations of words and phrases and their compositionality, in 'NIPS'.

Miller, A. H., Feng, W., Fisch, A., Lu, J., Batra, D., Bordes, A., Parikh, D. & Weston, J. (2017), 'ParlAI: A dialog research software platform', arXiv preprint arXiv:1705.06476.

Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D. & Riedmiller, M. (2013), 'Playing Atari with deep reinforcement learning', arXiv preprint arXiv:1312.5602.

Mordatch, I. & Abbeel, P. (2017), 'Emergence of grounded compositional language in multi-agent populations', arXiv preprint arXiv:1703.04908.
Nair, V. & Hinton, G. E. (2010), Rectified linear units improve restricted Boltzmann machines, in 'Proceedings of the 27th International Conference on Machine Learning (ICML-10)', pp. 807-814.
Ng, A. Y., Harada, D. & Russell, S. (1999), Policy invariance under reward transformations: Theory and application to reward shaping, in 'ICML', Vol. 99, pp. 278-287.

Nguyen, T., Rosenberg, M., Song, X., Gao, J., Tiwary, S., Majumder, R. & Deng, L. (2016), 'MS MARCO: A Human Generated MAchine Reading COmprehension Dataset', arXiv preprint arXiv:1611.09268.

Paek, T. (2006), Reinforcement learning for spoken dialogue systems: Comparing strengths and weaknesses for practical deployment, in 'Proc. Dialog-on-Dialog Workshop, Interspeech'.

Pennington, J., Socher, R. & Manning, C. D. (2014), GloVe: Global vectors for word representation, in 'EMNLP', Vol. 14.
Pieraccini, R., Suendermann, D., Dayanidhi, K. & Liscombe, J. (2009), Are we there yet? Research in commercial spoken dialog systems, in 'Text, Speech and Dialogue', Springer, pp. 3-13.
Precup, D. (2000), 'Eligibility traces for off-policy policy evaluation', Computer Science Department Faculty Publication Series.

Precup, D., Sutton, R. S. & Dasgupta, S. (2001), Off-policy temporal-difference learning with function approximation, in 'ICML'.

Raux, A., Bohus, D., Langner, B., Black, A. W. & Eskenazi, M. (2006), Doing research on a deployed spoken dialogue system: One year of Let's Go! experience, in 'INTERSPEECH'.

Rezende, D. J., Mohamed, S. & Wierstra, D. (2014), Stochastic backpropagation and approximate inference in deep generative models, in 'ICML', pp. 1278-1286.
Schatzmann, J., Thomson, B., Weilhammer, K., Ye, H. & Young, S. (2007), Agenda-based user simulation for bootstrapping a POMDP dialogue system, in 'Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers', Association for Computational Linguistics, pp. 149-152.
Serban, I. V., Lowe, R., Charlin, L. & Pineau, J. (2016), Generative deep neural networks for dialogue: A short review, in 'NIPS, Let's Discuss: Learning Methods for Dialogue Workshop'.

Serban, I. V. & Pineau, J. (2015), Text-based speaker identification for multi-participant open-domain dialogue systems, in 'Neural Information Processing Systems Workshop on Machine Learning for Spoken Language Understanding'.

Serban, I. V., Sordoni, A., Lowe, R., Charlin, L., Pineau, J., Courville, A. & Bengio, Y. (2017), A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues, in 'AAAI'.
Shawar, B. A. & Atwell, E. (2007), Chatbots: are they really useful?, in 'LDV Forum', Vol. 22.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M. et al. (2016), 'Mastering the game of Go with deep neural networks and tree search', Nature 529(7587), 484-489.

Simpson, A. & Fraser, N. M. (1993), Black box and glass box evaluation of the SUNDIAL system, in 'Third European Conference on Speech Communication and Technology'.

Singh, S., Litman, D., Kearns, M. & Walker, M. (2002), 'Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system', Journal of Artificial Intelligence Research 16, 105-133.

Singh, S. P., Kearns, M. J., Litman, D. J. & Walker, M. A. (1999), Reinforcement learning for spoken dialogue systems, in 'NIPS', pp. 956-962.
Socher, R., Perelygin, A., Wu, J. Y., Chuang, J., Manning, C. D., Ng, A. Y., Potts, C. et al. (2013), Recursive deep models for semantic compositionality over a sentiment treebank, in 'Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)', Vol. 1631, p. 1642.
Stolcke, A., Ries, K., Coccaro, N., Shriberg, E., Bates, R., Jurafsky, D., Taylor, P., Martin, R., Van Ess-Dykema, C. & Meteer, M. (2000), 'Dialogue act modeling for automatic tagging and recognition of conversational speech', Computational Linguistics 26(3).

Stone, B. & Soper, S. (2014), 'Amazon Unveils a Listening, Talking, Music-Playing Speaker for Your Home', Bloomberg L.P. Retrieved 2014-11-07.
Su, P.-H., Gasic, M., Mrksic, N., Rojas-Barahona, L., Ultes, S., Vandyke, D., Wen, T.-H. & Young, S. (2016), 'Continuously learning neural dialogue management', arXiv preprint arXiv:1606.02689.
Su, P.-H., Vandyke, D., Gašić, M., Kim, D., Mrkšić, N., Wen, T.-H. & Young, S. (2015), Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems, in 'Interspeech'.

Suendermann-Oeft, D., Ramanarayanan, V., Teckenbrock, M., Neutatz, F. & Schmidt, D. (2015), HALEF: An open-source standard-compliant telephony-based modular spoken dialog system: A review and an outlook, in 'Natural Language Dialog Systems and Intelligent Assistants', Springer.

Sukhbaatar, S., Fergus, R. et al. (2016), Learning multiagent communication with backpropagation, in 'Advances in Neural Information Processing Systems', pp. 2244-2252.
Sutton, R. S. & Barto, A. G. (1998), Reinforcement Learning: An Introduction, MIT Press, Cambridge.
Traum, D., Marsella, S. C., Gratch, J., Lee, J. & Hartholt, A. (2008), Multi-party, multi-issue, multi-strategy negotiation for multi-modal virtual agents, in 'International Workshop on Intelligent Virtual Agents', Springer, pp. 117-130.

Truong, H. P., Parthasarathi, P. & Pineau, J. (2017), 'MACA: A modular architecture for conversational agents', arXiv preprint arXiv:1705.00673.

Wallace, R. S. (2009), 'The anatomy of ALICE', Parsing the Turing Test.
Ward, N. G., Rivera, A. G., Ward, K. & Novick, D. G. (2005), 'Root causes of lost time and user stress in a simple dialog system'.

Weizenbaum, J. (1966), 'ELIZA - a computer program for the study of natural language communication between man and machine', Communications of the ACM 9(1).
Williams, J. D. (2011), An empirical evaluation of a statistical dialog system in public use, in 'Proceedings of the SIGDIAL 2011 Conference', Association for Computational Linguistics, pp. 130-141.

Williams, J. D., Raux, A. & Henderson, M. (2016), 'Introduction to the special issue on dialogue state tracking', Dialogue & Discourse 7(3), 1-3.

Williams, J. D. & Young, S. (2007), 'Partially observable Markov decision processes for spoken dialog systems', Computer Speech & Language 21(2), 393-422.

Williams, J., Raux, A., Ramachandran, D. & Black, A. (2013), The dialog state tracking challenge, in 'SIGDIAL', pp. 404-413.

Williams, R. J. (1992), 'Simple statistical gradient-following algorithms for connectionist reinforcement learning', Machine Learning 8(3-4).
Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K. et al. (2016), 'Google's neural machine translation system: Bridging the gap between human and machine translation', arXiv preprint arXiv:1609.08144.

Young, S., Gasic, M., Thomson, B. & Williams, J. D. (2013), 'POMDP-based statistical spoken dialog systems: A review', Proceedings of the IEEE 101(5), 1160-1179.

Yu, L., Hermann, K. M., Blunsom, P. & Pulman, S. (2014), Deep learning for answer sentence selection, in 'NIPS, Workshop on Deep Learning'.

Yu, Z., Xu, Z., Black, A. W. & Rudnicky, A. I. (2016), Strategy and policy learning for non-task-oriented conversational systems, in 'SIGDIAL'.
1709.02349 | 172 | Zhao, T. & Eskenazi, M. (2016), Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning, in 'SIGDIAL'.
Zhao, T., Lee, K. & Eskenazi, M. (2016), DialPort: Connecting the spoken dialog research community to real user data, in 'Spoken Language Technology Workshop (SLT), 2016 IEEE', IEEE, pp. 83–90.
Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A. & Fidler, S. (2015), Aligning books and movies: Towards story-like visual explanations by watching movies and reading books, in 'ICCV'.
1709.01134 | 2 | # Abstract
For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that the WRPN scheme is better than previously reported accuracies on the ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.
1709.01134 | WRPN: Wide Reduced-Precision Networks | http://arxiv.org/pdf/1709.01134 | Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr | cs.CV, cs.LG, cs.NE | cs.CV | 20170904 | 20170904
1709.01134 | 3 | # 1 Introduction
A promising approach to lower the compute and memory requirements of convolutional deep-learning workloads is through the use of low numeric precision algorithms. Operating in lower precision mode reduces computation as well as data movement and storage requirements. Due to such efficiency benefits, there are many existing works which propose low-precision deep neural networks (DNNs) [25, 13, 15, 7, 22], even down to 2-bit ternary mode [27, 12, 23] and 1-bit binary mode [26, 4, 17, 5, 21]. However, the majority of existing works in low-precision DNNs sacrifice accuracy over the baseline full-precision networks. Further, most prior works target reducing the precision of the model parameters (network weights). This primarily benefits the inference step only when batch sizes are small.
1709.01134 | 4 | To improve both execution efficiency and accuracy of low-precision networks, we reduce both the precision of activation maps and model parameters and increase the number of filter maps in a layer. We call networks using this scheme wide reduced-precision networks (WRPN) and find that this scheme compensates or surpasses the accuracy of the baseline full-precision network. Although the number of raw compute operations increases as we increase the number of filter maps in a layer, the compute bits required per operation is now a fraction of what is required when using full-precision operations (e.g. going from FP32 AlexNet to 4-bits precision and doubling the number of filters increases the number of compute operations by 4x, but each operation is 8x more efficient than FP32).
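As a concrete reading of the parenthetical example above, the short sketch below works through the same arithmetic; the function name and the assumption that per-operation cost scales linearly with operand bit-width are ours, not part of the original text.

```python
def relative_compute_cost(width_mult, precision_bits, base_bits=32):
    """Rough compute-cost ratio of a widened reduced-precision layer vs. its FP32 baseline.

    Widening the filter maps by `width_mult` multiplies the number of
    multiply-accumulates by width_mult ** 2; each operation is assumed to be
    base_bits / precision_bits times cheaper than an FP32 operation.
    """
    relative_macs = width_mult ** 2
    relative_cost_per_mac = precision_bits / base_bits
    return relative_macs * relative_cost_per_mac

# Example from the text: doubling the filters at 4-bit precision gives
# 4x the operations, each 8x cheaper, i.e. half the baseline compute cost.
print(relative_compute_cost(2, 4))  # 0.5
```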
1709.01134 | 5 | WRPN offers better accuracies, while being computationally less expensive compared to previously reported reduced-precision networks. We report results on AlexNet [11], batch-normalized Inception [9], and ResNet-34 [8] on the ILSVRC-12 [11] dataset. We find 4 bits to be sufficient for training deep and wide models while achieving similar or better accuracy than the baseline network. With 4-bit activations and 2-bit weights, we find the accuracy to be at par with the baseline full-precision network. Making the networks wider and operating with 1-bit precision, we close the accuracy gap left by previously reported binary networks and show state-of-the-art results for ResNet-34 (69.85% top-1 with 2x wide) and AlexNet (48.04% top-1 with 1.3x wide). To the best of our knowledge, our reported accuracies with binary networks (and even 4-bit precision) are the highest to date.
1709.01134 | 6 | Our reduced-precision quantization scheme is hardware friendly, allowing for efficient hardware implementations. To this end, we evaluate the efficiency benefits of low-precision operations (4 bits down to 1 bit) on a Titan X GPU, an Arria-10 FPGA and an ASIC. We see that the FPGA and ASIC can deliver significant efficiency gains over FP32 operations (6.5x to 100x), while the GPU cannot take advantage of very low-precision operations.
# 2 Motivation for reduced-precision activation maps
1709.01134 | 9 | Figure 1: Memory footprint of activations (ACTs) and weights (W) during training and inference for mini-batch sizes 1 and 32.
While most prior works proposing reduced-precision networks work with low-precision weights (e.g. [4, 27, 26, 23, 12, 5, 21]), we find that activation maps occupy a larger memory footprint when using mini-batches of inputs. Using mini-batches of inputs is typical in training of DNNs and cloud-based batched inference [10]. Figure 1 shows the memory footprint of activation maps and filter maps as batch size changes for 4 different networks (AlexNet, Inception-ResNet-v2 [19], ResNet-50 and ResNet-101) during the training and inference steps.
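To illustrate why activations dominate the footprint once inputs are batched, the sketch below counts activation and weight elements for a few made-up convolution layers; the layer shapes are purely illustrative and are not taken from the networks plotted in Figure 1.

```python
# Hypothetical conv layers: (out_channels, in_channels, kernel, out_h, out_w).
LAYERS = [
    (64,  3,   7, 112, 112),
    (128, 64,  3, 56,  56),
    (256, 128, 3, 28,  28),
]

def footprint_bytes(batch_size, bytes_per_elem=4):
    """Return (activation_bytes, weight_bytes) for one forward pass over LAYERS."""
    acts = sum(batch_size * c_out * h * w for c_out, _, _, h, w in LAYERS)
    wts = sum(c_out * c_in * k * k for c_out, c_in, k, _, _ in LAYERS)
    return acts * bytes_per_elem, wts * bytes_per_elem

for bs in (1, 32, 128):
    act_b, wt_b = footprint_bytes(bs)
    print(f"batch {bs:3d}: ACTs {act_b / 2**20:7.1f} MiB, "
          f"W {wt_b / 2**20:5.1f} MiB, ACT share {act_b / (act_b + wt_b):.1%}")
```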
[Figure 2: per-layer schematic showing weight tensors (W), input/output feature maps (IFM/OFM), stored activations (ACT) and gradient maps (Grad) for a feed-forward network.]
1709.01134 | 12 | As batch size increases, because of filter reuse across batches of inputs, activation maps occupy a significantly larger fraction of memory compared to the filter weights. This aspect is illustrated in Figure 2, which shows the memory requirements of a canonical feed-forward DNN for a hardware-accelerator-based system (e.g. GPU, FPGA, PCIe-connected ASIC device, etc.). During training, the sum of all the activation maps (ACT) and weight tensors (W) is allocated in device memory for the forward pass, along with memory for gradient maps during backward propagation. The total memory requirement for the training phase is the sum of the memory required for the activation maps and weights plus the maximum of the input gradient maps (δZ) and the maximum of the back-propagated gradients (δX). During inference, memory is allocated for the input (IFM) and output feature maps (OFM) required by a single layer, and these memory allocations are reused for other layers. The total memory allocation during inference is then the maximum IFM plus the maximum OFM required across all the layers, plus the sum of all W tensors. At batch sizes of 128 and more, activations start to occupy more than 98% of the total memory footprint during training.
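Taken literally, the allocation rule above can be turned into a small calculator. The sketch below is our simplified interpretation of that description (the per-layer sizes are invented for the example): training memory sums all activations and weights and keeps the larger gradient buffer, while inference keeps only the largest input and output feature maps alive alongside all the weights.

```python
# Hypothetical per-layer element counts: (weight_elems, output_activation_elems_per_sample).
LAYERS = [(9_408, 802_816), (73_728, 401_408), (294_912, 200_704)]
INPUT_ELEMS = 3 * 224 * 224  # assumed network input size per sample

def training_mib(batch, bytes_per_elem=4):
    acts = sum(a for _, a in LAYERS) * batch       # all ACT tensors kept for the backward pass
    wts = sum(w for w, _ in LAYERS)                # all W tensors
    max_grad = max(a for _, a in LAYERS) * batch   # larger of the gradient map buffers
    return (acts + wts + max_grad) * bytes_per_elem / 2**20

def inference_mib(batch, bytes_per_elem=4):
    wts = sum(w for w, _ in LAYERS)
    fmaps = [INPUT_ELEMS] + [a for _, a in LAYERS]
    max_ifm = max(fmaps[:-1]) * batch              # largest input feature map of any layer
    max_ofm = max(fmaps[1:]) * batch               # largest output feature map of any layer
    return (wts + max_ifm + max_ofm) * bytes_per_elem / 2**20

for bs in (1, 32, 128):
    print(f"batch {bs:3d}: training ~{training_mib(bs):8.1f} MiB, "
          f"inference ~{inference_mib(bs):7.1f} MiB")
```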
1709.01134 | 13 | Overall, reducing precision of activations and weights reduces memory footprint, bandwidth and storage while also simplifying the requirements for hardware to efficiently support these operations.
# 3 WRPN scheme and studies on AlexNet
Based on the observation that activations occupy a larger memory footprint than weights, we reduce the precision of activations to speed up the training and inference steps as well as cut down on memory requirements. However, a straightforward reduction in the precision of activation maps leads to a significant reduction in model accuracy [26, 17].
We conduct a sensitivity study where we reduce the precision of activation maps and model weights for AlexNet running on the ILSVRC-12 dataset and train the network from scratch. Table 1 reports our findings. Top-1 single-precision (32-bit weights and activations) accuracy is 57.2%. The accuracy with binary weights and activations is 44.2%. This is similar to what is reported in [17]. The 32b A, 2b W data-point in this table uses the TTQ technique [27]. All other data points are collected using our quantization scheme (described later in Section 5); all the runs have the same hyper-parameters, and training is carried out for the same number of epochs as the baseline network. To be consistent with results reported in prior works, we do not quantize the weights and activations of the first and last layer.
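The quantization scheme itself is described in the paper's Section 5, which is not part of this excerpt. As a stand-in, the sketch below shows a generic k-bit uniform quantizer of the kind commonly used in such studies; the [0, 1] clipping range and the omission of the straight-through gradient estimator are simplifications of ours.

```python
import numpy as np

def quantize(x, k):
    """Uniformly quantize values assumed to lie in [0, 1] to 2**k levels.

    k = 32 is treated as full precision and returned unchanged.
    """
    if k == 32:
        return x
    levels = 2 ** k - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

acts = np.array([0.03, 0.27, 0.55, 0.91])
for k in (8, 4, 2, 1):
    print(k, quantize(acts, k))
```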
1709.01134 | 14 | We find that, in general, reducing the precision of activation maps and weights hurts model accuracy. Further, reducing precision of activations hurts model accuracy much more than reducing precision of the filter parameters. We find TTQ to be quite effective on AlexNet in that one can lower the precision of weights to 2b (while activations are still FP32) and not lose accuracy. However, we did not find this scheme to be effective for other networks like ResNet or Inception.
Table 1: AlexNet top-1 validation set accuracy % as precision of activations (A) and weights (W) changes. All results are with end-to-end training of the network from scratch; '–' marks a data-point we did not experiment for.
Table 2: AlexNet 2x-wide top-1 validation set accuracy % as precision of activations (A) and weights (W) changes.
1709.01134 | 15 | Table 1 data (baseline AlexNet, top-1 %):

        32b A   8b A   4b A   2b A   1b A
32b W   57.2    54.3   54.4   52.7   –
8b W    –       54.5   53.2   51.5   –
4b W    –       54.2   54.4   52.4   –
2b W    57.5    50.2   50.5   51.3   –
1b W    56.8    –      –      –      44.2

Table 2 data (2x-wide AlexNet, top-1 %):

        32b A   8b A   4b A   2b A   1b A
32b W   60.5    58.9   58.6   57.5   52.0
8b W    –       59.0   58.8   57.1   50.8
4b W    –       58.8   58.6   57.3   –
2b W    –       57.6   57.2   55.8   –
1b W    –       –      –      –      48.3
1709.01134 | 16 | To re-gain the model accuracy while working with reduced-precision operands, we increase the number of filter maps in a layer. Although the number of raw compute operations increases with widening of the filter maps in a layer, the bits required per compute operation is now a fraction of what is required when using full-precision operations. As a result, with appropriate hardware support, one can significantly reduce the dynamic memory requirements, memory bandwidth and computational energy, and speed up the training and inference process.
Our widening of filter maps is inspired by the Wide ResNet [24] work, where the depth of the network is reduced and the width of each layer is increased (the operand precision is still FP32). Wide ResNet requires a re-design of the network architecture. In our work, we maintain the same depth parameter as the baseline network but widen the filter maps. We call our approach WRPN - wide reduced-precision networks. In practice, we find this scheme to be very simple and effective: starting with a baseline network architecture, one can change the width of each filter map without changing any other network design parameter or hyper-parameters. Carefully reducing precision and simultaneously widening filters keeps the total compute cost of the network under or at par with the baseline cost.
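Because the scheme leaves depth and every other hyper-parameter untouched, applying it to an existing model definition reduces to scaling each layer's filter-map count. The sketch below illustrates this on a made-up list of convolutional widths; it is not the actual AlexNet or ResNet definition.

```python
def widen(filter_counts, width_mult):
    """Scale the number of filter maps in every layer by width_mult,
    leaving depth and all other design parameters unchanged."""
    return [max(1, int(round(c * width_mult))) for c in filter_counts]

baseline = [64, 192, 384, 256, 256]   # illustrative conv widths only
print(widen(baseline, 2.0))           # a 2x-wide variant
print(widen(baseline, 1.3))           # a 1.3x-wide variant
```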