Dataset schema:

| field | type | min | max |
|---|---|---|---|
| doi | string (length) | 10 | 10 |
| chunk-id | int64 | 0 | 936 |
| chunk | string (length) | 401 | 2.02k |
| id | string (length) | 12 | 14 |
| title | string (length) | 8 | 162 |
| summary | string (length) | 228 | 1.92k |
| source | string (length) | 31 | 31 |
| authors | string (length) | 7 | 6.97k |
| categories | string (length) | 5 | 107 |
| comment | string (length) | 4 | 398 |
| journal_ref | string (length) | 8 | 194 |
| primary_category | string (length) | 5 | 17 |
| published | string (length) | 8 | 8 |
| updated | string (length) | 8 | 8 |
| references | list | | |
1701.06538
14
Mixing Data Parallelism and Model Parallelism: In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices functions as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over d devices, and each device processes a batch of size b, each expert receives a batch of approximately kbd/n examples. Thus, we achieve a factor of d improvement in expert batch size. In the case of a hierarchical MoE (Section B), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.

# Under review as a conference paper at ICLR 2017
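The batch-size arithmetic above can be sketched directly (a minimal illustration; the function and variable names are ours, not from the paper):

```python
def expert_batch_size(d, b, k, n):
    """Approximate combined batch seen by each expert under mixed
    data/model parallelism.

    d: number of data-parallel devices
    b: per-device batch size
    k: experts selected per example by the gating network
    n: total number of experts
    """
    return k * b * d / n

# With a single device each expert sees about k*b/n examples; running d
# devices synchronously multiplies that by d -- the factor-of-d
# improvement described above.
```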
1701.06538#14
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
http://arxiv.org/pdf/1701.06538
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean
cs.LG, cs.CL, cs.NE, stat.ML
null
null
cs.LG
20170123
20170123
[ { "id": "1502.03167" }, { "id": "1606.04199" }, { "id": "1602.02410" }, { "id": "1609.08144" }, { "id": "1511.06297" }, { "id": "1512.02595" } ]
This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times and the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.

Taking Advantage of Convolutionality: In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.
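A minimal sketch of the convolutional trick: fold the time axis into the batch axis and make one large call instead of one call per timestep. `moe_layer` here is a hypothetical placeholder for the real sparse expert computation, not the paper's implementation:

```python
import numpy as np

def moe_layer(x):
    # Placeholder stand-in: any function mapping [batch, d] -> [batch, d]
    # works for this illustration.
    return np.tanh(x) * 2.0

def apply_moe_over_time(inputs):
    # inputs: [T, B, d]. Reshape to one big batch of T*B examples,
    # apply the layer once, and restore the time axis.
    T, B, d = inputs.shape
    flat = inputs.reshape(T * B, d)
    return moe_layer(flat).reshape(T, B, d)
```

Because the MoE is applied identically at every timestep, the folded call produces exactly the same values as a per-timestep loop, but with a T-times larger expert batch.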
Increasing Batch Size for a Recurrent MoE: We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. Gruslys et al. (2016) describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.
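The recomputation idea can be illustrated with a toy checkpointed RNN: store only every k-th hidden state, and re-run the forward pass from the nearest checkpoint whenever an intermediate state is needed. This is a sketch of the general store-and-recompute strategy, not Gruslys et al.'s exact algorithm, and all names are ours:

```python
import numpy as np

def rnn_step(h, x, W):
    # Toy recurrence; a real model would use an LSTM or MoE here.
    return np.tanh(W @ h + x)

def forward_with_checkpoints(xs, W, h0, k):
    """Run the RNN forward, storing only every k-th hidden state."""
    ckpts, h = {0: h0}, h0
    for t, x in enumerate(xs, start=1):
        h = rnn_step(h, x, W)
        if t % k == 0:
            ckpts[t] = h
    return h, ckpts

def recompute_state(t, xs, W, ckpts, k):
    """Recover hidden state h_t from the nearest earlier checkpoint."""
    t0 = (t // k) * k
    h = ckpts[t0]
    for s in range(t0, t):
        h = rnn_step(h, xs[s], W)
    return h
```

Memory for activations drops from O(T) to roughly O(T/k + k), at the price of one extra forward pass per recomputed segment, which frees memory for a larger batch.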
3.2 NETWORK BANDWIDTH

Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of ReLU-activated units. Since the weight matrices in the expert have sizes input_size×hidden_size and hidden_size×output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.
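The computation-to-I/O argument can be checked numerically (sizes here are illustrative, not the paper's):

```python
def compute_to_io_ratio(input_size, hidden_size, output_size):
    # Multiply-adds in the expert's two weight matrices per example,
    # versus the number of values sent over the network (input + output).
    mult_adds = input_size * hidden_size + hidden_size * output_size
    io_values = input_size + output_size
    return mult_adds / io_values

# When input_size == output_size == n, the ratio is
# (n*h + h*n) / (n + n) == h: exactly the hidden-layer size.
```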
# 4 BALANCING EXPERT UTILIZATION

We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. Eigen et al. (2013) describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. Bengio et al. (2015) include a soft constraint on the batch-wise average of each gate.[1]

We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss L_importance, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor w_importance. This additional loss encourages all experts to have equal importance.

Importance(X) = \sum_{x \in X} G(x)    (6)

L_importance(X) = w_importance \cdot CV(Importance(X))^2    (7)
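Equations 6 and 7 translate to a few lines of code. This is a hedged sketch: the function name is ours, and we assume the population standard deviation in the coefficient of variation:

```python
import numpy as np

def importance_loss(gates, w_importance):
    """Eq. 6-7: penalize uneven batchwise importance across experts.

    gates: array of shape [batch, n_experts] holding the gate values
    G(x) for each example in the batch X.
    """
    importance = gates.sum(axis=0)             # Eq. 6: batchwise sum per expert
    cv = importance.std() / importance.mean()  # coefficient of variation
    return w_importance * cv ** 2              # Eq. 7
```

The loss is zero when every expert accumulates the same total gate weight, and grows as the gating concentrates on a few experts.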
[1] Bengio et al. (2015) also include two additional losses. One controls per-example sparsity, which we do not need since it is enforced by the fixed value of k. A third loss encourages diversity of gate values. In our experiments, we find that the gate values naturally diversify as the experts specialize (in a virtuous cycle), and we do not need to enforce diversity of gate values.

While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, L_load, which ensures balanced loads. Appendix A contains the definition of this function, along with experimental results.
# 5 EXPERIMENTS

1 BILLION WORD LANGUAGE MODELING BENCHMARK

Dataset: This dataset, introduced by Chelba et al. (2013), consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.

Previous State-of-the-Art: The best previously published results (Jozefowicz et al., 2016) use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers (Hochreiter & Schmidhuber, 1997; Gers et al., 2000). The number of parameters in the LSTM layers of these models varies from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure 2-right.

MoE Models: Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure 1). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix C.
Low Computation, Varied Capacity: To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forward pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input. The results of these models are shown in Figure 2-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.

[Figure 2: two panels plotting test perplexity, on the left against model parameters excluding embedding and softmax (baseline, flat MoE, and hierarchical MoE models), on the right against computational budget in ops/timestep (LSTM and MoE models).]
Figure 2: Model comparison on 1-Billion-Word Language-Modeling Benchmark. On the left, we plot test perplexity as a function of model capacity for models with similar computational budgets of approximately 8-million-ops-per-timestep. On the right, we plot test perplexity as a function of computational budget. The top line represents the LSTM models from (Jozefowicz et al., 2016). The bottom line represents 4-billion parameter MoE models with different computational budgets.

Varied Computation, High Capacity: In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix C.2.
Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C.

| Model | Test Perplexity (10 epochs) | Test Perplexity (100 epochs) | #Parameters excluding embedding and softmax layers | ops/timestep | Training Time (10 epochs) | TFLOPS/GPU |
|---|---|---|---|---|---|---|
| Best Published Results | 34.7 | 30.6 | 151 million | 151 million | 59 hours, 32 k40s | 1.09 |
| Low-Budget MoE Model | 34.1 | | 4303 million | 8.9 million | 15 hours, 16 k40s | 0.74 |
| Medium-Budget MoE Model | 31.3 | | 4313 million | 33.8 million | 17 hours, 32 k40s | 1.22 |
| High-Budget MoE Model | 28.0 | | 4371 million | 142.7 million | 47 hours, 32 k40s | 1.56 |

Results of these three models form the bottom line of Figure 2-right. Table 1 compares the results of these models to the best previously published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.
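The 6% figure follows directly from the ops/timestep column of Table 1:

```python
# Ops/timestep from Table 1: the low-budget MoE model vs. the best
# published model.
moe_ops = 8.9e6
baseline_ops = 151e6
fraction = moe_ops / baseline_ops  # about 0.059, i.e. roughly 6%
```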
Computational Efficiency: We trained our models using TensorFlow (Abadi et al., 2016) on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.
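The TFLOPS/GPU definition above is a one-line calculation (the function name and the example numbers are ours, for illustration only):

```python
def observed_tflops_per_gpu(flops_per_batch, step_time_s, num_gpus):
    """Observed efficiency: total FLOPs to process one training batch,
    divided by the measured step time and the number of GPUs, in TFLOPS.
    """
    return flops_per_batch / (step_time_s * num_gpus) / 1e12

# e.g. a batch costing 1e14 FLOPs, taking 2 s on a 32-GPU cluster,
# yields about 1.56 TFLOPS/GPU.
```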
For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computational efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model, which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix C, Table 7.

100 BILLION WORD GOOGLE NEWS CORPUS

[Figure 3: test perplexity vs. model parameters excluding embedding and softmax, after training on 10B words and after training on 100B words.]

Figure 3: Language modeling on a 100 billion word corpus. Models have similar computational budgets (8 million ops/timestep).
On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure 2-left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.

# Under review as a conference paper at ICLR 2017

We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. As in the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix D.
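The expert counts above map onto MoE-layer sizes by simple multiplication. A sketch, where the expert dimensions (`d_input`, `d_hidden`) are assumed values chosen so that 131072 experts reproduce the 137-billion-parameter total quoted above, not dimensions stated in this section:

```python
def moe_layer_params(num_experts, d_input=1024, d_hidden=512):
    """Weight count of an MoE layer whose experts are one-hidden-layer
    feed-forward networks (two weight matrices per expert, biases ignored).
    d_input/d_hidden are illustrative assumptions."""
    per_expert = d_input * d_hidden + d_hidden * d_input
    return num_experts * per_expert

for n in [32, 256, 1024, 4096, 16384, 65536, 131072]:
    print(f"{n:>6} experts: {moe_layer_params(n) / 1e9:.3f}B params")
```

Under these assumed dimensions, 131072 experts give 137.4B parameters and 65536 experts give 68.7B, consistent with the figures cited in this section.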
Results: Figure 3 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% below the computationally matched baseline, but degrades at 131072 experts, possibly as a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets. Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.
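The 99.994% layer-sparsity figure follows from the fraction of experts that are never evaluated for a given example. A one-line check, assuming k=4 active experts per example (an assumption chosen to reproduce the quoted number):

```python
def layer_sparsity(num_experts, k_active):
    """Fraction of experts left unevaluated per example.
    k_active=4 is an assumption, not stated in this section."""
    return 1.0 - k_active / num_experts

print(f"{layer_sparsity(65536, 4):.3%}")  # → 99.994%
```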
5.3 MACHINE TRANSLATION (SINGLE LANGUAGE PAIR)

Model Architecture: Our model was a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2, respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts, each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure, and results can be found in Appendix E.

Datasets: We benchmarked our method on the WMT'14 En→Fr and En→De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in (Wu et al., 2016): newstest2014 was used as the test set to compare against previous work (Luong et al., 2015a; Zhou et al., 2016; Wu et al., 2016), while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on Google's production English→French dataset.
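The "about 8 billion" added parameters follow directly from the architecture described above. A back-of-envelope check (the per-expert count of two million is the paper's approximate figure; exact expert dimensions are not given in this section):

```python
# Two MoE layers (one in the encoder, one in the decoder), each with up
# to 2048 experts of roughly two million parameters apiece.
experts_per_layer = 2048
params_per_expert = 2_000_000  # "about two million parameters"
moe_layers = 2

added_params = moe_layers * experts_per_layer * params_per_expert
print(added_params)  # → 8192000000, i.e. ~8.2 billion
```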
Table 2: Results on WMT'14 En→Fr newstest2014 (bold values represent best results).

Model                                       | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time
MoE with 2048 Experts                       | 2.69            | 40.35     | 85M          | 8.7B              | 3 days/64 k40s
MoE with 2048 Experts (longer training)     | 2.63            | 40.56     | 85M          | 8.7B              | 6 days/64 k40s
GNMT (Wu et al., 2016)                      | 2.79            | 39.22     | 214M         | 278M              | 6 days/96 k80s
GNMT+RL (Wu et al., 2016)                   | 2.96            | 39.92     | 214M         | 278M              | 6 days/96 k80s
PBMT (Durrani et al., 2014)                 |                 | 37.0      |              |                   |
LSTM (6-layer) (Luong et al., 2015b)        |                 | 31.5      |              |                   |
LSTM (6-layer+PosUnk) (Luong et al., 2015b) |                 | 33.1      |              |                   |
DeepAtt (Zhou et al., 2016)                 |                 | 37.7      |              |                   |
DeepAtt+PosUnk (Zhou et al., 2016)          |                 | 39.2      |              |                   |

Table 3: Results on WMT'14 En→De newstest2014 (bold values represent best results). [Table truncated in this extraction; surviving Test BLEU values: 26.03, 24.91, 24.66, 20.7, 20.6.]
Table 4: Results on the Google Production En→Fr dataset (bold values represent best results).

Model                  | Eval Perplexity | Eval BLEU | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time
MoE with 2048 Experts  | 2.60            | 37.27     | 2.69            | 36.57     | 85M          | 8.7B              | 1 day/64 k40s
GNMT (Wu et al., 2016) | 2.78            | 35.80     | 2.87            | 35.56     | 214M         | 278M              | 6 days/96 k80s

Results: Tables 2, 3, and 4 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En→Fr and En→De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in (Wu et al., 2016). The perplexity scores are also better.2 On the Google Production dataset, our model achieved a 1.01 higher test BLEU score even after training for only one sixth of the time.
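The quoted 1.34 and 1.12 BLEU gains can be recomputed from the table values; which GNMT variant serves as the En→De baseline is an assumption here (the 24.91 entry), chosen to match the stated gain:

```python
# MoE (longer training, 40.56) vs. GNMT baselines from Tables 2 and 3.
en_fr_gain = round(40.56 - 39.22, 2)  # En→Fr: MoE vs. GNMT (no RL)
en_de_gain = round(26.03 - 24.91, 2)  # En→De: MoE vs. the 24.91 baseline
print(en_fr_gain, en_de_gain)  # → 1.34 1.12
```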
5.4 MULTILINGUAL MACHINE TRANSLATION

Dataset: (Johnson et al., 2016) train a single GNMT (Wu et al., 2016) model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for twelve separately trained single-pair GNMT models. This is not surprising, given that the twelve models have twelve times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix E for details on model architecture. We train our model on the same dataset as (Johnson et al., 2016) and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.
Results: Results for the single-pair GNMT models, the multilingual GNMT model, and the multilingual MoE model are given in Table 5. The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English → Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.

Table 5: Multilingual Machine Translation (bold values represent best results).
                               | GNMT-Mono    | GNMT-Multi       | MoE-Multi        | MoE-Multi vs. GNMT-Multi
Parameters                     | 278M / model | 278M             | 8.7B             |
ops/timestep                   | 212M         | 212M             | 102M             |
training time, hardware        | various      | 21 days, 96 k20s | 12 days, 64 k40s |
Perplexity (dev)               |              | 4.14             | 3.35             | -19%
French → English Test BLEU     | 36.47        | 34.40            | 37.46            | +3.06
German → English Test BLEU     | 31.77        | 31.17            | 34.80            | +3.63
Japanese → English Test BLEU   | 23.41        | 21.62            | 25.91            | +4.29
Korean → English Test BLEU     | 25.42        | 22.87            | 28.71            | +5.84
Portuguese → English Test BLEU | 44.40        | 42.53            | 46.13            | +3.60
Spanish → English Test BLEU    | 38.00        | 36.04            | 39.39            | +3.35
English → French Test BLEU     | 35.37        | 34.00            | 36.59            | +2.59
English → German Test BLEU     | 26.43        | 23.15            | 24.53            | +1.38
English → Japanese Test BLEU   | 23.66        | 21.10            | 22.78            | +1.68
English → Korean Test BLEU     | 19.75        | 18.41            | 16.62            | -1.79
English → Portuguese Test BLEU | 38.40        | 37.35            | 37.90            | +0.55
English → Spanish Test BLEU    | 34.50        | 34.25            | 36.21            | +1.96
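The Results paragraph's win counts and the per-pair deltas can be recomputed directly from Table 5's BLEU columns:

```python
# Per-language-pair Test BLEU, in Table 5's row order (Fr→En ... En→Es).
mono  = [36.47, 31.77, 23.41, 25.42, 44.40, 38.00, 35.37, 26.43, 23.66, 19.75, 38.40, 34.50]
multi = [34.40, 31.17, 21.62, 22.87, 42.53, 36.04, 34.00, 23.15, 21.10, 18.41, 37.35, 34.25]
moe   = [37.46, 34.80, 25.91, 28.71, 46.13, 39.39, 36.59, 24.53, 22.78, 16.62, 37.90, 36.21]

# Rightmost column of Table 5, and the counts cited in the text.
delta_vs_multi = [round(a - b, 2) for a, b in zip(moe, multi)]
wins_vs_multi = sum(d > 0 for d in delta_vs_multi)      # beats GNMT-Multi
wins_vs_mono = sum(a > b for a, b in zip(moe, mono))    # beats GNMT-Mono
print(wins_vs_multi, wins_vs_mono, max(delta_vs_multi))  # → 11 8 5.84
```

This reproduces the "11 of 12 language pairs", "8 of 12 language pairs", and "as much as 5.84 points" figures from the Results paragraph.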
# 6 CONCLUSION

This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.

ACKNOWLEDGMENTS

We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better.

2 Reported perplexities are relative to the tokenization used by both our models and GNMT.

# REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.
Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Y. Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Y. Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, and Zhenyao Zhu. Deep speech 2: End-to-end speech recognition in English and Mandarin. arXiv preprint arXiv:1512.02595, 2015.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013. K. Cho and Y. Bengio. Exponentially Increasing the Capacity-to-Computation Ratio for Conditional Computation in Deep Learning. ArXiv e-prints, June 2014. Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of SVMs for very large scale problems. Neural Computing, 2002. Andrew Davis and Itamar Arel. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461, 2013. Marc Peter Deisenroth and Jun Wei Ng. Distributed Gaussian processes. In ICML, 2015. John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization, 2010.
Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation, 2014.

David Eigen, Marc'Aurelio Ranzato, and Ilya Sutskever. Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013.

Ekaterina Garmash and Christof Monz. Ensemble learning for multi-source neural machine translation. In staff.science.uva.nl/c.monz, 2016.

# Under review as a conference paper at ICLR 2017

Felix A. Gers, Jürgen A. Schmidhuber, and Fred A. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 2000.
Audrunas Gruslys, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. Memory-efficient backpropagation through time. CoRR, abs/1606.03401, 2016. URL http://arxiv.org/abs/1606.03401.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2015.

Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. Adaptive mixtures of local experts. Neural Computation, 1991.
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558, 2016. URL http://arxiv.org/abs/1611.04558.

Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 1994.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling, 1995.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.

Ludovic Denoyer and Patrick Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. EMNLP, 2015a.

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. ACL, 2015b.

Carl Edward Rasmussen and Zoubin Ghahramani. Infinite mixtures of Gaussian process experts. NIPS, 2002.
Hasim Sak, Andrew W. Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338–342, 2014.

Mike Schuster and Kaisuke Nakajima. Japanese and Korean voice search. ICASSP, 2012.

Babak Shahbaba and Radford Neal. Nonlinear models using Dirichlet process mixtures. JMLR, 2009.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.

Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In NIPS, 2015.

Volker Tresp. Mixtures of Gaussian processes. In NIPS, 2001.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Bangpeng Yao, Dirk Walther, Diane Beck, and Li Fei-Fei. Hierarchical mixture of classification experts uncovers interactions between brain regions. In NIPS, 2009.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
# APPENDICES

A LOAD-BALANCING LOSS

As discussed in Section 4, for load-balancing purposes we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it cannot be used in back-propagation. Instead, we define a smooth estimator Load(X) of the number of examples assigned to each expert for a batch X of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define P(x, i) as the probability that G(x)_i is nonzero, given a new random choice of noise on element i, but keeping the already-sampled choices of noise on the other elements. To compute P(x, i), we note that G(x)_i is nonzero if and only if H(x)_i is greater than the kth-greatest element of H(x) excluding itself. The probability works out to be:
$$P(x, i) = Pr\Big( (x \cdot W_g)_i + \text{StandardNormal}() \cdot \text{Softplus}((x \cdot W_{noise})_i) > \text{kth\_excluding}(H(x), k, i) \Big) \qquad (8)$$

where kth_excluding(v, k, i) means the kth-highest component of v, excluding component i. Simplifying, we get:

$$P(x, i) = \Phi\left( \frac{(x \cdot W_g)_i - \text{kth\_excluding}(H(x), k, i)}{\text{Softplus}((x \cdot W_{noise})_i)} \right) \qquad (9)$$

where Φ is the CDF of the standard normal distribution.

$$Load(X)_i = \sum_{x \in X} P(x, i) \qquad (10)$$

We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor w_load:

$$L_{load}(X) = w_{load} \cdot CV(Load(X))^2 \qquad (11)$$

Initial Load Imbalance: To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices W_g and W_noise to all zeros, which yields no signal and some noise.
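Equations (9)–(11) can be sketched in plain Python. This is a toy sketch with our own hypothetical helper names, not the paper's implementation; the caller is assumed to supply the clean logits, the sampled noisy logits H(x), and the softplus-transformed noise scales for each example.

```python
import math

def normal_cdf(z):
    """Phi: CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def kth_excluding(v, k, i):
    """kth-highest component of v, excluding component i."""
    rest = sorted((x for j, x in enumerate(v) if j != i), reverse=True)
    return rest[k - 1]

def p_gate_nonzero(clean, noisy, noise_scale, k):
    """Eq. (9): probability each expert stays in the top k under resampled noise.

    clean[i]       = (x . W_g)_i
    noisy[i]       = H(x)_i, the sampled noisy logits
    noise_scale[i] = Softplus((x . W_noise)_i)
    """
    return [
        normal_cdf((clean[i] - kth_excluding(noisy, k, i)) / noise_scale[i])
        for i in range(len(clean))
    ]

def load_loss(batch, k, w_load):
    """Load(X)_i = sum_x P(x, i) (Eq. 10) and L_load = w_load * CV(Load)^2 (Eq. 11)."""
    n_experts = len(batch[0][0])
    load = [0.0] * n_experts
    for clean, noisy, noise_scale in batch:
        for i, p in enumerate(p_gate_nonzero(clean, noisy, noise_scale, k)):
            load[i] += p
    mean = sum(load) / n_experts
    var = sum((l - mean) ** 2 for l in load) / n_experts
    return load, w_load * var / (mean ** 2)
```

Because each P(x, i) is a smooth function of the logits, gradients flow through the load estimator even though the actual expert assignment counts are discrete.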
Experiments: We trained a set of models with identical architecture (the MoE-256 model described in Appendix C), using different values of w_importance and w_load. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in Importance and Load, as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load-balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.

Table 6: Experiments with different combinations of losses. The columns report w_importance, w_load, test perplexity, CV(Importance(X)), CV(Load(X)), and max(Load(X))/mean(Load(X)); the max-to-mean load ratios for the six configurations were 17.80, 1.47, 1.15, 1.14, 1.37, and 1.07.

Results: Results are reported in Table 6. All the combinations containing at least one of the two losses led to very similar model quality, while having no loss was much worse. Models with higher values of w_load had lower loads on the most overloaded expert.
B HIERARCHICAL MIXTURE OF EXPERTS

If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network.3 If the hierarchical MoE consists of a groups of b experts each, we denote the primary gating network by G_primary, the secondary gating networks by (G_1, G_2, ..., G_a), and the expert networks by (E_{0,0}, E_{0,1}, ..., E_{a,b}). The output of the MoE is given by:

$$y_H = \sum_{i=1}^{a} \sum_{j=1}^{b} G_{primary}(x)_i \cdot G_i(x)_j \cdot E_{i,j}(x) \qquad (12)$$

Our metrics of expert utilization change to the following:

$$Importance_H(X)_{i,j} = \sum_{x \in X} G_{primary}(x)_i \cdot G_i(x)_j \qquad (13)$$

$$Load_H(X)_{i,j} = \frac{Load_{primary}(X)_i \cdot Load_i(X^{(i)})_j}{|X^{(i)}|} \qquad (14)$$
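Equation (12) amounts to a nested sparse combination, where zero gate values let whole groups of experts be skipped. A minimal sketch, with our own hypothetical gate and expert callables (scalar expert outputs for brevity):

```python
def hierarchical_moe(x, primary_gate, secondary_gates, experts):
    """y_H = sum_i sum_j G_primary(x)_i * G_i(x)_j * E_{i,j}(x)  (Eq. 12).

    primary_gate(x)       -> list of a sparse weights (mostly zeros)
    secondary_gates[i](x) -> list of b sparse weights for group i
    experts[i][j](x)      -> output of expert (i, j)
    """
    y = 0.0
    for i, g_p in enumerate(primary_gate(x)):
        if g_p == 0.0:
            continue  # conditional computation: unselected groups are never evaluated
        for j, g_s in enumerate(secondary_gates[i](x)):
            if g_s == 0.0:
                continue  # expert (i, j) is not selected
            y += g_p * g_s * experts[i][j](x)
    return y
```

Skipping groups whose primary gate weight is zero is what keeps the computation sparse: neither the secondary gating network nor any expert in an unselected group needs to run.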
Load_primary and Load_i denote the Load functions for the primary gating network and the ith secondary gating network respectively. X^{(i)} denotes the subset of X for which G_primary(x)_i > 0. It would seem simpler to let Load_H(X)_{i,j} = Load_i(X^{(i)})_j, but this would not have a gradient with respect to the primary gating network, so we use the formulation above.

C 1 BILLION WORD LANGUAGE MODELING BENCHMARK - EXPERIMENTAL DETAILS

8-MILLION-OPERATIONS-PER-TIMESTEP MODELS
Model Architecture: Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout (Zaremba et al., 2014) to the layer output, dropping each activation with probability DropProb, otherwise dividing by (1 − DropProb). After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow (He et al., 2015).
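The per-layer dropout-plus-residual wiring described above can be sketched as follows (a toy Python sketch on plain lists; the function names are ours):

```python
import random

def dropout(h, drop_prob, training=True):
    """Drop each activation with probability drop_prob, else divide by (1 - drop_prob)."""
    if not training or drop_prob == 0.0:
        return list(h)
    keep = 1.0 - drop_prob
    return [v / keep if random.random() >= drop_prob else 0.0 for v in h]

def residual_layer(x, layer, drop_prob, training=True):
    """Layer output -> dropout -> add the layer's input (residual connection)."""
    h = dropout(layer(x), drop_prob, training)
    return [xi + hi for xi, hi in zip(x, h)]
```

Dividing the kept activations by (1 − DropProb) keeps their expected value unchanged, so the residual sum has the same scale at training and inference time.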
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
MoE Layer Architecture: Each expert in the MoE layer is a feed-forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains [512 ∗ 1024] + [1024 ∗ 512] = 1M parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section 2.1) with k = 4 for the ordinary MoE layers and k = 2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.

3 We have not found the need for deeper hierarchies.
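A minimal single-example sketch of the Noisy-Top-K Gating rule referenced here: keep the k largest noise-perturbed logits, softmax over them, and zero the rest. The softplus noise scaling and the roles of W_g and W_noise follow Section 2.1, but the shapes and helper names below are ours:

```python
import numpy as np

def noisy_top_k_gating(x, W_g, W_noise, k, rng):
    """Gate one example x over n experts; returns a sparse gate vector."""
    softplus = np.log1p(np.exp(x @ W_noise))          # per-expert noise scale
    logits = x @ W_g + rng.standard_normal(W_g.shape[1]) * softplus
    top_k = np.argsort(logits)[-k:]                   # k largest noisy logits
    gates = np.zeros_like(logits)
    e = np.exp(logits[top_k] - logits[top_k].max())   # stable softmax over top k
    gates[top_k] = e / e.sum()
    return gates
```

Exactly k gates are nonzero, so each example pays for only k experts' worth of computation regardless of the total number of experts.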
Computationally-Matched Baselines: The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:

• MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.
• MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.
• 4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.
• LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions (Sak et al., 2014). The next timestep of the LSTM receives the projected output. This is identical to one of the models published in (Jozefowicz et al., 2016). We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.
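That MoE-1-Wide and MoE-1-Deep are computationally matched to the sparse models can be checked by counting multiply-adds (a sketch; biases and nonlinearities are ignored, and the variable names are ours):

```python
d = 512  # MoE layer input/output dimensionality

# MoE-1-Wide: a single "expert" with one 4096-unit hidden layer
wide_ops = d * 4096 + 4096 * d

# MoE-1-Deep: a single "expert" with four 1024-unit hidden layers
deep_ops = d * 1024 + 3 * (1024 * 1024) + 1024 * d

# Both equal 4 * 1M, the cost of routing an example to four of the
# one-million-parameter experts in the sparse MoE layers.
assert wide_ops == deep_ops == 4 * 1024 * 1024
```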
Training: The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section 3. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling, similarly to the models in (Jozefowicz et al., 2016). For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1. To ensure balanced expert utilization we set w_importance = 0.1 and w_load = 0.1, as described in Section 4 and Appendix A.
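The learning-rate schedule (linear warmup for 1000 steps, then decay proportional to the inverse square root of the step number) can be written as below. The constant chosen so that the two branches meet at the base rate is our assumption; the paper states only proportionality:

```python
def learning_rate(step, base_lr, warmup_steps=1000):
    """Linear warmup, then inverse-square-root decay in the step number."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps       # rises linearly to base_lr
    return base_lr * (warmup_steps / step) ** 0.5  # proportional to step**-0.5
```

Under this convention the rate at step 4000 is half the base rate.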
Results: We evaluate our model using perplexity on the holdout dataset used by (Chelba et al., 2013; Jozefowicz et al., 2016). We follow the standard procedure and sum over all the words, including the end of sentence symbol. Results are reported in Table 7. For each model, we report the test perplexity, the computational budget, the parameter counts, the value of DropProb, and the computational efficiency.

Table 7: Model comparison on 1 Billion Word Language Modeling Benchmark. Models marked with * are from (Jozefowicz et al., 2016).
Model                 Test Perplexity   Test Perplexity   Total #Params
                      (10 epochs)       (final)           (billions)
Kneser-Ney 5-gram*    -                 67.6              1.8
LSTM-512-512*         -                 54.1              0.8
LSTM-1024-512*        -                 48.2              0.8
LSTM-2048-512*        -                 43.7              0.8
LSTM-2048-512         45.0              44.7              0.8
4xLSTM-512            46.0              -                 0.8
MoE-1-Wide            46.1              -                 0.8
MoE-1-Deep            45.7              -                 0.8
MoE-4                 45.0              -                 0.8
MoE-32                39.7              -                 0.9
MoE-256               35.7              -                 1.1
MoE-256-h             36.0              -                 1.1
MoE-1024-h            34.6              -                 1.9
MoE-4096-h            34.1              -                 5.1
2xLSTM-8192-1024*     34.7              30.6              1.8
MoE-34M               31.3              -                 6.0
MoE-143M              28.0              -                 6.0

[The remaining columns of Table 7 (ops/timestep, #Params excluding embedding & softmax, DropProb, and observed TFLOPS per GPU) were garbled in extraction; only the first ops/timestep entries (0.00001, 2.4, 4.7 and 9.4 million) survive.]
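The perplexities reported above follow the standard procedure: sum log-probabilities over all words including each sentence's end-of-sentence symbol, then exponentiate the mean negative log-likelihood per token. A minimal sketch:

```python
import math

def perplexity(token_log_probs):
    """Corpus perplexity from per-token natural-log probabilities
    (every word plus each sentence's end-of-sentence symbol)."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Sanity check: a model uniform over a 4-symbol vocabulary has perplexity 4.
uniform_ppl = perplexity([math.log(0.25)] * 8)
```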
C.2 MORE EXPENSIVE MODELS

We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer, are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 (Sak et al., 2014). MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in their MoE layers. We searched for the best DropProb for each model, and trained each model for 10 epochs.
The two models achieved test perplexity of 31.3 and 28.0 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table 7. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by 18%.

D 100 BILLION WORD GOOGLE NEWS CORPUS - EXPERIMENTAL DETAILS

Model Architecture: The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.

Training: Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.
We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backward pass. Second, we modify the optimizer on the expert parameters to require less auxiliary storage:

The Adam optimizer (Kingma & Ba, 2015) keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set β1 = 0. To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad (Duchi et al., 2010).
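The factored second-moment reconstruction can be sketched as follows: keep only the row-wise and column-wise averages of the estimator matrix, and rebuild the matrix as their outer product divided by the overall mean (for equal weighting, the mean of either average vector equals the overall mean). Function and variable names are ours:

```python
import numpy as np

def factored_second_moment(V):
    """Rank-1 reconstruction of a matrix of second-moment estimates
    from its row-wise and column-wise averages."""
    row = V.mean(axis=1, keepdims=True)   # one average per row
    col = V.mean(axis=0, keepdims=True)   # one average per column
    # Outer product of the two average vectors, divided by the overall mean.
    return row @ col / V.mean()
```

Storage for an m-by-n parameter matrix drops from m*n second-moment entries to m + n, and any rank-1 estimator matrix (e.g. np.outer([1., 2.], [3., 4.])) is reconstructed exactly.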
Table 8: Model comparison on 100 Billion Word Google News Dataset

Model              Test Perplexity  Test Perplexity  ops/timestep  #Params excl.          Total #Params  TFLOPS per GPU
                   (.1 epochs)      (1 epoch)        (millions)    embed. & softmax (M)   (billions)     (observed)
Kneser-Ney 5-gram  67.1             45.3             0.00001       -                      76.0           -
4xLSTM-512         54.5             47.0             8.4           8.4                    0.1            1.23
MoE-32             48.5             40.4             8.4           37.8                   0.1            0.83
MoE-256-h          42.8             35.3             8.4           272.9                  0.4            1.11
MoE-1024-h         40.3             32.7             8.5           1079.0                 1.2            1.14
MoE-4096-h         38.9             30.9             8.6           4303.4                 4.4            1.07
MoE-16384-h        38.2             29.7             8.8           17201.0                17.3           0.96
MoE-65536-h        38.2             28.9             9.2           68791.0                68.9           0.72
MoE-131072-h       39.8             29.2             9.7           137577.6               137.7          0.30
Results: We evaluate our model using perplexity on a holdout dataset. Results are reported in Table 8. Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing (Kneser & Ney, 1995).4

E MACHINE TRANSLATION - EXPERIMENTAL DETAILS
Model Architecture for Single Language Pair MoE Models: Our model is a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention.5 All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow (He et al., 2015). Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as "wordpieces") (Schuster & Nakajima, 2012) for inputs and outputs in our system. We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in (Wu et al., 2016).
We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use k = 4 and the hierarchical MoE models use k = 2 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed-forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains [512 ∗ 2048] + [2048 ∗ 512] = 2M parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix F.
1701.06538#65
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Model Architecture for Multilingual MoE Model: We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section 2.1, not the scheme from Appendix F. The MoE layers in the encoder and decoder are non-hierarchical MoEs with n = 512 experts and k = 2. Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep. Training: We trained our networks using the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to (Wu et al., 2016), we applied dropout (Zaremba et al., 2014) to the output of all embedding, LSTM and MoE layers, using DropProb = 0.4. Training was done synchronously on a cluster of up to 64 GPUs as described in Section 3. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.
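The three-phase learning-rate schedule described above (linear warmup, hold, inverse-square-root decay) can be sketched as follows. The base rate of 1.0 and the function name are illustrative placeholders, not values from the paper:

```python
import math

def learning_rate(step, base_lr=1.0, warmup_steps=2000, hold_steps=8000):
    """Linear warmup for warmup_steps, constant for hold_steps,
    then decay proportional to 1/sqrt(step)."""
    if step <= warmup_steps:
        return base_lr * step / warmup_steps
    boundary = warmup_steps + hold_steps
    if step <= boundary:
        return base_lr
    # Scale so the curve is continuous where the decay phase begins.
    return base_lr * math.sqrt(boundary) / math.sqrt(step)
```

Note the constant factor sqrt(boundary) in the decay phase: it makes the rate continuous at step 10000, after which it falls as the inverse square root of the step number, as stated in the text.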
To ensure balanced expert utilization we set wimportance = 0.01 and wload = 0.01, as described in Section 4 and Appendix A. Metrics: We evaluated our models using perplexity and the standard BLEU score metric. We report tokenized BLEU scores as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on GitHub), which was also used in (Luong et al., 2015a). Results: Tables 2, 3 and 4 in Section 5.3 compare our results to other published methods. Figure 4 shows test perplexity as a function of the number of words in the (training data's) source sentences processed, for models with different numbers of experts. As can be seen from the figure, test perplexity continued to improve as we increased the number of experts toward 2048.

4 While the original size of the corpus was 130 billion words, the neural models were trained for a maximum of 100 billion words. The reported Kneser-Ney 5-gram models were trained over 13 billion and 130 billion words respectively, giving them a slight advantage over the other reported results.
5 For performance reasons, we use a slightly different attention function from the one described in (Wu et al., 2016); see Appendix G.
Figure 4: Perplexity on WMT'14 En→Fr (left) and Google Production En→Fr (right) datasets as a function of number of words processed. The large differences between models at the beginning of training are due to different batch sizes. All models incur the same computational budget (85M ops/timestep) except the one with no experts. (The plot compares models with 0, 32, 512, and 2048 experts.)

We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table 9. For example, one expert is used when the indefinite article "a" introduces the direct object in a verb phrase indicating importance or leadership.
Table 9: Contexts corresponding to a few of the 2048 experts in the MoE layer in the encoder portion of the WMT'14 En→Fr translation model. For each expert i, we sort the inputs in a training batch in decreasing order of G(x)i, and show the words surrounding the corresponding positions in the input sentences.

Expert 381: ... with researchers , ... | ... to innovation . ... | ... tics researchers . ... | ... the generation of ... | ... technology innovations is ... | ... technological innovations , ... | ... support innovation throughout ... | ... role innovation will ... | ... research scienti st ... | ... promoting innovation where ...

Expert 752: ... plays a core ... | ... plays a critical ... | ... provides a legislative ... | ... play a leading ... | ... assume a leadership ... | ... plays a central ... | ... taken a leading ... | ... established a reconciliation ... | ... played a vital ... | ... have a central ...

Expert 2004: ... with rapidly growing ... | ... under static conditions ... | ... to swift ly ... | ... to dras tically ... | ... the rapid and ... | ... the fast est ... | ... the Quick Method ... | ... rec urrent ) ... | ... provides quick access ... | ... of volatile organic ...
F STRICTLY BALANCED GATING

Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function, which we describe below. Recall that we define the softmax gating function to be:

Gσ(x) = Softmax(x · Wg)   (15)

Sparse Gating (alternate formulation): To obtain a sparse gating vector, we multiply Gσ(x) component-wise with a sparse mask M(Gσ(x)) and normalize the output. The mask itself is a function of Gσ(x) and specifies which experts are assigned to each input example:

G(x)_i = (Gσ(x)_i · M(Gσ(x))_i) / (Σ_j Gσ(x)_j · M(Gσ(x))_j)   (16)

Top-K Mask: To implement top-k gating in this formulation, we let M(v) = TopK(v, k), where:

TopK(v, k)_i = 1 if v_i is in the top k elements of v, and 0 otherwise.   (17)
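Equations 15–17 can be sketched in a few lines of plain Python. This is an illustrative toy operating on a precomputed logit vector (the `x · Wg` product), not the paper's implementation; on ties at the k-th value the simple threshold test may keep more than k experts:

```python
import math

def softmax(v):
    # Eq. 15 applied to precomputed logits, shifted for stability
    m = max(v)
    e = [math.exp(x - m) for x in v]
    z = sum(e)
    return [x / z for x in e]

def topk_mask(v, k):
    # Eq. 17: 1 for the top-k entries of v, 0 elsewhere
    thresh = sorted(v, reverse=True)[k - 1]
    return [1.0 if x >= thresh else 0.0 for x in v]

def sparse_gate(logits, k):
    # Eq. 16: mask the softmax gate, then renormalize over kept experts
    g = softmax(logits)
    m = topk_mask(g, k)
    z = sum(gi * mi for gi, mi in zip(g, m))
    return [gi * mi / z for gi, mi in zip(g, m)]
```

After renormalization the k surviving gate values again sum to 1, so the MoE output remains a convex combination of the selected experts' outputs.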
Batchwise Mask: To force each expert to receive the exact same number of examples, we introduce an alternative mask function, Mbatchwise(X, m), which operates over batches of input vectors. Instead of keeping the top k values per example, we keep the top m values per expert across the training batch, where m = k|X|/n:

Mbatchwise(X, m)_{j,i} = 1 if X_{j,i} is in the top m values for expert i, and 0 otherwise.   (18)

As our experiments suggest, and as also observed in (Ioffe & Szegedy, 2015), using a batchwise function during training (such as Mbatchwise) requires modifications to inference, when we may not have a large batch of examples. Our solution to this is to train a vector T of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time:

Mthreshold(x, T)_i = 1 if x_i > T_i, and 0 otherwise.   (19)

To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical.
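A minimal sketch of the batchwise mask (Eq. 18), selecting top gate values per expert column rather than per example row. The dict-free list encoding and function name are ours; for simplicity it assumes k·|X| is a multiple of n:

```python
def batchwise_mask(X, k):
    """X[j][i] is the gate value of example j for expert i.
    Keep, per expert (column), the top m = k*|X|/n values across
    the batch, so every expert receives exactly m examples."""
    batch, n = len(X), len(X[0])
    m = (k * batch) // n  # assumes k*batch divides evenly by n
    mask = [[0.0] * n for _ in range(batch)]
    for i in range(n):
        top_rows = sorted(range(batch), key=lambda j: X[j][i],
                          reverse=True)[:m]
        for j in top_rows:
            mask[j][i] = 1.0
    return mask
```

Unlike the per-example top-k mask, each column of this mask sums to exactly m, which is the strictly balanced load that made the infrastructure run faster; the price is that an individual example may be routed to more or fewer than k experts.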
Lbatchwise(X, T, m) = Σ_{j=1}^{|X|} Σ_{i=1}^{n} (Mthreshold(x_j, T)_i − Mbatchwise(X, m)_{j,i}) (X_{j,i} − T_i)   (20)

G ATTENTION FUNCTION

The attention mechanism described in GNMT (Wu et al., 2016) involves a learned "Attention Function" A(x_i, y_j) which takes a "source vector" x_i and a "target vector" y_j, and must be computed for every source time step i and target time step j. In GNMT, the attention function is implemented as a feed-forward neural network with a hidden layer of size n. It can be expressed as:

A_GNMT(x_i, y_j) = Σ_{d=1}^{n} V_d tanh((x_i U)_d + (y_j W)_d)   (21)

Where U and W are trainable weight matrices and V is a trainable weight vector. For performance reasons, in our models, we used a slightly different attention function:

A(x_i, y_j) = Σ_{d=1}^{n} V_d tanh((x_i U)_d) tanh((y_j W)_d)   (22)
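The difference between the two attention functions (Eqs. 21 and 22) is easiest to see in code. This is an illustrative scalar-level sketch with our own helper names, not the production implementation:

```python
import math

def matvec(v, M):
    # Row vector times matrix: returns (v M)_d for each column d.
    return [sum(v[i] * M[i][d] for i in range(len(v)))
            for d in range(len(M[0]))]

def attention_gnmt(x, y, U, W, V):
    # Eq. 21: source and target projections mix inside a single tanh,
    # so the hidden layer must be recomputed for every (i, j) pair.
    xu, yw = matvec(x, U), matvec(y, W)
    return sum(V[d] * math.tanh(xu[d] + yw[d]) for d in range(len(V)))

def attention_fast(x, y, U, W, V):
    # Eq. 22: the two tanh factors separate, so tanh(xU) and tanh(yW)
    # can each be precomputed once per sequence and then combined
    # with a cheap weighted dot product per (i, j) pair.
    xu, yw = matvec(x, U), matvec(y, W)
    return sum(V[d] * math.tanh(xu[d]) * math.tanh(yw[d])
               for d in range(len(V)))
```

The factored form is what makes the variant faster: the per-timestep tanh projections move out of the quadratic source-by-target loop.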
1701.06049
0
# Interactive Learning from Policy-Dependent Human Feedback

James MacGlashan, Mark K. Ho, Robert Loftin, Bei Peng, Guan Wang, David L. Roberts, Matthew E. Taylor, Michael L. Littman

# Abstract

This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false: whether human trainers give positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce Convergent Actor-Critic by Humans (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
1701.06049#0
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
# 1. Introduction

Programming robots is very difficult, in part because the real world is inherently rich and, to some degree, unpredictable. In addition, our expectations for physical agents are quite high and often difficult to articulate. Nevertheless, for robots to have a significant impact on the lives of individuals, even non-programmers need to be able to specify and customize behavior. Because of these complexities, relying on end-users to provide instructions to robots programmatically seems destined to fail. Reinforcement learning (RL) from human trainer feedback provides a compelling alternative to programming because agents can learn complex behavior from very simple positive and negative signals. Furthermore, real-world animal training is an existence proof that people can train complex behavior using these simple signals. Indeed, animals have been successfully trained to guide the blind, locate mines in the ocean, detect cancer or explosives, and even solve complex, multi-stage puzzles.
Despite success when learning from environmental reward, traditional reinforcement-learning algorithms have yielded limited success when the reward signal is provided by humans. This failure underscores the importance of basing algorithms for learning from humans on appropriate models of human feedback. Indeed, much human-centered RL work has investigated and employed different models of human feedback (Knox & Stone, 2009b; Thomaz & Breazeal, 2006; 2007; 2008; Griffith et al., 2013; Loftin et al., 2015). Many of these algorithms leverage the observation that people tend to give feedback that is best interpreted as guidance on the policy the agent should be following, rather than as a numeric value to be maximized by the agent. However, these approaches assume models of feedback that are independent of the policy the agent is currently following. We present empirical results that demonstrate that this assumption is incorrect, and further demonstrate cases in which policy-independent learning algorithms suffer from this assumption.
Following this result, we present Convergent Actor-Critic by Humans (COACH), an algorithm for learning from policy-dependent human feedback. COACH is based on the insight that the advantage function (a value roughly corresponding to how much better or worse an action is compared to the current policy) provides a better model of human feedback, capturing human-feedback properties like diminishing returns, rewarding improvement, and giving 0-valued feedback a semantic meaning that combats forgetting. We compare COACH to other approaches in a simple domain with simulated feedback. Then, to validate that COACH scales to complex problems, we train five different behaviors on a TurtleBot robot.
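The core idea, human feedback standing in for the advantage in an actor update, can be sketched roughly as follows. This is our simplified reading of the actor-critic template (theta += lr * feedback * grad log pi(a|s)) with a tabular softmax policy; the names and the update form are illustrative, not the paper's full COACH algorithm:

```python
import math

def softmax_policy(theta, state, actions):
    # pi(a|s) proportional to exp(theta[s, a])
    prefs = [theta[(state, a)] for a in actions]
    m = max(prefs)
    e = [math.exp(p - m) for p in prefs]
    z = sum(e)
    return [x / z for x in e]

def coach_update(theta, state, action, feedback, actions, lr=0.1):
    """One actor step where the human feedback signal plays the role
    of the advantage: theta += lr * feedback * grad log pi(a|s)."""
    probs = softmax_policy(theta, state, actions)
    for a, p in zip(actions, probs):
        grad = (1.0 if a == action else 0.0) - p  # d/dtheta log pi
        theta[(state, a)] += lr * feedback * grad
    return theta
```

Under this update, positive feedback makes the chosen action more probable and negative feedback makes it less probable, and the size of the change shrinks as the action's probability approaches 1, which is one way the advantage view captures diminishing returns.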
*Equal contribution. 1 Cogitai, 2 Brown University, 3 North Carolina State University, 4 Washington State University. Correspondence to: James MacGlashan <[email protected]>.

Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).

# 2. Background

For modeling the underlying decision-making problem of an agent being taught by a human, we adopt the Markov Decision Process (MDP) formalism. An MDP is a 5-tuple (S, A, T, R, γ), where S is the set of possible states of the environment; A is the set of actions available to the agent; T(s′|s, a) is the transition function, which defines the probability of the environment transitioning to state s′ when the agent takes action a in environment state s; R(s, a, s′) is the reward function, specifying the numeric reward the agent receives for taking action a in state s and transitioning to state s′; and γ ∈ [0, 1] is a discount factor specifying how much immediate rewards are preferred to more distant rewards.
A stochastic policy π for an MDP is a per-state action probability distribution that defines an agent's behavior; π : S × A → [0, 1], where Σ_{a∈A} π(s, a) = 1, ∀s ∈ S. In the MDP setting, the goal is to find the optimal policy π*, which maximizes the expected future discounted reward when the agent selects actions in each state according to π*; π* = argmax_π E[Σ_{t=0}^∞ γ^t r_t], where r_t is the reward received at time t. Two important concepts in MDPs are the value function (V^π) and action–value function (Q^π). The value function defines the expected future discounted reward from each state when following some policy π and the action–value function defines the expected future discounted reward when an agent takes some action in some state and then follows some policy π thereafter. These functions can be recursively defined via the Bellman equation: V^π(s) = Σ_a π(s, a) Q^π(s, a) and Q^π(s, a) = Σ_{s'} T(s'|s, a)[R(s, a, s') + γV^π(s')]. For shorthand, the value functions for the optimal policies are usually denoted V* and Q*.
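The Bellman recursion above can be sketched numerically. The following is a minimal toy illustration; the two-state MDP, its transition and reward tables, and the uniform policy are all assumptions made for the example, not taken from the paper:

```python
# Toy sketch (not from the paper): iterative policy evaluation using the
# Bellman equation
#   V^pi(s) = sum_a pi(s,a) * sum_s' T(s'|s,a) * [R(s,a,s') + gamma*V^pi(s')]
# The 2-state MDP, rewards, and uniform policy below are illustrative only.

gamma = 0.9
states, actions = [0, 1], [0, 1]

# T[s][a][s2] = transition probability; state 1 is absorbing
T = {0: {0: {0: 1.0, 1: 0.0},   # action 0 in state 0 self-loops
         1: {0: 0.0, 1: 1.0}},  # action 1 moves to state 1
     1: {0: {0: 0.0, 1: 1.0},
         1: {0: 0.0, 1: 1.0}}}

# R[s][a] = expected reward (chosen independent of s' for simplicity)
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 0.0}}

pi = {s: {a: 0.5 for a in actions} for s in states}  # uniform random policy

V = {s: 0.0 for s in states}
for _ in range(500):  # fixed-point iteration on the Bellman equation
    Q = {s: {a: R[s][a] + gamma * sum(T[s][a][s2] * V[s2] for s2 in states)
             for a in actions} for s in states}
    V = {s: sum(pi[s][a] * Q[s][a] for a in actions) for s in states}
```

With these numbers the absorbing state has value 0, and the fixed point for state 0 solves V(0) = 0.45·V(0) + 0.5.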
1701.06049#5
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
6
policy by giving numeric feedback as the agent acts in the environment. The goal of the agent is to learn the target policy π∗ from the feedback. To define a learning algorithm for this problem, we first characterize how human trainers typically use numeric feedback to teach target policies. If feedback is stationary and intended to be maximized, it can be treated as a reward function and standard RL algorithms can be used. Although this approach has had some success (Pilarski et al., 2011; Isbell et al., 2001), there are complications that limit its applicability. In particular, a trainer must take care that the feedback they give contains no unanticipated exploits, constraining the feedback strategies they can use. Indeed, prior research has shown that interpreting human feedback like a reward function often induces positive reward cycles that lead to unintended behaviors (Knox, 2012; Ho et al., 2015). The issues with interpreting feedback as reward have led to the insight that human feedback is better interpreted as commentary on the agent’s behavior; for example, positive feedback roughly corresponds to “that was good” and negative feedback roughly corresponds to “that was bad.” In the next section, we review existing HCRL approaches that build on this insight. # 4. Related Work
1701.06049#6
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
7
# 4. Related Work In reinforcement learning (RL), an agent interacts with an environment modeled as an MDP, but does not have direct access to the transition function or reward function and instead must learn a policy from environment observations. A common class of RL algorithms is actor-critic algorithms. Bhatnagar et al. (2009) provide a general template for these algorithms. Actor-critic algorithms are named for their two main components: The actor is a parameterized policy that dictates how the agent selects actions; the critic estimates the value function for the actor and provides critiques at each time step that are used to update the policy parameters. Typically, the critique is the temporal difference (TD) error: δt = rt + γV (st) − V (st−1), which describes how much better or worse a transition went than expected.
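The TD-error critique can be illustrated with toy numbers; the value estimates and reward below are assumptions for illustration, following the paper's indexing δt = rt + γV(st) − V(st−1):

```python
# Minimal sketch (hypothetical values, not from the paper): the critic's
# temporal-difference error used as a critique in actor-critic methods.
gamma = 0.9
V = {"s_prev": 2.0, "s_cur": 3.0}   # critic's value estimates (assumed)
r = 1.0                             # reward received on the transition

# delta_t = r_t + gamma * V(s_t) - V(s_{t-1}), per the paper's indexing
delta = r + gamma * V["s_cur"] - V["s_prev"]
# delta > 0: the transition went better than expected, so the actor's
# probability of the taken action is increased; delta < 0 decreases it.
```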
1701.06049#7
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
8
A number of existing approaches to HCRL and RL that include human feedback have been explored in the past. The most similar to ours, and a primary inspiration for this work, is the TAMER framework (Knox, 2012). In TAMER, trainers provide interactive numeric feedback as the learner takes actions. The learner attempts to estimate a target reward function by interpreting trainer feedback as exemplars of this function. When the agent makes rapid decisions, TAMER divides the feedback among the recent state–action pairs according to a probability distribution. TAMER makes decisions by myopically choosing the action with the highest reward estimate. Because the agent myopically maximizes reward, the feedback can also be thought of as exemplars of Q∗. Later work also investigated non-myopically maximizing the learned reward function with a planning algorithm (Knox & Stone, 2013), but this approach requires a model of the environment and special treatment of termination conditions. # 3. Human-centered Reinforcement Learning In this work, a human-centered reinforcement-learning (HCRL) problem is a learning problem in which an agent is situated in an environment described by an MDP but in which rewards are generated by a human trainer instead of from a stationary MDP reward function that the agent is meant to maximize. The trainer has a target policy π∗ they are trying to teach the agent. The trainer communicates this
1701.06049#8
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
9
Two other closely related approaches are SABL (Loftin et al., 2015) and Policy Shaping (Griffith et al., 2013). Both of these approaches treat feedback as discrete probabilistic evidence of the trainer’s target parameterized policy. SABL’s probabilistic model additionally includes (learnable) parameters for describing how often a trainer is expected to give explicit positive or negative feedback. There have also been some domains in which treating human feedback as reward signals to maximize has had some success, such as in shaping the control for a prosthetic arm (Pilarski et al., 2011) and learning how to interact in an online chat room from multiple users’ feedback (Isbell et al., 2001). Some complications with how people give feedback have been reported, however.
1701.06049#9
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
10
Some research has also examined combining human feedback with more traditional environmental rewards (Knox & Stone, 2010; Tenorio-Gonzalez et al., 2010; Clouse & Utgoff, 1992; Maclin et al., 2005). A challenge in this context in practice is that rewards do not naturally come from the environment and must be programmatically defined. However, it is appealing because the agent can learn in the absence of an active trainer. We believe our approach to HCRL could also straightforwardly incorporate learning from environmental reward as well, but we leave this investigation for future work. Figure 1. The training interface shown to AMT users. based on the wrong assumption may result in unexpected responses to feedback. Consequently, we were interested in investigating which model better fits human feedback.
1701.06049#10
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
11
Finally, a related research area is learning from demonstration (LfD), in which a human provides examples of the desired behavior. There are a number of different approaches to solving this problem surveyed by Argall et al. (2009). We see these approaches as complementary to HCRL because it is not always possible, or convenient, to provide demonstrations. LfD approaches that learn a parameterized policy could also operate with COACH, allowing the agent to have their policy seeded by demonstrations, and then fine-tuned with interactive feedback. Note that the policy-dependent feedback we study here is viewed as essential in behavior analysis reinforcement schedules (Miltenberger, 2011). Trainers are taught to provide diminishing returns (gradual decreases in positive feedback for good actions as the agent adopts those actions), differential feedback (varied magnitude of feedbacks depending on the degree of improvement or deterioration in behavior), and policy shaping (positive feedback for suboptimal actions that improve behavior and then negative feedback after the improvement has been made), all of which are policy dependent. # 5. Policy-dependent Feedback
1701.06049#11
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
12
# 5. Policy-dependent Feedback A common assumption of existing HCRL algorithms is that feedback depends only on the quality of an agent’s action selection. An alternative hypothesis is that feedback also depends on the agent’s current policy. That is, an action selection may be more greatly rewarded or punished depending on how often the agent would typically be inclined to select it; for example, a trainer might reward the agent more for improving its performance than for maintaining the status quo. We call the former model of feedback policy-independent and the latter policy-dependent. If people are more naturally inclined toward one model of feedback, algorithms based on the wrong assumption may result in unexpected responses to feedback. Despite existing HCRL algorithms assuming policy-independent feedback, evidence of policy-dependent feedback can be found in prior works with these algorithms. For example, it was often observed that trainers taper their feedback over the course of learning (Ho et al., 2015; Knox et al., 2012; Isbell et al., 2001). Although diminishing feedback is a property that is explained by people’s feedback being policy-dependent—as the learner’s performance improves, trainer feedback is decreased—an alternative explanation is simply trainer fatigue. To further make the case for human feedback being policy dependent, we provide a stronger result showing that trainers—for the same state–action pair—choose positive or negative feedback depending on their perception of the learner’s behavior.
1701.06049#12
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
13
# 5.1. Empirical Results We had Amazon Mechanical Turk (AMT) participants teach an agent in a simple sequential task, illustrated in Figure 1. Participants were instructed to train a virtual dog to walk to the yellow goal location in a grid world as fast as possible but without going through the green cells. They were additionally told that, as a result of prior training, their dog was already either “bad,” “alright,” or “good” at the task and were shown examples of each behavior before training. In all cases, the dog would start in the location shown in Figure 1. “Bad” dogs walked straight through the green cells to the yellow cell. “Alright” dogs first moved left, then up, and then to the goal, avoiding green but not taking the shortest route. “Good” dogs took the shortest path to yellow without going through green. During training, participants saw the dog take an action from one tile to another and then gave feedback after every action using a continuous labeled slider as shown. The slider always started in the middle of the scale on each trial, and several points were labeled with different levels
1701.06049#13
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
14
of reward (praise and treats) and punishment (scolding and a mild electric shock). Participants went through a brief tutorial using this interface. Responses were coded as a numeric value from −50 to 50, with “Do Nothing” as the zero-point. During the training phase, participants trained a dog for three episodes that all started in the same position and ended at the goal. The dog’s behavior was pre-programmed in such a way that the first step of the final episode would reveal if feedback was policy dependent. Each user was placed into one of three different conditions: improving, steady, or degrading. For all three conditions, the dog’s behavior in the final episode was “alright,” regardless of any prior feedback. The conditions differed in terms of the behavior users observed in the first two episodes. In the first two episodes, users observed bad behavior in the improving condition (improving to alright); alright behavior in the steady condition; and good behavior in the degrading condition. If feedback is policy-dependent, we would expect more positive feedback in the final episode for the improving condition, but not for policy-independent feedback since it was the same final behavior for all conditions.
1701.06049#14
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
15
Figure 2. The feedback distribution for the first step of the final episode for each condition. Feedback tended to be positive for improving behavior, but negative otherwise. we present the general update rule for COACH and its convergence. Finally, we present Real-time COACH, which includes mechanisms for providing variable magnitude feedback and learning in problems with a high-frequency decision cycle.
1701.06049#15
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
16
Figure 2 shows boxplots and individual responses for the first step of the final episode under each of the three conditions. These results indicate that the sign of feedback is sensitive to the learner’s policy, as predicted. The mean and median feedback under the improving condition is slightly positive (Mean = 9.8, Median = 24, S.D. = 22.2; planned Wilcoxon one-sided signed-rank test: Z = 1.71, p < 0.05), whereas it is negative for the steady condition (Mean = −18.3, Median = −23.5, S.D. = 24.6; planned Wilcoxon two-sided signed-rank test: Z = −3.15, p < 0.01) and degrading condition (Mean = −10.8, Median = −18.0, S.D. = 20.7; planned Wilcoxon one-sided signed-rank test: Z = −2.33, p < 0.05). There was a main effect across the three conditions (p < 0.01, Kruskal-Wallis test), and pairwise comparisons indicated that only the improving condition differed from the steady and degrading conditions (p < 0.01 for both, Bonferroni-corrected, Mann-Whitney pairwise tests).
1701.06049#16
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
17
# 6. Convergent Actor-Critic by Humans In this section, we introduce Convergent Actor-Critic by Humans (COACH), an actor-critic-based algorithm capable of learning from policy-dependent feedback. COACH is based on the insight that the advantage function is a good model of human feedback and that actor–critic algorithms update a policy using the critic’s TD error, which is an unbiased estimate of the advantage function. Consequently, an agent’s policy can be directly modified by human feedback without a critic component. We first define the advantage function and its interpretation as trainer feedback. Then, we present the general update rule for COACH and its convergence. # 6.1. The Advantage Function and Feedback The advantage function (Baird, 1995) A^π is defined as A^π(s, a) = Q^π(s, a) − V^π(s). (1)
1701.06049#17
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
18
A^π(s, a) = Q^π(s, a) − V^π(s). (1) Roughly speaking, the advantage function describes how much better or worse an action selection is compared to the agent’s performance under policy π. The function is closely related to the update used in policy iteration (Puterman, 1994): defining π'(s) = argmax_a A^π(s, a) is guaranteed to produce an improvement over π whenever π is suboptimal. It can also be used in policy gradient methods to gradually improve the performance of a policy, as described later. It is worth noting that feedback produced by the advantage function is consistent with that recommended in behavior analysis. It trivially results in differential feedback since it is defined as the magnitude of improvement of an action over its current policy. It induces diminishing returns because, as π improves, opportunities to improve on it decrease. Indeed, once π is optimal, all advantage-function-based feedback is zero or negative. Finally, advantage function feedback induces policy shaping in that whether feedback is positive or negative for an action depends on whether it is a net improvement over the current behavior. # 6.2. Convergence and Update Rule
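The advantage function, and the policy-iteration-style improvement it induces, can be sketched as follows; the Q-values and policy below are hypothetical toy numbers, not taken from the paper:

```python
# Sketch (toy numbers, not from the paper): the advantage function
#   A^pi(s,a) = Q^pi(s,a) - V^pi(s)
# and the policy-iteration-style improvement pi'(s) = argmax_a A^pi(s,a).

Q = {0: {0: 1.0, 1: 2.0},   # hypothetical Q^pi values for 2 states, 2 actions
     1: {0: 0.5, 1: 0.5}}
pi = {0: {0: 0.5, 1: 0.5},  # current stochastic policy pi(s, a)
      1: {0: 1.0, 1: 0.0}}

# V^pi(s) = sum_a pi(s,a) Q^pi(s,a)
V = {s: sum(pi[s][a] * Q[s][a] for a in Q[s]) for s in Q}
A = {s: {a: Q[s][a] - V[s] for a in Q[s]} for s in Q}

# Greedy improvement over pi; in state 1 every advantage is zero,
# matching the property that optimal behavior yields no positive feedback.
pi_improved = {s: max(A[s], key=A[s].get) for s in A}
```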
1701.06049#18
Interactive Learning from Policy-Dependent Human Feedback
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent from the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give a positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
http://arxiv.org/pdf/1701.06049
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
cs.AI, I.2.6
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
cs.AI
20170121
20230128
[]
1701.06049
19
# 6.2. Convergence and Update Rule Given a performance metric ρ, Sutton et al. (1999) derive a policy gradient algorithm of the form: ∆θ = α∇θρ. Here, θ represents the parameters that control the agent’s behavior and α is a learning rate. Under the assumption that ρ is the discounted expected reward from a fixed start state distribution, they show that ∇θρ = Σ_s d^π(s) Σ_a ∇θπ(s, a) Q^π(s, a), where d^π(s) is the component of the (discounted) stationary distribution at s. A benefit of this form of the gradient is that, given that states are visited according to d^π(s) and actions are taken according to π(s, a), the update at time t can be made as:
Algorithm 1 Real-time COACH
Require: policy πθ, trace set Λ, delay d, learning rate α
Initialize traces eλ ← 0 ∀λ ∈ Λ
observe initial state s0
for t = 0 to ∞ do
  select and execute action at ∼ πθ(st, ·)
  observe next state st+1, summed feedback ft+1, and λ
  for λ′ ∈ Λ do
    eλ′ ← λ′eλ′ + (1 / πθ(st−d, at−d)) ∇θπθ(st−d, at−d)
  end for
  θt+1 ← θt + αft+1eλ
end for

∆θt = αt∇θπ(st, at) ft+1 / π(st, at), (2)

where E[ft+1] = Qπ(st, at) − v(s) for any action-independent function v(s).
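A minimal executable sketch of Algorithm 1 for a tabular softmax policy; the class and variable names are ours, and the trainer-selected trace decay λ is passed in explicitly with each feedback:

```python
import numpy as np
from collections import deque

class RealTimeCOACH:
    """Sketch of Algorithm 1 with a tabular softmax policy (names are ours)."""
    def __init__(self, n_states, n_actions, lambdas=(0.0, 0.95), delay=1, alpha=0.05):
        self.theta = np.zeros((n_states, n_actions))
        self.traces = {lam: np.zeros_like(self.theta) for lam in lambdas}
        self.history = deque(maxlen=delay + 1)   # buffers (s, a) for delay d
        self.alpha = alpha

    def pi(self, s):
        z = np.exp(self.theta[s] - self.theta[s].max())
        return z / z.sum()

    def act(self, s):
        a = np.random.choice(len(self.theta[s]), p=self.pi(s))
        self.history.append((s, a))
        return a

    def give_feedback(self, f, lam):
        # Decay every trace and add grad pi / pi = grad log pi for the
        # delayed event, then apply feedback through the selected trace.
        s, a = self.history[0]                   # event from d steps ago
        for l in self.traces:
            g = np.zeros_like(self.theta)
            g[s] = -self.pi(s)
            g[s, a] += 1.0
            self.traces[l] = l * self.traces[l] + g
        self.theta += self.alpha * f * self.traces[lam]
```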
In the context of the present paper, ft+1 represents the feedback provided by the trainer. It follows trivially that if the trainer chooses the policy-dependent feedback ft = Qπ(st, at), we obtain a convergent learning algorithm that (locally) maximizes discounted expected reward. In addition, feedback of the form ft = Qπ(st, at) − V π(st) = Aπ(st, at) also results in convergence. Note that for the trainer to provide feedback in the form of Qπ or Aπ, they would need to "peer inside" the learner and observe its policy. In practice, the trainer estimates π by observing the agent's actions.

# 6.3. Real-time COACH

A trainer may not always want to influence a long history of actions. Consequently, Real-time COACH maintains multiple eligibility traces with different temporal decay rates, and the trainer chooses which eligibility trace to use for each update. This trace choice may be handled implicitly with the feedback value selection or explicitly.
Due to reaction time, human feedback is typically delayed by about 0.2 to 0.8 seconds from the event to which it was meant to apply (Knox, 2012). To handle this delay, feedback in Real-time COACH is associated with events from d steps ago to cover the gap. Eligibility traces further smooth the feedback to older events. Finally, we note that just as there are numerous variants of actor-critic update rules, similar variations can be used in the context of COACH.

There are challenges in implementing Equation 2 for real-time use in practice. Specifically, the interface for providing variable-magnitude feedback needs to be addressed, and the question of how to handle sparseness and the timing of feedback needs to be answered. Here, we introduce Real-time COACH, shown in Algorithm 1, to address these issues.

For providing variable-magnitude reward, we use reward aggregation (Knox & Stone, 2009b). In reward aggregation, a trainer selects from a discrete set of feedback values and further raises or lowers the numeric value by giving multiple feedbacks in succession that are summed together.
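Reward aggregation can be sketched as a small buffer that sums the button presses arriving within one decision cycle (the class and method names are hypothetical):

```python
# Sketch of reward aggregation: discrete button feedbacks arriving within one
# decision cycle are summed into a single scalar f_t.
class FeedbackAggregator:
    def __init__(self):
        self.pending = 0.0

    def press(self, value):
        # Trainer presses a +1/-1 button, possibly several times in succession.
        self.pending += value

    def collect(self):
        # Called once per decision cycle; returns and clears the summed feedback.
        f, self.pending = self.pending, 0.0
        return f

agg = FeedbackAggregator()
agg.press(+1); agg.press(+1); agg.press(-1)
net = agg.collect()   # three presses summed to a net +1
```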
# 7. Comparison of Update Rules

To understand the behavior of COACH under different types of trainer feedback strategies, we carried out a controlled comparison in a simple grid world. The domain is essentially an expanded version of the dog domain used in our human-subject experiment. It is an 8 × 5 grid in which the agent starts in (0, 0) and must get to (7, 0), which yields +5 reward. However, the cells from (1, 0) to (6, 0) are cells the agent needs to avoid, which yield −1 reward.

# 7.1. Learning Algorithms and Feedback Strategies

While sparse feedback is not especially problematic (because no feedback results in no change in policy), it may slow down learning unless the trainer is provided with a mechanism to allow feedback to affect a history of actions. We use eligibility traces (Barto et al., 1983) to help apply feedback to the relevant transitions. An eligibility trace is a vector that keeps track of the policy gradient and decays exponentially with a parameter λ. Policy parameters are then updated in the direction of the trace, allowing feedback to affect earlier decisions. However, a trainer may not always want to influence a long history of actions.
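Under the convention that cells are indexed (x, y) with the start at (0, 0), the task reward of this grid world can be sketched as:

```python
# Sketch of the 8x5 grid task rewards described above.
def task_reward(x, y):
    if (x, y) == (7, 0):
        return +5            # goal cell
    if 1 <= x <= 6 and y == 0:
        return -1            # avoidance cells between start and goal
    return 0                 # all other transitions
```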
Three types of learning algorithms were tested. Each maintains an internal data structure, which it updates with feedback of the form (s, a, f, s′), where s is a state, a is an action taken in that state, f is the feedback received from the trainer, and s′ is the resulting next state. The algorithm also must produce an action for each state encountered.

The first algorithm, Q learning (Watkins & Dayan, 1992), represents a standard value-function-based RL algorithm designed for reward maximization under delayed feedback. It maintains a data structure Q(s, a), initially 0. Its update rule has the form:

∆Q(s, a) = α[f + γ maxa′ Q(s′, a′) − Q(s, a)]. (3)

Actions are chosen using the rule argmaxa Q(s, a), where ties are broken randomly. We tested a handful of parameters and used the best values: discount factor γ = 0.99 and learning rate α = 0.2.
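Equation (3) as an executable sketch, using the stated γ = 0.99 and α = 0.2 (the table sizes are arbitrary):

```python
import numpy as np

# Tabular Q-learning update from Equation (3).
def q_update(Q, s, a, f, s_next, alpha=0.2, gamma=0.99):
    Q[s, a] += alpha * (f + gamma * Q[s_next].max() - Q[s, a])

Q = np.zeros((2, 3))
q_update(Q, s=0, a=1, f=5.0, s_next=1)
# After one update from zero: Q[0, 1] = 0.2 * 5 = 1.0
```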
In TAMER (Knox & Stone, 2009a), a trainer provides interactive numeric feedback that is interpreted as an exemplar of the reward function for the demonstrated state–action pair as the learner takes actions. We assumed that each feedback applies to the last action, and thus used a simplified version of the algorithm that did not attempt to spread updates over multiple transitions. TAMER maintains a data structure R̂(s, a) for the predicted reward in each state, initially 0. It is updated by ∆R̂(s, a) = αf. We used α = 0.2. Actions are chosen via an ε-greedy rule on R̂(s, a) with ε = 0.2.

Lastly, we examined COACH, which is also designed to work well with human-generated feedback. We used a softmax policy with a single λ = 0 trace. The parameters were a matrix of values θ(s, a), initially zero. The stochastic policy defined by these parameters was

π(s, a) = eβθ(s,a) / Σa′ eβθ(s,a′),

with β = 1. Parameters were updated via

∆θ = α∇θπ(s, a) f / π(s, a), (4)

where α is a learning rate. We used α = 0.05.
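The simplified TAMER learner described here reduces to a per-pair regression target with ε-greedy action selection; a sketch (array sizes are arbitrary):

```python
import numpy as np

# Sketch of the simplified TAMER learner: each feedback is a regression
# target for the predicted-reward table R_hat, with epsilon-greedy selection.
def tamer_update(R_hat, s, a, f, alpha=0.2):
    R_hat[s, a] += alpha * f

def tamer_act(R_hat, s, rng, epsilon=0.2):
    if rng.random() < epsilon:
        return int(rng.integers(R_hat.shape[1]))   # explore uniformly
    return int(R_hat[s].argmax())                  # exploit predicted reward

R_hat = np.zeros((2, 3))
tamer_update(R_hat, s=0, a=2, f=+1.0)
rng = np.random.default_rng(0)
# with epsilon = 0, the greedy action is the one with highest predicted reward
```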
In effect, each of these learning rules makes an assumption about the kind of feedback it expects trainers to use. We wanted to see how they would behave with feedback strategies that matched these assumptions and those that did not. The first feedback strategy we studied is the classical task-based reward function ("task"), where the feedback is sparse: +5 reward when the agent reaches the goal state, −1 for avoidance cells, and 0 for all other transitions. Q-learning is known to converge to optimal behavior with this type of feedback. The second strategy provides policy-independent feedback for each state–action pair ("action"): +5 when the agent reaches termination, +1 reward when the selected action matches an optimal policy, −1 for reaching an avoidance cell, and 0 otherwise. This type of feedback serves TAMER well. The third strategy ("improvement") used feedback defined by the advantage function of the learner's current policy π, Aπ(s, a) = Qπ(s, a) − V π(s), where the value functions are defined based on the task rewards. This type of feedback is very well suited to COACH.
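The three strategies can be sketched as feedback functions; the helper names and the tabular Q/V inputs are our own framing of the description above:

```python
import numpy as np

# "task": the sparse environment reward itself.
def task_feedback(reward):
    return reward

# "action": policy-independent labels per state-action pair.
def action_feedback(reward, a, optimal_a, terminal):
    if terminal:
        return +5                          # reached termination
    if reward == -1:
        return -1                          # stepped on an avoidance cell
    return +1 if a == optimal_a else 0     # matches an optimal policy or not

# "improvement": the advantage under the learner's current policy.
def improvement_feedback(Q, V, s, a):
    return Q[s, a] - V[s]

Q = np.array([[1.0, 2.0]])
V = np.array([1.5])
# improvement feedback is positive only for the better-than-average action
```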
# 7.2. Results

Each combination of algorithm and feedback strategy was run 99 times, with the median value of the number of steps needed to reach the goal reported. Episodes were ended after 1,000 steps if the goal was not reached.

Figure 3(a) shows the steps needed to reach the goal for the three algorithms trained with task feedback. The figure shows that TAMER can fail to learn in this setting. COACH also performs poorly with λ = 0, which prevents feedback from influencing earlier decisions. We did a subsequent experiment (not shown) with λ = 0.9 and found that COACH converged to reasonable behavior, although not as quickly as Q learning. This result helps justify using traces to combat the challenges of delayed feedback.

Figure 3(b) shows results with action feedback. This time, Q learning fails to perform well, a consequence of this feedback strategy inducing positive behavior cycles as it tries to avoid ending the trial, the same kind of problem that HCRL algorithms have been designed to avoid. Both TAMER and COACH perform well with this feedback strategy. TAMER performs slightly better than COACH, as this is precisely the kind of feedback TAMER was designed to handle.
Figure 3(c) shows the results of the three algorithms with improvement feedback, which is generated via the advantage function defined on the learner's current policy. These results tell a different story. Here, COACH performs the best. Q-learning largely flounders for most of the time, but with enough training sometimes starts to converge. (Although, 14% of the time, Q learning fails to do well even after 100 training episodes.) TAMER, on the other hand, performs very badly at first. While the median score in the plot shows TAMER suddenly performing more comparably to COACH after about 10 episodes, 29% of our training trials completely failed to improve and timed out across all 100 episodes.
# 8. Robotics Case Study

In this section, we present qualitative results on Real-time COACH applied to a TurtleBot robot. The goal of this study was to test that COACH can scale to a complex domain involving multiple challenges, including training an agent that operates on a fast decision cycle (33ms), noisy non-Markov observations from a camera, and agent perception that is hidden from the trainer. To demonstrate the flexibility of COACH, we trained it to perform five different behaviors involving a pink ball and a cylinder with an orange top, using the same parameter selections. We discuss these behaviors below. We also contrast the results to training with TAMER. We chose TAMER as a comparison because, to our knowledge, it is the only HCRL algorithm with success on a similar platform (Knox et al., 2013).

[Figure 3(a): Task feedback]
[Figure 3 panels: (a) Task feedback, (b) Action feedback, (c) Improvement feedback.]

Figure 3. Steps to goal for Q learning (blue), TAMER (red), and COACH (yellow) in Cliff world under different feedback strategies. The y-axis is on a logarithmic scale.
The TurtleBot is a mobile base with two degrees of freedom that senses the world from a Kinect camera. We discretized the action space to five actions: forward, backward, rotate clockwise, rotate counterclockwise, and do nothing. The agent selects one of these actions every 33ms. To deliver feedback, we used a Nintendo Wii controller to give +1, +4, or −1 numeric feedback, and to pause and continue training. For perception, we used only the RGB image channels from the Kinect. Because our behaviors were based around a relocatable pink ball and a fixed cylinder with an orange top, we hand-constructed relevant image features to be used by the learning algorithms. These features were generated using techniques similar to those used in neural network architectures. The features were constructed by first transforming the image into two color channels associated with the colors of the ball and cylinder. Sum pooling to form a lower-dimensional 8 × 8 grid was applied to each color channel. Each sum-pooling unit was then passed through three different normalized threshold units defined by Ti(x) = min(x/φi, 1), where φi specifies the saturation point.
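A sketch of this feature pipeline; the input image size and the φi saturation values below are hypothetical:

```python
import numpy as np

# Each color channel is sum-pooled to an 8x8 grid, then passed through
# normalized threshold units T_i(x) = min(x / phi_i, 1).
def sum_pool(channel, grid=8):
    h, w = channel.shape
    bh, bw = h // grid, w // grid
    # Crop to a multiple of the grid size, then sum over each block.
    return channel[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw).sum(axis=(1, 3))

def threshold_features(pooled, phis=(10.0, 50.0, 200.0)):
    # One saturating feature map per phi value (phi values are made up here).
    return np.stack([np.minimum(pooled / phi, 1.0) for phi in phis])

channel = np.random.default_rng(0).random((240, 320))  # one color channel
feats = threshold_features(sum_pool(channel))          # shape (3, 8, 8)
```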
The five behaviors we trained were push–pull, hide, ball following, alternate, and cylinder navigation. In push–pull, the TurtleBot is trained to navigate to the ball when it is far, and back away from it when it is near. The hide behavior has the TurtleBot back away from the ball when it is near and turn away from it when it is far. In ball following, the TurtleBot is trained to navigate to the ball. In the alternate task, the TurtleBot is trained to go back and forth between the cylinder and ball. Finally, cylinder navigation involves the agent navigating to the cylinder. We further classify the training methods for these behaviors as flat, involving the push–pull, hide, and ball following behaviors; and compositional, involving the alternate and cylinder navigation behaviors.
In all cases, our human trainer (one of the co-authors) used differential feedback and diminishing returns to quickly reinforce behaviors and restrict focus to the areas needing tuning. However, in alternate and cylinder navigation, they attempted more advanced compositional training methods. For alternate, the agent was first trained to navigate to the ball when it sees it, and then turn away when it is near. Then, the same was independently done for the cylinder. After training, introducing both objects would cause the agent to move back and forth between them. For cylinder navigation, they attempted to make use of an animal-training method called lure training, in which an animal is first conditioned to follow a lure object, which is then used to guide it through more complex behaviors. In cylinder navigation, they first trained the ball to be a lure, used it to guide the TurtleBot to the cylinder, and finally gave a +4 reward to reinforce the behaviors it took when following the ball (turning to face the cylinder, moving toward it, and stopping upon reaching it). The agent would then navigate to the cylinder without requiring the ball to be present.
For COACH parameters, we used a softmax-parameterized policy, where each action preference value was a linear function of the image features, plus tanh(θa), where θa is a learnable parameter for action a, providing a preference in the absence of any stimulus. We used two eligibility traces: λ = 0.95 for feedback +1 and −1, and λ = 0.9999 for feedback +4. The feedback–action delay d was set to 6, which is 0.198 seconds. Additionally, we used an actor-critic parameter-update rule variant in which action preference values are directly modified (along the gradient), rather than by the gradient of the policy (Sutton & Barto, 1998). This variant more rapidly communicates stimulus–response preferences. For TAMER, we used typical parameter values for fast-decision-cycle problems: delay-weighted aggregate TAMER with uniform distribution credit assignment over 0.2 to 0.8 seconds, εp = 0, and cmin = 1 (Knox, 2012). (See prior work for parameter meaning.) TAMER's reward-function approximation used the same representation as COACH.
Interactive Learning from Policy-Dependent Human Feedback
To illustrate this problem, we constructed a well-defined scenario in which TAMER consistently unlearns behavior. In this scenario, the goal was for the TurtleBot to always stay whenever the ball was present, and move forward if just the cylinder was present. We first trained TAMER to stay when the ball alone was present using many rapid rewards (yielding a large aggregated signal). Next, we trained it to move forward when the cylinder alone was present. We then introduced both objects, and the TurtleBot correctly stayed. After rewarding it for staying with a single reward (weaker than the previously-used many rapid rewards), the TurtleBot responded by moving forward: the positive feedback actually caused it to unlearn the rewarded behavior. This counter-intuitive response is a consequence of the small reward decreasing its reward-function target for the stay action to a point lower than the value for moving forward. Roughly, because TAMER does not treat zero reward as special, a positive reward can be a negative influence if it is less than expected. COACH does not exhibit this problem: any positive reward for staying will strengthen the behavior.
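The mechanism can be shown with a toy numeric illustration (hypothetical values, not the TAMER code): TAMER regresses a reward-function estimate toward observed human rewards, so a +1 reward that is smaller than the current estimate pulls the estimate down, here below the competing action's value.

```python
# Toy illustration of TAMER-style unlearning: exemplar-style regression
# toward observed rewards means a weak positive reward can *lower* a
# previously well-rewarded action's estimate. All numbers are illustrative.

H = {"stay": 5.0, "forward": 2.0}  # estimates after heavy rewarding of "stay"
alpha = 0.5                        # regression step size (illustrative)

def tamer_update(action, human_reward):
    # move the estimate toward the observed reward (a regression target)
    H[action] += alpha * (human_reward - H[action])

for _ in range(3):
    tamer_update("stay", +1.0)     # weak positive reward for the right action
# H["stay"] drops 5.0 -> 3.0 -> 2.0 -> 1.5, below H["forward"] == 2.0:
# positive feedback has made "stay" the less-preferred action.
```

A COACH-style update, by contrast, treats the sign of the feedback as the direction of the policy change, so any positive reward for staying strengthens staying.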
# 8.1. Results and Discussion

COACH was able to successfully learn all five behaviors, and a video showing its learning is available online at https://www.youtube.com/watch?v=e2Ewxumy8EA. Each of these behaviors was trained in less than two minutes, including the time spent verifying that a behavior worked. Differential feedback and diminishing returns allowed only the behaviors in need of tuning to be quickly reinforced or extinguished, without any explicit division between training and testing. Moreover, the agent successfully benefited from the compositional training methods, correctly combining subbehaviors for alternate, and quickly learning cylinder navigation with the lure. TAMER only successfully learned the behaviors using the flat training methodology and failed to learn the compositionally trained behaviors. In all cases, TAMER tended to forget behavior, requiring feedback for previously learned decisions to be resupplied after it learned a new decision. For the alternate behavior, this forgetting led to failure: after training the behavior for the cylinder, the agent forgot some of the ball-related behavior and ended up drifting off course when it was time to go to the ball. TAMER also failed to learn from lure training because TAMER does not allow reinforcing a long history of behaviors.
We believe TAMER's forgetting is a result of interpreting feedback as reward-function exemplars, in which new feedback in similar contexts can change the target.

# 9. Conclusion

In this work, we presented empirical results that show that the numeric feedback people give agents in an interactive training paradigm is influenced by the agent's current policy, and argued why such policy-dependent feedback enables useful training strategies. We then introduced COACH, an algorithm that, unlike existing human-centered reinforcement-learning algorithms, converges to a local optimum when trained with policy-dependent feedback. We showed that COACH learns robustly in the face of multiple feedback strategies, and finally showed that COACH can be used in the context of robotics with advanced training methods. There are a number of exciting future directions to extend this work. In particular, because COACH is built on the actor-critic paradigm, it should be possible to combine it straightforwardly with learning from demonstration and environmental rewards, allowing an agent to be trained in a variety of ways. Second, because people give policy-dependent feedback, investigating how people model the current policy of the agent and how their model differs from the agent's actual policy may produce even greater gains.
# Acknowledgements

We thank the anonymous reviewers for their useful suggestions and comments. This research has taken place in part at the Intelligent Robot Learning (IRL) Lab, Washington State University. IRL's support includes NASA NNX16CD07C, NSF IIS-1149917, NSF IIS-1643614, and USDA 2014-67021-22174.

# References

Argall, Brenna D, Chernova, Sonia, Veloso, Manuela, and Browning, Brett. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009.

Baird, Leemon. Residual algorithms: Reinforcement learning with function approximation. In Proceedings of the Twelfth International Conference on Machine Learning, pp. 30–37, 1995.

Barto, A.G., Sutton, R.S., and Anderson, C.W. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man and Cybernetics, SMC-13(5):834–846, 1983.
Knox, W Bradley, Glass, Brian D, Love, Bradley C, Maddox, W Todd, and Stone, Peter. How humans teach agents. International Journal of Social Robotics, 4(4):409–421, 2012.

Knox, W Bradley, Stone, Peter, and Breazeal, Cynthia. Training a robot via human feedback: A case study. In Social Robotics, pp. 460–470. Springer, 2013.

Knox, W. Bradley and Stone, Peter. Combining manual feedback with subsequent MDP reward signals for reinforcement learning. In Proc. of 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010), May 2010.

Knox, William Bradley. Learning from human-generated reward. PhD thesis, University of Texas at Austin, 2012.

Bhatnagar, Shalabh, Sutton, Richard S, Ghavamzadeh, Mohammad, and Lee, Mark. Natural actor–critic algorithms. Automatica, 45(11):2471–2482, 2009.

Clouse, Jeffery A and Utgoff, Paul E. A teaching method for reinforcement learning. In Proceedings of the Ninth International Conference on Machine Learning (ICML'92), pp. 92–101, 1992.
Loftin, Robert, Peng, Bei, MacGlashan, James, Littman, Michael L., Taylor, Matthew E., Huang, Jeff, and Roberts, David L. Learning behaviors via human-delivered discrete feedback: modeling implicit feedback strategies to speed up learning. Autonomous Agents and Multi-Agent Systems, 30(1):30–59, 2015.

Griffith, Shane, Subramanian, Kaushik, Scholz, Jonathan, Isbell, Charles, and Thomaz, Andrea L. Policy shaping: Integrating human feedback with reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2625–2633, 2013.

Maclin, Richard, Shavlik, Jude, Torrey, Lisa, Walker, Trevor, and Wild, Edward. Giving advice about preferred actions to reinforcement learners via knowledge-based kernel regression. In Proceedings of the National Conference on Artificial Intelligence, volume 20, pp. 819, 2005.

Ho, Mark K, Littman, Michael L., Cushman, Fiery, and Austerweil, Joseph L. Teaching with rewards and punishments: Reinforcement or communication? In Proceedings of the 37th Annual Meeting of the Cognitive Science Society, 2015.
Isbell, Charles, Shelton, Christian R, Kearns, Michael, Singh, Satinder, and Stone, Peter. A social reinforcement learning agent. In Proceedings of the Fifth International Conference on Autonomous Agents, pp. 377–384. ACM, 2001.

Knox, W Bradley and Stone, Peter. Interactively shaping agents via human reinforcement: The TAMER framework. In Proceedings of the Fifth International Conference on Knowledge Capture, pp. 9–16, 2009a.

Miltenberger, Raymond G. Behavior modification: Principles and procedures. Cengage Learning, 2011.

Pilarski, Patrick M, Dawson, Michael R, Degris, Thomas, Fahimi, Farbod, Carey, Jason P, and Sutton, Richard S. Online human training of a myoelectric prosthesis controller via actor-critic reinforcement learning. In 2011 IEEE International Conference on Rehabilitation Robotics, pp. 1–7. IEEE, 2011.

Puterman, Martin L. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, 1994.

Sutton, Richard S and Barto, Andrew G. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.
Knox, W Bradley and Stone, Peter. Interactively shaping agents via human reinforcement: The TAMER framework. In Proceedings of the Fifth International Conference on Knowledge Capture, pp. 9–16. ACM, 2009b.

Sutton, Richard S, McAllester, David A, Singh, Satinder P, Mansour, Yishay, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057–1063, 1999.

Knox, W Bradley and Stone, Peter. Learning non-myopically from human-generated reward. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, pp. 191–202. ACM, 2013.

Tenorio-Gonzalez, Ana C, Morales, Eduardo F, and Villaseñor-Pineda, Luis. Dynamic reward shaping: training a robot by voice. In Advances in Artificial Intelligence – IBERAMIA 2010, pp. 483–492. Springer, 2010.
Thomaz, Andrea L and Breazeal, Cynthia. Robot learning via socially guided exploration. In Development and Learning, 2007 (ICDL 2007), IEEE 6th International Conference on, pp. 82–87. IEEE, 2007.

Thomaz, Andrea L and Breazeal, Cynthia. Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artificial Intelligence, 172:716–737, 2008.

Thomaz, Andrea Lockerd and Breazeal, Cynthia. Reinforcement learning with human teachers: Evidence of feedback and guidance with implications for learning performance. In AAAI, volume 6, pp. 1000–1005, 2006.

Watkins, Christopher J. C. H. and Dayan, Peter. Q-learning. Machine Learning, 8(3):279–292, 1992.
1701.05517
1
{tim, karpathy, peter, dpkingma}@openai.com

# ABSTRACT

PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs, which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.

# 1 INTRODUCTION
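Modification (1) above can be sketched numerically: each pixel value in {0, ..., 255} receives the probability mass that a mixture of logistic distributions assigns to its bin, instead of a 256-way softmax. This is a minimal illustration, not the paper's implementation: the paper works on pixels rescaled to [−1, 1], and the mixture parameters below are made up.

```python
import numpy as np

# Sketch of a discretized logistic mixture pmf over pixel values 0..255.
# Each bin [x - 0.5, x + 0.5] gets the mass a mixture of logistics assigns
# to it; the edge bins absorb the tails so the pmf sums to exactly 1.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_mixture_pmf(x, means, log_scales, mix_weights):
    """P(pixel == x) for integer x in [0, 255]."""
    inv_s = np.exp(-np.asarray(log_scales))
    cdf_hi = sigmoid((x + 0.5 - means) * inv_s) if x < 255 else np.ones_like(means)
    cdf_lo = sigmoid((x - 0.5 - means) * inv_s) if x > 0 else np.zeros_like(means)
    return float(np.sum(mix_weights * (cdf_hi - cdf_lo)))

means = np.array([100.0, 200.0])   # component means (pixel units, illustrative)
log_scales = np.array([2.0, 1.5])  # log of component scales
weights = np.array([0.7, 0.3])     # mixture weights, sum to 1

total = sum(discretized_logistic_mixture_pmf(v, means, log_scales, weights)
            for v in range(256))   # sums to 1 up to float error
```

In training one would maximize the log of this pmf at the observed pixel values; a handful of mixture components replaces 256 softmax logits per channel, which is what speeds up training.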
1701.05517#1
PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications
PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.
http://arxiv.org/pdf/1701.05517
Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma
cs.LG, stat.ML
null
null
cs.LG
20170119
20170119
[]