# 4. Framework and Training Details
# 4.1. Distributed Asynchronous Framework
To speed up the learning of the agent, we use a distributed asynchronous framework, as illustrated in Fig. 7. It consists of three parts: a master node, a controller node, and compute nodes. The agent first samples a batch of block structures on the master node. We then store them in the controller node, which uses the block structures to build the entire networks and allocates these networks to the compute nodes. It can be regarded as a simplified parameter server [5, 18]. Specifically, the networks are trained in parallel on the compute nodes, and the validation accuracy is returned as the reward through the controller node to update the agent. With this framework, we can generate networks efficiently on multiple machines with multiple GPUs.

Figure 7. The distributed asynchronous framework, consisting of three parts: master node, controller node, and compute nodes.

| ε | 1.0 | 0.9 | 0.8 | 0.7 | 0.6 | 0.5 | 0.4 | 0.3 | 0.2 | 0.1 |
|---|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Iters | 95 | 7 | 7 | 7 | 10 | 10 | 10 | 10 | 10 | 12 |

Table 2. Epsilon schedule: the number of iterations the agent trains at each epsilon (ε) value.
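To make the interaction between the master, controller, and compute nodes of Section 4.1 concrete, the following is a minimal, purely illustrative sketch; all names (`sample_block`, `train_and_evaluate`, the thread pool standing in for compute nodes) are hypothetical placeholders rather than the actual implementation.

```python
# Minimal, hypothetical sketch of the master/controller/compute-node loop.
# Random numbers stand in for block sampling and for network training, and a
# thread pool stands in for the distributed compute nodes.
import random
from concurrent.futures import ThreadPoolExecutor

NUM_COMPUTE_NODES = 4    # assumption: one worker per GPU
BLOCKS_PER_BATCH = 64    # the agent samples a batch of block structures

def sample_block():
    """Master node: the agent samples one block description."""
    return [random.randint(0, 10) for _ in range(5)]      # placeholder structure code

def train_and_evaluate(block):
    """Compute node: build the network from the block, train it briefly with
    early stop, and return the validation accuracy used as the reward."""
    return random.uniform(0.5, 0.7)                        # placeholder accuracy

def one_iteration():
    blocks = [sample_block() for _ in range(BLOCKS_PER_BATCH)]
    # Controller node: build whole networks from the blocks and allocate them
    # to the compute nodes; accuracies flow back to update the agent.
    with ThreadPoolExecutor(max_workers=NUM_COMPUTE_NODES) as nodes:
        rewards = list(nodes.map(train_and_evaluate, blocks))
    return list(zip(blocks, rewards))

if __name__ == "__main__":
    results = one_iteration()
    print("best reward in this batch:", max(r for _, r in results))
```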
# 4.2. Training Details
Epsilon-greedy Strategy. The agent is trained using Q-learning with experience replay [19] and an epsilon-greedy strategy [21]. With the epsilon-greedy strategy, a random action is taken with probability ε and the greedy action is chosen with probability 1 − ε. We decrease epsilon from 1.0 to 0.1 following the schedule shown in Table 2, so that the agent can shift smoothly from exploration to exploitation. We find that the results improve with a longer exploration period, since the search scope becomes larger and the agent sees more block structures during random exploration.
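A small sketch of epsilon-greedy action selection under the schedule of Table 2 is given below; the Q-values and the action space are placeholders.

```python
# Hypothetical sketch of epsilon-greedy selection under the schedule of
# Table 2: epsilon decays from 1.0 to 0.1, with the listed number of
# iterations at each value. Q-values and the action space are placeholders.
import random

EPSILON_SCHEDULE = [(1.0, 95), (0.9, 7), (0.8, 7), (0.7, 7), (0.6, 10),
                    (0.5, 10), (0.4, 10), (0.3, 10), (0.2, 10), (0.1, 12)]

def epsilon_greedy(q_values, epsilon):
    """Take a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

q_values = [0.2, 0.5, 0.1]                 # placeholder Q-values for one state
for epsilon, iterations in EPSILON_SCHEDULE:
    for _ in range(iterations):
        action = epsilon_greedy(q_values, epsilon)   # exploration -> exploitation
```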
Experience Replay. Following [2], we employ a replay memory to store the validation accuracy and block description after each iteration. Within a given interval, i.e., each training iteration, the agent samples 64 blocks with their corresponding validation accuracies from the memory and updates the Q-values 64 times.
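The replay mechanism can be sketched as follows; the block encoding and the update call are placeholders for the NSC vectors and the Q-learning update.

```python
# Hypothetical sketch of the replay memory: the block description and its
# validation accuracy are stored after each iteration, and 64 samples are
# drawn to perform 64 Q-value updates. The block encoding is a placeholder.
import random

class ReplayMemory:
    def __init__(self):
        self.buffer = []                              # (block, accuracy) pairs

    def store(self, block, accuracy):
        self.buffer.append((block, accuracy))

    def sample(self, batch_size=64):
        return random.choices(self.buffer, k=batch_size)   # sample with replacement

memory = ReplayMemory()
memory.store(block=[(1, "conv3"), (2, "concat")], accuracy=0.63)  # placeholder entry
for block, accuracy in memory.sample(batch_size=64):
    pass   # a Q-value update would be applied here, 64 times per iteration
```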
# BlockQNN Generation.
In the Q-learning update process, the learning rate α is set to 0.01 and the discount factor γ is 1. We set the hyperparameters μ and ρ in the redefined reward function to 1 and 8, respectively. The agent samples 64 sets of NSC vectors at a time to compose a mini-batch, and the maximum layer index for a block is set to 23. We train the agent for 178 iterations, i.e., sampling 11,392 blocks in total.
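For illustration, a sketch of a tabular Q-value update with these hyperparameters is shown below; the redefined reward that involves μ and ρ is defined earlier in the paper and is abstracted here as a precomputed `shaped_reward`, and the state/action encodings are placeholders.

```python
# Hypothetical sketch of the tabular Q-value update used during block search,
# with the hyperparameters reported above (alpha = 0.01, gamma = 1). The
# redefined reward weighted by mu and rho is defined earlier in the paper and
# is abstracted here as a precomputed `shaped_reward`.
from collections import defaultdict

ALPHA, GAMMA = 0.01, 1.0
q_table = defaultdict(float)                # maps (state, action) -> Q-value

def q_update(state, action, shaped_reward, next_state, next_actions):
    best_next = max((q_table[(next_state, a)] for a in next_actions), default=0.0)
    q_table[(state, action)] = ((1 - ALPHA) * q_table[(state, action)]
                                + ALPHA * (shaped_reward + GAMMA * best_next))

# Example update for one sampled transition (all values are placeholders).
q_update(state=("layer", 3), action="conv3", shaped_reward=0.62,
         next_state=("layer", 4), next_actions=["conv1", "conv3", "concat"])
```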
During the block searching phase, the compute nodes train each generated network for a fixed 12 epochs on CIFAR-100 using the early stop strategy described in Section 3.3. CIFAR-100 contains 60,000 samples in 100 classes, which are divided into training and test sets at a
ratio of 5:1. We train these networks without any data augmentation. The batch size is set to 256. We use the Adam optimizer [15] with β1 = 0.9, β2 = 0.999, and ε = 10^-8. The initial learning rate is set to 0.001 and is reduced by a factor of 0.2 every 5 epochs. All weights are initialized as in [9]. If the training result after the first epoch is worse than random guessing, we reduce the learning rate by a factor of 0.4 and restart training, with a maximum of 3 restarts.
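A hedged PyTorch-style sketch of these search-phase settings, including the restart rule, is given below; `model` is a placeholder for a generated network, not the actual search code.

```python
# Hedged PyTorch-style sketch of the search-phase optimizer settings and the
# restart rule described above; `model` is a placeholder for a generated network.
import torch
import torch.nn as nn

def make_search_optimizer(model):
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001,
                                 betas=(0.9, 0.999), eps=1e-8)
    # learning rate is multiplied by 0.2 every 5 epochs
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.2)
    return optimizer, scheduler

def maybe_restart(first_epoch_accuracy, lr, restarts, num_classes=100):
    """If the first epoch is worse than random guessing, scale the learning
    rate by 0.4 and restart, for at most three restarts."""
    if first_epoch_accuracy < 1.0 / num_classes and restarts < 3:
        return lr * 0.4, restarts + 1, True
    return lr, restarts, False

model = nn.Linear(32 * 32 * 3, 100)          # placeholder for a generated network
optimizer, scheduler = make_search_optimizer(model)
print(maybe_restart(first_epoch_accuracy=0.008, lr=0.001, restarts=0))
```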
After obtaining one optimal block structure, we build the whole network with stacked blocks and train the network until convergence, using the validation accuracy as the criterion to pick the best network. In this phase, we augment the data with random 32 × 32 crops and horizontal flips. All models use the SGD optimizer with a momentum of 0.9 and a weight decay of 0.0005. We start with a learning rate of 0.1 and train the models for 300 epochs, reducing the learning rate at the 150th and 225th epochs. The batch size is set to 128, and all weights are initialized with MSRA initialization [9].
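The corresponding full-training setup can be sketched as follows; the 4-pixel crop padding and the factor-of-10 learning-rate drop are assumptions, since the text only specifies random 32 × 32 crops, horizontal flips, and drops at epochs 150 and 225.

```python
# Hedged PyTorch-style sketch of the full-training settings above. The 4-pixel
# crop padding and the factor-of-10 learning-rate drop are assumptions; the
# text only specifies random 32x32 crops, flips, and drops at epochs 150/225.
import torch
import torch.nn as nn
import torchvision.transforms as T

train_transform = T.Compose([
    T.RandomCrop(32, padding=4),        # assumption: 4-pixel padding before cropping
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

model = nn.Linear(32 * 32 * 3, 10)      # placeholder for the stacked-block network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[150, 225], gamma=0.1)
```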
Transferable BlockQNN. We also evaluate the transferability of the best auto-generated block structure searched on CIFAR-100 to a smaller dataset, CIFAR-10, with only 10 classes, and to a larger dataset, ImageNet, containing 1.2M images in 1,000 classes. All the experimental settings for CIFAR-10 are the same as those used on CIFAR-100 above. On ImageNet, training is conducted with a mini-batch size of 256, where each image is augmented with random cropping and flipping, and optimized with SGD. The initial learning rate, weight decay, and momentum are set to 0.1, 0.0001, and 0.9, respectively. We divide the learning rate by 10 twice, at the 30th and 60th epochs, and train for a total of 90 epochs. We evaluate accuracy on the test images with a center crop.
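The center-crop evaluation can be sketched as below; resizing the shorter side to 256 before the 224 × 224 crop (the input size reported in Table 5) is an assumption.

```python
# Hedged sketch of the center-crop evaluation preprocessing for ImageNet;
# resizing the shorter side to 256 before the 224x224 crop (the input size
# reported in Table 5) is an assumption.
import torchvision.transforms as T

eval_transform = T.Compose([
    T.Resize(256),          # assumption: standard shorter-side resize
    T.CenterCrop(224),      # evaluate on the central 224x224 region
    T.ToTensor(),
])
```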
Our framework is implemented on the PyTorch scientific computing platform. We use the CUDA backend and the cuDNN accelerated library for high-performance GPU acceleration. Our experiments are carried out on 32 NVIDIA TitanX GPUs and took about 3 days to complete the search.
# 5. Results
# 5.1. Block Searching Analysis
Fig. 8(a) plots the early stop accuracy over 178 batches on CIFAR-100, where each point is averaged over the 64 auto-generated block-wise network candidates in that mini-batch. After the random exploration phase, the early stop accuracy grows steadily until it converges. The mean accuracy during random exploration is 56%, while it finally reaches 65% in the last stage with ε = 0.1. We choose the top-100 block candidates and train their respective networks to verify the best block structure.
Figure 8. (a) Q-learning performance on CIFAR-100. The accuracy rises as epsilon decreases, and the top models are all found in the final stage, showing that our agent learns to generate better block structures instead of searching randomly. (b-c) Topology of the top-2 block structures generated by our approach, which we call Block-QNN-A and Block-QNN-B. (d) Topology of the best block structure generated with limited parameters, named Block-QNN-S.
Figure 9. Q-learning results with different NSC on CIFAR-100. The red line refers to searching with the PCC, i.e., the combination of ReLU, Conv, and BN; the blue line stands for searching ReLU, BN, and Conv separately. The red curve is better than the blue one from the beginning, with a large gap.
| Method | Depth | Para | C-10 | C-100 |
|---|---|---|---|---|
| VGG [25] | - | - | 7.25 | - |
| ResNet [10] | 110 | 1.7M | 6.61 | - |
| Wide ResNet [36] | 28 | 36.5M | 4.17 | 20.5 |
| ResNet (pre-activation) [11] | 1001 | 10.2M | 4.62 | 22.71 |
| DenseNet (k = 12) [13] | 40 | 1.0M | 5.24 | 24.42 |
| DenseNet (k = 12) [13] | 100 | 7.0M | 4.10 | 20.20 |
| DenseNet (k = 24) [13] | 100 | 27.2M | 3.74 | 19.25 |
| DenseNet-BC (k = 40) [13] | 190 | 25.6M | 3.46 | 17.18 |
| MetaQNN (ensemble) [2] | - | - | 7.32 | - |
| MetaQNN (top model) [2] | - | 11.2M | 6.92 | 27.14 |
| NAS v1 [37] | 15 | 4.2M | 5.50 | - |
| NAS v2 [37] | 20 | 2.5M | 6.01 | - |
| NAS v3 [37] | 39 | 7.1M | 4.47 | - |
| NAS v3 more filters [37] | 39 | 37.4M | 3.65 | - |
| Block-QNN-A, N=4 | 25 | - | 3.60 | 18.64 |
| Block-QNN-B, N=4 | 37 | - | 3.80 | |
We show the top-2 block structures in Fig. 8(b-c), denoted Block-QNN-A and Block-QNN-B. As shown in Fig. 8(a), both top-2 blocks are found in the final stage of the Q-learning process, which demonstrates the effectiveness of the proposed method in searching for optimal block structures rather than randomly sampling a large number of models. Furthermore, we observe that the generated blocks share properties with state-of-the-art hand-crafted networks: for example, Block-QNN-A and Block-QNN-B contain short-cut connections and multi-branch structures, which have been manually designed into residual-based and inception-based networks. Compared with other auto-generation methods, the networks generated by our approach are more elegant and can automatically and effectively reveal the beneficial properties of optimal network structures.
Table 3. Block-QNN results (error rates) compared with state-of-the-art methods on the CIFAR-10 (C-10) and CIFAR-100 (C-100) datasets.
To squeeze the search space, as stated in Section 3.1, we define a Pre-activation Convolutional Cell (PCC) that consists of three components, i.e., ReLU, convolution, and batch normalization (BN). Fig. 9 shows the superiority of the PCC, which searches the combination of the three components, over searching each component separately: separate search is more likely to generate "bad" blocks and also needs a larger search space and more time to find "good" blocks.

# 5.2. Results on CIFAR
Due to the small size of the images in CIFAR (i.e., 32 × 32), we set the block stack number to N = 4. We compare our best generated architectures with state-of-the-art hand-crafted and auto-generated networks in Table 3.
Comparison with hand-crafted networks - Table 3 shows that our Block-QNN networks outperform most hand-crafted networks. DenseNet-BC [13] uses additional 1 × 1 convolutions in each composite function and a compressive transition layer to reduce parameters and improve performance, which is not adopted in our design; our performance could be further improved by using this prior knowledge.
Comparison with auto-generated networks - Our approach achieves a significant improvement over MetaQNN [2] and even surpasses NAS's best model (i.e., NASv3 with more filters) [37] proposed by Google Brain, which incurs expensive time and GPU-resource costs. As shown in Table 4, NAS trains the whole system on 800 GPUs for 28 days, while we need only 32 GPUs for 3 days to reach state-of-the-art performance.
Transfer block from CIFAR-100 to CIFAR-10 - We transfer the top blocks learned on CIFAR-100 to the CIFAR-10 dataset, keeping all experimental settings the same. As shown in Table 3, the blocks also achieve state-of-the-art results on CIFAR-10, with a 3.60% error rate, which shows that Block-QNN networks have strong transferability.
Analysis on network parameters - The networks generated by our method can be complex, with a large number of parameters, since we do not add any constraints during training. We further conduct an experiment on searching networks with limited parameters and adaptive block numbers. We set the maximal parameter count to 10M and obtain an optimal block (i.e., Block-QNN-S) that outperforms NASv3 with fewer parameters, as shown in Fig. 8(d). In addition, when using more filters in each convolutional layer (e.g., from [32,64,128] to [80,160,320]), we achieve an even better result (3.54%).
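A minimal sketch of how such a parameter budget can be checked is given below; whether over-budget candidates are rejected or penalized during the search is not specified here, so the rejection rule is an assumption.

```python
# Hypothetical sketch of checking the 10M-parameter budget for a candidate
# network; whether over-budget candidates are rejected or penalized is an
# assumption of this sketch.
import torch.nn as nn

PARAM_BUDGET = 10_000_000

def within_budget(model: nn.Module) -> bool:
    """Count trainable parameters and compare against the 10M limit."""
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return n_params <= PARAM_BUDGET

print(within_budget(nn.Linear(1024, 1024)))   # placeholder candidate network
```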
# 5.3. Transfer to ImageNet
To demonstrate the generalizability of our approach, we transfer the block structure learned on CIFAR to the ImageNet dataset.
For the ImageNet task, we set the block repeat number to N = 3 and add more downsampling operations before the blocks; the filters for the convolution layers in the different-level blocks are [64, 128, 256, 512]. We use the best block structure learned on CIFAR-100 directly, without any fine-tuning, and initialize the generated network with MSRA initialization as above. The experimental results are shown in Table 5. The network generated by our framework achieves competitive results compared with other human-designed models. Recently proposed methods such as Xception [4] and ResNeXt [35] use special depth-wise convolution operations to reduce their total number of parameters and to improve performance. We do not use this convolution operation in our work, so a fair comparison is not possible; we will consider it in future work to further improve performance.
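A heavily hedged sketch of how the ImageNet network might be assembled from the learned block is shown below; `LearnedBlock`, the stem, the downsampling convolutions, and the classifier head are placeholders or assumptions, while the stage widths [64, 128, 256, 512] and N = 3 follow the text.

```python
# Heavily hedged sketch of assembling the ImageNet network from the learned
# block: N = 3 block repeats per stage, stage widths [64, 128, 256, 512], and
# a downsampling convolution before each stage. `LearnedBlock`, the stem, and
# the classifier head are placeholders/assumptions.
import torch.nn as nn

class LearnedBlock(nn.Module):
    """Placeholder standing in for the auto-generated block structure."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(nn.ReLU(),
                                  nn.Conv2d(channels, channels, 3, padding=1),
                                  nn.BatchNorm2d(channels))

    def forward(self, x):
        return x + self.body(x)     # assumption: a simple residual form

def build_imagenet_net(widths=(64, 128, 256, 512), n_repeat=3, num_classes=1000):
    layers = [nn.Conv2d(3, widths[0], 7, stride=2, padding=3)]   # assumed stem
    in_ch = widths[0]
    for w in widths:
        layers.append(nn.Conv2d(in_ch, w, 3, stride=2, padding=1))  # downsample before the stage
        layers.extend(LearnedBlock(w) for _ in range(n_repeat))     # N = 3 stacked blocks
        in_ch = w
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, num_classes)]
    return nn.Sequential(*layers)

model = build_imagenet_net()
```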
| Method | Best model on CIFAR-10 | GPUs | Time (days) |
|---|---|---|---|
| MetaQNN [2] | 6.92 | 10 | 10 |
| NAS [37] | 3.65 | 800 | 28 |
| Our approach | 3.54 | 32 | 3 |
Table 4. The computing resources and time required by our approach compared with other automatic network design methods.
| Method | Input size | Depth | Top-1 | Top-5 |
|---|---|---|---|---|
| VGG [25] | 224×224 | 16 | 28.5 | 9.90 |
| Inception V1 [30] | 224×224 | 22 | 27.8 | 10.10 |
| Inception V2 [14] | 224×224 | 22 | 25.2 | 7.80 |
| ResNet-50 [11] | 224×224 | 50 | 24.7 | 7.80 |
| ResNet-152 [11] | 224×224 | 152 | 23.0 | 6.70 |
| Xception (our test) [4] | 224×224 | 50 | 23.6 | 7.10 |
| ResNeXt-101 (64×4d) [35] | 224×224 | 101 | 20.4 | 5.30 |
| Block-QNN-B, N=3 | 224×224 | 38 | 24.3 | 7.40 |
| Block-QNN-S, N=3 | 224×224 | 38 | 22.6 | 6.46 |
Table 5. Block-QNN results (single-crop error rates) compared with modern methods on the ImageNet-1K dataset.
As far as we know, most previous works on automatic network generation did not report competitive results on large-scale image classification datasets. With the concept of block learning, we can easily transfer an architecture learned on small datasets to a large dataset such as ImageNet. In future experiments, we will try to apply the generated blocks to other tasks such as object detection and semantic segmentation.
# 6. Conclusion | 1708.05552#39 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
In this paper, we show how to efficiently design high-performance network blocks with Q-learning. We use a distributed asynchronous Q-learning framework and an early stop strategy to focus on fast block-structure search. We apply the framework to automatic block generation for constructing good convolutional networks. Our Block-QNN networks outperform modern hand-crafted networks as well as other auto-generated networks on image classification tasks. The best block structure, which achieves state-of-the-art performance on CIFAR, can be easily transferred to the large-scale ImageNet dataset and also yields competitive performance compared with the best hand-crafted networks. We show that searching with the block design strategy produces more elegant and model-explicable network architectures. In the future, we will continue to improve the proposed framework from different aspects, such as using more powerful convolution layers and making the search process faster. We will also try to search for blocks with limited FLOPs and conduct experiments on other tasks such as detection and segmentation.
# Acknowledgments
This work has been supported by the National Natural Science Foundation of China (NSFC) Grants 61721004 and 61633021.
# Appendix
# A. Efficiency of BlockQNN
We demonstrate the effectiveness of the proposed BlockQNN for network architecture generation on the CIFAR-100 dataset by comparing it with random search given an equivalent number of training iterations, i.e., the same number of sampled networks. We define the effectiveness of an automatic network-generation algorithm as the increase in the performance of the top auto-generated network from the initial random exploration to exploitation, since our aim is to obtain an optimal auto-generated network rather than to raise the average performance.
Figure 10 shows the performance of BlockQNN and random search (RS) over a complete training process, i.e., sampling 11,392 blocks in total. The best model generated by BlockQNN is markedly better than the best model found by RS, by over 1%, in the exploitation phase on CIFAR-100. We observe the same for the mean performance of the top-5 models generated by BlockQNN compared with RS. Note that, for fairness, the compared random search starts from the same exploration phase as BlockQNN.
Figure 11 shows the performance of BlockQNN with limited parameters and adaptive block numbers (BlockQNN-L) and of random search with limited parameters and adaptive block numbers (RS-L) over a complete training process. We observe the same phenomenon: BlockQNN-L outperforms RS-L by over 1% in the exploitation phase. These results show that BlockQNN learns to generate better network architectures rather than searching randomly.
# B. Evolutionary Process of Auto-Generated Blocks
We sample the block structures with median performance generated by our approach at different stages, i.e., at iterations [1, 30, 60, 90, 110, 130, 150, 170], to show the evolutionary process. As illustrated in Figure 12 and Figure 13 (BlockQNN and BlockQNN-L, respectively), the block structures generated in the random exploration stage are much simpler than those generated in the exploitation stage.
In the exploitation stage, multi-branch structures appear frequently.
Figure 10. Measuring the efficiency of BlockQNN against random search (RS) for learning neural architectures. The x-axis measures training iterations (batch size 64), i.e., the total number of architectures sampled, and the y-axis is the early stop performance after 12 epochs of CIFAR-100 training. Each pair of curves shows the mean accuracy across the top-ranking models generated by each algorithm. Best viewed in color.
Figure 11. Measuring the efficiency of BlockQNN with limited parameters and adaptive block numbers (BlockQNN-L) against random search with limited parameters and adaptive block numbers (RS-L) for learning neural architectures. The x-axis measures training iterations (batch size 64), i.e., the total number of architectures sampled, and the y-axis is the early stop performance after 12 epochs of CIFAR-100 training. Each pair of curves shows the mean accuracy across the top-ranking models generated by each algorithm. Best viewed in color.
Note that the number of connections gradually increases and the blocks tend to choose "Concat" as the last layer. We also find that short-cut connections and element-wise add layers are common in the exploitation stage. Additionally, blocks generated by BlockQNN-L contain fewer "Conv,5" layers, i.e., convolution layers with a kernel size of 5, because of the parameter limit.
These observations suggest that our approach can learn universal design concepts for good network blocks. Compared with other automatic network-architecture design methods, our generated networks are more elegant and more model-explicable.
Figure 12. Evolutionary process of blocks generated by BlockQNN. We sample the block structures with median performance at iterations [1, 30, 60, 90, 110, 130, 150, 170] to compare the blocks from the random exploration stage with those from the exploitation stage.
Figure 13. Evolutionary process of blocks generated by BlockQNN with limited parameters and adaptive block numbers (BlockQNN-L). We sample the block structures with median performance at iterations [1, 30, 60, 90, 110, 130, 150, 170] to compare the blocks from the random exploration stage with those from the exploitation stage.
# C. Additional Experiment
We also use BlockQNN to generate an optimal model for the person keypoint task. The search is conducted on the MPII dataset, and the best model found on MPII is then transferred to the COCO challenge. The search took 5 days to complete. The auto-generated network for the keypoint task outperforms the state-of-the-art 2-stack hourglass network, with 70.5 AP compared with 70.1 AP on the COCO validation set.
# References

[5] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223–1231, 2012.
[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009.
[7] T. Domhan, J. T. Springenberg, and F. Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In IJCAI, pages 3460–3468, 2015.
[1] M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981–3989, 2016.
[2] B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. In 6th International Conference on Learning Representations, 2017.
[3] J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pages 2546–2554, 2011.
[4] F. Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[8] K. He and J. Sun. Convolutional neural networks at constrained time cost. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5353–5360, 2015.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015.
[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 48 | [11] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016. 1, 2, 4, 7, 8
[12] S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001. 3
[13] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 7, 8
[14] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456, 2015. 1, 2, 8
[15] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In 3rd International Conference for Learning Representations, 2015. 6
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proc. Advances in Neural Information Processing Systems, pages 1097–1105, 2012. 1 | 1708.05552#48 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 49 | [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proc. Advances in Neural Information Processing Systems, pages 1097–1105, 2012. 1
[17] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015. 1
[18] M. Li, L. Zhou, Z. Yang, A. Li, F. Xia, D. G. Andersen, and A. Smola. Parameter server for distributed machine learning. In Big Learning NIPS Workshop, volume 6, page 2, 2013. 5
[19] L.-J. Lin. Reinforcement learning for robots using neural networks. Technical report, Carnegie-Mellon Univ Pittsburgh PA School of Computer Science, 1993. 2, 6
[20] M. Lin, Q. Chen, and S. Yan. Network in network. In International Conference on Learning Representations, 2013. 4 | 1708.05552#49 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 50 | [20] M. Lin, Q. Chen, and S. Yan. Network in network. In International Conference on Learning Representations, 2013. 4
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. 2, 6
[22] A. Y. Ng, D. Harada, and S. Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pages 278–287, 1999. 5
[23] S. Saxena and J. Verbeek. Convolutional neural fabrics. In Advances in Neural Information Processing Systems, pages 4053–4061, 2016. 2
[24] J. D. Schaffer, D. Whitley, and L. J. Eshelman. Combinations of genetic algorithms and neural networks: A survey of the state of the art. In Combinations of Genetic Algorithms and Neural Networks, 1992, COGANN-92, International Workshop on, pages 1–37. IEEE, 1992. 2 | 1708.05552#50 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 51 | [25] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference for Learning Representations, 2015. 1, 7, 8
[26] K. O. Stanley, D. B. D'Ambrosio, and J. Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185–212, 2009. 2
[27] K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99–127, 2002. 2
[28] M. Suganuma, S. Shirakawa, and T. Nagao. A genetic programming approach to designing convolutional neural network architectures. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 497–504, 2017. 2 | 1708.05552#51 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 52 | [29] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998. 5
[30] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015. 1, 2, 8
[31] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016. 1, 2
[32] R. Vilalta and Y. Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, 2002. 3
[33] C. J. C. H. Watkins. Learning from delayed rewards. PhD thesis, King's College, Cambridge, 1989. 1
[34] L. Xie and A. Yuille. Genetic cnn. In Proceedings of the | 1708.05552#52 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 53 | [34] L. Xie and A. Yuille. Genetic cnn. In Proceedings of the International Conference on Computer Vision, 2017. 2
[35] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5987–5995. IEEE, 2017. 8
[36] S. Zagoruyko and N. Komodakis. Wide residual networks. In British Machine Vision Conference, 2016. 7
[37] B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In 6th International Conference on Learning Representations, 2017. 1, 2, 5, 7, 8 | 1708.05552#53 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.04782 | 0 | arXiv:1708.04782v1 [cs.LG] 16 Aug 2017
# StarCraft II: A New Challenge for Reinforcement Learning
Oriol Vinyals Timo Ewalds Sergey Bartunov Petko Georgiev Alexander Sasha Vezhnevets Michelle Yeo Alireza Makhzani Heinrich Küttler John Agapiou Karen Simonyan Julian Schrittwieser John Quan Hado van Hasselt DeepMind Tom Schaul Stephen Gaffney David Silver Stig Petersen Timothy Lillicrap
Kevin Calderone Paul Keet Anthony Brunasso David Lawrence Anders Ekermo Jacob Repp Blizzard Rodney Tsing
# Abstract | 1708.04782#0 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 1 | This paper introduces SC2LE (StarCraft II Learning Environment), a reinforcement learning environment based on the game StarCraft II. This domain poses a new grand challenge for reinforcement learning, representing a more difficult class of problems than considered in most prior work. It is a multi-agent problem with multiple players interacting; there is imperfect information due to a partially observed map; it has a large action space involving the selection and control of hundreds of units; it has a large state space that must be observed solely from raw input feature planes; and it has delayed credit assignment requiring long-term strategies over thousands of steps. We describe the observation, action, and reward specification for the StarCraft II domain and provide an open source Python-based interface for communicating with the game engine. In addition to the main game maps, we provide a suite of mini-games focusing on different elements of StarCraft II gameplay. For the main game maps, we also provide an accompanying dataset of game replay data from human expert players. We give initial baseline results for neural networks trained from this data to predict game outcomes and player actions. Finally, we present initial baseline results for canonical deep reinforcement | 1708.04782#1 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 2 | give initial baseline results for neural networks trained from this data to predict game outcomes and player actions. Finally, we present initial baseline results for canonical deep reinforcement learning agents applied to the StarCraft II domain. On the mini-games, these agents learn to achieve a level of play that is comparable to a novice player. However, when trained on the main game, these agents are unable to make significant progress. Thus, SC2LE offers a new and challenging environment for exploring deep reinforcement learning algorithms and architectures. | 1708.04782#2 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 3 | # Introduction
Recent progress in areas such as speech recognition [7], computer vision [16], and natural language processing [38] can be attributed to the resurgence of deep learning [17], which provides a powerful toolkit for non-linear function approximation using neural networks. These techniques have also proven successful in reinforcement learning problems, yielding significant successes in Atari [20], the game of Go [32], three-dimensional virtual environments [3] and simulated robotics domains [18, 29]. Many of the successes have been stimulated by the availability of simulated domains with an appropriate level of difficulty. Benchmarks have been critical to measuring and therefore advancing deep learning and reinforcement learning (RL) research [4, 20, 28, 8]. It is therefore important to ensure the availability of domains that are beyond the capabilities of current methods in one or more dimensions.
In this paper we introduce SC2LE (StarCraft II Learning Environment), a challenging domain for reinforcement learning, based on the StarCraft II video game. StarCraft is a real-time strategy (RTS) game that combines fast-paced micro-actions with the need for high-level planning and execution. Over the previous two decades, StarCraft I and II have been pioneering and enduring e-sports, with millions of casual and highly competitive professional players. Defeating top human players therefore becomes a meaningful and measurable long-term objective. | 1708.04782#3 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 4 | From a reinforcement learning perspective, StarCraft II also offers an unparalleled opportunity to explore many challenging new frontiers. First, it is a multi-agent problem in which several players compete for influence and resources. It is also multi-agent at a lower level: each player controls hundreds of units, which need to collaborate to achieve a common goal. Second, it is an imperfect information game. The map is only partially observed via a local camera, which must be actively moved in order for the player to integrate information. Furthermore, there is a "fog-of-war", obscuring the unvisited regions of the map, and it is necessary to actively explore the map in order to determine the opponent's state. Third, the action space is vast and diverse. The player selects actions among a combinatorial space of approximately 10^8 possibilities (depending on the game resolution), using a point-and-click interface. There are many different unit and building types, each with unique local actions. Furthermore, the set of legal actions varies as the player progresses through a tree of possible technologies. Fourth, games typically last for many thousands of frames and actions, and the player must make early decisions (such as which units to build) with consequences that may not be seen until much later in the game (when the players' armies meet), leading to a rich set of challenges in temporal credit assignment and exploration. (A back-of-the-envelope count of this action space follows this record.) | 1708.04782#4 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
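The "approximately 10^8 possibilities" figure in the chunk above depends on the game resolution and on how many action functions take screen coordinates. The following back-of-the-envelope count is purely illustrative; the grid size and function counts are assumptions, not numbers taken from the text.

```python
# Back-of-the-envelope size of the per-step action space.
# All concrete numbers here are illustrative assumptions; the chunk above only
# quotes the ~1e8 ballpark "depending on the game resolution".
resolution = 84                      # assume an 84 x 84 grid of click targets
point_targets = resolution ** 2      # 7,056 distinct single-point targets
rect_targets = point_targets ** 2    # ~5.0e7 ways to pick two rectangle corners

single_point_functions = 300         # assumed number of point-targeted functions
two_point_functions = 2              # e.g. rectangle-selection style functions

approx_actions = (single_point_functions * point_targets
                  + two_point_functions * rect_targets)
print(f"~{approx_actions:.1e} candidate actions per step")  # ~1.0e+08
```

Raising the assumed resolution, or counting additional argument choices per function, pushes the estimate further up, which is why the figure is treated as resolution-dependent.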
1708.04782 | 5 | This paper introduces an interface intended to make RL in StarCraft straightforward: observations and actions are defined in terms of low resolution grids of features; rewards are based on the score from the StarCraft II engine against the built-in computer opponent; and several simplified mini-games are also provided in addition to the full game maps. (An illustrative sketch of this observation and reward shape follows this record.) Future releases will extend the interface for the full challenge of StarCraft II: observations and actions will expose RGB pixels; agents will be ranked by the final win/loss outcome in multi-player games; and evaluation will be restricted to full game maps used in competitive human play.
In addition, we provide a large dataset based on game replays recorded from human players, which will increase to millions of replays as people play the game. We believe that the combination of the interface and this dataset will provide a useful benchmark to test not only existing and new RL algorithms, but also interesting aspects of perception, memory and attention, sequence prediction, and modelling uncertainty, all of which are active areas of machine learning research.
Several environments [1, 34, 33] already exist for reinforcement learning in the original version of StarCraft. Our work differs from these previous environments in several regards: it focuses on the newer version StarCraft II; observations and actions are based on the human user interface rather than being programmatic; and it is directly supported by the game developers, Blizzard Entertainment, on Windows, Mac, and Linux. | 1708.04782#5 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
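Chunk 5 above describes the released interface as low-resolution feature grids for observations plus a score-based or win/loss reward. The snippet below only illustrates that shape; the array sizes, layer counts, and helper name are invented for the example and are not part of the release description.

```python
import numpy as np

# Illustrative observation layout: stacks of low-resolution feature planes,
# one stack rendered at screen resolution and one at minimap resolution.
# The plane counts and the 64x64 size are assumptions for this sketch.
screen_features = np.zeros((17, 64, 64), dtype=np.int32)   # e.g. unit type, health, ...
minimap_features = np.zeros((7, 64, 64), dtype=np.int32)   # e.g. visibility, camera, ...

def outcome_to_reward(outcome: str) -> int:
    """Ternary end-of-game reward in the spirit of the text: +1 win, -1 loss, 0 otherwise."""
    return {"win": 1, "loss": -1}.get(outcome, 0)

assert outcome_to_reward("win") == 1 and outcome_to_reward("draw") == 0
```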
1708.04782 | 6 | The current best artificial StarCraft bots, based on the built-in AI or research on previous environments, can be defeated by even amateur players [cf. 6, and later versions of the AIIDE competition]. This fact, coupled with StarCraft's interesting set of game-play properties and large player base, makes it an ideal research environment for exploring deep reinforcement learning algorithms.
# 2 Related Work
Computer games provide a compelling solution to the issue of evaluating and comparing different learning and planning approaches on standardised tasks, and are an important source of challenges for research in artificial intelligence (AI). These games offer multiple advantages: 1. They have clear objective measures of success; 2. Computer games typically output rich streams of observational data, which are ideal inputs for deep networks; 3. They are externally defined to be difficult and interesting for a human to play. This ensures that the challenge itself is not tuned by the researcher to make the problem easier for the algorithms being developed; 4. Games are designed to be run anywhere with the same interface and game dynamics, making it easy to share a challenge precisely
Footnotes: 1. Pronounced "school". 2. https://en.wikipedia.org/wiki/Professional_StarCraft_competition
| 1708.04782#6 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 7 | # 1Pronounced: âschoolâ. 2https://en.wikipedia.org/wiki/Professional_StarCraft_competition
[Figure 1 graphic: the agent issues actions (e.g. select_rect(p1, p2), build_supply(p3)) through the StarCraft II API to the SC2LE binary and receives back a reward (-1/0/+1) together with observations: non-spatial features (resources, available_actions, build_queue), screen features, and minimap features.]
Figure 1: The StarCraft II Learning Environment, SC2LE, shown with its components plugged into a neural agent.
with other researchers; 5. In some cases a pool of avid human players exists, making it possible to benchmark against highly skilled individuals. 6. Since games are simulations, they can be controlled precisely, and run at scale.
A well known example of games driving reinforcement learning research is the Arcade Learning Environment (ALE [4]), which allows easy and replicable experiments with Atari video games. This standardised set of tasks has been an incredible boon to recent research in AI. Scores on games in this environment can be compared across publications and algorithms, allowing for direct measurement and comparison. The ALE is a prominent example in a rich tradition of video game benchmarks for AI [31], including Super Mario [36], Ms Pac-Man [27], Doom [14], Unreal Tournament [11], as well as general video game-playing frameworks [30, 5] and competitions [24]. | 1708.04782#7 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 8 | The genre of RTS games has attracted a large amount of AI research, including on the original StarCraft (Broodwar). We recommend the surveys by Ontanon et al. [22] and Robertson & Watson [26] for an overview. Many of those research directions focus on specific aspects of the game (e.g., build order, or combat micro-management) or specific AI techniques (e.g., MCTS planning). We are not aware of efforts to solve full games with an end-to-end RL approach. Tackling full versions of RTS games has seemed daunting because of the rich input and output spaces as well as the very sparse reward structure (i.e., game outcome).
The standard API for StarCraft thus far has been BWAPI [1], and related wrappers [33]. Simplified versions of RTS games have also been developed for AI research, most notably microRTS or the more recent ELF [35]. Previous work has applied RL approaches to the Wargus RTS game with reduced state and action spaces [12], and learning based agents have also been explored in micro-management mini-games [23, 37], and learning game outcome or build orders from replay data [9, 13].
# 3 The SC2LE Environment | 1708.04782#8 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 9 | # 3 The SC2LE Environment
The main contribution of our paper is the release of SC2LE, which exposes StarCraft II as a research environment. The release consists of three sub-components: a Linux StarCraft II binary, the StarCraft II API, and PySC2 (see figure 1).
Footnote 3: https://github.com/santiontanon/microrts
The StarCraft II API allows programmatic control of StarCraft II. The API can be used to start a game, get observations, take actions, and review replays. This API is available with the normal game on Windows and Mac OS, but we also provide a limited headless build that runs on Linux especially for machine learning and distributed use cases. Using this API we built PySC2, an open source environment that is optimised for RL agents. PySC2 is a Python environment that wraps the StarCraft II API to ease the interaction between Python reinforcement learning agents and StarCraft II. PySC2 defines an action and observation specification, and includes a random agent and a handful of rule-based agents as examples. It also includes some mini-games as challenges and visualisation tools to understand what the agent can see and do. (A minimal agent sketch in this style follows this record.) | 1708.04782#9 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
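The record above introduces PySC2 as a Python wrapper that defines an action/observation specification and ships random and scripted example agents. As a concrete illustration, here is a minimal scripted agent in that style; it assumes a recent PySC2 release (module paths `pysc2.agents.base_agent` and `pysc2.lib.actions`) and simply returns the always-legal no-op every step.

```python
# Minimal PySC2-style scripted agent. Sketch only, assuming a recent PySC2
# release; it follows the pattern of the bundled example agents.
from pysc2.agents import base_agent
from pysc2.lib import actions


class NoOpAgent(base_agent.BaseAgent):
    """Issues the no-op function call on every step, regardless of the observation."""

    def step(self, obs):
        super(NoOpAgent, self).step(obs)  # keeps the base class's step/reward counters
        return actions.FUNCTIONS.no_op()
```

Such an agent is typically launched with the bundled runner, e.g. `python -m pysc2.bin.agent --map Simple64 --agent your_module.NoOpAgent`; the map name and module path here are placeholders.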
1708.04782 | 10 | StarCraft II updates the simulation 16 times per second (at "normal speed") or 22.4 times per second (at "fast speed"). The game is mostly deterministic, but it does have some randomness, mainly for cosmetic reasons; the two main random elements are weapon speed and update order. These sources of randomness can be removed or mitigated by setting a random seed. (A worked conversion from these update rates to agent actions per minute follows this record.)
We now describe the environment which was used for all of the experiments in this paper.
# 3.1 Full Game Description and Reward Structure
In the full 1v1 game of StarCraft II, two opponents spawn on a map which contains resources and other elements such as ramps, bottlenecks, and islands. To win a game, a player must: 1. Accumulate resources (minerals and vespene gas), 2. Construct production buildings, 3. Amass an army, and 4. Eliminate all of the opponent's buildings. A game typically lasts from a few minutes to one hour, and early actions taken in the game (e.g., which buildings and units are built) have long term consequences. Players have imperfect information since they can typically only see the portion of the map where they have units. If they want to understand and react to their opponent's strategy they must send units to scout. As we describe later in this section, the action space is also quite unique and challenging. | 1708.04782#10 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 11 | Most people play online against other human players. The most common games are 1v1, but team games are possible too (2v2, 3v3 or 4v4), as are more complicated games with unbalanced teams or more than two teams. Here we focus on the 1v1 format, the most popular form of competitive StarCraft, but we may consider more complicated situations in the future.
StarCraft II includes a built-in AI which is based on a set of handcrafted rules and comes with 10 levels of difficulty (the three strongest of which cheat by getting extra resources or privileged vision). Unfortunately, the fact that they are rule-based means their strategies are fairly narrow and thus easily exploitable. Nevertheless, they are a reasonable first challenge for a purely learned approach like the baselines we investigate in sections 4 and 5; they play far better than random, play very quickly with little compute, and offer consistent baselines to compare against. | 1708.04782#11 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 12 | We define two different reward structures: ternary 1 (win) / 0 (tie) / -1 (loss) received at the end of a game (with all-zero rewards during the game), and Blizzard score. The ternary win/tie/loss score is the real reward that we care about. The Blizzard score is the score seen by players on the victory screen at the end of the game. While players can only see this score at the end of the game, we provide access to the running Blizzard score at every step during the game so that the change in score can be used as a reward for reinforcement learning. It is computed as the sum of current resources and upgrades researched, as well as units and buildings currently alive and being built. This means that the player's cumulative reward increases with more mined resources, decreases when losing units/buildings, and all other actions (training units, building buildings, and researching) do not affect it. The Blizzard score is not zero-sum since it is player-centric, it is far less sparse than the ternary reward signal, and it correlates to some extent with winning or losing.
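Since the running Blizzard score is exposed at every step, a dense shaping reward can be obtained by differencing it between consecutive steps. The sketch below is a minimal illustration of that idea; how the cumulative score is read out of the observation is left abstract here and is an assumption, not part of the environment API.

```python
# Minimal sketch: turn the cumulative Blizzard score into a per-step reward
# by differencing it between consecutive environment steps.
class ScoreDeltaReward:
    def __init__(self):
        self._prev = None

    def step(self, cumulative_score):
        if self._prev is None:
            self._prev = cumulative_score
        delta = cumulative_score - self._prev
        self._prev = cumulative_score
        return delta

shaper = ScoreDeltaReward()
# Score rises while mining, is flat while idle, and drops when units are lost.
print([shaper.step(s) for s in (1050, 1100, 1100, 950)])  # [0, 50, 0, -150]
```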
# 3.2 Observations | 1708.04782#12 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 13 | # 3.2 Observations
StarCraft II uses a game engine which renders graphics in 3D. Whilst utilising the underlying game engine which simulates the whole environment, the StarCraft II API does not currently render RGB pixels. Rather, it generates a set of "feature layers", which abstract away from the RGB images seen
4https://github.com/Blizzard/s2client-proto 5https://github.com/deepmind/pysc2
Figure 2: The PySC2 viewer shows a human interpretable view of the game on the left, and coloured versions of the feature layers on the right. For example, terrain height, fog-of-war, creep, camera location, and player identity, are shown in the top row of feature layers. A video can be found at https://youtu.be/-fKUyT14G-8.
during human play, while maintaining the core spatial and graphical concepts of StarCraft II (see Figure 2). | 1708.04782#13 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 14 | during human play, while maintaining the core spatial and graphical concepts of StarCraft II (see Figure 2).
Thus, the main observations come as sets of feature layers which are rendered at N × M pixels (where N and M are configurable, though in our experiments we always used N = M). Each of these layers represents something specific in the game, for example: unit type, hit points, owner, or visibility. Some of these (e.g., hit points, height map) are scalars, while others (e.g., visibility, unit type, owner) are categorical. There are two sets of feature layers: the minimap is a coarse representation of the state of the entire world, and the screen is a detailed view of a subsection of the world corresponding to the player's on-screen view, and in which most actions are executed. Some features (e.g., owner or visibility) exist for both the screen and minimap, while others (e.g., unit type and hit points) exist only on the screen. See the environment documentation6 for a complete description of all observations provided. | 1708.04782#14 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
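The scalar/categorical split described in the feature-layer chunk above matters when the layers are fed to a network: categorical layers such as unit type or owner are typically one-hot encoded (or embedded) rather than treated as magnitudes. The numpy sketch below shows one plausible preprocessing step; it is an illustration, not the paper's pipeline.

```python
import numpy as np

def one_hot_layer(categorical_layer, num_categories):
    """Expand an N x M integer feature layer into num_categories binary planes."""
    n, m = categorical_layer.shape
    planes = np.zeros((num_categories, n, m), dtype=np.float32)
    for c in range(num_categories):
        planes[c] = (categorical_layer == c)
    return planes

# Example: a tiny 2 x 2 "owner" layer with ids in {0, 1, 2}.
owner = np.array([[0, 1], [2, 1]])
print(one_hot_layer(owner, num_categories=3).shape)  # (3, 2, 2)
```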
1708.04782 | 15 | In addition to the screen and minimap, the human interface for the game provides various non-spatial observations. These include the amount of gas and minerals collected, the set of actions currently available (which depends on game context, e.g., which units are selected), detailed information about selected units, build queues, and units in a transport vehicle. These observations are also exposed by PySC2, and are fully described in the environment documentation. The audio channel is not exposed as a waveform, but important notifications will be exposed as part of the observations.
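A hedged sketch of inspecting a few of these non-spatial observations from a PySC2 timestep; the key names follow the environment documentation referenced above but are assumptions that may differ between PySC2 versions.

```python
# Hedged sketch: the non-spatial observations arrive alongside the feature
# layers in the same observation dict. Key names are assumptions taken from
# the environment documentation and may vary between PySC2 versions.
def describe_non_spatial(timestep):
    obs = timestep.observation
    for key in ("player", "available_actions", "single_select",
                "multi_select", "build_queue", "cargo"):
        if key in obs:
            print(key, "->", obs[key])
```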
In the retail game engine the screen is rendered with a full 3D perspective camera at high resolution. This leads to complicated observations with units getting smaller as they get "higher" on the screen, and with more world real estate being visible in the back than the front. To simplify this, feature layers are rendered via a camera that uses a top down orthographic projection. This means that each pixel in a feature layer corresponds to precisely the same amount of world real estate, and as a consequence all units will be the same size regardless of where they are in view. Unfortunately, it also means the feature layer rendering does not quite match what a human would see. An agent sees a little more in the front and a little less in the back. This does mean some actions that humans make in replays cannot be fully represented. | 1708.04782#15 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 16 | In future releases we will expose a rendered API allowing agents to play from RGB pixels. This will allow us to study the effects of learning from raw pixels versus learning from feature layers and make closer comparisons to human play. In the meantime, we played the game with feature layers to verify that agents are not severely handicapped. Though the game-play experience is obviously
# 6https://github.com/deepmind/pysc2/blob/master/docs/environment.md
altered we found that a resolution of N, M ≥ 64 is sufficient to allow a human player to select and individually control small units such as Zerglings. The reader is encouraged to try this using pysc2_play7. See also Figure 2.
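For reference, a minimal sketch of creating the environment with the resolutions discussed above (64 x 64 screen and minimap, acting every 8 game frames). Constructor and field names follow recent PySC2 releases and are assumptions that may not match the exact version described here.

```python
from pysc2.env import sc2_env
from pysc2.lib import features

# Hedged sketch: a 64 x 64 screen and minimap, acting every 8 game frames.
env = sc2_env.SC2Env(
    map_name="MoveToBeacon",
    players=[sc2_env.Agent(sc2_env.Race.terran)],
    agent_interface_format=features.AgentInterfaceFormat(
        feature_dimensions=features.Dimensions(screen=64, minimap=64)),
    step_mul=8,
    visualize=True)

timestep = env.reset()[0]
print(timestep.observation["feature_screen"].shape)  # (num_layers, 64, 64)
```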
# 3.3 Actions
We designed the environment action space to mimic the human interface as closely as possible whilst maintaining some of the conventions employed in other RL environments, such as Atari [4]. Figure 3 shows a short sequence of actions as produced by a player and by an agent. | 1708.04782#16 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 17 | Many basic manoeuvres in the game are compound actions. For example, to move a selected unit across the map a player must first choose to move it by pressing m, then possibly choose to queue the action by holding shift, then click a point on the screen or minimap to execute the action. Instead of asking agents to produce those 3 key/mouse presses as a sequence of three separate actions we give it as an atomic compound function action: move_screen(queued, screen). More formally, an action a is represented as a composition of a function identifier a0 and a sequence of arguments which that function identifier requires: a1, a2, . . . , aL. For instance, consider selecting multiple units by drawing a rectangle. The intended action is then select_rect(select_add, (x1, y1), (x2, y2)). The first argument select_add is binary. The other arguments are integers that define coordinates; their allowed range is the same as the resolution of the observations. This action is fed to the environment in the form [select_rect, [[select_add], [x1, y1], [x2, y2]]]. | 1708.04782#17 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
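To make the nested-list action encoding from the chunk above (chunk 17) concrete, here is a hedged sketch of building the same select_rect action as a PySC2 FunctionCall. The function and argument names follow PySC2's action definitions; the coordinates themselves are arbitrary examples.

```python
from pysc2.lib import actions

select_add = [0]         # 0 = replace the current selection, 1 = add to it
top_left = [8, 12]       # (x1, y1) in screen coordinates
bottom_right = [24, 28]  # (x2, y2) in screen coordinates

# One atomic compound action: select_rect(select_add, (x1, y1), (x2, y2)).
action = actions.FunctionCall(actions.FUNCTIONS.select_rect.id,
                              [select_add, top_left, bottom_right])
# It would then be passed to the environment as env.step([action]).
```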
1708.04782 | 18 | To represent the full action space we define approximately 300 action-function identifiers with 13 possible types of arguments (ranging from binary to specifying a point on the discretised 2D screen). See the environment documentation for a more detailed specification and description of the actions available through PySC2, and Figure 3 for an example of a sequence of actions.
In StarCraft, not all the actions are available in every game state. For example, the move command is only available if a unit is selected. Human players can see which actions are available in the "command card" on the screen. Similarly, we provide a list of available actions via the observations given to the agent at each step. Taking an action that is not available is considered an error, so agents should filter their action choices so that only legal actions are taken.
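One common way to honour this constraint is to mask the policy's function-identifier logits with the per-step list of available actions before sampling. The sketch below is a generic illustration of that masking, not the agent code used in this paper.

```python
import numpy as np

def sample_legal_function(logits, available_ids):
    """Sample a function identifier restricted to the currently legal ones."""
    masked = np.full_like(logits, -np.inf)
    masked[available_ids] = logits[available_ids]
    probs = np.exp(masked - np.max(masked))  # softmax over legal actions only
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Example: only no_op (0) and two other functions are legal at this step.
logits = np.random.randn(300)
print(sample_legal_function(logits, [0, 2, 7]))
```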
Humans typically make between 30 and 300 actions per minute (APM), roughly increasing with player skill, with professional players often spiking above 500 APM. In all our RL experiments, we act every 8 game frames, equivalent to about 180 APM, which is a reasonable choice for intermediate players. | 1708.04782#18 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 19 | We believe these early design choices make our environment a promising testbed for developing complex RL agents. In particular, the fixed-size feature layer input space and human-like action space are natural for neural network based agents. This is in contrast to other recent work [33, 23], where the game is accessed on a unit-per-unit basis and actions are individually specified to each unit. While there are advantages to both interface styles, PySC2 offers the following:
⢠Learning from human replays becomes simpler.
⢠We do not require unrealistic/super-human actions per minute to issue instructions individ- ually to each unit.
⢠The game was designed to be played with this UI, and the balance between strategic high level decisions, managing your economy, and controlling the army makes the game more interesting.
# 3.4 Mini-Games Task Description
To investigate elements of the game in isolation, and to provide further fine-grained steps towards playing the full game, we built several mini-games. These are focused scenarios on small maps that have been constructed with the purpose of testing a subset of actions and/or game mechanics with a clear reward structure. Unlike the full game where the reward is just win/lose/tie, the reward
# 7https://github.com/deepmind/pysc2/blob/master/pysc2/bin/play.py
| 1708.04782#19 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 20 | # 7https://github.com/deepmind/pysc2/blob/master/pysc2/bin/play.py
Figure 3: Comparison between how humans act on StarCraft II and the actions exposed by PySC2. We designed the action space to be as close as possible to human actions. The first row shows the game screen, the second row the human actions, the third row the logical action taken in PySC2, and the fourth row the actions a exposed by the environment (and, in red, what the agent selected at each time step). Note that the first two columns do not feature the "build supply" action, as it is not yet available to the agent in those situations as a worker has to be selected first. | 1708.04782#20 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 21 | structure for mini-games can reward particular behaviours (as defined in a corresponding .SC2Map file).
We encourage the community to build modifications or new mini-games with the powerful StarCraft Map Editor. This allows for more than just designing a broad range of smaller challenge domains. It permits sharing identical setups and evaluations with other researchers and obtaining directly comparable evaluation scores. The restricted action sets, custom reward functions and/or time limits are defined directly in the resulting .SC2Map file, which is easy to share. We therefore encourage users to use this method of defining new tasks, rather than customising on the agent side.
The seven mini-games that we are releasing are as follows:
⢠MoveToBeacon: The agent has a single marine that gets +1 each time it reaches a beacon. This map is a unit test with a trivial greedy strategy.
⢠CollectMineralShards: The agent starts with two marines and must select and move them to pick up mineral shards spread around the map. The more efï¬ciently it moves the units, the higher the score.
⢠FindAndDefeatZerglings: The agent starts with 3 marines and must explore a map to ï¬nd and defeat individual Zerglings. This requires moving the camera and efï¬cient exploration. | 1708.04782#21 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 22 | • DefeatRoaches: The agent starts with 9 marines and must defeat 4 roaches. Every time it defeats all of the roaches it gets 5 more marines as reinforcements and 4 new roaches spawn. The reward is +10 per roach killed and -1 per marine killed. The more marines it can keep alive, the more roaches it can defeat.
⢠DefeatZerglingsAndBanelings: The same as DefeatRoaches, except the opponent has Zer- glings and Banelings, which give +5 reward each when killed. This requires a different strategy because the enemy units have different abilities.
⢠CollectMineralsAndGas: The agent starts with a limited base and is rewarded for the total resources collected in a limited time. A successful agent must build more workers and expand to increase its resource collection rate.
⢠BuildMarines: The agent starts with a limited base and is rewarded for building marines. It must build workers, collect resources, build Supply Depots, build Barracks, and then train marines. The action space is limited to the minimum action set needed to accomplish this goal.
All mini-games have a fixed time limit and are described in more detail online: https://github.com/deepmind/pysc2/blob/master/docs/mini_games.md.
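For orientation, a hedged sketch of running one episode of a mini-game with uniformly random legal actions, mirroring the pattern of PySC2's bundled random agent; attribute and key names follow recent PySC2 releases and may differ between versions.

```python
import numpy as np
from pysc2.lib import actions

def run_random_episode(env):
    """Play one episode of a mini-game by sampling uniformly among legal actions.

    `env` is assumed to be an sc2_env.SC2Env built on one of the maps above.
    """
    action_spec = env.action_spec()[0]
    timestep = env.reset()[0]
    total_reward = 0
    while not timestep.last():
        # Pick a random legal function identifier, then fill each of its
        # arguments with a random value from that argument's allowed range.
        fn_id = np.random.choice(timestep.observation["available_actions"])
        args = [[np.random.randint(0, size) for size in arg.sizes]
                for arg in action_spec.functions[fn_id].args]
        timestep = env.step([actions.FunctionCall(fn_id, args)])[0]
        total_reward += timestep.reward
    return total_reward
```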
# 3.5 Raw API | 1708.04782#22 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 23 | # 3.5 Raw API
StarCraft II also has a raw API, which is similar to the Broodwar API (BWAPI [1]). In this case, the observations are a list of all visible units on the map along with the properties (unit type, owner, coordinates, health, etc.), but without any visual component. Fog-of-war still exists, but there is no camera, so you can see all visible units simultaneously. This is a simpler and more precise representation, but it does not correspond to how humans perceive the game. For the purposes of comparing against humans this is considered "cheating" since it offers significant additional information.
Using the raw API, actions control units or groups of units individually by a unit identifier. There is no need to select individuals or groups of units before issuing actions. This permits much more precise actions than the human interface allows, and thus yields the possibility of super-human behaviour via this API.
Although we have not used any data from the raw API to train our agents, it is included in the release in order to support other use cases. PySC2 uses it for visualization, while both Blizzard's SC2 API examples8 and CommandCenter9 use it for rule-based agents.
# 3.6 Performance | 1708.04782#23 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 24 | # 3.6 Performance
We can often run the environment faster than real time. Observations are rendered at a speed that depends on several factors: the map complexity, the screen resolution, the number of non-rendered frames per action, and the number of threads.
For complex maps (e.g., full ladder maps) the computation is dominated by simulation speed. Taking actions less often, allowing for fewer rendered frames, reduces the compute, but diminishing returns kick in fairly quickly, meaning there is little gain above 8 steps per action. Given little time is spent rendering, a higher resolution does not hurt. Running more instances in parallel threads scales quite well.
For simpler maps (e.g., CollectMineralShards) the world simulation is quick, so rendering the observations dominates. In this case increasing the frames per action and decreasing the resolution can have a large effect. The bottleneck then becomes the Python interpreter, negating gains above roughly 4 threads with a single interpreter. | 1708.04782#24 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 25 | With a resolution of 64 × 64 and acting at a rate of 8 frames per action, the single-threaded speed of a ladder map varies from 200-700 game steps per wall-clock second, which is more than an order of magnitude faster than real-time. The exact speed depends on multiple factors, including: the stage of the game, the number of units in play, and the computer it runs on. On CollectMineralShards the same settings permit 1600-2000 game steps per wall-clock second.
# 4 Reinforcement Learning: Baseline Agents
This section provides baseline results that serve to calibrate the map difficulty, and demonstrate that established RL algorithms can learn useful policies, at least on the mini-games, but also that many challenges remain. For the mini-games we additionally provide scores for two human players: a DeepMind game tester (novice level) and a StarCraft GrandMaster (professional level) (see Table 1).
8https://github.com/Blizzard/s2client-api 9https://github.com/davechurchill/CommandCenter
# 4.1 Learning Algorithm | 1708.04782#25 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 26 | 8https://github.com/Blizzard/s2client-api 9https://github.com/davechurchill/CommandCenter
# 4.1 Learning Algorithm
Our reinforcement learning agents are built using a deep neural network with parameters $\theta$, which defines a policy $\pi_\theta$. At time step $t$ the agent receives observations $s_t$, selects an action $a_t$ with probability $\pi_\theta(a_t|s_t)$, and then receives a reward $r_t$ from the environment. The goal of the agent is to maximise the return $G_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$, where $\gamma$ is a discount factor. For notational clarity we assume that the policy is conditioned only on the observation $s_t$, but without loss of generality it might be conditioned on all previous states, e.g., via a hidden memory state as we describe below.
The parameters of the policy are learnt using Asynchronous Advantage Actor Critic (A3C), as described by Mnih et al. [21], which was shown to produce state-of-the-art results on a diverse set of environments. A3C is a policy gradient method, which performs an approximate gradient ascent on $\mathbb{E}[G_t]$. The A3C gradient is defined as follows: | 1708.04782#26 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 27 | $$\underbrace{(G_t - v_\theta(s_t))\,\nabla_\theta \log \pi_\theta(a_t|s_t)}_{\text{policy gradient}} \;+\; \underbrace{\beta\,(G_t - v_\theta(s_t))\,\nabla_\theta v_\theta(s_t)}_{\text{value estimation gradient}} \;+\; \underbrace{\eta \sum_a \pi_\theta(a|s_t)\,\log \pi_\theta(a|s_t)}_{\text{entropy regularisation}}, \qquad (1)$$
where $v_\theta(s)$ is a value function estimate of the expected return $\mathbb{E}[G_t \mid s_t = s]$ produced by the same network. Instead of the full return, we use an $n$-step return $G_t = \sum_{k=0}^{n-1} \gamma^k r_{t+k+1} + \gamma^n v_\theta(s_{t+n})$ in the gradient above, where $n$ is a hyper-parameter. The last term regularises the policy towards larger entropy, which promotes exploration, and $\beta$ and $\eta$ are hyper-parameters that trade off the importance of the loss components. For details we refer the reader to the original paper and the references therein.
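As an illustration of the objective above, the following is a minimal Python sketch of the $n$-step returns and the three loss terms in Eq. (1). PyTorch and the tensor names (`logits`, `values`, `actions`, `returns`) are assumptions made for the sketch, not part of the released interface or the authors' implementation.

```python
import torch

def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    # G_t = sum_{k<n} gamma^k r_{t+k+1} + gamma^n v_theta(s_{t+n}), accumulated backwards
    returns, g = [], bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))  # convert to a tensor before passing to a3c_loss

def a3c_loss(logits, values, actions, returns, beta=0.5, eta=1e-3):
    dist = torch.distributions.Categorical(logits=logits)
    advantage = returns - values                                # G_t - v_theta(s_t)
    policy_loss = -(advantage.detach() * dist.log_prob(actions)).mean()
    value_loss = beta * advantage.pow(2).mean()                 # value estimation term
    entropy_loss = -eta * dist.entropy().mean()                 # pushes towards higher entropy
    return policy_loss + value_loss + entropy_loss
```

The `detach()` keeps the policy-gradient term from back-propagating into the value estimate, so the three terms stay separate as in Eq. (1).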
# 4.2 Policy Representation | 1708.04782#27 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 28 | # 4.2 Policy Representation
As described in section 3, the API exposes actions as a nested list $a$ which contains a function identifier $a^0$ and a set of arguments. Since all arguments including pixel coordinates on screen and minimap are discrete, a naive parametrisation of a policy $\pi_\theta(a|s)$ would require millions of values to specify the joint distribution over $a$, even for a low spatial resolution. We could instead represent the policy in an auto-regressive manner, utilising the chain rule (footnote 10):
$$\pi_\theta(a \mid s) = \prod_{l=0}^{L} \pi_\theta\!\left(a^l \mid a^{<l}, s\right). \qquad (2)$$
This representation, if implemented efficiently, is arguably simpler as it transforms the problem of choosing a full action $a$ into a sequence of decisions for each argument $a^l$. In the straightforward RL baselines reported here, we make a further simplification and use policies that choose the function identifier, $a^0$, and all the arguments, $a^l$, independently from one another: $\pi_\theta(a|s) = \pi_\theta(a^0|s)\prod_{l}\pi_\theta(a^l|s)$. Note that, depending on the function identifier $a^0$, the number of required arguments $L$ is different. Some actions (e.g., the no-op action) do not require any arguments, whereas others (e.g., move_screen(x, y)) do. See Figure 3 for an example. | 1708.04782#28 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 29 | In line with the human UI, we ensure that unavailable actions are never chosen by our agents. To do so we mask out the function identifier choice $a^0$ such that only the available subset can be sampled. We implement this by masking out actions and renormalising the probability distribution over $a^0$.
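A minimal sketch of this masking step is shown below, assuming NumPy arrays `logits` over all function identifiers and a boolean `available` mask; the names are illustrative and not taken from the released code.

```python
import numpy as np

def masked_function_probs(logits, available):
    probs = np.exp(logits - logits.max())   # numerically stable softmax numerator
    probs = probs * available               # zero out unavailable function identifiers
    return probs / probs.sum()              # renormalise over the valid subset
```

In practice at least one function (e.g., the no-op) is always available, so the renormalisation is well defined.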
# 4.3 Agent Architectures
This section presents several agent architectures with the purpose of producing straightforward baselines. We take established architectures from the literature [20, 21] and adapt them to fit the specifics of the environment, in particular the action space. Figure 4 illustrates the proposed architectures.
10Note that for the auto-regressive case one could use an arbitrary permutation over arguments to define an order in which the chain rule is applied. But there is also a "natural" ordering over arguments that can be used since decisions about where to click on a screen depend on the purpose of the click, that is, the identifier of the function being called.
(a) Atari-net (b) FullyConv
Figure 4: Network architectures of the basic agents considered in the paper. | 1708.04782#29 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 30 | 9
(a) Atari-net (b) FullyConv
Figure 4: Network architectures of the basic agents considered in the paper.
Input pre-processing All the baseline agents share the same pre-processing of input feature lay- ers. We embed all feature layers containing categorical values into a continuous space, which is equivalent to using a one-hot encoding in the channel dimension followed by a 1 Ã 1 convolution. We also re-scale numerical features with a logarithmic transformation as some of them such as hit-points or minerals might attain substantially high values. | 1708.04782#30 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 31 | Atari-net Agent The ï¬rst baseline is a simple adaptation of the architecture successfully used for the Atari [4] benchmark and DeepMind Lab environments [3]. It processes screen and minimap feature layers with the same convolutional network as in [21] â two layers with 16, 32 ï¬lters of size 8, 4 and stride 4, 2 respectively. The non-spatial features vector is processed by a linear layer with a tanh non-linearity. The results are concatenated and sent through a linear layer with a ReLU activation. The resulting vector is then used as input to linear layers that output policies over the action function id a0 and each action-function argument {al}L l=0 independently. For spatial actions (screen or minimap coordinates) we independently model policies to select (discretised) x and y coordinates. | 1708.04782#31 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 32 | FullyConv Agent Convolutional networks for reinforcement learning (such as the Atari-net base- line above) usually reduce the spatial resolution of the input with each layer and ultimately ï¬nish with a fully connected layer that discards spatial structure completely. This allows spatial informa- tion to be abstracted away before actions are inferred. In StarCraft, though, a major challenge is to infer spatial actions (i.e. clicking on the screen and minimap). As these spatial actions act within the same space as the inputs, it might be detrimental to discard the spatial structure of the input. | 1708.04782#32 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 33 | Here we propose a fully convolutional network agent, which predicts spatial actions directly through a sequence of resolution-preserving convolutional layers. The network we propose has no stride and uses padding at every layer, thereby preserving the resolution of the spatial information in the input. For simplicity, we assume the screen and minimap inputs have the same resolution. We pass screen and minimap observations through separate 2-layer convolutional networks with 16, 32 ï¬lters of size 5 à 5, 3 à 3 respectively. The state representation is then formed by the concatenation of the screen and minimap network outputs, as well as the broadcast vector statistics, along the channel dimension. Note that this is likely non-optimal since the screen and minimap do not have the same spatial extent â future work could improve on this arrangement. To compute the baseline and policies over categorical (non-spatial) actions, the state representation is ï¬rst passed through a fully-connected layer with 256 units and ReLU activations, followed by fully-connected linear layers. Finally, a policy over spatial actions is obtained using 1 à 1 convolution of the state representation with a single output channel. See Figure 4 for a visual representation of this computation. | 1708.04782#33 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 34 | FullyConv LSTM Agent Both of the above baselines are feed-forward architectures and therefore have no memory. While this is sufficient for some tasks, we cannot expect it to be enough for the full complexity of StarCraft. Here we introduce a baseline architecture based on a convolutional LSTM. We follow the fully-convolutional agent's pipeline described above and simply add a convolutional
LSTM module after the minimap and screen feature channels are concatenated with the non-spatial features.
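The paper does not spell out the cell itself; a standard convolutional LSTM cell of the kind that could be inserted at this point is sketched below (PyTorch assumed; this is an illustration rather than the authors' implementation).

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Standard convolutional LSTM cell: all four gates from one 3x3 convolution."""
    def __init__(self, in_channels, hidden_channels):
        super().__init__()
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size=3, padding=1)

    def forward(self, x, state):
        h, c = state  # previous hidden and cell feature maps (caller initialises to zeros)
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)
```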
Random agents We use two random baselines. Random policy is an agent that picks uniformly at random among all valid actions, which highlights the difficulty of stumbling onto a successful episode in a very large action space. The random search baseline is based on the FullyConv agent and works by taking many independent, randomly initialised policy networks (with a low softmax temperature that induces near-deterministic actions), evaluating each for 20 episodes and keeping the one with the highest mean score. This is complementary in that it samples in policy space rather than action space.
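The random-search baseline can be sketched as follows; `make_policy` and `evaluate_episode` are hypothetical helpers standing in for policy initialisation and a single environment rollout.

```python
def random_search(make_policy, evaluate_episode, num_policies=100, episodes=20):
    # Keep the randomly initialised policy with the highest mean score.
    best_policy, best_mean = None, float("-inf")
    for _ in range(num_policies):
        policy = make_policy()                      # fresh random initialisation
        scores = [evaluate_episode(policy) for _ in range(episodes)]
        mean = sum(scores) / len(scores)
        if mean > best_mean:
            best_policy, best_mean = policy, mean
    return best_policy, best_mean
```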
# 4.4 Results | 1708.04782#34 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 35 | # 4.4 Results
In A3C, we truncate the trajectory and run backpropagation after K = 40 forward steps of a network or if a terminal signal is received. The optimisation process runs 64 asynchronous threads using shared RMSProp. For each method, we ran 100 experiments, each using randomly sampled hyper-parameters. The learning rate was sampled from a LogUniform(10^-5, 10^-3) interval and was linearly annealed from the sampled value to half the initial rate for all agents. We use an independent entropy penalty of 10^-3 for the action function and each action-function argument. We act at a fixed rate of one action every 8 game steps, which is equivalent to about three actions per second or 180 APM. All experiments were run for 600M steps (or 8×600M game steps).
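The hyper-parameter sampling and annealing described above can be sketched as follows; the function names are illustrative, and the schedule simply restates the linear decay to half the initial rate.

```python
import math, random

def sample_learning_rate(low=1e-5, high=1e-3):
    # Log-uniform sample from [low, high]
    return math.exp(random.uniform(math.log(low), math.log(high)))

def annealed_lr(initial_lr, step, total_steps=600_000_000):
    # Linearly decay from initial_lr to initial_lr / 2 over training
    frac = min(step / total_steps, 1.0)
    return initial_lr * (1.0 - 0.5 * frac)
```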
# 4.4.1 Full Game
[Figure 5 plot residue: curves of AbyssalReef ternary score and AbyssalReef Blizzard score against game steps (0M-600M) for the Atari-net, FullyConv, and FullyConvLSTM agents.] | 1708.04782#35 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 36 | Figure 5: Performance on the full game of the best hyper-parameters versus the easy built-in AI player as the opponent (TvT on the Abyssal Reef LE ladder map): 1. Using outcome (-1 = lose, 0 = tie, 1 = win) as the reward; 2. Using the native game score provided by Blizzard as the reward. Notably, baseline agents do not learn to win even a single game. Architectures: (a) the original Atari architecture used for DQN, (b) a network which uses a convnet to preserve spatial information for screen and minimap actions, (c) same as in (b) but with a Convolutional LSTM at one layer. Lines are smoothed for visibility.
For experiments on the full game, we selected the Abyssal Reef LE ladder map used in ranked online games as well as in professional matches. The agent played against the easiest built-in AI in a Terran versus Terran match-up. Maximum game length was set to 30 minutes, after which a tie was declared, and the episode terminated. | 1708.04782#36 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 37 | Results of the experiments are shown in Figure 5. Unsurprisingly, none of the agents trained with sparse ternary rewards developed a viable strategy for the full game. The most successful agent, based on the fully convolutional architecture without memory, managed to avoid constant losses by using the Terran ability to lift and move buildings out of attack range. This makes it difficult for the easy AI to win within the 30 minute time limit.
Agents trained with the Blizzard score converged to trivial strategies that avoid distracting workers from mining minerals. Most agents converged to simply preserving the initial mining process without building further units or structures (this behaviour was also observed in the economic mini-game proposed below).
These results suggest that the full game of StarCraft II is indeed a challenging RL domain, especially without access to other sources of information such as human replays.
# 4.4.2 Mini-Games
[Figure 6 plot residue: panels for MoveToBeacon, CollectMineralShards, FindAndDefeatZerglings, and DefeatRoaches; y-axis mean score, x-axis game steps (0M-600M).] | 1708.04782#37 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 38 | [Figure 6 plot residue: panels for DefeatZerglingsAndBanelings, CollectMineralsAndGas, and BuildMarines; legend: Atari-net, FullyConv, and FullyConvLSTM best mean; x-axis game steps (0M-600M).]
Figure 6: Training process for baseline agent architectures. Displayed lines are mean scores as a function of game steps. The three network architectures are the same as used in Figure 5. Faint lines show all 100 runs with different hyper-parameters; the solid line is the run with the best mean. Lines are smoothed for visibility.
Table 1: Aggregated results for human baselines and agents on mini-games. All agents were trained for 600M steps. MEAN corresponds to the average agent performance, BEST MEAN is the average performance of the best agent across different hyper-parameters, MAX corresponds to the maximum observed individual episode score. | 1708.04782#38 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 39 | S G N AGENT RANDOM POLICY RANDOM SEARCH DEEPMIND HUMAN PLAYER STARCRAFT GRANDMASTER ATARI-NET FULLYCONV FULLYCONV LSTM METRIC MEAN MAX MEAN MAX MEAN MAX MEAN MAX BEST MEAN MAX BEST MEAN MAX BEST MEAN MAX N O C A E B O T E V O M 1 6 25 29 26 28 28 28 25 33 26 45 26 35 S D R A H S L A R E N M T C E L L O C I 17 35 32 57 133 142 177 179 96 131 103 134 104 137 S G N I L G R E Z T A E F E D D N A D N I F 4 19 21 33 46 49 61 61 49 59 45 56 44 57 S E H C A O R T A E F E D 1 46 51 241 41 81 215 363 101 351 100 355 98 373 I L E N A B D N A S G N I L G R E Z T A E F E D 23 118 55 159 729 757 727 848 81 352 62 251 96 444 S A G D N A S L A R E N M T C E L L O C I S E N I R A M D L I U B 12 < 1 5 750 8 2318 46 3940 138 6880 142 6952 133 7566 7566 133 3356 < | 1708.04782#39 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 40 | As described in section 3, one can avoid the complexity of the full game by defining a set of mini-games which focus on certain aspects of the game (see section 3 for a high-level description of each mini-game).
We trained our agents on each mini-game. The aggregated training results are shown in Figure 6 and the final results with comparisons to human baselines can be found in Table 1. A video showcasing our agents can also be found at https://youtu.be/6L448yg0Sm0.
Overall, fully convolutional agents performed the best across the non-human baselines. Somewhat surprisingly, the Atari-net agent appeared to be quite a strong competitor on mini-games involv- ing combat, namely FindAndDefeatZerlings, DefeatRoaches and DefeatZerlingsAndBanelings. On CollectMineralsAndGas, only the best Convolutional agent learned to increase the initial resource income by producing more worker units and assigning them to mining. | 1708.04782#41 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 41 | We found BuildMarines to be the most strategically demanding mini-game and perhaps the closest of all to the full game of StarCraft. The best results on this game were achieved by FullyConv LSTM and Random Search, while Atari-Net failed to learn a strategy to consistently produce marines during each episode. It should be noted that, without the restrictions on the action space imposed by this map, it would be significantly more difficult to learn to produce marines in this mini-game.
All agents performed sub-optimally when compared against the GrandMaster player, except for in the simplest MoveToBeacon mini-game, which only requires good mechanics and reaction time, something artificial agents are expected to be good at. However, in some games like DefeatRoaches and FindAndDefeatZerglings, our agents did fare well versus the DeepMind game tester.
The results of our baseline agents demonstrate that even relatively simple mini-games present interesting challenges for existing RL algorithms.
# 5 Supervised Learning from Replays | 1708.04782#42 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | http://arxiv.org/pdf/1708.04782 | Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing | cs.LG, cs.AI | Collaboration between DeepMind & Blizzard. 20 pages, 9 figures, 2
tables | null | cs.LG | 20170816 | 20170816 | [
{
"id": "1611.00625"
},
{
"id": "1707.03743"
},
{
"id": "1611.02205"
},
{
"id": "1707.01067"
},
{
"id": "1610.04286"
},
{
"id": "1704.03732"
},
{
"id": "1609.08144"
},
{
"id": "1703.10069"
},
{
"id": "1704.03073"
},
{
"id": "1612.03801"
}
] |
1708.04782 | 43 | 13
The results of our baseline agents demonstrate that even relatively simple mini-games present interesting challenges for existing RL algorithms.
# 5 Supervised Learning from Replays
Game replays are a crucial resource used by professional and amateur players alike, who learn new strategies, find critical mistakes made in a game, or simply enjoy watching others play as a form of entertainment. Replays are especially important in StarCraft because of hidden information: the fog-of-war hides all of the opponent's units unless they are within view of one of your own. Thus, among professional players it is standard practice to review and analyse every game they play, even when they win.
The use of supervised data such as replays or human demonstrations has been successful in robotics [2, 25], the game of Go [19, 32], and Atari [10]. It has also been used in the context of StarCraft I (e.g., [13]), though not to train a policy over basic actions, but rather to discover build orders. StarCraft II provides the opportunity to collect and learn from a large and growing set of human replays. Whereas there has been no central and standardised mechanism for collecting replays for StarCraft I, large numbers of anonymised StarCraft II games are readily available via Blizzardâs online 1v1 ladder. As well, more games will be added to this set on a regular basis as a relatively stable player pool plays new games. | 1708.04782#43 | StarCraft II: A New Challenge for Reinforcement Learning | This paper introduces SC2LE (StarCraft II Learning Environment), a
Learning from replays should be useful to bootstrap or complement reinforcement learning. In isolation, it could also serve as a benchmark for sequence modelling or memory architectures having to deal with long term correlations. Indeed, to understand a game as it unfolds, one must integrate information across many time steps efficiently. Furthermore, due to partial observability, replays could also be used to study models of uncertainty such as (but not limited to) variational autoencoders [15]. Finally, comparing performance on outcome/action prediction may help guide the design of neural architectures with suitable inductive biases for RL in the domain.
In the rest of this section, we provide baselines using the architectures described in Section 4, but using a set of 800K games to learn both a value function (i.e., predicting the winner of the game from game observations), and a policy (i.e., predicting the action taken from game observations). The games contain all possible matchups in StarCraft II (i.e., we do not restrict the agent to play a single race).
Figure 7: Statistics of the replay set we used for supervised training of our policy and value nets. (Left) Distribution of player rating (MMR) as a function of APM. (Right) Distribution of actions sorted by probability of usage by human players.
Figure 7 shows statistics for the replays we used. We summarize some of the most interesting statistics here (a short sketch for computing similar statistics from replay metadata follows the list):
1. The skill level of players, measured by the Match Making Rating (MMR), varies from casual gamer, to high-end amateur, on through to professionals.
2. The average number of Actions Per Minute (APM) is 153, and the mean MMR is 3789.
3. The replays are not filtered; instead, all 'ranked' league games played on BattleNet are used (see http://wiki.teamliquid.net/starcraft2/Battle.net_Leagues).
4. Less than one percent are Masters level replays from top players.
5. We also show the distribution of actions sorted by their frequency of use by human players. The most frequent action, taken 43% of the time, is moving the camera.
6. Overall, the action distribution has a heavy tail with a few commonly used actions (e.g., move camera, select rectangle, attack screen) and a large number of actions that are used infrequently (e.g., building an engineering bay).
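As referenced above, here is a minimal sketch of computing such summary statistics, assuming each replay has been reduced to a small record with `mmr`, `apm`, and per-game action fields. The field names and records are illustrative assumptions, not the actual PySC2/Blizzard replay schema.

```python
from collections import Counter

# Illustrative replay summaries; the field names here are assumptions,
# not the actual replay schema.
replays = [
    {"mmr": 3650, "apm": 140, "actions": ["move_camera", "select_rect", "attack_screen"]},
    {"mmr": 4100, "apm": 210, "actions": ["move_camera", "move_camera", "build_barracks"]},
    {"mmr": 2900, "apm": 95,  "actions": ["move_camera", "select_army", "move_camera"]},
]

# Mean APM and MMR over the replay set (cf. the averages quoted above).
mean_apm = sum(r["apm"] for r in replays) / len(replays)
mean_mmr = sum(r["mmr"] for r in replays) / len(replays)

# Action-usage distribution, sorted by frequency, as in Figure 7 (right).
counts = Counter(a for r in replays for a in r["actions"])
total = sum(counts.values())
for action, n in counts.most_common():
    print(f"{action:15s} {n / total:.2%}")

print(f"mean APM = {mean_apm:.1f}, mean MMR = {mean_mmr:.0f}")
```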
We train dual-headed networks that predict both the game outcome (1 = win vs. 0 = loss or tie), and the action taken by the player at each time step. Sharing the body of the network makes it necessary to balance the weights for the two loss functions, but it also allows value and policy predictions to inform one another. We did not make ties a separate game outcome class in the supervised training setup, since the number of ties in the dataset is very low (< 1%) compared to victory and defeat.
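To make the dual-headed setup concrete, here is a minimal PyTorch-style sketch of a shared body feeding a value head and a policy head, trained with a weighted sum of the two losses. The MLP body, the layer sizes, and the loss weight are illustrative assumptions; the paper's models are the Atari-net and FullyConv architectures of Section 4, and the actual loss weighting is not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHeadedNet(nn.Module):
    """Shared body with a value head (win/loss) and a policy head (action id).

    Sizes and the 2-layer MLP body are illustrative only; the paper's models
    are the Atari-net / FullyConv architectures of Section 4.
    """
    def __init__(self, obs_dim, num_actions, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value_head = nn.Linear(hidden, 1)          # logit for P(win)
        self.policy_head = nn.Linear(hidden, num_actions)

    def forward(self, obs):
        h = self.body(obs)
        return self.value_head(h).squeeze(-1), self.policy_head(h)

def supervised_loss(net, obs, outcome, action, value_weight=0.5):
    """Weighted sum of outcome and action-prediction losses.

    outcome: float tensor of 0/1 game results; action: integer action ids.
    The 0.5 weight is an assumption used only for illustration.
    """
    value_logit, policy_logits = net(obs)
    value_loss = F.binary_cross_entropy_with_logits(value_logit, outcome)
    policy_loss = F.cross_entropy(policy_logits, action)
    return policy_loss + value_weight * value_loss
```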
# 5.1 Value Predictions
Predicting the outcome of a game is a challenging task. Even professional StarCraft II commentators often fail to predict the winner despite having full access to the game state (i.e., not being limited by partial observability). Value functions that accurately predict game outcomes are desirable because they can be used to alleviate the challenge of learning from sparse rewards. From a given state, a well-trained value function can suggest which neighbouring states would be worth moving into long before seeing the game outcome.
Our setup for supervised learning begins with the straightforward baseline architectures described in Section 4: Atari-net and FullyConv. The networks do not take into account previous observations, i.e., they predict the outcome from a single frame (this is clearly sub-optimal). Furthermore, the observation does not include any privileged information: an agent has to produce value predictions based only on what it can see at any given time step (i.e., fog-of-war is enabled). Thus, if the opponent has managed to secretly produce many units that are very effective against the army that the agent has built, it may mistakenly believe that its position is stronger than it is.
Figure 8: The accuracy of predicting the outcome of StarCraft games using a network that operates on the screen and minimap feature planes as well as the scalar player stats. (Left) Train curves for three different network architectures. (Right) Accuracy over game time. At the beginning of the game (before 2 minutes), the network has 50% accuracy (equivalent to chance). This is expected since the outcome is less clear earlier in the game. By the 15 minute mark, the network is able to correctly predict the winner 65% of the time.
The networks proposed in Section 4 produce the action identifier and its arguments independently. However, the accuracy of predicting a point on the screen can be improved by conditioning on the base action, e.g., building an extra base versus moving an army. Thus, in addition to the Atari-net and FullyConv architectures, we have arFullyConv, which uses the auto-regressive policy introduced in Section 4.2, i.e., using the function identifier a^0 and previously sampled arguments a^{<l} to model a policy over the current argument a^l.
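A hedged sketch of the auto-regressive idea follows: sample the function identifier first, then sample each argument conditioned on an embedding of the choices made so far. The module sizes and the way earlier choices are folded back into the context are illustrative assumptions, not the exact arFullyConv implementation.

```python
import torch
import torch.nn as nn

class AutoregressiveArgHead(nn.Module):
    """Sample a^0 (function id), then each argument a^l conditioned on a^0, ..., a^{l-1}."""
    def __init__(self, state_dim, num_functions, arg_sizes, embed_dim=64):
        super().__init__()
        self.fn_logits = nn.Linear(state_dim, num_functions)
        self.fn_embed = nn.Embedding(num_functions, embed_dim)
        # One head and one embedding table per argument type (e.g. flattened screen, queued).
        self.arg_heads = nn.ModuleList(
            nn.Linear(state_dim + embed_dim, n) for n in arg_sizes)
        self.arg_embeds = nn.ModuleList(
            nn.Embedding(n, embed_dim) for n in arg_sizes)

    def forward(self, state):
        # state: [batch, state_dim] summary of the observation.
        fn = torch.distributions.Categorical(logits=self.fn_logits(state)).sample()
        context = self.fn_embed(fn)                     # carries a^0 forward
        args = []
        for head, embed in zip(self.arg_heads, self.arg_embeds):
            logits = head(torch.cat([state, context], dim=-1))
            a = torch.distributions.Categorical(logits=logits).sample()
            args.append(a)
            context = context + embed(a)                # fold a^l into the context
        return fn, args

# Usage sketch: two argument types with 64*64 (flattened screen) and 2 (queued) choices.
net = AutoregressiveArgHead(state_dim=128, num_functions=10, arg_sizes=[64 * 64, 2])
fn, args = net(torch.randn(4, 128))
```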
Networks are trained for 200k steps of gradient descent on all possible match-ups in StarCraft II. We trained with mini-batches of 64 observations taken at random from all replays, uniformly across time. Observations are sampled with a step multiplier of 8, consistent with the RL setup. The resolution of both screen and minimap is 64 × 64. Each observation consists of the screen and minimap spatial feature layers as well as player stats, such as food cap and number of collected minerals, that human players see on the screen. We use 90% of the replays as the training set, and a fixed test set of 0.5M frames drawn from the remaining 10% of the replays. The agent performance is evaluated continuously against this test set as training progresses.
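A sketch of this sampling and evaluation scheme is given below, under the assumption that each replay exposes simple `num_frames` and `observation_at(frame)` helpers; these are hypothetical names, not a real replay API, and `update`/`evaluate` stand in for the actual model code.

```python
import random

STEP_MUL = 8          # step multiplier used when indexing frames
BATCH_SIZE = 64
TRAIN_STEPS = 200_000

def split_replays(replays, train_frac=0.9, seed=0):
    """90/10 split of replays into train and held-out test sets."""
    replays = list(replays)
    random.Random(seed).shuffle(replays)
    cut = int(train_frac * len(replays))
    return replays[:cut], replays[cut:]

def sample_batch(train_replays, batch_size=BATCH_SIZE):
    """Draw observations uniformly over replays and over time within each replay.

    `replay.num_frames` and `replay.observation_at(frame)` are assumed helpers.
    """
    batch = []
    for _ in range(batch_size):
        replay = random.choice(train_replays)
        frame = random.randrange(0, replay.num_frames, STEP_MUL)
        batch.append(replay.observation_at(frame))
    return batch

def train(replays, update, evaluate, eval_every=1000):
    """Training skeleton: 200k gradient steps, evaluating on the fixed test set as we go."""
    train_replays, test_replays = split_replays(replays)
    for step in range(TRAIN_STEPS):
        update(sample_batch(train_replays))
        if step % eval_every == 0:
            evaluate(test_replays)
```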
Figure 8 shows average accuracy over training steps as well as the accuracy of a trained model as a function of game time. A random baseline would be correct approximately 50% of the time, since the game is well balanced across all race pairs and tying is extremely rare. As training progresses, the FullyConv architecture achieves an accuracy of 64%. Also, as the game progresses, value prediction becomes more accurate, as seen in Figure 8 (Right). This mirrors the results of prior work on StarCraft I [9].
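A small sketch of how an accuracy-versus-game-time curve like Figure 8 (right) can be computed from per-frame value predictions; the binning scheme and the synthetic data are assumptions for illustration only.

```python
import numpy as np

def accuracy_by_game_time(pred_win_prob, outcome, game_time_min, bin_edges_min):
    """Bucket per-frame value predictions by in-game time and report accuracy per bin.

    pred_win_prob, outcome, game_time_min: arrays of equal length, one entry per
    evaluated frame. The bin edges are an assumption chosen to mirror Figure 8 (right).
    """
    pred_win = (np.asarray(pred_win_prob) >= 0.5).astype(int)
    correct = (pred_win == np.asarray(outcome)).astype(float)
    bins = np.digitize(game_time_min, bin_edges_min)
    return {
        f"< {edge} min": correct[bins == i].mean() if np.any(bins == i) else float("nan")
        for i, edge in enumerate(bin_edges_min)
    }

# Synthetic example: predictions hover near chance early in the game and improve later.
rng = np.random.default_rng(0)
t = rng.uniform(0, 20, size=1000)                    # in-game minutes
y = rng.integers(0, 2, size=1000)                    # true outcomes
p = np.clip(0.5 + (t / 40) * (2 * y - 1) + rng.normal(0, 0.1, 1000), 0, 1)
print(accuracy_by_game_time(p, y, t, bin_edges_min=[3, 5, 9, 13, 17, 21]))
```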
# 5.2 Policy Predictions
|             | Top-1 Action | Top-1 Screen | Top-1 Minimap | Top-5 Action | Top-5 Screen | Top-5 Minimap |
|-------------|--------------|--------------|---------------|--------------|--------------|---------------|
| Atari-net   | 37.8%        | 19.8%        | 1.2%          | 87.2%        | 55.6%        | 2.9%          |
| FullyConv   | 37.9%        | 25.7%        | 9.5%          | 88.2%        | 62.3%        | 18.5%         |
| arFullyConv | 37.7%        | 25.9%        | 10.5%         | 87.4%        | 62.7%        | 22.1%         |
| Random      | 4.3%         | 0.0%         | 0.0%          | 29.5%        | 1.0%         | 1.0%          |
Table 2: Policy top-1 and top-5 accuracies for the base actions and the screen/minimap arguments. arFullyConv refers to the autoregressive version of FullyConv. The random baseline is an arFullyConv with randomly initialised weights.
The same network trained to predict values had a separate output designed to predict the action issued by the user. We sometimes refer to this part of the network as the policy, since it can be readily deployed to play the game.
There are many schemes one might employ to train networks to imitate human behaviour from replays. Here we use a simple approach that connects straightforwardly with the RL work in Section 4. When training our policy we sampled observations at a fixed step multiplier of 8 frames. We take the first action issued within each 8-frame window as the learning target for the policy. If no action was taken during that period, we take the target to be a 'no-op', i.e., a special action which has no effect.
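A minimal sketch of this target-extraction rule, assuming a parsed replay is represented as a mapping from frame index to action id (an illustrative representation, not a real API):

```python
NO_OP = 0        # assumed integer id for the special "do nothing" action
STEP_MUL = 8

def build_targets(actions_by_frame, num_frames):
    """Pair each 8-frame window with the first action issued inside it.

    actions_by_frame: dict mapping frame index -> action id for frames where the
    human acted. Returns a list of (window_start_frame, target_action) pairs.
    """
    pairs = []
    for start in range(0, num_frames, STEP_MUL):
        target = NO_OP
        for frame in range(start, min(start + STEP_MUL, num_frames)):
            if frame in actions_by_frame:
                target = actions_by_frame[frame]   # first action in the window wins
                break
        pairs.append((start, target))
    return pairs

# Example: actions at frames 3 and 21 -> windows 0-7 and 16-23 get them, 8-15 gets no-op.
print(build_targets({3: 42, 21: 7}, num_frames=24))
```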
When humans play StarCraft II, only a subset of all possible actions is available at any given time. For example, 'building a marine' is enabled only if barracks are currently selected. Networks should not need to learn to avoid illegal actions, since this information is readily available. Thus, during training, we filter out actions that would not be available to a human player. To do so, we take the union of all available actions over the past 8 frames and apply a mask that sets the probability of all unavailable actions to near zero.
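One way to realise this masking is to add a large negative value to the logits of disallowed actions before the softmax; the sketch below assumes a flat vector of action logits and per-frame availability represented as sets of action ids.

```python
import torch

def masked_action_distribution(logits, available_per_frame):
    """Restrict the policy to actions that were available in the recent frames.

    logits: [num_actions] unnormalised scores from the policy head.
    available_per_frame: list of sets of action ids, one set per recent frame.
    The union of these sets is allowed; everything else gets probability ~0.
    """
    allowed = set().union(*available_per_frame)
    mask = torch.full_like(logits, float("-inf"))
    mask[list(allowed)] = 0.0
    return torch.softmax(logits + mask, dim=-1)

# Example with 6 actions; only ids seen as available recently keep probability mass.
logits = torch.randn(6)
probs = masked_action_distribution(logits, [{0, 2}, {0, 5}])
print(probs)           # indices 1, 3, 4 come out as (numerically) zero
```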
Note that, as previously mentioned, we trained the policy to play all possible matchups. Thus, in principle, the agent can play any race. However, for consistency with the reinforcement learning agents studied in Section 4, we report in-game metrics in the single Terran versus Terran matchup.
Table 2 shows how the different architectures perform in terms of accuracy at predicting the action identifier, the screen argument, and the minimap argument. As expected, both the FullyConv and arFullyConv architectures perform much better for spatial arguments. Moreover, the arFullyConv architecture outperforms FullyConv, presumably because it knows which action identifier the argument will be used for.
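The top-1 and top-5 numbers in Table 2 are standard top-k accuracies; a short sketch of the metric follows (the random-logits example is only an analogy to the table's random baseline, not a reproduction of it).

```python
import torch

def topk_accuracy(logits, targets, k):
    """Fraction of examples whose true label is among the k highest-scoring predictions.

    logits: [batch, num_classes] scores for the action id (or a flattened
    screen/minimap argument); targets: [batch] integer labels.
    """
    topk = logits.topk(k, dim=-1).indices               # [batch, k]
    hits = (topk == targets.unsqueeze(-1)).any(dim=-1)
    return hits.float().mean().item()

# Example: with random 100-way logits, top-1 is ~1% and top-5 is ~5%.
logits = torch.randn(10_000, 100)
targets = torch.randint(0, 100, (10_000,))
print(topk_accuracy(logits, targets, k=1), topk_accuracy(logits, targets, k=5))
```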
When we directly plug the policy trained with supervised learning into the game, it is able to produce more units and play better as a function of observed replay data, as shown in Figure 9 and in the video at https://youtu.be/WEOzide5XFc. It also outperforms all agents trained in Section 4 on the simpler mini-game of BuildMarines, which has a restricted action space, even though the supervised policy is playing an unrestricted, full 1v1 game. These results suggest that supervised imitation learning may be a promising direction for bootstrapping StarCraft II agents. Future work should look to improve imitation-initialised policies by training directly with reinforcement learning on the objective we really care about, i.e., the game outcome.
# 6 Conclusions & Future Work
This paper introduces StarCraft II as a new challenge for deep reinforcement learning research. We provide details for a freely available Python interface to play the game as well as human replay data from ranked games collected via Blizzard's official BattleNet ladder.
Figure 9: The probability of building army units as training of the policy nets progresses over the training data. The game setup is Terran vs. Terran. (Left) Probability of building any army units in a game. (Right) Average number of army units built per game.
With this initial release we describe supervised learning results on the human replay data for policy and value networks. We also describe results for straightforward baseline RL agents on seven mini-games and on the full game.
We regard the mini-games primarily as unit tests. That is, an RL agent should be able to achieve human-level performance on these with relative ease if it is to have a chance to succeed on the full game. It may be instructive to build additional mini-games, but we take the full game, evaluated on the final outcome, as the most interesting problem, and hope first and foremost to encourage research that will lead to its solution.
While performance on some mini-games is close to expert human play, we find, as expected, that current state-of-the-art baseline agents cannot learn to win against the easiest built-in AI on the full game. This is true not only when the game outcome (i.e., -1, 0, 1) is used as the reward signal, but also when a shaping reward is provided at each timestep (i.e., the native game score provided by Blizzard). In this sense, our provided environment presents a challenge that is at once canonical, externally defined, and completely intractable for off-the-shelf baseline algorithms.
This release simplifies several aspects of the game as it is played by humans:
1. the observations are preprocessed before they are given to the agent,
2. the action space has been simplified to be more easily used by RL agents instead of the keyboard and mouse-click setup used by humans,
3. it is played in lock-step so that agents can compute for as long as they need at each time-step rather than being real-time, and
4. the full game only allows play against the built-in AI.
However, we consider the real challenge to be building agents that can play the best human players on their own turf, that is, with RGB pixel observations and strict time limits. Therefore, future releases may relax the simplifications above, as well as enable self-play, moving us towards the goal of training agents that humans consider to be fair opponents.
# Contributions
Blizzard:
• StarCraft II Binary
• StarCraft II API: https://github.com/Blizzard/s2client-proto
• Replays
DeepMind:
• PySC2: https://github.com/deepmind/pysc2
• All the agents and experiments in the paper
# Acknowledgements
We would like to thank many at Blizzard, especially Tommy Tran, Tyler Plass, Brian Song, Tom van Dijck, and Greg Risselada, the Grandmaster. We would also like to thank the DeepMind team, especially Nal Kalchbrenner, Ali Eslami, Jamey Stevenson, Adam Cain and our esteemed game testers Amir Sadik & Sarah York. We also would like to thank David Churchill for his early feedback on the Raw API, for building CommandCenter, and for comments on the manuscript.
# References
[1] The Brood War API. http://bwapi.github.io/, 2017.
[2] Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009.
[3] Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, et al. DeepMind Lab. arXiv preprint arXiv:1612.03801, 2016.
[4] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning Environment: An evaluation platform for general agents. J. Artif. Intell. Res. (JAIR), 47:253–279, 2013.
[5] Nadav Bhonker, Shai Rozenberg, and Itay Hubara. Playing SNES in the retro learning environment. arXiv preprint arXiv:1611.02205, 2016.
[6] Michael Buro and David Churchill. Real-time strategy game competitions. AI Magazine, 33(3):106, 2012.
[7] George E Dahl, Dong Yu, Li Deng, and Alex Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30–42, 2012.
[8] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning, pages 1329–1338, 2016.
[9] Graham Kurtis Stephen Erickson and Michael Buro. Global state evaluation in StarCraft. In AIIDE, 2014.
[10] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, and John Agapiou. Learning from demonstrations for real world reinforcement learning. arXiv preprint arXiv:1704.03732, 2017.
[11] Philip Hingston. A Turing test for computer game bots. IEEE Transactions on Computational Intelligence and AI in Games, 1(3):169–186, 2009.
[12] Ulit Jaidee and Héctor Muñoz-Avila. ClassQ-L: A Q-learning algorithm for adversarial real-time strategy games. In Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, 2012.
[13] Niels Justesen and Sebastian Risi. Learning macromanagement in StarCraft from replays using deep learning. arXiv preprint arXiv:1707.03743, 2017.
[14] Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. In Computational Intelligence and Games (CIG), 2016 IEEE Conference on, pages 1–8. IEEE, 2016.
[15] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the 2nd International Conference on Learning Representations, 2014.
[16] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[17] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[18] Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, page 0278364917710318, 2016.
[19] Chris J Maddison, Aja Huang, Ilya Sutskever, and David Silver. Move evaluation in Go using deep convolutional neural networks. arXiv preprint arXiv:1412.6564, 2014.
[20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[21] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. ICML, 2016.
[22] Santiago Ontañón, Gabriel Synnaeve, Alberto Uriarte, Florian Richoux, David Churchill, and Mike Preuss. A survey of real-time strategy game AI research and competition in StarCraft. IEEE Transactions on Computational Intelligence and AI in Games, 5(4):293–311, 2013.
[23] Peng Peng, Quan Yuan, Ying Wen, Yaodong Yang, Zhenkun Tang, Haitao Long, and Jun Wang. Multiagent bidirectionally-coordinated nets for learning to play StarCraft combat games. arXiv preprint arXiv:1703.10069, 2017.
[24] Diego Perez, Spyridon Samothrakis, Julian Togelius, Tom Schaul, Simon Lucas, Adrien Couëtoux, Jeyull Lee, Chong-U Lim, and Tommy Thompson. The 2014 general video game playing competition. Computational Intelligence and AI in Games, 2015.
[25] Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, and Martin Riedmiller. Data-efficient deep reinforcement learning for dexterous manipulation. arXiv preprint arXiv:1704.03073, 2017.
[26] Glen Robertson and Ian Watson. A review of real-time strategy game AI. AI Magazine, 35(4):75–104, 2014.
[27] Philipp Rohlfshagen and Simon M Lucas. Ms Pac-Man versus ghost team CEC 2011 competition. In Evolutionary Computation (CEC), 2011 IEEE Congress on, pages 70–77. IEEE, 2011.
[28] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
[29] Andrei A Rusu, Matej Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. Sim-to-real robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286, 2016.
[30] Tom Schaul. A video game description language for model-based or interactive learning. In Conference on Computational Intelligence in Games (IEEE-CIG), pages 1–8. IEEE, 2013.
[31] Tom Schaul, Julian Togelius, and Jürgen Schmidhuber. Measuring intelligence through games. arXiv preprint arXiv:1109.1314, 2011.
[32] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[33] Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, and Nicolas Usunier. TorchCraft: a library for machine learning research on real-time strategy games. arXiv preprint arXiv:1611.00625, 2016.
[34] Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, and Larry Zitnick. ELF: An extensive, lightweight and flexible research platform for real-time strategy games. arXiv preprint arXiv:1707.01067, 2017.
[35] Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, and Larry Zitnick. ELF: An extensive, lightweight and flexible research platform for real-time strategy games. arXiv preprint arXiv:1707.01067, 2017.
[36] Julian Togelius, Sergey Karakovskiy, and Robin Baumgarten. The 2009 Mario AI competition. In Evolutionary Computation (CEC), 2010 IEEE Congress on, pages 1–8. IEEE, 2010.
[37] Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, and Soumith Chintala. Episodic exploration for deep deterministic policies for StarCraft micromanagement. In International Conference on Learning Representations, 2017.
[38] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
arXiv:1708.03888v3 [cs.CV] 13 Sep 2017
Technical Report
# LARGE BATCH TRAINING OF CONVOLUTIONAL NETWORKS
# Yang You*, Computer Science Division, University of California at Berkeley, [email protected]
# Igor Gitman, Computer Science Department, Carnegie Mellon University, [email protected]
# Boris Ginsburg, NVIDIA, [email protected]
# ABSTRACT
A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with the mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. But training with a large batch size often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled AlexNet up to a batch size of 8K, and ResNet-50 to a batch size of 32K without loss in accuracy.
# INTRODUCTION
Training of large Convolutional Neural Networks (CNN) takes a lot of time. The brute-force way to speed up CNN training is to add more computational power (e.g. more GPU nodes) and train the network using data-parallel Stochastic Gradient Descent, where each worker receives some chunk of the global mini-batch (see e.g. Krizhevsky (2014) or Goyal et al. (2017)). The size of a chunk should be large enough to utilize the computational resources of the worker, so scaling up the number of workers results in an increase of the batch size. But using a large batch may negatively impact the model accuracy, as was observed in Krizhevsky (2014), Li et al. (2014), Keskar et al. (2016), and Hoffer et al. (2017).
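To make the data-parallel picture concrete, here is a minimal NumPy sketch; it is not code from this report, and the toy least-squares loss, the worker count, and all names are assumptions made for illustration. Each of N workers computes the gradient on its B/N-sample chunk, the gradients are averaged, and the averaged gradient drives one synchronous SGD step, which matches a single-worker step on the full mini-batch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem standing in for a real network loss.
X = rng.normal(size=(256, 10))                    # global mini-batch, B = 256
y = X @ np.ones(10) + 0.1 * rng.normal(size=256)  # synthetic targets
w = np.zeros(10)

def grad(w, xb, yb):
    # Gradient of the mean squared error over one chunk of the mini-batch.
    return xb.T @ (xb @ w - yb) / len(yb)

workers, lr = 4, 0.1
shards = np.array_split(np.arange(len(y)), workers)  # B/N = 64 samples per worker

# Synchronous data-parallel step: every worker computes a local gradient,
# the gradients are averaged, and all replicas apply the same update.
local_grads = [grad(w, X[idx], y[idx]) for idx in shards]
w_parallel = w - lr * np.mean(local_grads, axis=0)

# Reference: one worker processing the whole mini-batch in a single step.
w_single = w - lr * grad(w, X, y)
print(np.allclose(w_parallel, w_single))  # True: the same update, just split across workers
```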
Increasing the global batch while keeping the same number of epochs means that you have fewer iterations to update the weights. The straightforward way to compensate for the smaller number of iterations is to take larger steps by increasing the learning rate (LR). For example, Krizhevsky (2014) suggests linearly scaling up the LR with the batch size. However, using a larger LR makes optimization more difficult, and networks may diverge, especially during the initial phase. To overcome this difficulty, Goyal et al. (2017) suggested a "learning rate warm-up": training starts with a small "safe" LR, which is slowly increased to the target "base" LR. With LR warm-up and the linear scaling rule, Goyal et al. (2017) successfully trained ResNet-50 with batch B=8K (see also Cho et al. (2017)). Linear scaling of the LR with a warm-up is the "state-of-the-art" recipe for large batch training.
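The recipe just described is easy to sketch as a small schedule helper. The following is an illustrative sketch rather than the exact schedule used in the cited works; the base LR of 0.1 at B=256, the 5-epoch warm-up, and the small starting "safe" LR are assumed values.

```python
def scaled_lr_with_warmup(epoch, batch_size,
                          base_lr=0.1, base_batch=256,
                          warmup_epochs=5, safe_lr=0.01):
    """Linear LR scaling with a linear warm-up phase (illustrative values only)."""
    # Linear scaling rule: a k-times larger batch gets a k-times larger target LR.
    target_lr = base_lr * (batch_size / base_batch)
    if epoch < warmup_epochs:
        # Start from a small "safe" LR and ramp linearly up to the target LR.
        t = epoch / warmup_epochs
        return safe_lr + t * (target_lr - safe_lr)
    return target_lr

# With B = 8192 (k = 32) the target LR is 0.1 * 32 = 3.2, reached after warm-up.
for epoch in [0, 2, 5, 30]:
    print(epoch, round(scaled_lr_with_warmup(epoch, 8192), 3))
```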
We tried to apply this linear scaling and warm-up scheme to train AlexNet on ImageNet (Deng et al. (2009)), but scaling stopped after B=2K since training diverged for larger LRs. For B=4K the accuracy dropped from the baseline 57.6% (for B=256) to 53.1%, and for B=8K the accuracy decreased to 44.8%. To enable training with a large LR, we replaced the Local Response Normalization layers in AlexNet with Batch Normalization (BN). We will refer to this modification of AlexNet as AlexNet-BN throughout the rest of the paper. BN improved both model convergence for large LRs as well as accuracy: for B=8K the accuracy gap was decreased from 14% to 2.2%.

*Work was performed when Y. You and I. Gitman were NVIDIA interns.
To analyze the training stability with large LRs we measured the ratio between the norm of the layer weights and the norm of the gradient update. We observed that if this ratio is too high, the training may become unstable. On the other hand, if the ratio is too small, then the weights don't change fast enough. This ratio varies a lot between different layers, which makes it necessary to use a separate LR for each layer. Thus we propose a novel Layer-wise Adaptive Rate Scaling (LARS) algorithm. There are two notable differences between LARS and other adaptive algorithms such as ADAM (Kingma & Ba (2014)) or RMSProp (Tieleman & Hinton (2012)): first, LARS uses a separate learning rate for each layer and not for each weight, which leads to better stability. And second, the magnitude of the update is controlled with respect to the weight norm for better control of training speed. With LARS we trained AlexNet-BN and ResNet-50 with B=32K without accuracy loss.
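The layer-wise idea described above can be sketched in a few lines. This is a deliberately simplified illustration, not the exact LARS algorithm: momentum and weight decay are omitted, and the trust coefficient `eta` is an assumed value. The point it shows is that each layer gets its own local LR tied to the ratio of its weight norm to its gradient norm, so the update magnitude is controlled relative to the weights.

```python
import numpy as np

def lars_like_update(weights, grads, global_lr=1.0, eta=0.001):
    """Simplified layer-wise adaptive step (momentum and weight decay omitted).

    `weights` and `grads` are dicts mapping layer name -> ndarray.
    """
    updated = {}
    for name, w in weights.items():
        g = grads[name]
        w_norm, g_norm = np.linalg.norm(w), np.linalg.norm(g)
        # Per-layer "trust ratio": large weights with small gradients may take
        # bigger steps; small weights with large gradients are held back.
        local_lr = eta * w_norm / (g_norm + 1e-12)
        updated[name] = w - global_lr * local_lr * g
    return updated

# Layers with very different weight/gradient scales end up with very different
# local learning rates, which is the stability argument made in the text above.
rng = np.random.default_rng(1)
weights = {"conv1": rng.normal(size=(64, 27)), "fc8": 0.01 * rng.normal(size=(10, 64))}
grads = {name: rng.normal(size=w.shape) for name, w in weights.items()}
new_weights = lars_like_update(weights, grads)
```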
# 2 BACKGROUND

The training of a CNN is done using Stochastic Gradient (SG) based methods. At each step t a mini-batch of B samples x_i is selected from the training set. The gradients of the loss function ∇L(x_i, w) are computed for this subset, and the network's weights w are updated based on this stochastic gradient:

w_{t+1} = w_t - \lambda \frac{1}{B} \sum_{i=1}^{B} \nabla L(x_i, w_t) \qquad (1)

The computation of SG can be done in parallel by N units, where each unit processes a chunk of the mini-batch with B/N samples. Increasing the mini-batch permits scaling to more nodes without reducing the workload on each unit. However, it was observed that training with a large batch is difficult. To maintain the network accuracy, it is necessary to carefully adjust the training hyper-parameters (learning rate, momentum, etc.).

Krizhevsky (2014) suggested the following rules for training with large batches: when you increase the batch B by k, you should also increase the LR by k while keeping the other hyper-parameters (momentum, weight decay, etc.) unchanged. The logic behind linear LR scaling is straightforward: if you increase B by k while keeping the number of epochs unchanged, you will do k fewer steps, so it seems natural to increase the step size by k. For example, let's take k = 2. The weight updates for batch size B after 2 iterations would be:
w_{t+2} = w_t - \lambda \frac{1}{B} \left( \sum_{j=1}^{B} \nabla L(x_j, w_t) + \sum_{j=1}^{B} \nabla L(x_j, w_{t+1}) \right) \qquad (2)

The weight update for the batch B_2 = 2B with learning rate λ_2:

w_{t+1} = w_t - \lambda_2 \frac{1}{2B} \sum_{j=1}^{2B} \nabla L(x_j, w_t) \qquad (3)

will be similar if you take λ_2 = 2λ, assuming that ∇L(x_j, w_{t+1}) ≈ ∇L(x_j, w_t).
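A quick numeric check of this argument on a toy least-squares problem (a sketch with assumed values, not an experiment from this report): two steps with batch B and learning rate λ land close to one step with batch 2B and learning rate 2λ, because the gradient changes little between the two small steps.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(512, 8))      # 2B = 512 samples in total
y = X @ np.ones(8)                 # synthetic targets for a quadratic loss
w0 = np.zeros(8)

def grad(w, xb, yb):
    # Gradient of the mean squared error over a batch.
    return xb.T @ (xb @ w - yb) / len(yb)

lam = 0.01
batch1 = (X[:256], y[:256])
batch2 = (X[256:], y[256:])

# Two consecutive steps with batch size B and learning rate lambda (equation 2).
w1 = w0 - lam * grad(w0, *batch1)
w_two_steps = w1 - lam * grad(w1, *batch2)

# One step with batch size 2B and learning rate 2*lambda (equation 3).
w_one_step = w0 - 2 * lam * grad(w0, X, y)

# The gap is tiny relative to the update itself because grad(w1) ~ grad(w0).
print(np.linalg.norm(w_two_steps - w_one_step), np.linalg.norm(w_one_step - w0))
```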
Using the "linear LR scaling" Krizhevsky (2014) trained AlexNet with batch B=1K with minor (≈ 1%) accuracy loss. The scaling of AlexNet above 2K is difficult, since the training diverges for larger LRs. It was observed that linear scaling works much better for networks with Batch Normalization (e.g. Codreanu et al. (2017)). For example, Chen et al. (2016) trained the Inception model with batch B=6400, and Li (2017) trained ResNet-152 for B=5K.