doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1707.01067 | 4 | 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
To our knowledge, no current game platforms satisfy all criteria. Modern commercial games (e.g., StarCraft I/II, GTA V) are extremely realistic, but are not customizable and require significant resources for complex visual effects and for computational costs related to platform-shifting (e.g., a virtual machine to host the Windows-only SC I on Linux). Old games and their wrappers [4, 6, 5, 14] are substantially faster, but are less realistic with limited customizability. On the other hand, games designed for research purposes (e.g., MazeBase [29], µRTS [23]) are efficient and highly customizable, but are not very extensive in their capabilities. Furthermore, none of these environments consider simulation concurrency, and thus have limited flexibility when different training architectures are applied. For instance, the interplay between RL methods and environments during training is often limited to providing simplistic interfaces (e.g., one interface for one game) in scripting languages like Python. | 1707.01067#4 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01083 | 4 | We notice that state-of-the-art basic architectures such as Xception [3] and ResNeXt [40] become less efficient in extremely small networks because of the costly dense 1 × 1 convolutions. We propose using pointwise group convolutions. We also examine the speedup on real hardware, i.e., an off-the-shelf ARM-based computing core. The ShuffleNet model achieves ~13× actual speedup (theoretical speedup is 18×) over AlexNet [21] while maintaining comparable accuracy. | 1707.01083#4 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
# 2. Related Work | 1707.01083#4 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 5 | In this paper, we propose ELF, a research-oriented platform that offers games with diverse properties, efficient simulation, and highly customizable environment settings. The platform allows for both game parameter changes and new game additions. The training of RL methods is deeply and flexibly integrated into the environment, with an emphasis on concurrent simulations. On ELF, we build a real-time strategy (RTS) game engine with three initial environments: Mini-RTS, Capture the Flag and Tower Defense. Mini-RTS is a miniature custom-made RTS game that captures all the basic dynamics of StarCraft (fog-of-war, resource gathering, troop building, defense/attack with troops, etc.). Mini-RTS runs at 165K FPS on a 4-core laptop, which is faster than existing environments by an order of magnitude. This enables us for the first time to train end-to-end a full-game bot against built-in AIs. Moreover, training is accomplished in only one day using 6 CPUs and 1 GPU. The other two games can be trained with similar (or higher) efficiency. | 1707.01067#5 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 5 | # 2. Related Work
Efficient Model Designs The last few years have seen the success of deep neural networks in computer vision tasks [21, 36, 28], in which model designs play an important role. The increasing need to run high-quality deep neural networks on embedded devices encourages the study of efficient model designs [8]. For example, GoogLeNet [33] increases the depth of networks with much lower complexity compared to simply stacking convolution layers. SqueezeNet [14] reduces parameters and computation significantly while maintaining accuracy. ResNet [9, 10] utilizes the efficient bottleneck structure to achieve impressive performance. SENet [13] introduces an architectural unit that boosts performance at slight computation cost.
(Figure 1 diagram omitted: channel shuffle with two stacked group convolutions, showing input channels, GConv1, channel shuffle, GConv2 and output in panels (a)-(c).) | 1707.01083#5 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 6 | Many real-world scenarios and complex games (e.g., StarCraft) are hierarchical in nature. Our RTS engine has full access to the game data and has a built-in hierarchical command system, which allows training at any level of the command hierarchy. As we demonstrate, this allows us to train a full-game bot that acts on the top-level strategy in the hierarchy while lower-level commands are handled using built-in tactics. Previously, most research on RTS games focused only on lower-level scenarios such as tactical battles [34, 25]. The full access to the game data also allows for supervised training with small-scale internal data. | 1707.01067#6 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 6 | Figure 1. Channel shuffle with two stacked group convolutions. GConv stands for group convolution. a) two stacked convolution layers with the same number of groups. Each output channel only relates to the input channels within the group; no cross talk. b) input and output channels are fully related when GConv2 takes data from different groups after GConv1. c) an equivalent implementation to b) using channel shuffle.
A very recent concurrent work [46] employs reinforcement learning and model search to explore efficient model designs. The proposed mobile NASNet model achieves comparable performance with our counterpart ShuffleNet model (26.0% @ 564 MFLOPs vs. 26.3% @ 524 MFLOPs ImageNet classification error). But [46] does not report results on extremely tiny models (e.g., complexity less than 150 MFLOPs), nor evaluate the actual inference time on mobile devices. | 1707.01083#6 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
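The shuffle in Figure 1(c) is typically implemented as reshape, transpose, reshape. A minimal NumPy sketch of this idea (the function name and the N×C×H×W layout are my assumptions, not code from the paper):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle channels of an N x C x H x W array across `groups` groups.

    Reshape C into (groups, C // groups), swap those two axes, and flatten
    back, so that every output group mixes channels from every input group.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

# 1 sample, 6 channels tagged 0..5, i.e. groups (0,1,2) and (3,4,5)
x = np.arange(6).reshape(1, 6, 1, 1).astype(float)
y = channel_shuffle(x, groups=2)
print(y.ravel().tolist())  # → [0.0, 3.0, 1.0, 4.0, 2.0, 5.0]
```

Note that shuffling with `groups=g` is undone by shuffling again with `groups=C//g`, which is why the operation loses no information.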
1707.01067 | 7 | ELF is resilient to changes in the topology of the environment-actor communication used for training, thanks to its hybrid C++/Python framework. These include one-to-one, many-to-one and one-to-many mappings. In contrast, existing environments (e.g., OpenAI Gym [6] and Universe [33]) wrap one game in one Python interface, which makes it cumbersome to change topologies. Parallelism is implemented in C++, which is essential for simulation acceleration. Finally, ELF is capable of hosting any existing game written in C/C++, including Atari games (e.g., ALE [4]), board games (e.g., Chess and Go [32]), physics engines (e.g., Bullet [10]), etc., by writing a simple adaptor. | 1707.01067#7 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 7 | Model Acceleration This direction aims to accelerate inference while preserving the accuracy of a pre-trained model. Pruning network connections [6, 7] or channels [38] reduces redundant connections in a pre-trained model while maintaining performance. Quantization [31, 27, 39, 45, 44] and factorization [22, 16, 18, 37] have been proposed in the literature to reduce redundancy in calculations and speed up inference. Without modifying the parameters, optimized convolution algorithms implemented with FFT [25, 35] and other methods [2] decrease time consumption in practice. Distilling [11] transfers knowledge from large models into small ones, which makes training small models easier.
Group Convolution The concept of group convolution, which was first introduced in AlexNet [21] for distributing the model over two GPUs, has had its effectiveness well demonstrated in ResNeXt [40]. Depthwise separable convolution, proposed in Xception [3], generalizes the idea of separable convolutions in the Inception series [34, 32]. Recently, MobileNet [12] utilizes depthwise separable convolutions and achieves state-of-the-art results among lightweight models. Our work generalizes group convolution and depthwise separable convolution in a novel form. | 1707.01083#7 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
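The saving that motivates group convolution is easy to quantify: a dense 1 × 1 convolution with c_in inputs and c_out outputs costs c_in · c_out · H · W multiply-adds per image, and splitting it into g groups divides that by g. A small illustrative sketch (the channel and spatial sizes below are hypothetical, chosen only for the arithmetic):

```python
def conv1x1_mult_adds(c_in, c_out, h, w, groups=1):
    """Multiply-adds of a (possibly grouped) 1x1 convolution.

    Each of the `groups` groups maps c_in/groups input channels to
    c_out/groups output channels at every spatial position.
    """
    assert c_in % groups == 0 and c_out % groups == 0
    per_group = (c_in // groups) * (c_out // groups) * h * w
    return groups * per_group

dense = conv1x1_mult_adds(240, 240, 28, 28)             # ungrouped baseline
grouped = conv1x1_mult_adds(240, 240, 28, 28, groups=3)  # 3 groups
print(dense // grouped)  # → 3: g groups cut the pointwise cost by a factor of g
```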
1707.01067 | 8 | Equipped with a flexible RL backend powered by PyTorch, we experiment with numerous baselines and highlight effective techniques used in training. We show the first demonstration of end-to-end trained AIs for real-time strategy games with partial information. We use the Asynchronous Advantage Actor-Critic (A3C) model [21] and explore extensive design choices including frame-skip, temporal horizon, network structure, curriculum training, etc. We show that a network with Leaky ReLU [17] and Batch Normalization [11] coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than 70% of the time in full-game Mini-RTS. We also show strong performance in the other games. ELF and its RL platform are open-sourced at https://github.com/facebookresearch/ELF.
# 2 Architecture | 1707.01067#8 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 8 | Channel Shuffle Operation To the best of our knowledge, the idea of a channel shuffle operation is rarely mentioned in previous work on efficient model design, although the CNN library cuda-convnet [20] supports a "random sparse convolution" layer, which is equivalent to random channel shuffle followed by a group convolutional layer. Such a "random shuffle" operation has a different purpose and has been seldom exploited since. Very recently, another concurrent work [41] also adopts this idea for a two-stage convolution. However, [41] did not specially investigate the effectiveness of channel shuffle itself and its usage in tiny model design.
# 3. Approach
# 3.1. Channel Shuffle for Group Convolutions | 1707.01083#8 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 9 | # 2 Architecture
ELF follows a canonical and simple producer-consumer paradigm (Fig. 1). The producer plays N games, each in a single C++ thread. When a batch of M current game states is ready (M < N), the corresponding games are blocked and the batch is sent to the Python side via the daemon. The consumers (e.g., actor, optimizer, etc.) get batched experience with history information via a Python/C++ interface and send back the replies to the blocked batch of games, which are waiting for the next action and/or values, so that they can proceed. For simplicity, the producer and consumers are in the same process. However, they can also live in different processes, or even on different machines. Before the training (or evaluation) starts, different consumers register themselves for batches with
(Figure 1 diagram: games 1..N, each with a history buffer on the C++ producer side, sending batches with history info to Python consumers.)
Figure 1: Overview of ELF. | 1707.01067#9 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
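The wait-for-a-batch behavior described above can be mimicked in a few lines of pure Python. This is a toy analogue only, with invented names (ELF's real producer side is C++ threads, and none of this is ELF API): each "game" publishes its state and blocks until the consumer replies, while the consumer waits until a full batch of M states is ready.

```python
import queue
import threading

def toy_batch_loop(num_games=8, batchsize=3, steps=2):
    """Toy producer-consumer batching: requires batchsize <= num_games,
    mirroring ELF's M < N condition."""
    states = queue.Queue()                                 # producers -> consumer
    replies = [queue.Queue(maxsize=1) for _ in range(num_games)]
    ready = threading.Barrier(num_games + 1)               # all first states queued

    def game(i):
        for step in range(steps):
            states.put(i)              # publish current state, then block
            if step == 0:
                ready.wait()
            replies[i].get()           # wait for the consumer's action

    threads = [threading.Thread(target=game, args=(i,)) for i in range(num_games)]
    for t in threads:
        t.start()
    ready.wait()                       # every game has published once

    total, served = num_games * steps, 0
    while served < total:
        # Consumer: block until a batch is ready (last batch may be partial).
        n = min(batchsize, total - served)
        batch = [states.get() for _ in range(n)]
        for game_id in batch:          # "actor" step: reply with actions,
            replies[game_id].put("a")  # unblocking exactly those games
        served += n

    for t in threads:
        t.join()
    return served

print(toy_batch_loop())  # → 16 states served, in batches of at most 3
```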
1707.01083 | 9 | Modern convolutional neural networks [30, 33, 34, 32, 9, 10] usually consist of repeated building blocks with the same structure. Among them, state-of-the-art networks such as Xception [3] and ResNeXt [40] introduce efficient depthwise separable convolutions or group convolutions into the building blocks to strike an excellent trade-off between representation capability and computational cost. However, we notice that both designs do not fully take the 1 × 1 convolutions (also called pointwise convolutions in [12]) into account, which require considerable complexity. For example, in ResNeXt [40] only the 3 × 3 layers are equipped with group convolutions. As a result, for each residual unit in ResNeXt the pointwise convolutions occupy 93.4% of the multiplication-adds (cardinality = 32 as suggested in [40]). In tiny networks, expensive pointwise convolutions result in a limited number of channels to meet the complexity constraint, which might significantly damage the accuracy. To address the issue, a straightforward solution is to ap- (figure residue removed: block diagrams (a), (b) with 1x1 Conv, 1x1 GConv, BN and ReLU) | 1707.01083#9 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
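The 93.4% figure quoted above can be reproduced with a quick multiply-add count for one ResNeXt bottleneck. The sketch below assumes the standard ResNeXt-50 stage-2 shapes (256 → 128 → 128 → 256 channels, cardinality 32, stride 1), which are not stated in this chunk; the spatial size cancels out, so it counts per spatial position:

```python
# Multiply-adds per spatial position for one ResNeXt bottleneck unit
# (assumed shapes: 256 -> 128 -> 128 -> 256, cardinality/groups = 32).
c_in, c_mid, c_out, groups = 256, 128, 256, 32

pw1 = c_in * c_mid                                        # dense 1x1 conv
gc3 = groups * (c_mid // groups) * (c_mid // groups) * 9  # grouped 3x3 conv
pw2 = c_mid * c_out                                       # dense 1x1 conv

fraction = (pw1 + pw2) / (pw1 + gc3 + pw2)
print(round(fraction, 3))  # → 0.934, matching the 93.4% quoted above
```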
1707.01067 | 10 | Figure 1: Overview of ELF.
different history length. For example, an actor might need a batch with short history, while an op- timizer (e.g., T -step actor-critic) needs a batch with longer history. During training, the consumers use the batch in various ways. For example, the actor takes the batch and returns the probabilties of actions (and values), then the actions are sampled from the distribution and sent back. The batch received by the optimizer already contains the sampled actions from the previous steps, and can be used to drive reinforcement learning algorithms such as A3C. Here is a sample usage of ELF: | 1707.01067#10 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
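The idea above, that different consumers pull batches with different history lengths from the same set of games, can be sketched with a toy per-game history buffer. This is an illustrative sketch only; `HistoryBuffer` is not part of ELF's API.

```python
from collections import deque

import numpy as np


class HistoryBuffer:
    """Toy per-game state history; different consumers request batches
    with different history lengths T (illustrative, not ELF's internals)."""

    def __init__(self, num_games, maxlen):
        self.hist = [deque(maxlen=maxlen) for _ in range(num_games)]

    def push(self, game_id, state):
        self.hist[game_id].append(state)

    def batch(self, game_ids, T):
        # Stack the last T states of each requested game: (len(game_ids), T, ...)
        return np.stack([np.stack(list(self.hist[g])[-T:]) for g in game_ids])


buf = HistoryBuffer(num_games=4, maxlen=8)
for t in range(8):
    for g in range(4):
        buf.push(g, np.full(2, t, dtype=np.float32))

actor_batch = buf.batch([0, 1], T=1)  # short history for an actor
optim_batch = buf.batch([0, 1], T=6)  # longer history for a T-step optimizer
```

The same buffer serves both consumers; only the requested `T` differs.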
1707.01067 | 11 | Initialization of ELF:

    # We run 1024 games concurrently.
    num_games = 1024
    # Wait for a batch of 256 games.
    batchsize = 256
    # The returned states contain keys 's', 'r' and 'terminal'.
    # The reply contains key 'a' to be filled from the Python side.
    # The definitions of the keys are in the wrapper of the game.
    input_spec = dict(s='', r='', terminal='')
    reply_spec = dict(a='')
    context = Init(num_games, batchsize, input_spec, reply_spec)

Main loop of ELF:

    # Start all game threads and enter main loop.
    context.Start()
    while True:
        # Wait for a batch of game states to be ready.
        # These games will be blocked, waiting for replies.
        batch = context.Wait()
        # Apply a model to the game state. The output has key 'pi'.
        output = model(batch)
        # Sample from the output to get the actions of this batch.
        reply['a'][:] = SampleFromDistribution(output)
        # Resume games.
        context.Steps()
    # Stop all game

| 1707.01067#11 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
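To make the `Init`/`Wait`/`Steps` contract of the sample above concrete, here is a toy Python stand-in. `ToyContext` and the uniform-policy `model` are hypothetical mocks for illustration, not the real ELF C++ bindings.

```python
import numpy as np


class ToyContext:
    """Mock of ELF's game context: pretends `batchsize` of the `num_games`
    concurrent games have a state ready on every Wait() (not the real bindings)."""

    def __init__(self, num_games, batchsize):
        self.num_games, self.batchsize = num_games, batchsize
        self.rng = np.random.default_rng(0)

    def Start(self):
        pass  # the real context would spawn C++ game threads here

    def Wait(self):
        b = self.batchsize
        return {
            "s": self.rng.normal(size=(b, 4)).astype(np.float32),  # states
            "r": np.zeros(b, dtype=np.float32),                    # rewards
            "terminal": np.zeros(b, dtype=bool),
            "a": np.empty(b, dtype=np.int64),                      # reply slot
        }

    def Steps(self):
        pass  # the real context would resume the blocked games


def model(batch):
    # Uniform policy over 3 actions, standing in for a neural network.
    n = len(batch["s"])
    return {"pi": np.full((n, 3), 1.0 / 3)}


context = ToyContext(num_games=1024, batchsize=256)
context.Start()
batch = context.Wait()
pi = model(batch)["pi"]
rng = np.random.default_rng(1)
batch["a"][:] = np.array([rng.choice(3, p=p) for p in pi])
context.Steps()
```

The shape of the exchange (a dict of arrays out, an action array filled in place) mirrors the published interface; everything else is mocked.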
1707.01083 | 11 | Figure 2. ShuffleNet Units. a) bottleneck unit [9] with depthwise convolution (DWConv) [3, 12]; b) ShuffleNet unit with pointwise group convolution (GConv) and channel shuffle; c) ShuffleNet unit with stride = 2.
ply channel sparse connections, for example group convolutions, also on 1 × 1 layers. By ensuring that each convolution operates only on the corresponding input channel group, group convolution significantly reduces computation cost. However, if multiple group convolutions stack together, there is one side effect: outputs from a certain channel are only derived from a small fraction of input channels. Fig 1 (a) illustrates a situation of two stacked group convolution layers. It is clear that outputs from a certain group only relate to the inputs within the group. This property blocks information flow between channel groups and weakens representation. | 1707.01083#11 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01083 | 12 | If we allow group convolution to obtain input data from different groups (as shown in Fig 1 (b)), the input and output channels will be fully related. Specifically, for the feature map generated from the previous group layer, we can first divide the channels in each group into several subgroups, then feed each group in the next layer with different subgroups. This can be efficiently and elegantly implemented by a channel shuffle operation (Fig 1 (c)): suppose a convolutional layer with g groups whose output has g × n channels; we first reshape the output channel dimension into (g, n), transposing and then flattening it back as the input of next layer. Note that the operation still takes effect even if the two convolutions have different numbers of groups. Moreover, channel shuffle is also differentiable, which means it can be embedded into network structures for end-to-end training.
Channel shuffle operation makes it possible to build more powerful structures with multiple group convolutional layers. In the next subsection we will introduce an efficient network unit with channel shuffle and group convolution. | 1707.01083#12 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
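The reshape-transpose-flatten channel shuffle described above can be sketched in NumPy. This is an illustrative sketch; real implementations operate on framework tensors so the operation stays differentiable.

```python
import numpy as np


def channel_shuffle(x, groups):
    """Shuffle the channel dimension of an NCHW tensor across `groups`
    group-convolution groups: reshape to (g, n), transpose, flatten back."""
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap the (g, n) axes
    return x.reshape(n, c, h, w)


# Channels 0..5 in 3 groups: after the shuffle, each group seen by the
# next layer contains one channel from every previous group.
x = np.arange(6).reshape(1, 6, 1, 1)
print(channel_shuffle(x, 3)[0, :, 0, 0])  # [0 2 4 1 3 5]
```

With g = 3 groups of 2 channels, groups (0,1), (2,3), (4,5) become (0,2,4) and (1,3,5), which is exactly the cross-group mixing of Fig 1 (c).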
1707.01067 | 13 | # Main loop of ELF
Parallelism using C++ threads. Modern reinforcement learning methods often require heavy parallelism to obtain diverse experiences [21, 22]. Most existing RL environments (OpenAI Gym [6] and Universe [33], RLE [5], Atari [4], Doom [14]) provide Python interfaces which wrap only single game instances. As a result, parallelism needs to be built in Python when applying modern RL methods. However, thread-level parallelism in Python can only poorly utilize multi-core processors, due to the Global Interpreter Lock (GIL)1. Process-level parallelism will also introduce extra data exchange overhead between processes and increase the complexity of framework design. In contrast, our parallelism is achieved with C++ threads for better scaling on multi-core CPUs. | 1707.01067#13 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
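The GIL limitation above is easy to observe directly: pure-Python CPU-bound work gains little from threads on CPython, which is why ELF moves its parallelism into C++ threads. A small self-contained demonstration (timings are machine-dependent, so the comparison is printed rather than asserted):

```python
import threading
import time


def burn(n):
    # Pure-Python CPU-bound work; the GIL serializes its bytecode execution.
    s = 0
    for i in range(n):
        s += i * i
    return s


def timed(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0


N, WORKERS = 1_000_000, 4
serial = timed(lambda: [burn(N) for _ in range(WORKERS)])
threads = [threading.Thread(target=burn, args=(N,)) for _ in range(WORKERS)]


def run_threads():
    for t in threads:
        t.start()
    for t in threads:
        t.join()


parallel = timed(run_threads)
# On CPython, `parallel` typically lands near `serial`, not serial/WORKERS.
print(f"serial: {serial:.2f}s, {WORKERS} threads: {parallel:.2f}s")
```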
1707.01067 | 14 | Flexible Environment-Model Configurations. In ELF, one or multiple consumers can be used. Each consumer knows the game environment identities of samples from received batches, and typically contains one neural network model. The models of different consumers may or may not share parameters, might update the weights, might reside in different processes or even on different machines. This architecture offers flexibility for switching topologies between game environments and models. We can assign one model to each game environment, or one-to-one (e.g., vanilla A3C [21]), in which each agent follows and updates its own copy of the model. Similarly, multiple environments can be assigned to a single model, or many-to-one (e.g., BatchA3C [35] or GA3C [1]), where the model can perform batched forward prediction to better utilize GPUs. We have also incorporated forward-planning methods (e.g., Monte-Carlo Tree Search (MCTS) [7, 32, 27]) and Self-Play [27], in which a single environment might emit multiple states processed by multiple models, or one-to-many. Using ELF, these training configurations can be tested with minimal changes. | 1707.01067#14 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
1707.01083 | 14 | Taking advantage of the channel shuffle operation, we propose a novel ShuffleNet unit specially designed for small networks. We start from the design principle of bottleneck unit [9] in Fig 2 (a). It is a residual block. In its residual branch, for the 3 × 3 layer, we apply a computationally economical 3 × 3 depthwise convolution [3] on the bottleneck feature map. Then, we replace the first 1 × 1 layer with pointwise group convolution followed by a channel shuffle operation, to form a ShuffleNet unit, as shown in Fig 2 (b). The purpose of the second pointwise group convolution is to recover the channel dimension to match the shortcut path. For simplicity, we do not apply an extra channel shuffle operation after the second pointwise layer as it results in comparable scores. The usage of batch normalization (BN) [15] and nonlinearity is similar to [9, 40], except that we do not use ReLU after depthwise convolution as suggested by [3]. As for the case where ShuffleNet is applied with stride, we simply make two modifications (see | 1707.01083#14 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
1707.01067 | 15 | Highly customizable and unified interface. Games implemented with our RTS engine can be trained using raw pixel data or lower-dimensional internal game data. Using internal game data is
1 The GIL in Python forbids simultaneous interpretations of multiple statements even on multi-core CPUs.
Figure 2: Hierarchical layout of ELF. In the current repository (https://github.com/facebookresearch/ELF, master branch), there are board games (e.g., Go [32]), the Atari learning environment [4], and a customized RTS engine that contains three simple games.
Figure 3 panel (b) contents: Mini-RTS: gather resources and build troops to destroy the opponent's base (1000-6000 ticks). Capture the Flag: capture the flag and bring it to your own base (1000-4000 ticks). Tower Defense: build defensive towers to block enemy invasion (1000-2000 ticks).
Figure 3: Overview of Real-time strategy engine. (a) Visualization of current game state. (b) The three different game environments and their descriptions.
typically more convenient for research focusing on reasoning tasks rather than perceptual ones. Note that web-based visual rendering is also supported (e.g., Fig. 3(a)) for case-by-case debugging. | 1707.01067#15 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
1707.01067 | 16 | ELF allows for a unified interface capable of hosting any existing game written in C/C++, including Atari games (e.g., ALE [4]), board games (e.g. Go [32]), and a customized RTS engine, with a simple adaptor (Fig. 2). This enables easy multi-threaded training and evaluation using existing RL methods. Besides, we also provide three concrete simple games based on the RTS engine (Sec. 3).
Reinforcement Learning backend. We propose a Python-based RL backend. It has a flexible design that decouples RL methods from models. Multiple baseline methods (e.g., A3C [21], Policy Gradient [30], Q-learning [20], Trust Region Policy Optimization [26], etc.) are implemented, mostly with very few lines of Python code.
# 3 Real-time strategy Games | 1707.01067#16 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
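The decoupling of RL methods from models described above can be illustrated with a policy-gradient loss that assumes only that the model outputs action logits. This is an illustrative sketch, not ELF's actual backend code; the function name is ours.

```python
import numpy as np


def policy_gradient_loss(logits, actions, returns):
    """REINFORCE surrogate -mean(log pi(a|s) * R): the method only needs
    `logits`, so any model producing them can be plugged in."""
    z = logits - logits.max(axis=1, keepdims=True)           # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    chosen = logp[np.arange(len(actions)), actions]
    return -(chosen * returns).mean()


logits = np.array([[2.0, 0.0], [0.0, 2.0]])
loss = policy_gradient_loss(logits, np.array([0, 1]), np.array([1.0, 1.0]))
print(round(float(loss), 4))  # 0.1269
```

Because the method touches only the arrays, swapping the model (or the method, e.g., for a Q-learning loss) requires no change on the other side.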
1707.01083 | 16 | Thanks to pointwise group convolution with channel shuffle, all components in the ShuffleNet unit can be computed efficiently. Compared with ResNet [9] (bottleneck design) and ResNeXt [40], our structure has less complexity under the same settings. For example, given the input size c × h × w and the bottleneck channels m, a ResNet unit requires hw(2cm + 9m^2) FLOPs and ResNeXt has hw(2cm + 9m^2/g) FLOPs, while our ShuffleNet unit requires only hw(2cm/g + 9m) FLOPs, where g means the | 1707.01083#16 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
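The per-unit FLOP counts quoted above are easy to check numerically. The helper names and the example sizes (c = 240, a 28×28 map, m = 60, g = 3) are ours; the formulas are the ones in the text.

```python
def resnet_unit_flops(c, h, w, m):
    # 1x1 (c->m) + 3x3 (m->m) + 1x1 (m->c) bottleneck
    return h * w * (2 * c * m + 9 * m * m)


def resnext_unit_flops(c, h, w, m, g):
    # same, but the 3x3 is a group convolution with g groups
    return h * w * (2 * c * m + 9 * m * m // g)


def shufflenet_unit_flops(c, h, w, m, g):
    # grouped 1x1 convolutions (g groups) + 3x3 depthwise
    return h * w * (2 * c * m // g + 9 * m)


print(resnet_unit_flops(240, 28, 28, 60))         # 47980800
print(resnext_unit_flops(240, 28, 28, 60, 3))     # 31046400
print(shufflenet_unit_flops(240, 28, 28, 60, 3))  # 7949760
```

At a fixed budget, the roughly 6x gap lets ShuffleNet spend the saved FLOPs on wider feature maps, which is the point made in the next chunk.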
1707.01067 | 17 | # 3 Real-time strategy Games
Real-time strategy (RTS) games are considered to be one of the next grand AI challenges after Chess and Go [27]. In RTS games, players commonly gather resources, build units (facilities, troops, etc.), and explore the environment in the fog-of-war (i.e., regions outside the sight of units are invisible) to invade/defend the enemy, until one player wins. RTS games are known for their exponential and changing action space (e.g., 5^10 possible actions for 10 units with 5 choices each, and units of each player can be built/destroyed as the game advances), subtle game situations, and incomplete information due to limited sight and long-delayed rewards. Typically professional players take 200-300 actions per minute, and the game lasts for 20-30 minutes. | 1707.01067#17 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
1707.01083 | 17 | Layer       Output size  KSize  Stride  Repeat  Output channels (g groups)
                                                 g = 1  g = 2  g = 3  g = 4  g = 8
Image       224 × 224
Conv1       112 × 112    3 × 3  2       1       24     24     24     24     24
MaxPool     56 × 56      3 × 3  2
Stage2      28 × 28             2       1       144    200    240    272    384
            28 × 28             1       3       144    200    240    272    384
Stage3      14 × 14             2       1       288    400    480    544    768
            14 × 14             1       7       288    400    480    544    768
Stage4      7 × 7               2       1       576    800    960    1088   1536
            7 × 7               1       3       576    800    960    1088   1536
GlobalPool  1 × 1        7 × 7
FC                                              1000   1000   1000   1000   1000
Complexity                                      143M   140M   137M   133M   137M
Table 1. ShuffleNet architecture. The complexity is evaluated with FLOPs, i.e. the number of floating-point multiplication-adds. Note that for Stage 2, we do not apply group convolution on the first pointwise layer because the number of input channels is relatively small. | 1707.01083#17 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
1707.01067 | 18 | Very few existing RTS engines can be used directly for research. Commercial RTS games (e.g., StarCraft I/II) have sophisticated dynamics, interactions and graphics. The game play strategies have long been proven to be complex. Moreover, they are closed-source with unknown internal states, and cannot be easily utilized for research. Open-source RTS games like Spring [12], OpenRA [24] and Warzone 2100 [28] focus on complex graphics and effects, convenient user interface, stable network play, flexible map editors and plug-and-play mods (i.e., game extensions). Most of them use rule-based AIs, do not intend to run faster than real-time, and offer no straightforward interface
                  Realistic  Code  Resource  Rule AIs  Data AIs  RL backend
StarCraft I/II    High       No    High      Yes       No        No
TorchCraft        High       Yes   High      Yes       Yes       No
ORTS, BattleCode  Mid        Yes   Low       Yes       No        No
µRTS, MazeBase    Low        Yes   Low       Yes       Yes       No
Mini-RTS          Mid        Yes   Low       Yes       Yes       Yes
Table 1: Comparison between different RTS engines. | 1707.01067#18 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
1707.01083 | 18 | Model             Complexity (MFLOPs)  Classification error (%)
                                       g = 1  g = 2  g = 3  g = 4  g = 8
ShuffleNet 1×     140                  33.6   32.7   32.6   32.8   32.4
ShuffleNet 0.5×   38                   45.1   44.4   43.2   41.6   42.3
ShuffleNet 0.25×  13                   57.1   56.8   55.0   54.2   52.7
Table 2. Classification error vs. number of groups g (smaller number represents better performance)
number of groups for convolutions. In other words, given a computational budget, ShuffleNet can use wider feature maps. We find this is critical for small networks, as tiny networks usually have an insufficient number of channels to process the information. | 1707.01083#18 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
1707.01067 | 19 | Platform          Frame per second        Platform           Frame per second
ALE [4]           6,000                   DeepMind Lab [3]   287 (CPU) / 866 (GPU)
RLE [5]           530                     VizDoom [14]       ~7,000
Universe [33]     60                      TorchCraft [31]    2,000 (frameskip=50)
Malmo [13]        120                     Mini-RTS           40,000
Table 2: Frame rate comparison. Note that Mini-RTS does not render frames, but saves game information into a C structure which is used in Python without copying. For DeepMind Lab, FPS is 287 (CPU) and 866 (GPU) on a single 6-CPU + 1-GPU machine. Other numbers are on 1 CPU core.
with modern machine learning architectures. ORTS [8], BattleCode [2] and RoboCup Simulation League [16] are designed for coding competitions and focus on rule-based AIs. Research-oriented platforms (e.g., µRTS [23], MazeBase [29]) are fast and simple, and often come with various baselines, but their dynamics are much simpler than those of RTS games. Recently, TorchCraft [31] provided APIs for StarCraft I to access its internal game states. However, due to platform incompatibility, one Docker container is used to host each StarCraft engine, which is resource-consuming. Tbl. 1 summarizes the differences.
# 3.1 Our approach | 1707.01067#19 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 19 | In addition, in ShuffleNet depthwise convolution is performed only on bottleneck feature maps. Even though depthwise convolution usually has very low theoretical complexity, we find it difficult to implement efficiently on low-power mobile devices, which may result from a worse computation/memory access ratio compared with other dense operations. Such a drawback is also noted in [3], which has a runtime library based on TensorFlow [1]. In ShuffleNet units, we intentionally use depthwise convolution only on the bottleneck in order to prevent overhead as much as possible.
# 3.3. Network Architecture
Built on ShuffleNet units, we present the overall ShuffleNet architecture in Table 1. The proposed network is mainly composed of a stack of ShuffleNet units grouped into three stages. The first building block in each stage is applied with stride = 2. Other hyper-parameters within a stage stay the same, and for the next stage the output channels are doubled. Similar to [9], we set the number of bottleneck channels to 1/4 of the output channels for each ShuffleNet | 1707.01083#19 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 20 | # 3.1 Our approach
Many popular RTS games and their variants (e.g., StarCraft, DoTA, League of Legends, Tower Defense) share the same structure: a few units are controlled by a player to move, attack, gather, or cast special spells in order to influence their own or an enemy's army. With our command hierarchy, a new game can be created by changing (1) available commands, (2) available units, and (3) how each unit emits commands triggered by certain scenarios. For this, we offer simple yet effective tools. Researchers can change these variables either by adding commands in C++, or by writing game scripts (e.g., Lua). All derived games share the mechanism of hierarchical commands, replay, etc. Rule-based AIs can also be extended similarly. We provide the following three games: Mini-RTS, Capture the Flag and Tower Defense (Fig. 3(b)). These games share the following properties: | 1707.01067#20 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
unit. Our intent is to provide a reference design as simple as possible, although we find that further hyper-parameter tuning might generate better results.
In ShuffleNet units, the group number g controls the connection sparsity of pointwise convolutions. Table 1 explores different group numbers, and we adapt the output channels to keep the overall computation cost roughly unchanged (~140 MFLOPs). Obviously, larger group numbers result in more output channels (thus more convolutional filters) for a given complexity constraint, which helps to encode more information, though they might also lead to degradation of an individual convolutional filter due to its limited corresponding input channels. In Sec 4.1.1 we will study the impact of this number subject to different computational constraints.
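The budget trade-off above can be made concrete with a small multiply-add count. The helper below is an illustrative sketch, not code from the paper; the channel counts are chosen in the spirit of Table 1, pairing a larger g with wider feature maps at a similar cost.

```python
def pointwise_gconv_madds(h: int, w: int, c_in: int, c_out: int, g: int) -> int:
    """Multiply-adds of a 1x1 group convolution with g groups:
    each output channel only connects to c_in // g input channels."""
    assert c_in % g == 0 and c_out % g == 0
    return h * w * (c_in // g) * c_out

# More groups admit more channels at a similar budget:
print(pointwise_gconv_madds(28, 28, 240, 240, 3))  # 15052800
print(pointwise_gconv_madds(28, 28, 272, 272, 4))  # 14500864 (wider, yet cheaper)
```

Dividing the input channels into g groups cuts the cost by g, which is exactly the slack that lets the network widen its feature maps.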
To customize the network to a desired complexity, we can simply apply a scale factor s to the number of channels. For example, we denote the networks in Table 1 as "ShuffleNet 1×"; then "ShuffleNet s×" means scaling the number of filters in ShuffleNet 1× by s times, so the overall complexity will be roughly s² times that of ShuffleNet 1×.
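The s² rule can be sanity-checked numerically. This is only a sketch of the scaling argument, using the 140 MFLOPs figure for ShuffleNet 1× from Table 2; it is approximate because not every layer scales quadratically.

```python
def scaled_mflops(s: float, base_mflops: float = 140.0) -> float:
    """Rough complexity of "ShuffleNet s x": scaling every channel width by s
    scales each layer's (input channels x output channels) product by ~s^2."""
    return s * s * base_mflops

print(scaled_mflops(0.5))   # 35.0 (Table 2 reports 38 MFLOPs for ShuffleNet 0.5x)
print(scaled_mflops(0.25))  # 8.75 (Table 2 reports 13 MFLOPs for ShuffleNet 0.25x)
```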
# 4. Experiments | 1707.01083#20 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
Gameplay. Units in each game move with real coordinates, have dimensions and collision checks, and perform durative actions. The RTS engine is tick-driven. At each tick, AIs make decisions by sending commands to units based on observed information. Then commands are executed, the game's state changes, and the game continues. Despite a fairly complicated game mechanism, Mini-RTS is able to run 40K frames per second per core on a laptop, an order of magnitude faster than most existing environments. Therefore, bots can be trained in a day on a single machine.
Built-in hierarchical command levels. An agent could issue strategic commands (e.g., more aggressive expansion), tactical commands (e.g., hit and run), or micro-commands (e.g., move a particular unit backward to avoid damage). Ideally, strong agents master all levels; in practice, they may focus on a certain level of the command hierarchy and leave others to be covered by hard-coded rules. For this, our RTS engine uses a hierarchical command system that offers different levels of control over the game. A high-level command may affect all units by issuing low-level commands. A low-level, unit-specific durative command lasts a few ticks until completion, during which per-tick immediate commands are issued. | 1707.01067#21 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
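The three command levels described in the ELF chunk above (strategic high-level commands, durative low-level commands, per-tick immediate commands) can be sketched as a small expansion pipeline. All class and method names below are hypothetical illustrations, not ELF's actual C++ API.

```python
class DurativeCommand:
    """A low-level, unit-specific command lasting several ticks."""
    def __init__(self, ticks: int):
        self.ticks_left = ticks

    def on_tick(self) -> str:
        # Emit one immediate command per tick until completion.
        self.ticks_left -= 1
        return "move_one_step"

    def done(self) -> bool:
        return self.ticks_left <= 0


class HighLevelCommand:
    """A strategic command that expands into per-unit durative commands."""
    def expand(self, unit_ids):
        return {u: DurativeCommand(ticks=3) for u in unit_ids}


# One simulated tick for two units under a single high-level order.
pending = HighLevelCommand().expand(["tank1", "tank2"])
immediate = {u: cmd.on_tick() for u, cmd in pending.items()}
print(immediate)  # {'tank1': 'move_one_step', 'tank2': 'move_one_step'}
```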
1707.01083 | 21 | # 4. Experiments
We mainly evaluate our models on the ImageNet 2012 classification dataset [29, 4]. We follow most of the training settings and hyper-parameters used in [40], with two exceptions: (i) we set the weight decay to 4e-5 instead of
Model                      Cls err. (%, no shuffle)   Cls err. (%, shuffle)   Δ err. (%)
ShuffleNet 1× (g = 3)      34.5                       32.6                    1.9
ShuffleNet 1× (g = 8)      37.6                       32.4                    5.2
ShuffleNet 0.5× (g = 3)    45.7                       43.2                    2.5
ShuffleNet 0.5× (g = 8)    48.1                       42.3                    5.8
ShuffleNet 0.25× (g = 3)   56.3                       55.0                    1.3
ShuffleNet 0.25× (g = 8)   56.5                       52.7                    3.8

Table 3. ShuffleNet with/without channel shuffle (smaller number represents better performance) | 1707.01083#21 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 22 | Built-in rule-based AIs. We have designed rule-based AIs along with the environment. These AIs have access to all the information of the map and follow fixed strategies (e.g., build 5 tanks and attack the opponent base). These AIs act by sending high-level commands which are then translated to low-level ones and then executed.
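A fixed strategy like the example above ("build 5 tanks and attack the opponent base") amounts to a short decision list. The sketch below is a toy illustration with hypothetical state keys and action names, not the engine's built-in AI code.

```python
def fixed_strategy_ai(state: dict) -> str:
    """Toy rule-based policy: gather until affordable, build five tanks, then attack."""
    if state["num_tanks"] >= 5:
        return "attack_opponent_base"
    if state["resources"] >= state["tank_cost"]:
        return "build_tank"
    return "gather"

print(fixed_strategy_ai({"resources": 10, "tank_cost": 50, "num_tanks": 0}))   # gather
print(fixed_strategy_ai({"resources": 100, "tank_cost": 50, "num_tanks": 2}))  # build_tank
print(fixed_strategy_ai({"resources": 0, "tank_cost": 50, "num_tanks": 5}))    # attack_opponent_base
```

The determinism of such a policy is precisely what makes it exploitable, as discussed for SIMPLE below.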
With ELF, for the first time, we are able to train full-game bots for real-time strategy games and achieve stronger performance than built-in rule-based AIs. In contrast, existing RTS AIs are either
[Figure 4: bar charts of KFPS per CPU core for Mini-RTS and for Pong (Atari), measured with 1, 2, 4, 8 and 16 cores and 64 to 1024 threads.] | 1707.01067#22 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 22 | Table 3. ShuffleNet with/without channel shuffle (smaller number represents better performance)
1e-4 and use a linear-decay learning rate policy (decreased from 0.5 to 0); (ii) we use slightly less aggressive scale augmentation for data preprocessing. Similar modifications are also referenced in [12], because such small networks usually suffer from underfitting rather than overfitting. It takes 1 or 2 days to train a model for 3×10⁵ iterations on 4 GPUs, whose batch size is set to 1024. To benchmark, we compare single-crop top-1 performance on the ImageNet validation set, i.e. cropping a 224 × 224 center view from a 256× input image and evaluating classification accuracy. We use exactly the same settings for all models to ensure fair comparisons. | 1707.01083#22 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
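The linear-decay learning-rate policy in the ShuffleNet training setup above (0.5 decayed to 0 over 3×10⁵ iterations) can be written in one line; this is a sketch of the schedule, not the authors' training code.

```python
def linear_decay_lr(step: int, total_steps: int = 300_000, base_lr: float = 0.5) -> float:
    """Learning rate decreased linearly from base_lr to 0 over training."""
    return base_lr * (1.0 - step / total_steps)

print(linear_decay_lr(0))        # 0.5
print(linear_decay_lr(150_000))  # 0.25
print(linear_decay_lr(300_000))  # 0.0
```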
1707.01067 | 23 | Figure 4: Frame-per-second per CPU core (no hyper-threading) with respect to CPUs/threads. ELF (light-shaded) is 3x faster than OpenAI Gym [6] (dark-shaded) with 1024 threads. CPU involved in testing: Intel E5-2680v2 @ 2.80GHz.
rule-based or focused on tactics (e.g., 5 units vs. 5 units). We run experiments on the three games to justify the usability of our platform.
# 4 Experiments
# 4.1 Benchmarking ELF | 1707.01067#23 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 23 | (e.g. g = 8), the classification score saturates or even drops. With an increase in group number (thus wider feature maps), input channels for each convolutional filter become fewer, which may harm representation capability. Interestingly, we also notice that for smaller models such as ShuffleNet 0.25× larger group numbers tend to yield better results consistently, which suggests wider feature maps bring more benefits for smaller models.
# 4.1.2 Channel Shuffle vs. No Shuffle
# 4.1. Ablation Study
The core idea of ShuffleNet lies in the pointwise group convolution and channel shuffle operations. In this subsection we evaluate them respectively.
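The channel shuffle operation under evaluation is a reshape-transpose-flatten trick: view the channels as a (groups, channels-per-group) grid, transpose it, and flatten. Below is a minimal NumPy sketch of that operation, an illustration rather than the authors' implementation.

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
    """Shuffle channels of an NCHW tensor across `groups` groups:
    view channels as (groups, channels_per_group), transpose, flatten,
    so each group's output mixes channels coming from every group."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap the group and per-group axes
    return x.reshape(n, c, h, w)

# With g = 2 and channel order [0, 1 | 2, 3], shuffling yields [0, 2, 1, 3].
x = np.arange(4).reshape(1, 4, 1, 1)
print(channel_shuffle(x, 2).reshape(-1).tolist())  # [0, 2, 1, 3]
```

Because it is only a reshape and transpose, the operation is differentiable and adds essentially no FLOPs.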
# 4.1.1 Pointwise Group Convolutions | 1707.01083#23 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 24 | # 4 Experiments
# 4.1 Benchmarking ELF
We run ELF on a single server with different numbers of CPU cores to test the efficiency of parallelism. Fig. 4(a) shows the results when running Mini-RTS. We can see that ELF scales well with the number of CPU cores used to run the environments. We also embed the Atari emulator [4] into our platform and check the speed difference between a single-threaded ALE and parallelized ALE per core (Fig. 4(b)). While a single-threaded engine gives around 5.8K FPS on Pong, our parallelized ALE runs at a comparable speed (5.1K FPS per core) with up to 16 cores, while OpenAI Gym (with Python threads) runs 3x slower (1.7K FPS per core) with 16 cores and 1024 threads, and degrades with more cores. The number of threads matters for training since it determines how diverse the experiences can be, given the same number of CPUs. Apart from this, we observed that Python multiprocessing with Gym is even slower, due to heavy communication of game frames among processes. Note that we used no hyperthreading for all experiments.
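Figures like "5.1K FPS per core" above reduce to total simulation steps divided by wall time and core count. A hypothetical micro-benchmark harness might look like this; `step_env` and the no-op stand-in environment are illustrative placeholders, not ELF's API.

```python
import time

def fps_per_core(step_env, num_steps: int, num_cores: int) -> float:
    """Measure throughput of `step_env` (one simulation step) per CPU core."""
    start = time.perf_counter()
    for _ in range(num_steps):
        step_env()
    elapsed = time.perf_counter() - start
    return num_steps / elapsed / num_cores

# A trivial stand-in environment whose step is a no-op.
rate = fps_per_core(lambda: None, num_steps=100_000, num_cores=1)
print(f"{rate:,.0f} FPS per core")
```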
# 4.2 Baselines on Real-time Strategy Games | 1707.01067#24 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 24 | # 4.1.1 Pointwise Group Convolutions
To evaluate the importance of pointwise group convolutions, we compare ShuffleNet models of the same complexity whose numbers of groups range from 1 to 8. If the group number equals 1, no pointwise group convolution is involved, and the ShuffleNet unit becomes an "Xception-like" [3] structure. For better understanding, we also scale the width of the networks to 3 different complexities and compare their classification performance respectively. Results are shown in Table 2. | 1707.01083#24 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 25 | # 4.2 Baselines on Real-time Strategy Games
We focus on 1-vs-1 full games between trained AIs and built-in AIs. Built-in AIs have access to full information (e.g., the number of the opponent's tanks), while trained AIs know only partial information in the fog of war, i.e., the game environment within the sight of their own units. There are exceptions: in Mini-RTS, the location of the opponent's base is known so that the trained AI can attack; in Capture the Flag, the flag location is known to all; Tower Defense is a game of complete information.
Details of Built-in AI. For Mini-RTS there are two rule-based AIs: SIMPLE gathers, builds five tanks and then attacks the opponent base. HIT N RUN often harasses, builds and attacks. For Capture the Flag, we have one built-in AI. For Tower Defense (TD), no AI is needed. We tested our built-in AIs against a human player and found they are strong in combat but exploitable. For example, SIMPLE is vulnerable to hit-and-run style harassment. As a result, a human player has a win rate of 90% and 50% against SIMPLE and HIT N RUN, respectively, in 20 games. | 1707.01067#25 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 25 | From the results, we see that models with group convolutions (g > 1) consistently perform better than the counterparts without pointwise group convolutions (g = 1). Smaller models tend to benefit more from groups. For example, for ShuffleNet 1× the best entry (g = 8) is 1.2% better than the counterpart, while for ShuffleNet 0.5× and 0.25× the gaps become 3.5% and 4.4% respectively. Note that group convolution allows more feature map channels for a given complexity constraint, so we hypothesize that the performance gain comes from wider feature maps which help to encode more information. In addition, a smaller network involves thinner feature maps, meaning it benefits more from enlarged feature maps.
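The channel-budget argument above can be made concrete with a quick multiply-add count. This is an illustrative sketch (the function name and the shapes are ours, not from the paper): a 1×1 group convolution with g groups costs 1/g of its dense counterpart, so at a fixed budget the grouped layer can afford proportionally wider feature maps.

```python
def pointwise_conv_flops(h, w, c_in, c_out, groups=1):
    # 1x1 convolution: each output channel sees only c_in // groups inputs
    assert c_in % groups == 0 and c_out % groups == 0
    return h * w * c_out * (c_in // groups)

dense = pointwise_conv_flops(28, 28, 240, 240)             # g = 1
grouped = pointwise_conv_flops(28, 28, 240, 240, groups=8)
print(dense // grouped)  # 8: same output shape at 1/8 the cost
```

At equal FLOPs, the grouped layer could instead carry roughly 8× more channels, which is the "wider feature maps" effect hypothesized above.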
Table 2 also shows that for some models (e.g. ShuffleNet 0.5×) when group numbers become relatively large | 1707.01083#25 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 26 | Action Space. For simplicity, we use 9 strategic (and thus global) actions with hard-coded execution details. For example, the AI may issue BUILD BARRACKS, which automatically picks a worker to build barracks at an empty location, if the player can afford it. Although this setting is simple, detailed commands (e.g., a command per unit) can easily be set up, which bears more resemblance to StarCraft. A similar setting applies to Capture the Flag and Tower Defense. Please check the Appendix for detailed descriptions.
Rewards. For Mini-RTS, the agent only receives a reward when the game ends (±1 for win/loss). An average game of Mini-RTS lasts for around 4000 ticks, which results in 80 decisions for a frame skip of 50, showing that the reward in the game is indeed delayed. For Capture the Flag, we give intermediate rewards when the flag moves towards the player's own base (one score when the flag "touches down"). In Tower Defense, an intermediate penalty is given if enemy units are leaked.
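The 80-decision figure follows directly from the average game length and the frame skip; as a quick arithmetic check:

```python
game_ticks = 4000   # average Mini-RTS game length, in ticks
frame_skip = 50     # the agent acts once every 50 ticks
decisions_per_game = game_ticks // frame_skip
print(decisions_per_game)  # 80 decisions before the single win/loss reward
```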
# Gym | 1707.01067#26 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 26 | Table 2 also shows that for some models (e.g. ShuffleNet 0.5×) when group numbers become relatively large
The purpose of the shuffle operation is to enable cross-group information flow across multiple group convolution layers. Table 3 compares the performance of ShuffleNet structures (group number set to 3 or 8 for instance) with and without channel shuffle. The evaluations are performed under three different scales of complexity. It is clear that channel shuffle consistently boosts classification scores for different settings. Especially, when the group number is relatively large (e.g. g = 8), models with channel shuffle outperform the counterparts by a significant margin, which shows the importance of cross-group information interchange.
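The channel shuffle operation described here is commonly implemented as a reshape-transpose-reshape; a minimal NumPy sketch (array layout and names are ours):

```python
import numpy as np

def channel_shuffle(x, groups):
    # (N, C, H, W) -> split C into (groups, C // groups), swap, flatten back
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

# 6 channels in 3 groups: the new channel order interleaves the groups
x = np.arange(6, dtype=np.float32).reshape(1, 6, 1, 1)
print(channel_shuffle(x, 3).ravel())  # [0. 2. 4. 1. 3. 5.]
```

After the shuffle, each group of a following group convolution receives one channel from every input group, which is exactly the cross-group flow the text motivates.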
# 4.2. Comparison with Other Structure Units | 1707.01083#26 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 27 | # Gym
| Frameskip | SIMPLE | HIT N RUN |
|---|---|---|
| 50 | 68.4 (± 4.3) | 63.6 (± 7.9) |
| 20 | 61.4 (± 5.8) | 55.4 (± 4.7) |
| 10 | 52.8 (± 2.4) | 51.1 (± 5.0) |

| | Capture Flag | Tower Defense |
|---|---|---|
| Random | 0.7 (± 0.9) | 36.3 (± 0.3) |
| Trained AI | 59.9 (± 7.4) | 91.0 (± 7.6) |
Table 3: Win rate of A3C models competing with built-in AIs over 10k games. Left: Mini-RTS, where the frame skip of the trained AI is 50. Right: for Capture the Flag, the frame skip of the trained AI is 10 while the opponent's is 50; for Tower Defense, the frame skip of the trained AI is 50 and there is no opponent AI. | 1707.01067#27 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 27 | # 4.2. Comparison with Other Structure Units
leading convolutional units in VGG [30], ResNet [9], GoogleNet [33], ResNeXt [40] and Xception [3] have pursued state-of-the-art results with large models (e.g. ≥ 1 GFLOPs), but do not fully explore low-complexity conditions. In this section we survey a variety of building blocks and make comparisons with ShuffleNet under the same complexity constraint.
For fair comparison, we use the overall network architecture as shown in Table 1. We replace the ShuffleNet units in Stages 2-4 with other structures, then adapt the number of channels to ensure the complexity remains unchanged. The structures we explored include:
⢠VGG-like. Following the design principle of VGG net [30], we use a two-layer 3Ã3 convolutions as the basic building block. Different from [30], we add a Batch Normalization layer [15] after each of the con- volutions to make end-to-end training easier.
| Complexity (MFLOPs) | 140 | 38 | 13 |
|---|---|---|---|
| VGG-like | 50.7 | - | - |
| ResNet | 37.3 | 48.8 | 63.7 |
| Xception-like | 33.6 | 45.1 | 57.1 |
| ResNeXt | 33.3 | 46.0 | 65.2 |

Table 4. Classification error (%) vs. various structures. | 1707.01083#27 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 28 |
| Game | | ReLU | Leaky ReLU | BN | Leaky ReLU + BN |
|---|---|---|---|---|---|
| Mini-RTS SIMPLE | Median | 52.8 | 59.8 | 61.0 | 72.2 |
| | Mean (± std) | 54.7 (± 4.2) | 61.0 (± 2.6) | 64.4 (± 7.4) | 68.4 (± 4.3) |
| Mini-RTS HIT N RUN | Median | 60.4 | 60.2 | 55.6 | 65.5 |
| | Mean (± std) | 57.0 (± 6.8) | 60.3 (± 3.3) | 57.5 (± 6.8) | 63.6 (± 7.9) |
Table 4: Win rate in % of A3C models using different network architectures. The frame skip of both sides is 50 ticks. The fact that the medians differ from the means shows that different instances of A3C could converge to very different solutions.
# 4.2.1 A3C baseline | 1707.01067#28 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01067 | 29 | Next, we describe our baselines and their variants. Note that while we refer to these as baselines, we are the first to demonstrate end-to-end trained AIs for real-time strategy (RTS) games with partial information. For all games, we randomize the initial game states for more diverse experience and use A3C [21] to train AIs to play the full game. We run all experiments 5 times and report mean and standard deviation. We use simple convolutional networks with two heads, one for actions and the other for values. The input features are composed of spatially structured (20-by-20) abstractions of the current game environment with multiple channels. At each (rounded) 2D location, the type and hit point of the unit at that location are quantized and written to their corresponding channels. For Mini-RTS, we also add an additional constant channel filled with the current resource of the player. The input feature only contains the units within the sight of one player, respecting the properties of fog-of-war. For Capture the Flag, immediate action is required in specific situations (e.g., when the opponent just gets the flag) and A3C does not give good performance. Therefore we use frame skip | 1707.01067#29 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
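The spatial feature encoding just described can be sketched as follows. This is a hypothetical illustration (the function name, unit-tuple layout, and channel ordering are our assumptions, not the ELF API): one presence channel per unit type, one hit-point channel, and a constant resource channel, with only units inside the player's sight passed in.

```python
import numpy as np

def encode_state(visible_units, n_unit_types, resource, map_size=20):
    # Channels: [0..n_unit_types-1] unit-type presence, [n_unit_types] hit points,
    # [n_unit_types + 1] constant channel with the player's current resource.
    feat = np.zeros((n_unit_types + 2, map_size, map_size), dtype=np.float32)
    for x, y, utype, hp_frac in visible_units:  # fog-of-war: visible units only
        cx, cy = int(round(x)), int(round(y))   # rounded 2D location
        feat[utype, cy, cx] = 1.0
        feat[n_unit_types, cy, cx] = hp_frac
    feat[n_unit_types + 1] = resource
    return feat

state = encode_state([(3.4, 7.8, 2, 0.5)], n_unit_types=5, resource=0.3)
print(state.shape)  # (7, 20, 20)
```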
1707.01083 | 29 |
| Model | Complexity (MFLOPs) | Cls err. (%) | Δ err. (%) |
|---|---|---|---|
| 1.0 MobileNet-224 | 569 | 29.4 | - |
| ShuffleNet 2× (g = 3) | 524 | 26.3 | 3.1 |
| ShuffleNet 2× (with SE [13], g = 3) | 527 | 24.7 | 4.7 |
| 0.75 MobileNet-224 | 325 | 31.6 | - |
| ShuffleNet 1.5× (g = 3) | 292 | 28.5 | 3.1 |
| 0.5 MobileNet-224 | 149 | 36.3 | - |
| ShuffleNet 1× (g = 8) | 140 | 32.4 | 3.9 |
| 0.25 MobileNet-224 | 41 | 49.4 | - |
| ShuffleNet 0.5× (g = 4) | 38 | 41.6 | 7.8 |
| ShuffleNet 0.5× (shallow, g = 3) | 40 | 42.8 | 6.6 |
Table 5. ShuffleNet vs. MobileNet [12] on ImageNet Classification
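The Δ err. column is simply the error gap against the MobileNet baseline of comparable complexity; e.g. for ShuffleNet 2× (g = 3) versus 1.0 MobileNet-224:

```python
mobilenet_err, shufflenet_err = 29.4, 26.3  # Cls err. (%) at ~525-570 MFLOPs
delta_err = round(mobilenet_err - shufflenet_err, 1)
print(delta_err)  # 3.1
```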
⢠ResNet. We adopt the âbottleneckâ design in our ex- periment, which has been demonstrated more efï¬cient in [9] . Same as [9], the bottleneck ratio1 is also 1 : 4. | 1707.01083#29 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01083 | 30 | the increase of accuracy. Owing to the efficient design of ShuffleNet, we can use more channels for a given computation budget, thus usually resulting in better performance.
⢠Xception-like. The original structure proposed in [3] involves fancy designs or hyper-parameters for differ- ent stages, which we ï¬nd difï¬cult for fair comparison Instead, we remove the pointwise on small models. group convolutions and channel shufï¬e operation from Shufï¬eNet (also equivalent to Shufï¬eNet with g = 1). The derived structure shares the same idea of âdepth- wise separable convolutionâ as in [3], which is called an Xception-like structure here.
⢠ResNeXt. We use the settings of cardinality = 16 and bottleneck ratio = 1 : 2 as suggested in [40]. We also explore other settings, e.g. bottleneck ratio = 1 : 4, and get similar results. | 1707.01083#30 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
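The saving behind the Xception-like (depthwise separable) unit described above can be sketched with the usual multiply-add counts; the formulas are standard, but the concrete shapes here are illustrative numbers of ours, not from the paper:

```python
def conv_flops(h, w, c_in, c_out, k):
    # standard k x k convolution: every output channel sees every input channel
    return h * w * c_in * c_out * k * k

def depthwise_separable_flops(h, w, c_in, c_out, k):
    # k x k depthwise pass followed by a 1 x 1 pointwise pass
    return h * w * c_in * k * k + h * w * c_in * c_out

std = conv_flops(28, 28, 128, 128, 3)
sep = depthwise_separable_flops(28, 28, 128, 128, 3)
print(round(std / sep, 1))  # 8.4: roughly 8x cheaper at these shapes
```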
1707.01067 | 31 | Note that there are several factors affecting the AI performance.
Frame-skip. A frame skip of 50 means that the AI acts every 50 ticks, etc. Against an opponent with a low frame skip (fast-acting), A3C's performance is generally lower (Fig. 3). When the opponent has a high frame skip (e.g., 50 ticks), the trained agent is able to find a strategy that exploits the long-delayed nature of the opponent. For example, in Mini-RTS it will send two tanks to the opponent's base. When one tank is destroyed, the opponent does not attack the other tank until the next 50-divisible tick comes. Interestingly, the trained model can adapt to different frame rates and learns to develop different strategies against faster-acting opponents. For Capture the Flag, the trained bot learns to win 60% of games over the built-in AI, with an advantage in frame skip. With equal frame skips, the trained AI's performance is low. | 1707.01067#31 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 31 | include GoogleNet or Inception series [33, 34, 32]. We find it non-trivial to generate such Inception structures for small networks because the original design of the Inception module involves too many hyper-parameters. As a reference, the first GoogleNet version [33] has 31.3% top-1 error at the cost of 1.5 GFLOPs (see Table 6). More sophisticated Inception versions [34, 32] are more accurate; however, they involve significantly increased complexity. Recently, Kim et al. propose a lightweight network structure named PVANET [19] which adopts Inception units. Our reimplemented PVANET (with 224×224 input size) has 29.7% classification error with a computation complexity of 557 MFLOPs, while our ShuffleNet 2× model (g = 3) gets 26.3% with 524 MFLOPs (see Table 6). | 1707.01083#31 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 32 | Network Architectures. Since the input is sparse and heterogeneous, we experiment with CNN architectures with Batch Normalization [11] and Leaky ReLU [18]. BatchNorm stabilizes the gradient flow by normalizing the outputs of each filter. Leaky ReLU preserves the signal of negative linear responses, which is important in scenarios where the input features are sparse. Tbl. 4 shows that these two modifications both improve and stabilize the performance. Furthermore, they are complementary to each other when combined.
History length. History length T affects the convergence speed, as well as the final performance of A3C (Fig. 5). While Vanilla A3C [21] uses T = 5 for Atari games, the reward in Mini-RTS is much more delayed (~ 80 actions before a reward). In this case, the T-step estimation of reward
[Figure 5 plots: win rate against AI_SIMPLE (left) and AI_HIT_AND_RUN (right) vs. samples used (in thousands), for T = 4, 8, 12, 16.] | 1707.01067#32 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 32 | We use exactly the same settings to train these models. Results are shown in Table 4. Our ShuffleNet models outperform most others by a significant margin under different complexities. Interestingly, we find an empirical relationship between feature map channels and classification accuracy. For example, under the complexity of 38 MFLOPs, the output channels of Stage 4 (see Table 1) for the VGG-like, ResNet, ResNeXt, Xception-like, and ShuffleNet models are 50, 192, 192, 288, 576 respectively, which is consistent with
¹ In the bottleneck-like units (like ResNet, ResNeXt or ShuffleNet) the bottleneck ratio implies the ratio of bottleneck channels to output channels. For example, bottleneck ratio = 1 : 4 means the output feature map is 4 times the width of the bottleneck feature map.
# 4.3. Comparison with MobileNets and Other Frameworks
Recently, Howard et al. have proposed MobileNets [12], which mainly focus on efficient network architectures for mobile devices. MobileNet takes the idea of depthwise separable convolution from [3] and achieves state-of-the-art results on small models. | 1707.01083#32 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 33 | Figure 5: Win rate in Mini-RTS with respect to the amount of experience at different steps T in A3C. Note that one sample (with history) in T = 2 is equivalent to two samples in T = 1. Longer T shows superior performance to small step counterparts, even if their samples are more expensive.
Figure 6: Game screenshots between trained AI (blue) and built-in SIMPLE (red). Player colors are shown on the boundary of hit point gauges. (a) Trained AI rushes the opponent using an early advantage. (b) Trained AI attacks one opponent unit at a time. (c) Trained AI defends enemy invasion by blocking their ways. (d)-(e) Trained AI uses one long-range attacker (top) to distract enemy units and one melee attacker to attack the enemy's base.
Note that the return $R = \sum_{t=1}^{T} \gamma^{t-1} r_t + \gamma^T V(s_T)$ used in A3C does not yield a good estimation of the true reward if $V(s_T)$ is inaccurate, in particular for small T. For other experiments we use T = 6.
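This T-step bootstrapped return used in A3C can be spelled out in a few lines of plain Python (a sketch with our own variable names, not code from ELF):

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """T-step return: R = sum_{t=1..T} gamma^(t-1) * r_t + gamma^T * V(s_T)."""
    T = len(rewards)
    discounted = sum(gamma ** t * r for t, r in enumerate(rewards))
    return discounted + gamma ** T * bootstrap_value

# With small T the estimate leans heavily on the (possibly inaccurate)
# value prediction; with larger T more real reward enters the estimate.
short_horizon = n_step_return([1.0], bootstrap_value=5.0)
long_horizon = n_step_return([1.0] * 6, bootstrap_value=5.0)
```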
Interesting behaviors. The trained AI learns to act promptly and to use sophisticated strategies (Fig. 6). Multiple videos are available at https://github.com/facebookresearch/ELF. | 1707.01067#33 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01083 | 33 | Table 5 compares classification scores under a variety of complexity levels. It is clear that our ShuffleNet models are superior to MobileNet across all complexities. Though our ShuffleNet network is specially designed for small models (< 150 MFLOPs), we find it is still better than MobileNet
Model                      Cls err. (%)   Complexity (MFLOPs)
VGG-16 [30]                28.5           15300
ShuffleNet 2× (g = 3)      26.3           524
GoogleNet [33]*            31.3           1500
ShuffleNet 1× (g = 8)      32.4           140
AlexNet [21]               42.8           720
SqueezeNet [14]            42.5           833
ShuffleNet 0.5× (g = 4)    41.6           38
Table 6. Complexity comparison. *Implemented by BVLC (https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet) | 1707.01083#33 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
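The channel shuffle operation named in the ShuffleNet abstract above can be illustrated on a flat channel list (a pure-Python sketch; the real operation permutes the channel axis of an N x C x H x W tensor via reshape and transpose):

```python
def channel_shuffle(channels, groups):
    # Reshape the channel list to (groups, channels_per_group), transpose,
    # and flatten, so outputs of different groups get interleaved.
    per_group = len(channels) // groups
    assert len(channels) == groups * per_group
    return [channels[g * per_group + i]
            for i in range(per_group)
            for g in range(groups)]

channel_shuffle([0, 1, 2, 3, 4, 5], groups=2)  # -> [0, 3, 1, 4, 2, 5]
```

Without this interleaving, stacked group convolutions would only ever see inputs from their own group; the shuffle lets information flow across groups at negligible cost.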
1707.01067 | 34 | Interesting behaviors. The trained AI learns to act promptly and to use sophisticated strategies (Fig. 6). Multiple videos are available at https://github.com/facebookresearch/ELF.
# 4.2.2 Curriculum Training
We find that curriculum training plays an important role in training AIs. All AIs shown in Tbl. 3 and Tbl. 4 are trained with curriculum training. For Mini-RTS, we let the built-in AI play the first k ticks, where k ~ Uniform(0, 1000), then switch to the AI being trained. This (1) reduces the difficulty of the game initially and (2) provides diverse situations during training to avoid local minima. During training, the aid of the built-in AI is gradually reduced until no aid is given. All reported win rates are obtained by running the trained agents alone with a greedy policy. | 1707.01067#34 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
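The curriculum scheme described in the chunk above (the built-in AI plays the first k ~ Uniform(0, 1000) ticks, and its aid is gradually reduced over training) can be sketched as follows; the helper names are ours, not ELF's API:

```python
import random

def sample_handover_tick(max_tick=1000):
    # The built-in AI controls the first k ticks, k ~ Uniform(0, 1000);
    # afterwards the agent being trained takes over.
    return random.randint(0, max_tick)

def aid_probability(step, total_steps):
    # Linearly anneal the built-in AI's assistance from 1 to 0 over training.
    return max(0.0, 1.0 - step / total_steps)
```

Starting mid-game both makes early episodes easier and exposes the learner to a wider variety of states than always starting from the opening position.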
1707.01083 | 34 | Model                          mAP [.5, .95] (300× image)   mAP [.5, .95] (600× image)
ShuffleNet 2× (g = 3)          18.7%                        25.0%
ShuffleNet 1× (g = 3)          14.5%                        19.8%
1.0 MobileNet-224 [12]         16.4%                        19.8%
1.0 MobileNet-224 (our impl.)  14.9%                        19.3%
Table 7. Object detection results on MS COCO (larger numbers represent better performance). For MobileNets we compare two results: 1) COCO detection scores reported by [12]; 2) fine-tuning from our reimplemented MobileNets, whose training and fine-tuning settings are exactly the same as those for ShuffleNets. | 1707.01083#34 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 35 | We list the comparison with and without curriculum training in Tbl. 6. It is clear that performance improves with curriculum training. Similarly, when fine-tuning models pre-trained against one type of opponent towards a mixture of opponents (e.g., 50% SIMPLE + 50% HIT_N_RUN), curriculum training is critical for better performance (Tbl. 5). Tbl. 5 also shows that AIs trained against one built-in AI cannot do very well against another built-in AI in the same game. This demonstrates that training with diverse agents is important for obtaining AIs with low exploitability.
Game: Mini-RTS            vs SIMPLE      vs HIT_N_RUN   vs Combined
SIMPLE                    68.4 (±4.3)    26.6 (±7.6)    47.5 (±5.1)
HIT_N_RUN                 34.6 (±13.1)   63.6 (±7.9)    49.1 (±10.5)
Combined (no curriculum)  49.4 (±10.0)   46.0 (±15.3)   47.7 (±11.0)
Combined                  51.8 (±10.6)   54.7 (±11.2)   53.2 (±8.5) | 1707.01067#35 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01083 | 35 | Model             Cls err. (%)   FLOPs   224×224    480×640    720×1280
ShuffleNet 0.5× (g = 3)   43.2           38M     15.2ms     87.4ms     260.1ms
ShuffleNet 1× (g = 3)     32.6           140M    37.8ms     222.2ms    684.5ms
ShuffleNet 2× (g = 3)     26.3           524M    108.8ms    617.0ms    1857.6ms
AlexNet [21]              42.8           720M    184.0ms    1156.7ms   3633.9ms
1.0 MobileNet-224 [12]    29.4           569M    110.0ms    612.0ms    1879.2ms
Table 8. Actual inference time on mobile device (smaller number represents better performance). The platform is based on a single Qualcomm Snapdragon 820 processor. All results are evaluated with single thread. | 1707.01083#35 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 36 |         vs SIMPLE      vs HIT_N_RUN   vs Combined
SIMPLE                    68.4 (±4.3)    26.6 (±7.6)    47.5 (±5.1)
HIT_N_RUN                 34.6 (±13.1)   63.6 (±7.9)    49.1 (±10.5)
Combined (no curriculum)  49.4 (±10.0)   46.0 (±15.3)   47.7 (±11.0)
Combined                  51.8 (±10.6)   54.7 (±11.2)   53.2 (±8.5)
Table 5: Training with specific/combined AIs (rows: training opponent; columns: test opponent). Frame skip of both sides is 50. When playing against combined AIs (50% SIMPLE + 50% HIT_N_RUN), curriculum training is particularly important.
Game                 No curriculum training   With curriculum training
Mini-RTS SIMPLE      66.0 (±2.4)              68.4 (±4.3)
Mini-RTS HIT_N_RUN   54.4 (±15.9)             63.6 (±7.9)
Capture the Flag     54.2 (±20.0)             59.9 (±7.4)
Table 6: Win rate of A3C models with and without curriculum training. Mini-RTS: Frame skip of both sides is 50 ticks. Capture the Flag: Frame skip of the trained AI is 10, while the opponent's is 50. The standard deviation of win rates is large due to the instability of A3C training. For example, in Capture the Flag the highest win rate reaches 70% while the lowest is only 27%. | 1707.01067#36 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01083 | 36 | Table 8. Actual inference time on mobile device (smaller number represents better performance). The platform is based on a single Qualcomm Snapdragon 820 processor. All results are evaluated with single thread.
for higher computation cost, e.g. 3.1% more accurate than MobileNet 1× at the cost of 500 MFLOPs. For smaller networks (~40 MFLOPs) ShuffleNet surpasses MobileNet by 7.8%. Note that our ShuffleNet architecture contains 50 layers while MobileNet only has 28 layers. For better understanding, we also try ShuffleNet on a 26-layer architecture by removing half of the blocks in Stages 2-4 (see "ShuffleNet 0.5× shallow (g = 3)" in Table 5). Results show that the shallower model is still significantly better than the corresponding MobileNet, which implies that the effectiveness of ShuffleNet mainly results from its efficient structure, not the depth. | 1707.01083#36 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01083 | 37 | state-of-the-art results on large ImageNet models. We find SE modules also take effect in combination with the backbone ShuffleNets, for instance, improving the top-1 error of ShuffleNet 2× to 24.7% (shown in Table 5). Interestingly, despite a negligible increase in theoretical complexity, we find ShuffleNets with SE modules are usually 25-40% slower than the "raw" ShuffleNets on mobile devices, which implies that actual speedup evaluation is critical in low-cost architecture design. We discuss this further in Sec 4.5.
# 4.4. Generalization Ability
Table 6 compares our ShuffleNet with a few popular models. Results show that with similar accuracy ShuffleNet is much more efficient than the others. For example, ShuffleNet 0.5× is theoretically 18× faster than AlexNet [21] with a comparable classification score. We will evaluate the actual running time in Sec 4.5. | 1707.01083#37 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 38 | Table 7: Win rate using MCTS over 1000 games. Both players use a frameskip of 50.
# 4.2.3 Monte-Carlo Tree Search
Monte-Carlo Tree Search (MCTS) can be used for planning when complete information about the game is known. This includes the complete state s without fog-of-war, and the precise forward model s' = s'(s, a). Rooted at the current game state, MCTS builds a game tree that is biased towards paths with high win rate. Leaves are expanded with all candidate moves, and the win rate estimation is computed by random self-play until the game ends. We use 8 threads, each with 100 rollouts. We use root parallelization [9], in which each thread independently expands a tree, and the trees are combined to get the most visited action. As shown in Tbl. 7, MCTS achieves a comparable win rate to models trained with RL. Note that the win rates of the two methods are not directly comparable, since RL methods have no knowledge of the game dynamics, and their state knowledge is reduced by the limits introduced by the fog-of-war. Also, MCTS runs much slower (2-3 sec per move) than the trained RL AI (< 1 msec per move).
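The root-parallelization step, combining independently grown trees by summing their root visit counts, can be sketched as follows (illustrative action names and visit counts, not ELF's interface):

```python
from collections import Counter

def combine_root_visits(per_tree_visits):
    # Root parallelization: each thread grows its own tree; the per-tree
    # root visit counts are summed and the most visited action is returned.
    total = Counter()
    for visits in per_tree_visits:
        total.update(visits)
    return total.most_common(1)[0][0]

best = combine_root_visits([
    {"attack": 40, "build": 35, "idle": 25},
    {"attack": 30, "build": 50, "idle": 20},
    {"attack": 45, "build": 25, "idle": 30},
])
```

Because each thread explores independently, this needs no locking on a shared tree, at the cost of some duplicated search effort.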
# 5 Conclusion and Future Work | 1707.01067#38 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01067 | 39 | # 5 Conclusion and Future Work
In this paper, we propose ELF, a research-oriented platform for concurrent game simulation which offers an extensive set of game play options, a lightweight game simulator, and a flexible environment. Based on ELF, we build an RTS game engine and three initial environments (Mini-RTS, Capture the Flag and Tower Defense) that run at 40K FPS per core on a laptop. As a result, a full-game bot in these games can be trained end-to-end in one day using a single machine. In addition to the platform, we provide throughput benchmarks of ELF, and extensive baseline results using state-of-the-art RL methods (e.g., A3C [21]) on Mini-RTS, and show interesting learnt behaviors.
ELF opens up many possibilities for future research. With this lightweight and flexible platform, RL methods on RTS games can be explored in an efficient way, including forward modeling, hierarchical RL, planning under uncertainty, RL with complicated action spaces, and so on. Furthermore, the exploration can be done with an affordable amount of resources. As future work, we will continue improving the platform and build a library of maps and bots to compete with.
# References | 1707.01067#39 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01083 | 39 | To evaluate the generalization ability for transfer learning, we test our ShuffleNet model on the task of MS COCO object detection [23]. We adopt Faster-RCNN [28] as the detection framework and use the publicly released Caffe code [28, 17] for training with default settings. Similar to [12], the models are trained on the COCO train+val dataset excluding 5000 minival images, and we conduct testing on the minival set. Table 7 shows the comparison of results trained and evaluated at two input resolutions. Comparing ShuffleNet 2× with MobileNet, whose complexities are comparable (524 vs. 569 MFLOPs), our ShuffleNet 2× surpasses MobileNet by a significant margin at both resolutions; our ShuffleNet 1× also achieves comparable results with MobileNet at 600× resolution, but has ~4× complexity reduction. We conjecture that this significant gain is partly due to ShuffleNet's simple design of architecture, without bells and whistles.
# 4.5. Actual Speedup Evaluation | 1707.01083#39 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 40 | # References
[1] Mohammad Babaeizadeh, Iuri Frosio, Stephen Tyree, Jason Clemons, and Jan Kautz. Reinforcement learning through asynchronous advantage actor-critic on a GPU. International Conference on Learning Representations (ICLR), 2017.
[2] BattleCode. Battlecode, MIT's AI programming competition: https://www.battlecode.org/. 2000. URL https://www.battlecode.org/.
[3] Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. DeepMind Lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801.
| 1707.01067#40 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01083 | 40 | # 4.5. Actual Speedup Evaluation
Finally, we evaluate the actual inference speed of ShuffleNet models on a mobile device with an ARM platform. Though ShuffleNets with larger group numbers (e.g. g = 4 or g = 8) usually have better performance, we find them less efficient in our current implementation. Empirically, g = 3 usually offers a proper trade-off between accuracy and actual inference time. As shown in Table 8, three input resolutions are exploited for the test. Due to memory access and other overheads, we find every 4× theoretical complexity reduction usually results in a ~2.6× actual speedup in our implementation. Nevertheless, compared with AlexNet [21] our ShuffleNet 0.5× model still achieves ~13× actual speedup under comparable classification accuracy (the theoretical speedup is 18×), which is much faster than previous AlexNet-level models or speedup approaches such as [14, 16, 22, 42, 43, 38].
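The theoretical-vs-actual gap can be checked directly from the Table 8 numbers (FLOPs and single-thread latencies for ShuffleNet 0.5× (g = 3) vs. AlexNet):

```python
# FLOPs and 224x224 latencies from Table 8.
alexnet_flops, alexnet_ms_224 = 720e6, 184.0
shufflenet_flops, shufflenet_ms_224 = 38e6, 15.2

theoretical_speedup = alexnet_flops / shufflenet_flops    # ~18.9x
actual_speedup_224 = alexnet_ms_224 / shufflenet_ms_224   # ~12.1x
# At larger inputs the measured gap is closer to the quoted ~13x,
# e.g. 1156.7ms / 87.4ms at 480x640.
actual_speedup_480 = 1156.7 / 87.4
```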
# References | 1707.01083#40 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 41 | 9
[4] Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. CoRR, abs/1207.4708, 2012. URL http://arxiv.org/abs/1207.4708.
[5] Nadav Bhonker, Shai Rozenberg, and Itay Hubara. Playing SNES in the retro learning envi- ronment. CoRR, abs/1611.02205, 2016. URL http://arxiv.org/abs/1611.02205.
[6] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. CoRR, abs/1606.01540, 2016. URL http://arxiv. org/abs/1606.01540.
[7] Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012. | 1707.01067#41 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 41 | # References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. 4
[2] H. Bagherinezhad, M. Rastegari, and A. Farhadi. Lcnn: Lookup-based convolutional neural network. arXiv preprint arXiv:1611.06473, 2016. 2
[3] F. Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016. 1, 2, 3, 4, 5, 6
[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009. 1, 4 | 1707.01083#41 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 42 | [8] Michael Buro and Timothy Furtak. On the development of a free RTS game engine. In GameOnNA Conference, pages 23–27, 2005.
[9] Guillaume MJ-B Chaslot, Mark HM Winands, and H Jaap van Den Herik. Parallel Monte-Carlo tree search. In International Conference on Computers and Games, pages 60–71. Springer, 2008.
[10] Erwin Coumans. Bullet physics engine. Open Source Software: http://bulletphysics.org, 2010.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015.
[12] Stefan Johansson and Robin Westberg. Spring: https://springrts.com/. 2008. URL https: //springrts.com/.
[13] Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The Malmo platform for artificial intelligence experimentation. In International Joint Conference on Artificial Intelligence (IJCAI), page 4246, 2016. | 1707.01067#42 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 42 | [5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014. 1
[6] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. 2
[7] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015. 2
[8] K. He and J. Sun. Convolutional neural networks at constrained time cost. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5353–5360, 2015. 1
[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016. 1, 2, 3, 4, 5, 6 | 1707.01083#42 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 43 | [14] Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. Vizdoom: A doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
[15] Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning. arXiv preprint arXiv:1609.05521, 2016.
[16] RoboCup Simulation League. RoboCup simulation league: https://en.wikipedia.org/wiki/RoboCup_Simulation_League. 1995.
[17] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.
[18] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. 2013. | 1707.01067#43 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 43 | [10] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016. 1, 2
[11] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 2
[12] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 1, 2, 3, 5, 6, 7 | 1707.01083#43 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 44 | [18] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. 2013.
[19] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. ICLR, 2017.
[20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[21] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
| 1707.01067#44 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 44 | [13] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507, 2017. 1, 6, 7
[14] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360, 2016. 1, 7, 8
[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 3, 5
[16] M. Jaderberg, A. Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014. 1, 2, 8 | 1707.01083#44 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 45 | 10
[22] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, and David Silver. Massively parallel methods for deep reinforcement learning. CoRR, abs/1507.04296, 2015. URL http://arxiv.org/abs/1507.04296.
[23] Santiago Ontañón. The combinatorial multi-armed bandit problem and its application to real-time strategy games. In Proceedings of the Ninth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, pages 58–64. AAAI Press, 2013.
[24] OpenRA. OpenRA: http://www.openra.net/. 2007. URL http://www.openra.net/.
[25] Peng Peng, Quan Yuan, Ying Wen, Yaodong Yang, Zhenkun Tang, Haitao Long, and Jun Wang. Multiagent bidirectionally-coordinated nets for learning to play starcraft combat games. CoRR, abs/1703.10069, 2017. URL http://arxiv.org/abs/1703.10069. | 1707.01067#45 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 45 | [17] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, pages 675–678. ACM, 2014. 7
[18] J. Jin, A. Dundar, and E. Culurciello. Flattened convolutional neural networks for feedforward acceleration. arXiv preprint arXiv:1412.5474, 2014. 2
[19] K.-H. Kim, S. Hong, B. Roh, Y. Cheon, and M. Park. Pvanet: Deep but lightweight neural networks for real-time object detection. arXiv preprint arXiv:1608.08021, 2016. 6
[20] A. Krizhevsky. cuda-convnet: High-performance c++/cuda implementation of convolutional neural networks, 2012. 2
[21] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. 1, 2, 7, 8 | 1707.01083#45 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices |
1707.01067 | 46 | [26] John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, pages 1889–1897, 2015.
[27] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[28] Pumpkin Studios. Warzone 2100: https://wz2100.net/. 1999. URL https://wz2100.net/.
[29] Sainbayar Sukhbaatar, Arthur Szlam, Gabriel Synnaeve, Soumith Chintala, and Rob Fergus. Mazebase: A sandbox for learning from games. CoRR, abs/1511.07401, 2015. URL http://arxiv.org/abs/1511.07401.
[30] Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057–1063, 1999. | 1707.01067#46 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 46 | [22] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv preprint arXiv:1412.6553, 2014. 1, 2, 8
[23] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014. 1, 7
[24] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015. 1
[25] M. Mathieu, M. Henaff, and Y. LeCun. of convolutional networks through ffts. arXiv:1312.5851, 2013. 2 Fast training arXiv preprint | 1707.01083#46 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 47 | [31] Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, and Nicolas Usunier. Torchcraft: a library for machine learning research on real-time strategy games. CoRR, abs/1611.00625, 2016. URL http://arxiv.org/abs/1611.00625.
[32] Yuandong Tian and Yan Zhu. Better computer go player with neural network and long-term prediction. arXiv preprint arXiv:1511.06410, 2015.
[33] Universe. 2016. URL universe.openai.com.
[34] Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, and Soumith Chintala. Episodic exploration for deep deterministic policies: An application to starcraft micromanagement tasks. ICLR, 2017.
[35] Yuxin Wu and Yuandong Tian. Training agent for first-person shooter game with actor-critic curriculum learning. International Conference on Learning Representations (ICLR), 2017.
# 6 Appendix: Detailed descriptions of RTS engine and games
# 6.1 Overview | 1707.01067#47 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games |
1707.01083 | 47 | [26] P. Ramachandran, B. Zoph, and Q. V. Le. Swish: a self-gated activation function. arXiv preprint arXiv:1710.05941, 2017. 7
[27] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525–542. Springer, 2016. 1, 2
[28] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015. 1, 7
[29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â252, 2015. 1, 4 | 1707.01083#47 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 48 | 11
# 6 Appendix: Detailed descriptions of RTS engine and games
# 6.1 Overview
On ELF, we build three different environments: Mini-RTS, Capture the Flag, and Tower Defense. Tbl. 8 shows their characteristics.
Figure 7: Overview of Mini-RTS. (a) Tick-driven system. (b) Visualization of game play. (c) Command system.
Mini-RTS: Gather resources and build troops to destroy the enemy's base.
Capture the Flag: Capture the flag and bring it to your own base.
Tower Defense: Build defensive towers to block enemy invasion.

Table 8: Short descriptions of three different environments built from our RTS engine.
# 6.2 Hierarchical Commands
Strategic Environment command a 7 Immediate -}ââ] Game state change Top-level | 1707.01067#48 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01083 | 48 | [30] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1, 2, 5, 7
[31] D. Soudry, I. Hubara, and R. Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In Advances in Neural Information Processing Systems, pages 963–971, 2014. 2
[32] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception- v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. 1, 2, 6 [33] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â9, 2015. 1, 2, 5, 6, 7 | 1707.01083#48 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 49 | # 6.2 Hierarchical Commands
Figure 8: Hierarchical command system in our RTS engine. Top-level commands can issue strategic-level commands, which in turn can issue durative and immediate commands to each unit (e.g., ALL ATTACK can issue an ATTACK command to all units on our side). For a unit, durative commands usually last for a few ticks until the goal is achieved (e.g., enemy down). At each tick, a durative command can issue other durative commands, or immediate commands, which take effect by changing the game situation at the current tick.
The command level in our RTS engine is hierarchical (Fig. 8). A high-level command can issue other commands at the same tick during execution, which are then executed and can potentially issue further commands. A command can also issue subsequent commands for future ticks. Two kinds of commands exist: durative and immediate. Durative commands (e.g., Move, Attack) last for many ticks until completion (e.g., enemy down), while immediate commands take effect at the current tick.
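The durative/immediate split described above can be sketched in a few lines (an illustrative Python model of the tick loop, not the actual C++ ELF engine; all class and function names here are our own hypothetical choices):

```python
class Immediate:
    """Takes effect at the current tick by mutating the game state."""
    def __init__(self, effect):
        self.effect = effect   # callable that changes the game state
        self.done = False

    def run(self, state):
        self.effect(state)
        self.done = True
        return []              # issues no sub-commands


class Durative:
    """Lasts many ticks; each tick it may issue sub-commands until its goal holds."""
    def __init__(self, goal_reached, step):
        self.goal_reached = goal_reached   # predicate on the game state
        self.step = step                   # returns this tick's sub-commands
        self.done = False

    def run(self, state):
        if self.goal_reached(state):
            self.done = True
            return []
        return self.step(state)


def tick(state, queue):
    """Run one game tick: execute queued commands, expanding sub-commands
    within the same tick, and keep unfinished durative commands queued."""
    pending = list(queue)
    next_queue = []
    while pending:
        cmd = pending.pop(0)
        pending.extend(cmd.run(state))     # sub-commands execute this tick
        if not cmd.done:
            next_queue.append(cmd)
    return next_queue
```

For example, a MOVE-like durative command can issue one immediate "advance" command per tick until the unit reaches its target, after which it drops out of the queue.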
# 6.3 Units and Game Dynamics | 1707.01067#49 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01083 | 49 | [34] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818â2826, 2016. 1, 2, 6 [35] N. Vasilache, J. Johnson, M. Mathieu, S. Chintala, S. Pi- Fast convolutional nets with arXiv preprint
antino, and Y. LeCun. fbfft: A gpu performance evaluation. arXiv:1412.7580, 2014. 2
[36] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3156–3164, 2015. 1
[37] M. Wang, B. Liu, and H. Foroosh. Design of efï¬cient convolutional layers using single intra-channel convolution, topological subdivisioning and spatial âbottleneckâ struc- ture. arXiv preprint arXiv:1608.04337, 2016. 2 | 1707.01083#49 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 50 | 12
# 6.3 Units and Game Dynamics
Mini-RTS. Tbl. 9 shows the available units for Mini-RTS, which captures all basic dynamics of RTS games: gathering, building facilities, building different kinds of troops, defending against the opponent's attacks, and/or invading the opponent's base. For troops, there are melee units with high hit points, high attack damage but low movement speed, and agile units with low hit points, long attack range, and fast movement speed. Tbl. 10 shows the available units for Capture the Flag.
Note that our framework is extensive and adding more units is easy.
BASE: Building that can build workers and collect resources.
RESOURCE: Resource unit that contains 1000 minerals.
WORKER: Worker who can build barracks and gather resources. Low movement speed and low attack damage.
BARRACKS: Building that can build melee attackers and range attackers.
MELEE ATTACKER: Tank with high HP, medium movement speed, short attack range, high attack damage.
RANGE ATTACKER: Tank with low HP, high movement speed, long attack range and medium attack damage.

Table 9: Available units in Mini-RTS.
Unit name Description BASE FLAG ATHLETE Unit with attack damage and can carry a ï¬ag. Moves slowly with a ï¬ag. Table 10: Available units in Capture the Flag. | 1707.01067#50 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01083 | 50 | [38] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074â2082, 2016. 1, 2, 8
[39] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng. Quantized convolutional neural networks for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4820–4828, 2016. 2
[40] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431, 2016. 1, 2, 3, 4, 5, 6
[41] T. Zhang, G.-J. Qi, B. Xiao, and J. Wang. Interleaved group convolutions for deep neural networks. In International Conference on Computer Vision, 2017. 2
[42] X. Zhang, J. Zou, K. He, and J. Sun. Accelerating very deep convolutional networks for classiï¬cation and detection. IEEE transactions on pattern analysis and machine intelli- gence, 38(10):1943â1955, 2016. 1, 8 | 1707.01083#50 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 51 | Capture the Flag. During the game, the player will try to bring the ï¬ag back to his own base. The ï¬ag will appear in the middle of the map. The athlete can carry a ï¬ag or ï¬ght each other. When carrying a ï¬ag, an athlete has reduced movement speed. Upon death, it will drop the ï¬ag if it is carrying one, and will respawn automatically at base after a certain period of time. Once a ï¬ag is brought to a playerâs base, the player scores a point and the ï¬ag is returned to the middle of the map. The ï¬rst player to score 5 points wins.
Tower Defense. During the game, the player defends his base at the top-left corner. Every 200 ticks, an increasing number of enemy attackers spawn at the lower-right corner of the map and travel towards the player's base through a maze. The player can build towers along the way to prevent enemies from reaching the target. For every 5 enemies killed, the player can build a new tower. The player loses if 10 enemies reach his base, and wins if he survives 10 waves of attacks.
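As a minimal sketch of the Tower Defense win/lose bookkeeping just described (our own illustrative helper, not part of the engine; the function name and input format are assumptions):

```python
def tower_defense_outcome(leaked_per_wave):
    """Return 'win' or 'lose' under the rules above.

    leaked_per_wave[i] = number of enemies that reached the player's base
    during wave i. The player loses as soon as 10 enemies in total reach
    the base, and wins after surviving all 10 waves.
    """
    reached_base = 0
    for leaked in leaked_per_wave[:10]:    # at most 10 waves of attacks
        reached_base += leaked
        if reached_base >= 10:
            return "lose"
    return "win"
```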
# 6.4 Others | 1707.01067#51 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01083 | 51 | [43] X. Zhang, J. Zou, X. Ming, K. He, and J. Sun. Efï¬cient and accurate approximations of nonlinear convolutional net- works. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1984â1992, 2015. 1, 8
[44] A. Zhou, A. Yao, Y. Guo, L. Xu, and Y. Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044, 2017. 2
[45] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016. 2
[46] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learn- ing transferable architectures for scalable image recognition. arXiv preprint arXiv:1707.07012, 2017. 2 | 1707.01083#51 | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specially for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other structures, e.g. lower top-1 error
(absolute 7.8%) than recent MobileNet on ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy. | http://arxiv.org/pdf/1707.01083 | Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun | cs.CV | null | null | cs.CV | 20170704 | 20171207 | [
{
"id": "1602.07360"
},
{
"id": "1611.06473"
},
{
"id": "1502.03167"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1608.04337"
},
{
"id": "1606.06160"
},
{
"id": "1702.03044"
},
{
"id": "1608.08021"
},
{
"id": "1710.05941"
},
{
"id": "1707.07012"
},
{
"id": "1611.05431"
},
{
"id": "1603.04467"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
},
{
"id": "1709.01507"
},
{
"id": "1510.00149"
}
] |
1707.01067 | 52 | # 6.4 Others
Game Balance. We test the game balance of Mini-RTS and Capture the Flag by pitting the same AI against itself. In Mini-RTS the win rate for player 0 is 50.0 (±3.0), and in Capture the Flag the win rate for player 0 is 49.9 (±1.1).
Replay. We offer serialization of replays and state snapshots at arbitrary ticks, which is more flexible than many commercial games.
# 7 Detailed explanation of the experiments
Tbl. 11 shows the discrete action space for Mini-RTS and Capture the Flag used in the experiments.
Randomness. All games based on the RTS engine are deterministic. However, modern RL methods require diverse experience to explore the game state space efficiently. When we train AIs for Mini-RTS, we add randomness by randomly placing resources and bases, and by randomly adding units and buildings when the game starts. For Capture the Flag, all athletes have random starting positions, and the flag appears in a random place at equal distance from both players' bases.
# 7.1 Rule based AIs for Mini-RTS | 1707.01067#52 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01067 | 53 | # 7.1 Rule based AIs for Mini-RTS
Simple AI This AI builds 3 workers and asks them to gather resources, then builds a barrack if resources permit, and then starts to build melee attackers. Once it has 5 melee attackers, all 5 attackers will attack the opponent's base.
Hit & Run AI This AI builds 3 workers and asks them to gather resources, then builds a barrack if resources permit, and then starts to build range attackers. Once it has 2 range attackers, the range attackers will move towards the opponent's base and attack enemy troops in range. If the enemy counterattacks, the range attackers will hit and run.
# 7.2 Rule based AIs for Capture the Flag
Simple AI This AI will try to get ï¬ag if ï¬ag is not occupied. If one of the athlete gets the ï¬ag, he will escort the ï¬ag back to base, while other athletes defend opponentâs attack. If an opponent athlete carries the ï¬ag, all athletes will attack the ï¬ag carrier. | 1707.01067#53 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.01067 | 54 | Command name IDLE BUILD WORKER BUILD BARRACK BUILD MELEE ATTACKER BUILD RANGE ATTACKER HIT AND RUN ATTACK ATTACK IN RANGE ALL DEFEND Description Do nothing. If the base is idle, build a worker. Move a worker (gathering or idle) to an empty place and build a barrack. If we have an idle barrack, build an melee attacker. If we have an idle barrack, build an range attacker. If we have range attackers, move towards opponent base and attack. Take advantage of their long attack range and high movement speed to hit and run if enemy counter-attack. All melee and range attackers attack the opponentâs base. All melee and range attackers attack enemies in sight. All troops attack enemy troops near the base and resource.
Table 11: Action space used in our trained AI. There are 9 strategic hard-coded global commands. Note that all building commands are automatically cancelled when resources are insufficient.
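For illustration, the 9 strategic commands of Table 11 can be exposed to an RL agent as a flat discrete action space. This is a hypothetical sketch of that mapping (ELF's actual Python interface and names differ):

```python
from enum import Enum, auto

class MiniRTSAction(Enum):
    """The 9 strategic global commands of Table 11 as a discrete action space."""
    IDLE = auto()
    BUILD_WORKER = auto()
    BUILD_BARRACK = auto()
    BUILD_MELEE_ATTACKER = auto()
    BUILD_RANGE_ATTACKER = auto()
    HIT_AND_RUN = auto()
    ATTACK = auto()
    ATTACK_IN_RANGE = auto()
    ALL_DEFEND = auto()

def action_from_logits(logits):
    """Map the argmax over a length-9 policy output to a strategic command."""
    idx = max(range(len(logits)), key=logits.__getitem__)
    return list(MiniRTSAction)[idx]
```

With such a mapping, the policy network only has to emit a length-9 vector per state; the engine then interprets the chosen command with its hard-coded behavior.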
IDLE: Do nothing.
GET FLAG: All athletes move towards the flag and capture the flag.
ESCORT FLAG: Move the athlete with the flag back to base.
ATTACK
DEFEND
Table 12: Action space used in Capture the Flag trained AI.
14 | 1707.01067#54 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 | [
{
"id": "1605.02097"
},
{
"id": "1511.06410"
},
{
"id": "1609.05521"
},
{
"id": "1602.01783"
}
] |
1707.00110 | 0 | 7 1 0 2
l u J 1 ] L C . s c [
1 v 0 1 1 0 0 . 7 0 7 1 : v i X r a
# Efficient Attention using a Fixed-Size Memory Representation
# Denny Britz* and Melody Y. Guan* and Minh-Thang Luong Google Brain dennybritz,melodyguan,[email protected]
# Abstract
The standard content-based attention mecha- nism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments. | 1707.00110#0 | Efficient Attention using a Fixed-Size Memory Representation | The standard content-based attention mechanism typically used in
sequence-to-sequence models is computationally expensive as it requires the
comparison of large encoder and decoder states at each time step. In this work,
we propose an alternative attention mechanism based on a fixed size memory
representation that is more efficient. Our technique predicts a compact set of
K attention contexts during encoding and lets the decoder compute an efficient
lookup that does not need to consult the memory. We show that our approach
performs on-par with the standard attention mechanism while yielding inference
speedups of 20% for real-world translation tasks and more for tasks with longer
sequences. By visualizing attention scores we demonstrate that our models learn
distinct, meaningful alignments. | http://arxiv.org/pdf/1707.00110 | Denny Britz, Melody Y. Guan, Minh-Thang Luong | cs.CL | EMNLP 2017 | null | cs.CL | 20170701 | 20170701 | [] |
1707.00110 | 1 | step based on the current state of the decoder. Intuitively, this corresponds to looking at the source sequence after the output of every single target token. Inspired by how humans process sentences, we believe it may be unnecessary to look back at the entire original source sequence at each step.1 We thus propose an alternative attention mechanism (section 3) that leads to smaller computational time complexity. Our method predicts K attention context vectors while reading the source, and learns to use a weighted average of these vectors at each step of decoding. Thus, we avoid looking back at the source sequence once it has been encoded. We show (section 4) that this speeds up inference while performing on-par with the standard mechanism on both toy and real-world WMT translation datasets. We also show that our mechanism leads to larger speedups as sequences get longer. Finally, by visualizing the attention scores (section 5), we verify that the proposed technique learns meaningful alignments, and that different attention context vectors specialize on different parts of the source.
# 1 Introduction
# 2 Background
1707.00110 | 2 | # 1 Introduction
# 2 Background
Sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014) have achieved state of the art results across a wide variety of tasks, including Neural Machine Translation (NMT) (Bahdanau et al., 2014; Wu et al., 2016), text summarization (Rush et al., 2015; Nallapati et al., 2016), speech recognition (Chan et al., 2015; Chorowski and Jaitly, 2016), image captioning (Xu et al., 2015), and conversational modeling (Vinyals and Le, 2015; Li et al., 2015).
The most popular approaches are based on an encoder-decoder architecture consisting of recurrent neural networks (RNNs) and an attention mechanism that aligns target to source tokens (Bahdanau et al., 2014; Luong et al., 2015). The typical attention mechanism used in these architectures computes a new attention context at each decoding
# *Equal Contribution. Author order alphabetical.
# 2.1 Sequence-to-Sequence Model with Attention
1707.00110 | 3 | # *Equal Contribution. Author order alphabetical.
# 2.1 Sequence-to-Sequence Model with Attention
Our models are based on an encoder-decoder architecture with attention mechanism (Bahdanau et al., 2014; Luong et al., 2015). An encoder function takes as input a sequence of source tokens x=(x1,...,xm) and produces a sequence of states s=(s1,...,sm). The decoder is an RNN that predicts the probability of a target sequence y=(y1,...,yT) given s. The probability of each target token yi ∈ {1,...,|V|} is predicted based on the recurrent state in the decoder RNN, hi, the previous words, y<i, and a context vector ci. The context vector ci, also referred to as the attention vector, is calculated as a weighted average of the source states.
1Eye-tracking and keystroke logging data from human translators show that translators generally do not reread previously translated source text words when producing target text (Carl et al., 2011).
ci = Σj αij sj (1)
αi =softmax(fatt(hi,s)) (2)
1707.00110 | 4 | ci = Σj αij sj (1)
αi =softmax(fatt(hi,s)) (2)
Here, fatt(hi, s) is an attention function that calculates an unnormalized alignment score between the encoder state sj and the decoder state hi. Variants of fatt used in Bahdanau et al. (2014) and Luong et al. (2015) are:
fatt(hi,sj) = va^T tanh(Wa[hi,sj]) (Bahdanau); fatt(hi,sj) = hi^T Wa sj (Luong) (3)
where Wa and va are model parameters learned to predict alignment. Let |S| and |T| denote the lengths of the source and target sequences respectively and D denote the state size of the encoder and decoder RNN. Such content-based attention mechanisms result in inference times of O(D^2|S||T|),2 as each context vector depends on the current decoder state hi and all encoder states, and requires an O(D^2) matrix multiplication. The decoder outputs a distribution over a vocabulary of fixed-size |V|:
P (yi|y<i,x)=softmax(W [si;ci]+b)
The model is trained end-to-end by minimizing the negative log likelihood of the target words using stochastic gradient descent.
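To make the standard mechanism above (equations 1-3) concrete, here is a minimal NumPy sketch of one decoding step using the Luong-style score fatt(hi, sj) = hi^T Wa sj. All sizes and variable names are illustrative assumptions, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, S = 4, 5                      # state size, source length (illustrative)
s = rng.normal(size=(S, D))      # encoder states s_1..s_m
h = rng.normal(size=(D,))        # current decoder state h_i
W_a = rng.normal(size=(D, D))    # bilinear score parameters (assumed shape)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Luong score f_att(h_i, s_j) = h_i^T W_a s_j, computed for all j at once
scores = s @ (W_a @ h)           # shape (S,)
alpha = softmax(scores)          # attention weights, eq. (2)
c = alpha @ s                    # context vector, eq. (1), shape (D,)
```

Note that `scores` must be recomputed from all S encoder states at every decoding step, which is the O(D^2|S||T|) cost the paper sets out to avoid.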
# 3 Memory-Based Attention Model
1707.00110 | 5 | The model is trained end-to-end by minimizing the negative log likelihood of the target words using stochastic gradient descent.
# 3 Memory-Based Attention Model
Our proposed model is shown in Figure 1. During encoding, we compute an attention matrix C ∈ R^{K×D}, where K is the number of attention vectors and a hyperparameter of our method, and D is the dimensionality of the top-most encoder state. This matrix is computed by predicting a score vector αt ∈ R^K at each encoding time step t. C is then a linear combination of the encoder states, weighted by αt:
Ck = Σ_{t=0}^{|S|} αtk st (4)
αt =softmax(Wαst), (5)
where Wα is a parameter matrix in R^{K×D}.
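Equations (4) and (5) can be sketched in a few lines of NumPy. The sizes and names below (W_alpha, K=3) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, S, K = 4, 6, 3                  # state size, source length, number of contexts
s = rng.normal(size=(S, D))        # encoder states
W_alpha = rng.normal(size=(K, D))  # score projection, W_alpha in R^{K x D}

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# eq. (5): one score vector alpha_t in R^K per encoder step t
alpha = softmax(s @ W_alpha.T, axis=-1)   # shape (S, K)
# eq. (4): C_k = sum_t alpha_{tk} s_t  ->  memory matrix C in R^{K x D}
C = alpha.T @ s                           # shape (K, D)
```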
The total time complexity for this operation is O(KD|S|). One can think of C as a compact fixed-length memory that the decoder
2An exception is the dot-attention from Luong et al. (2015), which is O(D|S||T |), which we discuss further in Section 3.
1707.00110 | 6 | 2An exception is the dot-attention from Luong et al. (2015), which is O(D|S||T |), which we discuss further in Section 3.
will perform attention over. In contrast, standard approaches use a variable-length set of encoder states for attention. At each decoding step, we similarly predict K scores β ∈ R^K. The final attention context c is a linear combination of the rows in C weighted by the scores. Intuitively, each decoder step predicts how important each of the K attention vectors is.
c = Σ_{i=0}^{K} βi Ci (6)
β =softmax(Wβh) (7)
Here, h is the current state of the decoder, and Wβ is a learned parameter matrix. Note that we do not access the encoder states at each decoder step. We simply take a linear combination of the attention matrix C pre-computed during encoding - a much cheaper operation that is independent of the length of the source sequence. The time complexity of this computation is O(KD|T|) as multiplication with the K attention matrices needs to happen at each decoding step.
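Equations (6) and (7) amount to a cheap lookup against the precomputed memory, independent of the source length. A minimal NumPy sketch under illustrative sizes (again, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 4, 3                       # illustrative sizes
C = rng.normal(size=(K, D))       # memory matrix precomputed during encoding
h = rng.normal(size=(D,))         # current decoder state
W_beta = rng.normal(size=(K, D))  # learned projection W_beta (assumed shape)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

beta = softmax(W_beta @ h)   # eq. (7): K mixing weights from the decoder state
c = beta @ C                 # eq. (6): context = weighted rows of C, O(KD) per step
```

The encoder states never appear here; only the K rows of C are touched, which is what makes each decoding step independent of |S|.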
1707.00110 | 7 | Together with the O(KD|S|) from encoding and O(KD|T|) from decoding, we have a total linear computational complexity of O(KD(|S|+|T|)). As D is typically very large, 512 or 1024 units in most applications, we expect our model to be faster than the standard attention mechanism running in O(D^2|S||T|). For long sequences (as in summarization, where |S| is large), we also expect our model to be faster than the cheaper dot-based attention mechanism, which needs O(D|S||T|) computation time and requires encoder and decoder state sizes to match. We also experimented with using a sigmoid function instead of the softmax to score the encoder and decoder attention scores, resulting in 4 possible combinations. We call this choice the scoring function. A softmax scoring function calculates normalized scores, while the sigmoid scoring function results in unnormalized scores that can be understood as gates.
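The complexity comparison can be checked with back-of-the-envelope multiply counts. The sizes below (D=512, K=32, and the short/long sequence lengths) are illustrative assumptions, not figures from the paper.

```python
def flops(D, S, T, K):
    """Rough multiply counts per decoded sequence (constants dropped)."""
    standard = D * D * S * T   # content-based attention: O(D^2 |S||T|)
    dot = D * S * T            # dot attention: O(D |S||T|)
    memory = K * D * (S + T)   # memory attention: O(KD(|S| + |T|))
    return standard, dot, memory

std_s, dot_s, mem_s = flops(D=512, S=30, T=30, K=32)      # short sentences
std_l, dot_l, mem_l = flops(D=512, S=1000, T=1000, K=32)  # long sequences
```

In this count the memory attention always beats the full content-based mechanism, while dot attention remains cheaper for short 30-token sentences; the memory attention overtakes it as |S| grows, matching the claim above about long sequences.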
# 3.1 Model Interpretations
1707.00110 | 8 | # 3.1 Model Interpretations
Our memory-based attention model can be understood intuitively in two ways. We can interpret it as "predicting" the set of attention contexts produced by a standard attention mechanism during encoding. To see this, assume we set K ≈ |T|. In this case, we predict all |T| attention contexts during the encoding stage and learn to choose the right one during decoding. This is cheaper than computing contexts one-by-one based on the decoder and encoder content. In fact, we could enforce this objective by first training
[Figure 1 graphic: panel a) "Regular Encoding" and panel c) "Our Encoding", each depicting an encoder-decoder pair over an example sentence.]
Figure 1: Memory Attention model architecture. K attention vectors are predicted during encoding, and a linear combination is chosen during decoding. In our example, K =3.