Efficient Architecture Search by Network Transformation

Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, Jun Wang

The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). arXiv:1707.04873 (cs.LG, cs.AI), submitted 2017-07-16, last updated 2017-11-21. Source: http://arxiv.org/pdf/1707.04873. The title was changed from "Reinforcement Learning for Architecture Search by Network Transformation" to "Efficient Architecture Search by Network Transformation".

Abstract: Techniques for automatically designing deep neural network architectures, such as reinforcement-learning-based approaches, have recently shown promising results. However, their success relies on vast computational resources (e.g., hundreds of GPUs), which makes them difficult to use widely. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework for efficient architecture search that explores the architecture space starting from the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, previously validated networks can be reused for further exploration, saving a large amount of computational cost. We apply our method to explore the architecture space of plain convolutional neural networks (no skip-connections, branching, etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks built with the same design scheme. On CIFAR-10, our model without skip-connections achieves a 4.23% test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to obtain more accurate networks with fewer parameters.

The structure of the Net2Deeper actor is shown in Figure 3; it is a recurrent network whose hidden state is initialized with the final hidden state of the encoder network. Similar to previous work (Baker et al. 2017), we allow the Net2Deeper actor to insert one new layer at each step. Specifically, we divide a CNN architecture into several blocks according to the pooling layers, and the Net2Deeper actor sequentially determines which block to insert the new layer into, a specific index within the block, and the parameters of the new layer. For a new convolutional layer, the agent needs to determine the filter size and the stride, while for a new fully-connected layer, no parameter prediction is needed. In CNN architectures, any fully-connected layer should be placed on top of all convolutional and pooling layers. To avoid producing unreasonable architectures, if the Net2Deeper actor decides to insert a new layer after a fully-connected layer or the final global average pooling layer, the new layer is restricted to be a fully-connected layer; otherwise, it must be a convolutional layer.
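The insertion restriction in the last two sentences amounts to a one-line decision rule. The sketch below is illustrative only; the function name and layer-type labels are ours, not the paper's:

```python
def allowed_new_layer_type(preceding_layer: str) -> str:
    """Per the text: a layer inserted after a fully-connected layer or the final
    global average pooling layer must itself be fully-connected; otherwise it
    must be convolutional (for which the agent also predicts filter size and
    stride)."""
    if preceding_layer in ("fc", "global_avg_pool"):
        return "fc"
    return "conv"
```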
# Function-preserving Transformation for DenseNet

The original Net2Net operations proposed in (Chen, Goodfellow, and Shlens 2015) are discussed under scenarios where the network is arranged layer-by-layer, i.e., the output of a layer is fed only to its next layer. As such, in some modern CNN architectures where the output of a layer is fed to multiple subsequent layers, such as DenseNet (Huang et al. 2017), directly applying the original Net2Net operations can be problematic. In this section, we introduce several extensions to the original Net2Net operations to enable function-preserving transformations for DenseNet.

Different from the plain CNN, in DenseNet, the lth layer receives the outputs of all preceding layers as input, concatenated along the channel dimension and denoted as [O_0, O_1, ..., O_{l-1}], while its output O_l is fed to all subsequent layers.
Denote the kernel of the lth layer as K_l with shape (k_h^l, k_w^l, f_i^l, f_o^l). To replace the lth layer with a wider layer that has \hat{f}_o^l output channels while preserving the functionality, the creation of the new kernel \hat{K}_l in the lth layer is the same as in the original Net2WiderNet operation (see Eq. (1) and Eq. (2)). As such, the new output of the wider layer is \hat{O}_l with \hat{O}_l(j) = O_l(G_l(j)), where G_l is the random remapping function as defined in Eq. (1). Since the output of the lth layer is fed to all subsequent layers in DenseNet, the replication in \hat{O}_l results in replication in the inputs of all layers after the lth layer. As such, instead of only modifying the kernel of the next layer as done in the original Net2WiderNet operation, we need to modify the kernels of all subsequent layers in DenseNet. For the mth layer where m > l, its input becomes [O_0, ..., O_{l-1}, \hat{O}_l, O_{l+1}, ..., O_{m-1}] after widening the lth layer; thus, from the perspective of the mth layer, the equivalent random remapping function \hat{G}_m is given by Eq. (4).
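To make the widening step concrete, here is a minimal pure-Python sketch of the Net2WiderNet idea the text builds on: new units replicate existing ones via a random remapping G, and the next layer's weights are divided by the replication counts so the overall function is preserved. The helper names and the toy two-layer setup are ours, not the paper's:

```python
import random

def net2wider(W, U, new_width):
    """Widen a layer with weights W (num_in x num_out) to new_width outputs.
    Returns (W_new, U_new, G), where U_new adjusts the next layer's weights
    U (num_out x num_next) so the network computes the same function."""
    old_width = len(W[0])
    # Eq. (1)-style remapping: identity for original units, random replica after.
    G = list(range(old_width)) + [random.randrange(old_width)
                                  for _ in range(new_width - old_width)]
    counts = [G.count(j) for j in range(old_width)]
    # Replicate columns of W according to G.
    W_new = [[row[G[j]] for j in range(new_width)] for row in W]
    # Eq. (2)-style correction: divide by how often each original unit appears.
    U_new = [[u / counts[G[j]] for u in U[G[j]]] for j in range(new_width)]
    return W_new, U_new, G

def forward(x, W, U):
    """Toy two-layer network with a ReLU in between."""
    h = [max(0.0, sum(xi * W[i][j] for i, xi in enumerate(x)))
         for j in range(len(W[0]))]
    return [sum(h[j] * U[j][k] for j in range(len(h))) for k in range(len(U[0]))]
```

Replication commutes with the elementwise nonlinearity, which is why dividing the next layer's weights by the replication counts restores the original sums exactly.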
\hat{G}_m(j) =
\begin{cases}
j, & 1 \le j \le f_o^{0:l} \\
f_o^{0:l} + G_l(j - f_o^{0:l}), & f_o^{0:l} < j \le f_o^{0:l} + \hat{f}_o^l \\
j - \hat{f}_o^l + f_o^l, & f_o^{0:l} + \hat{f}_o^l < j \le f_o^{0:m} + \hat{f}_o^l - f_o^l
\end{cases}
\tag{4}

where f_o^{0:l} is the number of input channels for the lth layer; the first part corresponds to [O_0, ..., O_{l-1}], the second part corresponds to [\hat{O}_l], and the last part corresponds to [O_{l+1}, ..., O_{m-1}]. A simple example of \hat{G}_m is given as

\hat{G}_m : \{1, \ldots, 5, 6, 7, 8, 9, 10, 11\} \to \{1, \ldots, 5, 6, 7, 6, 6, 8, 9\}, \quad \text{where } G_l : \{1, 2, 3, 4\} \to \{1, 2, 1, 1\}.

Accordingly, the new kernel of the mth layer can be given by Eq. (3) with G_l replaced by \hat{G}_m.
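Eq. (4) is easy to sanity-check in code. The sketch below (our own naming; 1-indexed channels as in the text) reproduces the worked example given in the text:

```python
def g_hat_m(j, f_prev, g_l, f_l, f_l_new):
    """Equivalent remapping \\hat{G}_m for the m-th layer after the l-th layer
    is widened (Eq. (4)). f_prev = f_o^{0:l}; g_l maps {1..f_l_new} -> {1..f_l};
    f_l and f_l_new are the old and new widths of layer l."""
    if j <= f_prev:
        return j                          # channels of O_0 .. O_{l-1}: untouched
    if j <= f_prev + f_l_new:
        return f_prev + g_l[j - f_prev]   # channels of the widened \hat{O}_l
    return j - f_l_new + f_l              # channels of O_{l+1} .. O_{m-1}: shifted

# Worked example from the text: f_o^{0:l} = 5, G_l : {1,2,3,4} -> {1,2,1,1}.
g_l = {1: 1, 2: 2, 3: 1, 4: 1}
mapped = [g_hat_m(j, 5, g_l, f_l=2, f_l_new=4) for j in range(1, 12)]
```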
To insert a new layer in DenseNet, suppose the new layer is inserted after the lth layer. Denote the output of the new layer as O_new; its input is [O_0, O_1, ..., O_l]. Therefore, for the mth (m > l) layer, its new input after the insertion is [O_0, O_1, ..., O_l, O_new, O_{l+1}, ..., O_{m-1}]. To preserve the functionality, similar to the Net2WiderNet case, O_new should be a replication of some entries in [O_0, O_1, ..., O_l]. This is possible, since the input of the new layer is [O_0, O_1, ..., O_l]. Each filter in the new layer can be represented by a tensor, denoted as \hat{F} with shape (k_h^{new}, k_w^{new}, f_i^{new}), where k_h^{new} and k_w^{new} denote the height and width of the filter, and f_i^{new} = f_o^{0:l+1} is the number of input channels. To make the output of \hat{F} a replication
of the nth entry in [O_0, O_1, ..., O_l], we can set \hat{F} (using the special case k_h^{new} = k_w^{new} = 3 for illustration) as follows:

\hat{F}[x, y, n] =
\begin{pmatrix}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0
\end{pmatrix},
\tag{5}

while all other values in \hat{F} are set to 0. Note that n can be chosen randomly from {1, ..., f_o^{0:l+1}} for each filter. After all filters in the new layer are set, we can form an equivalent random remapping function for all subsequent layers, as is done in Eq. (4), and modify their kernels accordingly.
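A quick way to see why the construction in Eq. (5) works is to build such a filter and apply it: with a single 1 at the spatial center of channel n, a stride-1, zero-padded convolution simply copies channel n. The sketch below uses our own helper names in pure Python:

```python
def identity_filter(n, in_channels, k=3):
    """Filter \\hat{F} of Eq. (5): 1 at the spatial center of input channel n,
    0 everywhere else. Shape is k x k x in_channels (k assumed odd)."""
    F = [[[0.0] * in_channels for _ in range(k)] for _ in range(k)]
    F[k // 2][k // 2][n] = 1.0
    return F

def conv2d_same(F, fmap):
    """Stride-1, zero-padded convolution of one filter F over fmap[c][y][x]."""
    k, H, W = len(F), len(fmap[0]), len(fmap[0][0])
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            for dy in range(k):
                for dx in range(k):
                    yy, xx = y + dy - k // 2, x + dx - k // 2
                    if 0 <= yy < H and 0 <= xx < W:
                        for c in range(len(fmap)):
                            out[y][x] += F[dy][dx][c] * fmap[c][yy][xx]
    return out
```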
# Experiments and Results

In line with the previous work (Baker et al. 2017; Zoph and Le 2017; Real et al. 2017), we apply the proposed EAS on image benchmark datasets (CIFAR-10 and SVHN) to explore high-performance CNN architectures for the image classification task.¹ Notice that the performance of the final designed models largely depends on the architecture space and the computational resources. In our experiments, we evaluate EAS in two different settings. In all cases, we use restricted computational resources (5 GPUs), compared to previous work such as (Zoph and Le 2017) that used 800 GPUs. In the first setting, we apply EAS to explore the plain CNN architecture space, which purely consists of convolutional, pooling, and fully-connected layers, while in the second setting, we apply EAS to explore the DenseNet architecture space.
# Image Datasets

CIFAR-10. The CIFAR-10 dataset (Krizhevsky and Hinton 2009) consists of 50,000 training images and 10,000 test images. We use a standard data augmentation scheme that is widely used for CIFAR-10 (Huang et al. 2017), and denote the augmented dataset as C10+ while the original dataset is denoted as C10. For preprocessing, we normalize the images using the channel means and standard deviations. Following the previous work (Baker et al. 2017; Zoph and Le 2017), we randomly sample 5,000 images from the training set to form a validation set, while using the remaining 45,000 images for training during the exploration of the architecture space.
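The preprocessing step above is ordinary per-channel standardization. A minimal sketch; the channel statistics used here are illustrative placeholders, not the actual CIFAR-10 values:

```python
def normalize_image(image, means, stds):
    """Standardize each pixel per channel: (x - mean) / std.
    image is H x W x C, with one value per channel in each pixel."""
    return [[[(px[c] - means[c]) / stds[c] for c in range(len(means))]
             for px in row] for row in image]
```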
SVHN. The Street View House Numbers (SVHN) dataset (Netzer et al. 2011) contains 73,257 images in the original training set, 26,032 images in the test set, and 531,131 additional images in the extra training set. For preprocessing, we divide the pixel values by 255 and do not perform any data augmentation, as is done in (Huang et al. 2017). We follow (Baker et al. 2017) and use the original training set during the architecture search phase, with 5,000 randomly sampled images held out as the validation set, while training the final discovered architectures on all the training data, including the original training set and the extra training set.

¹Experiment code and discovered top architectures along with weights: https://github.com/han-cai/EAS

[Figure 4 plot: average validation accuracy (%) versus number of nets sampled (100-500), with marked accuracy levels 87.07, 90.18, 91.78, 92.47, and 95.11 and the annotations "100 epochs training" and "300 epochs training".]

Figure 4: Progress of the two stages of architecture search on C10+ in the plain CNN architecture space.
At each step, the meta-controller samples 10 networks by taking network transformation actions. Since the sampled networks are not trained from scratch but reuse the weights of the given network in our scenario, they are then trained for 20 epochs, a relatively small number compared to the 50 epochs in (Zoph and Le 2017). Besides, we use a smaller initial learning rate for this reason. Other settings for training networks on CIFAR-10 and SVHN are similar to (Huang et al. 2017; Zoph and Le 2017). Specifically, we use SGD with a Nesterov momentum (Sutskever et al. 2013) of 0.9, a weight decay of 0.0001, and a batch size of 64. The initial learning rate is 0.02 and is further annealed with a cosine learning rate decay (Gastaldi 2017). The accuracy on the held-out validation set is used to compute the reward signal for each sampled network. Since the gain of improving the accuracy from 90% to 91% should be much larger than that from 60% to 61%, instead of directly using the validation accuracy acc_v as the reward, as done in (Zoph and Le 2017), we perform a nonlinear transformation on acc_v.
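The cosine learning rate decay mentioned above typically refers to the SGDR-style schedule. A sketch under the assumption that the rate is annealed from the initial 0.02 down to 0 over the run; the exact variant and end value are not specified in this excerpt:

```python
import math

def cosine_lr(epoch, total_epochs, lr_max=0.02, lr_min=0.0):
    """Cosine annealing: start at lr_max, end at lr_min after total_epochs."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))
```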
# Explore Plain CNN Architecture Space

We start by applying EAS to explore the plain CNN architecture space. Following the previous automatic architecture designing methods (Baker et al. 2017; Zoph and Le 2017), EAS searches layer parameters in a discrete and limited space. For every convolutional layer, the filter size is chosen from {1, 3, 5} and the number of filters is chosen from {16, 32, 64, 96, 128, 192, 256, 320, 384, 448, 512}, while the stride is fixed to be 1 (Baker et al. 2017). For every fully-connected layer, the number of units is chosen from {64, 128, 256, 384, 512, 640, 768, 896, 1024}. Additionally,

Table 1: Simple start point network. C(n, f, l) denotes a convolutional layer with n filters, filter size f and stride l; P(f, l, MAX) and P(f, l, AVG) denote a max and an average pooling layer with filter size f and stride l, respectively; FC(n) denotes a fully-connected layer with n units; SM(n) denotes a softmax layer with n output units.
1707.04873 | 32 | Model Architecture C(16, 3, 1), P(2, 2, MAX), C(32, 3, 1), P(2, 2, MAX), C(64, 3, 1), P(2, 2, MAX), C(128, 3, 1), P(4, 4, AVG), FC(256), SM(10) Validation Accuracy (%) 87.07
we use ReLU and batch normalization for each convolu- tional or fully-connected layer. For SVHN, we add a dropout layer after each convolutional layer (except the ï¬rst layer) and use a dropout rate of 0.2 (Huang et al. 2017). | 1707.04873#32 | Efficient Architecture Search by Network Transformation | Techniques for automatically designing deep neural network architectures such
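As a concrete illustration, the discrete layer-parameter space described above can be encoded as a lookup table from which candidate layer configurations are drawn. This is only a sketch: the dictionary layout and function names are assumptions for illustration, not the authors' implementation.

```python
import random

# Illustrative encoding of the discrete search space from the text;
# names are hypothetical, not taken from the authors' code.
SEARCH_SPACE = {
    "conv_filter_size": [1, 3, 5],
    "conv_num_filters": [16, 32, 64, 96, 128, 192, 256, 320, 384, 448, 512],
    "fc_units": [64, 128, 256, 384, 512, 640, 768, 896, 1024],
}

def sample_conv_layer(rng):
    """Draw one candidate convolutional layer (stride is fixed to 1)."""
    return {
        "filters": rng.choice(SEARCH_SPACE["conv_num_filters"]),
        "kernel": rng.choice(SEARCH_SPACE["conv_filter_size"]),
        "stride": 1,
    }

def sample_fc_layer(rng):
    """Draw one candidate fully-connected layer."""
    return {"units": rng.choice(SEARCH_SPACE["fc_units"])}

rng = random.Random(0)
print(sample_conv_layer(rng), sample_fc_layer(rng))
```

In EAS the meta-controller's Net2Wider/Net2Deeper decisions are made over exactly these candidate values rather than over a continuous range, which keeps the action space small.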
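The start-point network in Table 1 can be sanity-checked with a short shape and parameter walk-through on a 32x32x3 CIFAR-10 input. Note the assumptions: 'same' padding for convolutions (padding is not stated in the paper) and only conv/FC weights and biases are counted (batch-normalization parameters are ignored).

```python
# Walk through Table 1's start-point network on a 32x32x3 input, assuming
# 'same' padding (an assumption); counts conv/FC weights + biases only.
LAYERS = [
    ("C", 16, 3), ("P", 2),
    ("C", 32, 3), ("P", 2),
    ("C", 64, 3), ("P", 2),
    ("C", 128, 3), ("P", 4),
]

def walk(layers, size=32, channels=3):
    params = 0
    for spec in layers:
        if spec[0] == "C":                      # C(n, f, 1): n filters, size f
            _, n, f = spec
            params += f * f * channels * n + n  # kernel weights + biases
            channels = n                        # spatial size kept by padding
        else:                                   # P(f, f, .): f x f pooling
            size //= spec[1]
    return size, channels, params

size, channels, params = walk(LAYERS)
params += channels * size * size * 256 + 256    # FC(256)
params += 256 * 10 + 10                         # SM(10)
print(size, channels, params)                   # -> 1 128 133034
```

The 4x4 average pooling collapses the final 4x4x128 feature map to a single spatial position, so the FC(256) layer sees a 128-dimensional input; the whole start point has only about 133K parameters, which is why it reaches just 87.07% validation accuracy before any transformation is applied.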
Start with Small Network

We begin the exploration on C10+, using a small network (see Table 1), which achieves 87.07% accuracy in the held-out validation set, as the start point. Different from (Zoph and Le 2017; Baker et al. 2017), EAS is not restricted to start from empty and can flexibly use any discovered architecture as the new start point. As such, to take advantage of this flexibility and also reduce the search space to save computational resources and time, we divide the whole architecture search process into two stages, where we allow the meta-controller to take 5 steps of Net2Deeper action and 4 steps of Net2Wider action in the first stage. After 300 networks are sampled, we take the network that currently performs best and train it for a longer period of time (100 epochs) to be used as the start point for the second stage. Similarly, in the second stage, we also allow the meta-controller to take 5 steps of Net2Deeper action and 4 steps of Net2Wider action, and stop exploration after 150 networks are sampled.
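The two-stage schedule above can be sketched as follows. The meta-controller and the training loop are stand-in stubs (the real system scores each sampled net with a short training run on 5 GPUs, reusing the parent's weights); only the control flow mirrors the text.

```python
import random

def sample_architecture(start_net, meta_controller, rng):
    """Apply 5 Net2Deeper then 4 Net2Wider steps to the start point."""
    net = list(start_net)
    for _ in range(5):
        net.append(meta_controller("deeper", net, rng))
    for _ in range(4):
        net.append(meta_controller("wider", net, rng))
    return net

def run_stage(start_net, n_samples, meta_controller, train_and_evaluate, rng):
    """Sample n_samples transformed nets and keep the best-scoring one."""
    best_net, best_acc = start_net, train_and_evaluate(start_net)
    for _ in range(n_samples):
        net = sample_architecture(start_net, meta_controller, rng)
        acc = train_and_evaluate(net)            # short run, weights reused
        if acc > best_acc:
            best_net, best_acc = net, acc
    return best_net

# Toy stubs standing in for the real meta-controller and training loop.
rng = random.Random(0)
controller = lambda kind, net, r: (kind, r.random())
score = lambda net: len(net) + rng.random()      # toy proxy for accuracy

stage1 = run_stage(["start"], 300, controller, score, rng)
# the stage-1 winner is trained for 100 epochs before seeding stage 2
stage2 = run_stage(stage1, 150, controller, score, rng)
print(len(stage1), len(stage2))                  # -> 10 19
```

Because every sampled network is a function-preserving transformation of the stage's start point, the stage-2 search begins from an already strong architecture rather than from scratch.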
The progress of the two-stage architecture search is shown in Figure 4, where we can find that EAS gradually learns to pick high-performance architectures at each stage. As EAS takes function-preserving transformations to explore the architecture space, we can also find that the sampled architectures consistently perform better than the start point network at each stage. Thus it is usually "safe" to explore the architecture space with EAS. We take the top networks discovered during the second stage and further train them for 300 epochs using the full training set. Finally, the best model achieves 95.11% test accuracy (i.e. 4.89% test error rate). Furthermore, to justify the transferability of the discovered networks, we train the top architecture (95.11% test accuracy) on SVHN from random initialization for 40 epochs using the full training set and achieve 98.17% test accuracy (i.e. 1.83% test error rate), better than both human-designed and automatically designed architectures that are in the plain CNN architecture space (see Table 2).
We would like to emphasize that the computational resources required to achieve this result are much smaller than those required in (Zoph and Le 2017; Real et al. 2017). Specifically, it takes less than 2 days on 5 GeForce GTX 1080 GPUs, with 450 networks trained in total, to achieve a 4.89% test error rate on C10+ starting from a small network.

Table 2: Test error rate (%) comparison with CNNs that use convolutional, fully-connected and pooling layers alone.

Model                                    C10+  SVHN
human designed
  Maxout (Goodfellow et al. 2013)        9.38  2.47
  NIN (Lin, Chen, and Yan 2013)          8.81  2.35
  All-CNN (Springenberg et al. 2014)     7.25  -
  VGGnet (Simonyan and Zisserman 2015)   7.25  -
auto designed
  MetaQNN (Baker et al. 2017) (depth=7)  6.92  -
  MetaQNN (Baker et al. 2017) (ensemble) -     2.06
  EAS (plain CNN, depth=16)              4.89  1.83
  EAS (plain CNN, depth=20)              4.23  1.73

Further Explore Larger Architecture Space

To further search better architectures in the plain CNN architecture
The summarized results of comparing with human-designed and automatically designed architectures that use a similar design scheme (plain CNN) are reported in Table 2, where we can find that the top model designed by EAS on the plain CNN architecture space outperforms all similar models by a large margin. Specifically, compared to human-designed models, the test error rate drops from 7.25% to 4.23% on C10+ and from 2.35% to 1.73% on SVHN. Compared to MetaQNN, the Q-learning based automatic architecture designing method, EAS achieves a relative test error rate reduction of 38.9% on C10+ and 16.0% on SVHN. We also notice that the best model designed by MetaQNN on C10+ only has a depth of 7, though the maximum is set to be 18 in the original paper (Baker et al. 2017). We suppose they may have trained each designed network from scratch and used an aggressive training strategy to accelerate training, which resulted in many networks underperforming, especially the deep ones. Since we reuse the weights of pre-existing networks, the deep networks are validated more accurately in EAS, and we can thus design deeper and more accurate networks than MetaQNN.
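The relative reductions quoted above follow directly from the Table 2 entries:

```python
# Relative test-error reduction of EAS over MetaQNN, from Table 2 entries.
def relative_reduction(baseline, ours):
    return 100.0 * (baseline - ours) / baseline

c10 = relative_reduction(6.92, 4.23)   # MetaQNN (depth=7) vs EAS (depth=20)
svhn = relative_reduction(2.06, 1.73)  # MetaQNN (ensemble) vs EAS (depth=20)
print(round(c10, 1), round(svhn, 1))   # -> 38.9 16.0
```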
We also report the comparison with state-of-the-art architectures that use advanced techniques such as skip-connections, branching etc. on C10+ in Table 3. Though it is not a fair comparison, since we do not incorporate such advanced techniques into the search space in this experiment, we still find that the top model designed by EAS is highly competitive even compared to these state-of-the-art modern architectures. Specifically, the 20-layers plain CNN with 23.4M parameters outperforms ResNet, its stochastic depth variant and its pre-activation variant. It also approaches the best result given by DenseNet. When comparing to automatic architecture designing methods that incorporate skip-connections into their search space, our 20-layers plain model beats most of them except NAS with post-processing, which is much deeper and has more parameters than our model. Moreover, we only use 5 GPUs and train hundreds of networks, while they use 800 GPUs and train tens of thousands of networks.

Table 3: Test error rate (%) comparison with state-of-the-art architectures.

Model                                                   Depth  Params  C10+
human designed
  ResNet (He et al. 2016a)                              110    1.7M    6.61
  ResNet (stochastic depth) (Huang et al. 2017)         1202   10.2M   4.91
  Wide ResNet (Zagoruyko and Komodakis 2016)            16     11.0M   4.81
  Wide ResNet (Zagoruyko and Komodakis 2016)            28     36.5M   4.17
  ResNet (pre-activation) (He et al. 2016b)             1001   10.2M   4.62
  DenseNet (L = 40, k = 12) (Huang et al. 2017)         40     1.0M    5.24
  DenseNet-BC (L = 100, k = 12) (Huang et al. 2017)     100    0.8M    4.51
  DenseNet-BC (L = 190, k = 40) (Huang et al. 2017)     190    25.6M   3.46
auto designed
  Large-Scale Evolution (250 GPUs) (Real et al. 2017)   -
  NAS (predicting strides, 800 GPUs) (Zoph and Le 2017) 20
  NAS (max pooling, 800 GPUs) (Zoph and Le 2017)        39
  NAS (post-processing, 800 GPUs) (Zoph and Le 2017)    39     37.4M   3.65
  EAS (plain CNN, 5 GPUs)                               20     23.4M   4.23

Figure 5: Comparison between RL based meta-controller and random search on C10+. (Two panels: average validation accuracy and best validation accuracy versus number of nets sampled, 0-300, for RL and random search.)

Table 4: Test error rate (%) results of exploring DenseNet architecture space with EAS.

Model                          Depth  Params  C10   C10+
DenseNet (L = 100, k = 24)     100    27.2M   5.83  3.74
DenseNet-BC (L = 250, k = 24)  250    15.3M   5.19  3.62
DenseNet-BC (L = 190, k = 40)  190    25.6M   -     3.46
NAS (post-processing)          39     37.4M   -     3.65
EAS (DenseNet on C10)          70     8.6M    4.66  -
EAS (DenseNet on C10+)         76     10.7M   -     3.44
Comparison Between RL and Random Search

Our framework is not restricted to using the RL based meta-controller. Besides RL, one can also take network transformation actions to explore the architecture space by random search, which can be effective in some cases (Bergstra and Bengio 2012). In this experiment, we compare the performance of the RL based meta-controller and the random search meta-controller in the architecture space that is used in the above experiments. Specifically, we use the network in Table 1 as the start point and let the meta-controller take 5 steps of Net2Deeper action and 4 steps of Net2Wider action. The result is reported in Figure 5, which shows that the RL based meta-controller can effectively focus on the right search direction while random search cannot (left plot), and thus finds high-performance architectures more efficiently than random search.
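A random-search meta-controller simply replaces the learned policy with uniform sampling over the same transformation decisions. A minimal sketch is below; the decision fields (which block to deepen, which layer to widen, candidate widths) are illustrative stand-ins, not the authors' exact action encoding.

```python
import random

# Hypothetical uniform random-search meta-controller over the same action
# space as the RL agent; field names are illustrative only.
def random_net2deeper(num_blocks, rng):
    """Pick where to insert a new conv layer and its filter size uniformly."""
    return {
        "block": rng.randrange(num_blocks),
        "filter_size": rng.choice([1, 3, 5]),
    }

def random_net2wider(layer_widths, rng):
    """Pick a layer to widen and move it to the next larger candidate width."""
    idx = rng.randrange(len(layer_widths))
    wider = [w for w in (16, 32, 64, 96, 128, 192, 256) if w > layer_widths[idx]]
    return {"layer": idx, "new_width": wider[0] if wider else layer_widths[idx]}

rng = random.Random(0)
print(random_net2deeper(4, rng), random_net2wider([16, 32, 64], rng))
```

Both controllers explore with function-preserving transformations, so the difference measured in Figure 5 isolates the value of the learned policy itself.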
Explore DenseNet Architecture Space

We also apply EAS to explore the DenseNet architecture space. We use the DenseNet-BC (L = 40, k = 40) as the start point. The growth rate, i.e. the width of the non-bottleneck layer, is chosen from {40, 44, 48, 52, 56, 60, 64}, and the result is reported in Table 4. We find that by applying EAS to explore the DenseNet architecture space, we achieve a test error rate of 4.66% on C10, better than the best result, i.e. 5.19%, given by the original DenseNet, while having 43.79% fewer parameters. On C10+, we achieve a test error rate of 3.44%, also outperforming the best result, i.e. 3.46%, given by the original DenseNet, while having 58.20% fewer parameters.
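The parameter savings quoted above can be reproduced from the Table 4 entries (8.6M vs. the 15.3M of the best-on-C10 DenseNet-BC, and 10.7M vs. the 25.6M of the best-on-C10+ DenseNet-BC):

```python
# Parameter savings of the EAS-discovered DenseNets, from Table 4 entries.
def saving(baseline_params_m, ours_params_m):
    return 100.0 * (baseline_params_m - ours_params_m) / baseline_params_m

c10 = saving(15.3, 8.6)        # vs DenseNet-BC (L=250, k=24), best on C10
c10_plus = saving(25.6, 10.7)  # vs DenseNet-BC (L=190, k=40), best on C10+
print(round(c10, 2), round(c10_plus, 2))  # -> 43.79 58.2
```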
Conclusion

In this paper, we presented EAS, a new framework toward economical and efficient architecture search, where the meta-controller is implemented as an RL agent. It learns to take actions for network transformation to explore the architecture space. By starting from an existing network and reusing its weights via the class of function-preserving transformation operations, EAS is able to utilize knowledge stored in previously trained networks and take advantage of the existing successful architectures in the target task to explore the architecture space efficiently. Our experiments have demonstrated EAS's outstanding performance and efficiency compared with several strong baselines. For future work, we would like to explore more network transformation operations and apply EAS for different purposes, such as searching for networks that not only have high accuracy but also keep a balance between size and performance.
Acknowledgments This research was sponsored by Huawei Innovation Research Program, NSFC (61702327) and Shanghai Sailing Program (17YF1428200).
References [Bahdanau, Cho, and Bengio 2014] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. ICLR. | 1707.04873#44 |
1707.04873 | 45 | [Baker et al. 2017] Baker, B.; Gupta, O.; Naik, N.; and Raskar, R. 2017. Designing neural network architectures using reinforcement learning. ICLR.
[Bergstra and Bengio 2012] Bergstra, J., and Bengio, Y. 2012. Random search for hyper-parameter optimization. JMLR.
[Cai et al. 2017] Cai, H.; Ren, K.; Zhang, W.; Malialis, K.; Wang, J.; Yu, Y.; and Guo, D. 2017. Real-time bidding by reinforcement learning in display advertising. In WSDM.
[Chen, Goodfellow, and Shlens 2015] Chen, T.; Goodfellow, I.; and Shlens, J. 2015. Net2net: Accelerating learning via knowledge transfer. ICLR.
[Domhan, Springenberg, and Hutter 2015] Domhan, T.; Springenberg, J. T.; and Hutter, F. 2015. Speeding up automatic hyper-parameter optimization of deep neural networks by extrapolation of learning curves. In IJCAI.
[Gastaldi 2017] Gastaldi, X. 2017. Shake-shake regularization.
arXiv preprint arXiv:1705.07485. [Goodfellow et al. 2013] Goodfellow, | 1707.04873#45 |
1707.04873 | 46 | arXiv preprint arXiv:1705.07485. [Goodfellow et al. 2013] Goodfellow,
J.; Warde-Farley, D.; Mirza, M.; Courville, A.; and Bengio, Y. 2013. Maxout networks. ICML.
[Han et al. 2015] Han, S.; Pool, J.; Tran, J.; and Dally, W. 2015. Learning both weights and connections for efficient neural network. In NIPS.
[He et al. 2016a] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016a. Deep residual learning for image recognition. In CVPR.
[He et al. 2016b] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016b. Identity mappings in deep residual networks. In ECCV.
[Huang et al. 2017] Huang, G.; Liu, Z.; Weinberger, K. Q.; and van der Maaten, L. 2017. Densely connected convolutional networks. CVPR.
[Ioffe and Szegedy 2015] Ioffe, S., and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML. | 1707.04873#46 |
1707.04873 | 47 | [Kakade 2002] Kakade, S. 2002. A natural policy gradient. NIPS. [Kingma and Ba 2015] Kingma, D., and Ba, J. 2015. Adam: A
method for stochastic optimization. ICLR.
[Klein et al. 2017] Klein, A.; Falkner, S.; Springenberg, J. T.; and Hutter, F. 2017. Learning curve prediction with bayesian neural networks. ICLR.
[Krizhevsky and Hinton 2009] Krizhevsky, A., and Hinton, G. 2009. Learning multiple layers of features from tiny images.
[Krizhevsky, Sutskever, and Hinton 2012] Krizhevsky, A.; Imagenet classiï¬ca- Sutskever, I.; and Hinton, G. E. tion with deep convolutional neural networks. In NIPS. 2012.
[Lin, Chen, and Yan 2013] Lin, M.; Chen, Q.; and Yan, S. 2013. Network in network. arXiv preprint arXiv:1312.4400. | 1707.04873#47 |
1707.04873 | 48 | [Mendoza et al. 2016] Mendoza, H.; Klein, A.; Feurer, M.; Springenberg, J. T.; and Hutter, F. 2016. Towards automatically-tuned neural networks. In Workshop on Automatic Machine Learning. [Miller, Todd, and Hegde 1989] Miller, G. F.; Todd, P. M.; and Hegde, S. U. 1989. Designing neural networks using genetic algorithms. In ICGA. Morgan Kaufmann Publishers Inc.
[Negrinho and Gordon 2017] Negrinho, R., and Gordon, G. 2017. Deeparchitect: Automatically designing and training deep architectures. arXiv preprint arXiv:1704.08792.
[Netzer et al. 2011] Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; and Ng, A. Y. 2011. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning.
[Real et al. 2017] Real, E.; Moore, S.; Selle, A.; Saxena, S.; Suematsu, Y. L.; Le, Q.; and Kurakin, A. 2017. Large-scale evolution of image classifiers. ICML. | 1707.04873#48 |
1707.04873 | 49 | [Schulman et al. 2015] Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M. I.; and Moritz, P. 2015. Trust region policy optimization. In ICML.
[Schuster and Paliwal 1997] Schuster, M., and Paliwal, K. K. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing.
[Silver et al. 2016] Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature. [Simonyan and Zisserman 2015] Simonyan, K., and Zisserman, A. 2015. Very deep convolutional networks for large-scale image recognition. ICLR.
[Snoek, Larochelle, and Adams 2012] Snoek, J.; Larochelle, H.; and Adams, R. P. 2012. Practical bayesian optimization of machine learning algorithms. In NIPS. | 1707.04873#49 |
1707.04873 | 50 | [Springenberg et al. 2014] Springenberg, J. T.; Dosovitskiy, A.; Brox, T.; and Riedmiller, M. 2014. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
[Stanley and Miikkulainen 2002] Stanley, K. O., and Miikkulainen, R. 2002. Evolving neural networks through augmenting topolo- gies. Evolutionary computation.
[Sutskever et al. 2013] Sutskever, I.; Martens, J.; Dahl, G.; and Hinton, G. 2013. On the importance of initialization and momentum in deep learning. In ICML.
[Sutskever, Vinyals, and Le 2014] Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In NIPS.
[Sutton and Barto 1998] Sutton, R. S., and Barto, A. G. 1998. Re- inforcement learning: An introduction. MIT press Cambridge.
[Williams 1992] Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning. | 1707.04873#50 |
1707.03743 | 1 | Abstract—The real-time strategy game StarCraft has proven to be a challenging environment for artificial intelligence techniques, and as a result, current state-of-the-art solutions consist of numerous hand-crafted modules. In this paper, we show how macromanagement decisions in StarCraft can be learned directly from game replays using deep learning. Neural networks are trained on 789,571 state-action pairs extracted from 2,005 replays of highly skilled players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting the next build action. By integrating the trained network into UAlbertaBot, an open source StarCraft bot, the system can significantly outperform the game's built-in Terran bot, and play competitively against UAlbertaBot with a fixed rush strategy. To our knowledge, this is the first time macromanagement tasks are learned directly from replays in StarCraft. While the best hand-crafted strategies are still the state-of-the-art, the deep network approach is able to express a wide range of different strategies and thus improving the network's performance further with deep reinforcement learning is an immediately promising avenue for future research. Ultimately this approach could lead to strong StarCraft bots that are less reliant on hard-coded strategies.
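The top-1 and top-3 error rates quoted in this abstract follow the usual definition: an example counts as correct if the ground-truth build action is among the k most probable predicted actions. A small sketch of that metric (illustrative only; `top_k_error` and the toy probabilities are not from the paper):

```python
import numpy as np

def top_k_error(probs, labels, k):
    """Fraction of examples whose true label is NOT among the k highest-scoring classes."""
    topk = np.argsort(-probs, axis=1)[:, :k]        # indices of the k largest scores per row
    hits = (topk == labels[:, None]).any(axis=1)    # true label found in the top k?
    return 1.0 - hits.mean()

probs = np.array([[0.1, 0.6, 0.3],
                  [0.5, 0.2, 0.3],
                  [0.2, 0.3, 0.5]])
labels = np.array([1, 2, 2])
print(top_k_error(probs, labels, k=1))  # 0.333... (only the middle prediction misses)
print(top_k_error(probs, labels, k=2))  # 0.0
```

Top-k error is always at most top-1 error, which is why the 22.9% top-3 figure sits well below the 54.6% top-1 figure.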
# I. INTRODUCTION | 1707.03743#1 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 1 | We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The QUASAR-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The QUASAR-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 (Callan et al., 2009) serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. We also describe a retrieval system for extracting relevant sentences and documents from the corpus given a query, and include these in the release for researchers wishing to only focus on (2). | 1707.03904#1 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
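Cloze-style (fill-in-the-gap) queries like those in QUASAR-S are obtained by masking the defined entity inside its definition sentence. A hypothetical sketch of that construction (the example definition is made up, not taken from Stack Overflow):

```python
import re

def make_cloze(definition, answer, placeholder="@placeholder"):
    """Turn a definition sentence into a cloze query by masking the answer entity.

    Returns (query, answer); definitions that never mention the answer yield None.
    """
    pattern = re.compile(re.escape(answer), flags=re.IGNORECASE)
    if not pattern.search(definition):
        return None
    return pattern.sub(placeholder, definition), answer

# Hypothetical tag definition in the style of Stack Overflow (not from the dataset).
query, answer = make_cloze(
    "Java is a high-level, object-oriented programming language.", "java")
print(query)   # @placeholder is a high-level, object-oriented programming language.
print(answer)  # java
```

The system being evaluated must then recover the masked entity from the background corpus rather than from the definition itself.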
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 2 | # I. INTRODUCTION
Artificial neural networks have been a promising tool in machine learning for many tasks. In the last decade, the increase in computational resources as well as several algorithmic improvements have allowed deep neural networks with many layers to be trained on large datasets. This approach, also re-branded as deep learning, has remarkably pushed the limits within object recognition [13], speech recognition [8], and many other domains. Combined with reinforcement learning, these techniques have surpassed the previous state-of-the-art in playing Atari games [16], the classic board game Go [23] and the 3D first-person shooter Doom [15].
An open challenge for these methods is real-time strategy (RTS) games such as StarCraft, which are highly complex on many levels because of their enormous state and action spaces with a large number of units that must be controlled in real-time. Furthermore, in contrast to games like Go, AI algorithms in StarCraft must deal with hidden information; the opponent's base is initially hidden and must be explored continuously throughout the game to know (or guess) what strategy the opponent is following. The game has been a popular environment for game AI researchers with several StarCraft AI competitions such as the AIIDE StarCraft AI
Competition1, CIG StarCraft RTS AI Competition2 and the Student StarCraft AI Competition3. | 1707.03743#2 |
1707.03904 | 2 | for extracting relevant sentences and documents from the corpus given a query, and include these in the release for researchers wishing to only focus on (2). We evaluate several baselines on both datasets, ranging from simple heuristics to powerful neural models, and show that these lag behind human performance by 16.4% and 32.1% for QUASAR-S and -T respectively. The datasets are available at https://github.com/bdhingra/quasar. | 1707.03904#2 |
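The retrieval subtask mentioned in this chunk (ranking corpus sentences against a query) can be approximated by a simple word-overlap heuristic; this sketch is a stand-in for illustration, not the retrieval system released with the datasets:

```python
def retrieve(query, sentences, top_n=2):
    """Rank corpus sentences by word overlap with the query (a simple baseline)."""
    q = set(query.lower().split())
    scored = sorted(sentences,
                    key=lambda s: len(q & set(s.lower().split())),
                    reverse=True)
    return scored[:top_n]

corpus = [
    "The capital of France is Paris.",
    "Bananas are rich in potassium.",
    "Paris hosted the 1900 Summer Olympics.",
]
print(retrieve("what is the capital of france", corpus, top_n=1))
# ['The capital of France is Paris.']
```

Stronger retrievers (e.g. tf-idf weighting or neural rankers) replace the raw overlap score, but the interface, query in and ranked passages out, stays the same.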
1707.03743 | 3 | Competition1, CIG StarCraft RTS AI Competition2 and the Student StarCraft AI Competition3.
However, bots participating in these competitions rely mainly on hard-coded strategies [6, 20] and are rarely able to adapt to the opponent during the game. They usually have a modular control architecture that divides the game into smaller task areas, relying heavily on hand-crafted modules and developer domain knowledge. Learning to play the entire game with end-to-end deep learning, as it was done for Atari games [16], is currently an unsolved challenge and perhaps an infeasible approach. A simpler approach, which we follow in this paper, is to apply deep learning to replace a specific function in a larger AI architecture. | 1707.03743#3 |
1707.03904 | 3 | # Introduction
to information seeking questions posed in natural language. Depending on the knowledge source available there are two main approaches for factoid QA. Structured sources, including Knowledge Bases (KBs) such as Freebase (Bollacker et al., 2008), are easier to process automatically since the information is organized according to a fixed schema. In this case the question is parsed into a logical form in order to query against the KB. However, even the largest KBs are often incomplete (Miller et al., 2016; West et al., 2014), and hence can only answer a limited subset of all possible factoid questions.
For this reason the focus is now shifting towards unstructured sources, such as Wikipedia articles, which hold a vast quantity of information in textual form and, in principle, can be used to answer a much larger collection of questions. Extracting the correct answer from unstructured text is, however, challenging, and typical QA pipelines consist of the following two components: (1) searching for the passages relevant to the given question, and (2) reading the retrieved text in order to select a span of text which best answers the question (Chen et al., 2017; Watanabe et al., 2017). | 1707.03904#3 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
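The QA pipeline described in this chunk has two components: (1) searching for passages relevant to the question, and (2) reading the retrieved text to select an answer. A toy sketch of that two-stage structure follows; the passages, question, overlap scorer, and candidate list are all invented for illustration and are not the paper's system.

```python
# Toy search-then-read pipeline: (1) rank passages by term overlap with
# the question, (2) "read" the best passage by returning the first
# candidate answer string it contains. All data here is illustrative.

def search(question, passages, k=1):
    """Return the k passages sharing the most terms with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q_terms & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def read(passage, candidates):
    """Pick the first candidate answer that appears in the passage."""
    low = passage.lower()
    for c in candidates:
        if c.lower() in low:
            return c
    return None

passages = [
    "JavaScript is a dynamic weakly-typed language used for client-side scripting.",
    "Java is a statically typed language that runs on the JVM.",
]
question = "javascript is used for what kind of scripting"
best = search(question, passages, k=1)[0]
answer = read(best, ["client-side", "server-side"])
print(answer)  # client-side
```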
1707.03743 | 4 | More specifically, we focus on applying deep learning to macromanagement tasks in StarCraft: Brood War in the context of deciding what to produce next. A neural network is trained to predict these decisions based on a training set extracted from replay files (i.e. game logs) of highly skilled human players. The trained neural network is combined with the existing StarCraft bot UAlbertaBot, and is responsible for deciding what unit, building, technology, or upgrade to produce next, given the current state of the game. While our approach does not achieve state-of-the-art results on its own, it is a promising first step towards self-learning methods for macromanagement in RTS games. Additionally, the approach presented here is not restricted to StarCraft and can be directly applied to other RTS games as well. | 1707.03743#4 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
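The abstract above reports top-1 and top-3 error rates for next-build prediction. As a sanity check on what those metrics mean, here is a minimal top-k error computation; the score vectors and labels are made up for the example.

```python
# Top-k error rate: fraction of examples whose true action is NOT among
# the k highest-scoring predictions. Scores and labels are invented.

def top_k_error(scores, labels, k):
    wrong = 0
    for row, label in zip(scores, labels):
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        if label not in top_k:
            wrong += 1
    return wrong / len(labels)

scores = [
    [0.1, 0.7, 0.2],  # highest score: action 1
    [0.5, 0.3, 0.2],  # highest score: action 0
    [0.2, 0.3, 0.5],  # highest score: action 2
]
labels = [1, 2, 0]
print(top_k_error(scores, labels, k=1))  # 2/3: two of three top-1 guesses are wrong
print(top_k_error(scores, labels, k=3))  # 0.0: true action always in top 3 of 3
```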
1707.03904 | 4 | Like most other language technologies, the current research focus for both these steps is firmly on machine learning based approaches for which performance improves with the amount of data available. Machine reading performance, in particular, has been significantly boosted in the last few years with the introduction of large-scale reading comprehension datasets such as CNN / Daily-Mail (Hermann et al., 2015) and Squad (Rajpurkar et al., 2016). State-of-the-art systems for these datasets (Dhingra et al., 2017; Seo et al., 2017) focus solely on step (2) above, in effect assuming the relevant passage of text is already known.
Factoid Question Answering (QA) aims to extract answers, from an underlying knowledge source,
In this paper, we introduce two new datasets for QUestion Answering by Search And Reading
javascript – javascript not to be confused with java is a dynamic weakly-typed language used for XXXXX as well as server-side scripting . client-side JavaScript is not weakly typed, it is strong typed. JavaScript is a Client Side Scripting Language. JavaScript was the **original** client-side web scripting language.
(Figure 1 panel labels: Question / Answer / Context excerpt; the context excerpts for the javascript example appear above.) | 1707.03904#4 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
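QUASAR-S queries are cloze-style: an entity is masked out of a tag definition, as with the XXXXX placeholder in the javascript example above. A minimal sketch of that masking step; the definition string and the `make_cloze` helper are invented for illustration, not taken from the dataset construction code.

```python
import re

def make_cloze(definition, entities, placeholder="XXXXX"):
    """Mask the first entity from `entities` found in `definition`.
    Returns (query, answer), or None if no entity occurs."""
    for entity in entities:
        pattern = r"\b" + re.escape(entity) + r"\b"
        if re.search(pattern, definition):
            query = re.sub(pattern, placeholder, definition, count=1)
            return query, entity
    return None

definition = ("javascript is a dynamic weakly-typed language used for "
              "client-side scripting as well as server-side scripting .")
query, answer = make_cloze(definition, ["client-side"])
print(query)   # ... used for XXXXX scripting as well as server-side scripting .
print(answer)  # client-side
```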
1707.03743 | 5 | II. STARCRAFT StarCraft is a real-time strategy (RTS) game released by Blizzard in 1998. The same year an expansion set called StarCraft: Brood War was released, which became so popular that a professional StarCraft gamer scene emerged. The game is a strategic military combat simulation in a science fiction setting. Each player controls one of three races: Terran, Protoss and Zerg. During the game, they must gather resources to expand their base and produce an army. The winner of a game is the player that manages to destroy the opponent's base. Figure 1 shows a screenshot from a player's perspective controlling the Protoss. The screenshot shows numerous workers
# 1http://www.cs.mun.ca/~dchurchill/starcraftaicomp/ 2http://cilab.sejong.ac.kr/sc competition/ 3http://sscaitournament.com/ | 1707.03743#5 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 5 | Question: 7-Eleven stores were temporarily converted into Kwik E-marts to promote the release of what movie? Answer: the simpsons movie. Context excerpts: In July 2007 , 7-Eleven redesigned some stores to look like Kwik-E-Marts in select cities to promote The Simpsons Movie . Tie-in promotions were made with several companies , including 7-Eleven , which transformed selected stores into Kwik-E-Marts . "7-Eleven Becomes Kwik-E-Mart for 'Simpsons Movie' Promotion" .
Figure 1: Example short-document instances from QUASAR-S (top) and QUASAR-T (bottom) | 1707.03904#5 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03904 | 6 | Figure 1: Example short-document instances from QUASAR-S (top) and QUASAR-T (bottom)
– QUASAR. The datasets each consist of factoid question-answer pairs and a corresponding large background corpus to facilitate research into the combined problem of retrieval and comprehension. QUASAR-S consists of 37,362 cloze-style questions constructed from definitions of software entities available on the popular website Stack Overflow1. The answer to each question is restricted to be another software entity, from an output vocabulary of 4874 entities. QUASAR-T consists of 43,013 trivia questions collected from various internet sources by a trivia enthusiast. The answers to these questions are free-form spans of text, though most are noun phrases. | 1707.03904#6 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 7 | collecting minerals and gas resources, and some buildings used to produce combat units. To master the game, StarCraft players need quick reactions to accurately and efficiently control a large number of units in real-time. Tasks related to unit control are called micromanagement tasks, while macromanagement refers to the higher-level game strategy the player is following. Part of the macromanagement is the chosen build order, i.e. the order in which the player produces material in the game, which can be viewed as the strategic plan a player is following. In this paper, the term build is used to refer to any of the four types of material that can be produced: units, buildings, upgrades and technologies. Besides the opening build order, it is equally important for the player to be able to adapt to the opponent's strategy later in the game. For example, if a player becomes aware that the opponent is producing flying units it is a bad idea to exclusively produce melee troops that are restricted to ground attacks. Players need to be able to react and adjust to the build strategies | 1707.03743#7 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 7 | ity to answer questions given large corpora. Prior datasets (such as those used in (Chen et al., 2017)) are constructed by first selecting a passage and then constructing questions about that passage. This design (intentionally) ignores some of the subproblems required to answer open-domain questions from corpora, namely searching for passages that may contain candidate answers, and aggregating information/resolving conflicts between candidates from many passages. The purpose of Quasar is to allow research into these subproblems, and in particular whether the search step can benefit from integration and joint training with downstream reading systems.
While production quality QA systems may have access to the entire world wide web as a knowledge source, for QUASAR we restrict our search to specific background corpora. This is necessary to avoid uninteresting solutions which directly extract answers from the sources from which the questions were constructed. For QUASAR-S we construct the knowledge source by collecting top 50 threads2 tagged with each entity in the dataset on the Stack Overflow website. For QUASAR-T we use ClueWeb09 (Callan et al., 2009), which contains about 1 billion web pages collected between January and February 2009. Figure 1 shows some examples. | 1707.03904#7 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
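The chunk above restricts search to a fixed background corpus (Stack Overflow threads or ClueWeb09). A rough stand-in for a document retriever over such a corpus is a tf-idf ranker; the documents below are invented and this is not the retrieval system released with the datasets.

```python
import math
from collections import Counter

# Toy tf-idf document ranker over a small background corpus.
# Documents are invented for illustration.
docs = [
    "7-Eleven redesigned some stores to look like Kwik-E-Marts to promote The Simpsons Movie",
    "ClueWeb09 contains about 1 billion web pages collected in 2009",
    "Stack Overflow features questions and answers about computer programming",
]

tokenized = [d.lower().split() for d in docs]
# Document frequency of each term (counted once per document).
df = Counter(term for toks in tokenized for term in set(toks))
N = len(docs)

def score(query, toks):
    """Sum of tf * idf over query terms present in the corpus."""
    tf = Counter(toks)
    return sum(tf[t] * math.log(N / df[t]) for t in query.lower().split() if t in df)

def retrieve(query, k=1):
    ranked = sorted(range(N), key=lambda i: score(query, tokenized[i]), reverse=True)
    return [docs[i] for i in ranked[:k]]

print(retrieve("which stores promote the simpsons movie", k=1)[0])
```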
1707.03743 | 8 | of their opponent; learning these macromanagement decisions is the focus of this paper. Macromanagement in StarCraft is challenging for a number of reasons, but mostly because areas which are not occupied by friendly units are not observable, a game mechanic known as fog-of-war. This restriction means that players must order units to scout the map to locate the opponent's bases. The opponent's strategy must then be deduced continuously from the partial knowledge obtained by scouting units. | 1707.03743#8 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 8 | the interesting feature of being a closed-domain dataset about computer programming, and successful approaches to it must develop domain-expertise and a deep understanding of the background corpus. To our knowledge it is one of the largest closed-domain QA datasets available. QUASAR-T, on the other hand, consists of open-domain questions based on trivia, which refers to "bits of information, often of little importance". Unlike previous open-domain systems which rely heavily on the redundancy of information on the web to correctly answer questions, we hypothesize that QUASAR-T requires a deeper reading of documents to answer correctly.
Unlike existing reading comprehension tasks, the QUASAR tasks go beyond the ability to only understand a given passage, and require the abil- 1Stack Overflow is a website featuring questions and answers (posts) from a wide range of topics in computer programming. The entity definitions were scraped from https://stackoverflow.com/tags.
2A question along with the answers provided by other users is collectively called a thread. The threads are ranked in terms of votes from the community. Note that these questions are different from the cloze-style queries in the QUASAR-S dataset. | 1707.03904#8 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 9 | Today, most StarCraft players play the sequel expansion set StarCraft II: Legacy of the Void. While this game introduces modern 3D graphics and new units, the core gameplay is the same as in the original. For StarCraft: Brood War, bots can communicate with the game using the Brood War Application Programming Interface (BWAPI)4, which has been the foundation of several StarCraft AI competitions.
4http://bwapi.github.io/
# III. RELATED WORK
A. Build Order Planning
Build order planning can be viewed as a search problem, in which the goal is to find the build order that optimizes a specific heuristic. Churchill et al. applied tree search for build order planning with a goal-based approach; the search tries to minimize the time used to reach a given goal [5]. This approach is also implemented in UAlbertaBot and runs in real-time. | 1707.03743#9 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
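Goal-based build order search, as attributed to Churchill et al. above, minimizes the time needed to reach a given goal set of builds. The breadth-first sketch below captures only the shape of that search: it minimizes the number of build actions rather than game time, the tech-tree prerequisites are invented, and resources and build durations are ignored.

```python
from collections import deque

# Invented toy tech tree: each build maps to its prerequisite builds.
PREREQS = {
    "supply_depot": set(),
    "barracks": {"supply_depot"},
    "marine": {"barracks"},
    "factory": {"barracks"},
}

def plan(goal):
    """Shortest build sequence (by action count) reaching the goal set."""
    start = frozenset()
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        owned, order = queue.popleft()
        if goal <= owned:          # every goal build is owned
            return order
        for build, reqs in PREREQS.items():
            if build not in owned and reqs <= owned:
                nxt = owned | {build}
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, order + [build]))
    return None  # goal unreachable with these prerequisites

print(plan({"marine"}))  # ['supply_depot', 'barracks', 'marine']
```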
1707.03904 | 9 | We evaluate QUASAR against human testers, as well as several baselines ranging from naive heuristics to state-of-the-art machine readers. The best performing baselines achieve 33.6% and 28.5% on QUASAR-S and QUASAR-T, while human performance is 50% and 60.6% respectively. For the automatic systems, we see an interesting tension between searching and reading accuracies – retrieving more documents in the search phase
leads to a higher coverage of answers, but makes the comprehension task more difficult. We also collect annotations on a subset of the development set questions to allow researchers to analyze the categories in which their system performs well or falls short. We plan to release these annotations along with the datasets, and our retrieved documents for each question.
# 2 Existing Datasets | 1707.03904#9 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 10 | Other goal-based methods that have shown promising results in optimizing opening build orders are multi-objective evolutionary algorithms [1, 12, 14]. The downside of goal-based approaches is that goals and thus strategies are fixed, thereby preventing the bot from adapting to its opponent. Justesen et al. recently demonstrated how an approach called Continual Online Evolutionary Planning (COEP) can continually evolve build orders during the game itself to adapt to the opponent [10]. In contrast to a goal-based approach, COEP does not direct the search towards a fixed goal but can instead adapt to the opponent's units. The downside of this approach, however, is that it requires a sophisticated heuristic that is difficult to design.
B. Learning from StarCraft Replays | 1707.03743#10 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 10 | Open-Domain QA: Early research into open-domain QA was driven by the TREC-QA challenges organized by the National Institute of Standards and Technology (NIST) (Voorhees and Tice, 2000). Both dataset construction and evaluation were done manually, restricting the size of the dataset to only a few hundreds. WIKIQA (Yang et al., 2015) was introduced as a larger-scale dataset for the subtask of answer sentence selection, however it does not identify spans of the actual answer within the selected sentence. More recently, Miller et al. (2016) introduced the MOVIESQA dataset where the task is to answer questions about movies from a background corpus of Wikipedia articles. MOVIESQA contains ∼ 100k questions, however many of these are similarly phrased and fall into one of only 13 different categories; hence, existing systems already have ∼ 85% accuracy on it (Watanabe et al., 2017). MS MARCO (Nguyen et al., 2016) consists of diverse real-world queries collected from Bing search logs, however many of them not factual, which makes their evaluation | 1707.03904#10 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 11 | B. Learning from StarCraft Replays
Players have the option to save a replay file after each game in StarCraft, which enables them to watch the game without fog-of-war. Several web sites are dedicated to hosting replay files, as they are a useful resource to improve one's strategic knowledge of the game. Replay files contain the set of actions performed by both players, which the StarCraft engine can use to reproduce the exact events. Replay files are thus a great resource for machine learning if one wants to learn how players are playing the game. This section will review some previous approaches that learn from replay files. | 1707.03743#11 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 11 | 2016) consists of diverse real-world queries collected from Bing search logs, however many of them not factual, which makes their evaluation tricky. Chen et al. (2017) study the task of Machine Reading at Scale which combines the aspects of search and reading for open-domain QA. They show that jointly training a neural reader on several distantly supervised QA datasets leads to a performance improvement on all of them. This justifies our motivation of introducing two new datasets to add to the collection of existing ones; more data is good data.
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 12 | Case-based reasoning [9, 19, 30], feature-expanded decision trees [4], and several traditional machine learning algorithms [4] have been used to predict the opponent's strategy in RTS games by learning from replays. While strategy prediction is a critical part of playing StarCraft, the usefulness of applying these approaches to StarCraft bots has not been demonstrated. Dereszynski et al. trained Hidden Markov Models on 331 replays to learn the probabilities of the opponent's future unit productions as well as a probabilistic state transition model [7]. The learned model takes as input the partial knowledge about the opponent's buildings and units and then outputs the probability that the opponent will produce a certain unit in the near future. Synnaeve et al. applied a Bayesian model for build tree prediction in StarCraft from partial observations with robust results even with 30% noise (i.e. up to 30% of the opponent's buildings are unknown) [26]. These predictive models can be very useful for a StarCraft bot, but they do not directly determine what to produce during the game. Tactical decision making can benefit equally from combat forward models; Uriarte et al. showed how such a model can be fine-tuned using knowledge learned from replay data [28].
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 12 | Reading Comprehension: Reading Comprehension (RC) aims to measure the capability of systems to "understand" a given piece of text, by posing questions over it. It is assumed that the passage containing the answer is known beforehand. Several datasets have been proposed to measure this capability. Richardson et al. (2013) used crowd-sourcing to collect MCTest — 500 stories with 2000 questions over them. Significant progress, however, was enabled when Hermann et al. (2015) introduced the much larger CNN / Daily Mail datasets consisting of 300k and 800k cloze-style questions respectively. Children's Book Test (CBT) (Hill et al., 2016) and Who-Did-What (WDW) (Onishi et al., 2016) are similar cloze-style datasets. However, the automatic procedure used to construct these questions often introduces ambiguity and makes the task more difficult (Chen et al., 2016). Squad (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2016) attempt to move toward more general extractive QA by collecting, through crowd-sourcing, more than 100k questions whose answers are spans of text in a given passage. Squad in particular has attracted considerable interest, but recent work (Weissenborn et al., 2017) suggests that answering the questions does not require a great deal of reasoning.
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 13 | The approach presented in this paper addresses the complete challenge that is involved in deciding what to produce. Additionally, our approach learns a solution to this problem using deep learning, which is briefly described next.
# C. Deep Learning
Artificial neural networks are computational models loosely inspired by the functioning of biological brains. Given an input signal, an output is computed by traversing a large number of connected neural units. The topology and connection weights of these networks can be optimized with evolutionary algorithms, which is a popular approach to evolve game-playing behaviors [21]. In contrast, deep learning most often refers to deep neural networks trained with gradient descent methods (e.g. backpropagation) on large amounts of data, which has shown remarkable success in a variety of different fields. In this case the network topologies are often hand-designed with many layers of computational units, while the parameters are learned through small iterated updates. As computers have become more powerful and with the help of algorithmic improvements, it has become feasible to train deep neural networks to perform at a human-level in object recognition [13] and speech recognition [8].
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 13 | et al., 2016) and NewsQA (Trischler et al., 2016) attempt to move toward more general extractive QA by collecting, through crowd-sourcing, more than 100k questions whose answers are spans of text in a given passage. Squad in particular has attracted considerable interest, but recent work (Weissenborn et al., 2017) suggests that answering the questions does not require a great deal of reasoning.
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 14 | A combination of deep learning and reinforcement learning has achieved human-level results in Atari video games [16, 17] and beyond human-level in the classic board game Go [23]. In the case of Go, pre-training the networks on game logs of human players to predict actions was a critical step in achieving this landmark result because it allowed further training through self-play with reinforcement learning.
While deep learning has been successfully applied to achieve human-level results for many types of games, it is still an open question how it can be applied to StarCraft. On a much smaller scale Stanescu et al. showed how to train convolutional neural networks as game state evaluators in µRTS [25] and Usunier et al. applied reinforcement learning on small-scale StarCraft combats [29]. To our knowledge no prior work shows how to learn macromanagement actions from replays using deep learning.
Also worth mentioning is a technique known as imitation learning, in which a policy is trained to imitate human players. Imitation learning has been applied to Super Mario [3] and Atari games [2]. These results suggest that learning to play games from human traces is a promising approach that is the foundation of the method presented in this paper.
# IV. APPROACH | 1707.03743#14 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 14 | Recently, Joshi et al. (2017) prepared the TriviaQA dataset, which also consists of trivia questions collected from online sources, and is similar to QUASAR-T. However, the documents retrieved for TriviaQA were obtained using a commercial search engine, making it difficult for researchers to vary the retrieval step of the QA system in a controlled fashion; in contrast we use ClueWeb09, a standard corpus. We also supply a larger collection of retrieved passages, including many not containing the correct answer to facilitate research into retrieval, perform a more extensive analysis of baselines for answering the questions, and provide additional human evaluation and annotation of the questions. In addition we present QUASAR-S, a second dataset. SearchQA (Dunn et al., 2017) is another recent dataset aimed at facilitating research towards an end-to-end QA pipeline, however this too uses a commercial search engine, and does not provide negative contexts not containing the answer, making research into the retrieval component difficult.
# 3 Dataset Construction | 1707.03904#14 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 15 | # IV. APPROACH
This section describes the presented approach, which consists of two parts. First, a neural network is trained to predict human macromanagement actions, i.e. what to produce next in a given state. Second, the trained network is applied to an existing StarCraft bot called UAlbertaBot by replacing the module responsible for production decisions. UAlbertaBot is an open source StarCraft bot developed by David Churchill5
5https://github.com/davechurchill/ualbertabot
that won the annual AIIDE StarCraft AI Competition in 2013. The bot consists of numerous hierarchical modules, such as an information manager, building manager and production manager. The production manager is responsible for managing the build order queue, i.e. the order in which the bot produces new builds. This architecture enables us to replace the production manager with our neural network, such that whenever the bot is deciding what to produce next, the network predicts what a human player would produce. The modular design of UAlbertaBot is described in more detail in Ontañón et al. [20].
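As an illustrative sketch only (UAlbertaBot's real C++ interface differs, and the class, model, and build names here are hypothetical), the module swap described above amounts to delegating the production decision to the trained network:

```python
# Hypothetical sketch: a production manager that delegates "what to build
# next" to a trained model instead of a scripted build order.

class NetworkProductionManager:
    def __init__(self, model, build_names):
        self.model = model              # callable: state vector -> action scores
        self.build_names = build_names  # index -> build name

    def next_build(self, state_vector):
        """Return the build the network scores highest for this state."""
        scores = self.model(state_vector)
        best = max(range(len(scores)), key=scores.__getitem__)
        return self.build_names[best]
```

Whenever the bot's build queue runs empty, `next_build` would be queried with the encoded game state in place of the original scripted production logic.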
# A. Dataset | 1707.03743#15 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03743 | 16 | # A. Dataset
This section gives an overview of the dataset used for training and how it has been created from replay files. A replay file for StarCraft contains every action performed throughout the game by each player, and the StarCraft engine can recreate the game by executing these actions in the correct order. To train a neural network to predict the macromanagement decisions made by players, state-action pairs are extracted from replay files, where a state describes the current game situation and an action corresponds to the next build produced by the player. Additionally, states are encoded as a vector of normalized values to be processed by our neural network.
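A minimal sketch of such a state-action extraction, with assumed feature names and normalization caps (the paper's exact feature set is not specified in this excerpt, so every field below is illustrative):

```python
# Hypothetical sketch of turning replay snapshots into normalized
# state vectors paired with the next build action.

def encode_state(minerals, gas, supply_used, supply_max, unit_counts,
                 max_units=200):
    """Encode a game state as a vector of values normalized to [0, 1]."""
    features = [
        min(minerals / 2000.0, 1.0),       # resource counts, capped
        min(gas / 2000.0, 1.0),
        supply_used / max(supply_max, 1),  # supply usage ratio
    ]
    # one entry per unit/building type, normalized by an assumed cap
    features.extend(min(c / max_units, 1.0) for c in unit_counts)
    return features

def extract_pairs(snapshots, build_actions):
    """Pair each state snapshot with the next build action that follows it."""
    return [(encode_state(**state), build)
            for state, build in zip(snapshots, build_actions)]
```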
Replay files are in a binary format and require preprocessing before knowledge can be extracted. The dataset used in this paper is extracted from an existing dataset. Synnaeve et al. collected a repository of 7,649 replays by scraping the three StarCraft community websites GosuGamers, ICCup and TeamLiquid, which are mainly for highly skilled players including professionals [27]. A large amount of information was extracted from the repository and stored in an SQL database by Robertson et al. [22]. This database contained state changes, including unit attributes, for every 24 frames in the games. Our dataset is extracted from this database, and an overview of the preprocessing steps is shown in Figure 2.
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
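The replay-to-dataset pipeline above (events replayed through a forward model, emitting one state-action pair per build) can be sketched as follows. This is an illustrative toy, not the authors' code: the event tuple encoding, the unit names, and the dictionary state representation are assumptions for the example.

```python
# Toy sketch of extracting (state, action) pairs from an abstract event
# stream: the state snapshot *before* each 'build' event is paired with
# that build as its label.
from copy import deepcopy


def extract_state_action_pairs(events):
    """events: (frame, kind, unit) tuples, kind in {'build', 'lost', 'spotted'}."""
    state = {"own": {}, "enemy": {}}
    pairs = []
    for frame, kind, unit in sorted(events):
        if kind == "build":
            pairs.append((deepcopy(state), unit))  # label = the next build
            state["own"][unit] = state["own"].get(unit, 0) + 1
        elif kind == "lost":
            state["own"][unit] = max(0, state["own"].get(unit, 0) - 1)
        elif kind == "spotted":
            state["enemy"][unit] = state["enemy"].get(unit, 0) + 1
    return pairs


events = [(13, "build", "Probe"), (377, "build", "Probe"),
          (2088, "spotted", "Supply Depot"), (3456, "lost", "Probe")]
pairs = extract_state_action_pairs(events)
```

A real implementation would additionally track production progress, technologies, and upgrades, and normalize the counts before vectorizing, as described for the state vector below.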
1707.03904 | 16 | # 3.1 Question sets
QUASAR-S: The software question set was built from the definitional "excerpt" entry for each tag (entity) on StackOverflow. For example, the excerpt for the "java" tag is, "Java is a general-purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM)." Not every excerpt includes the tag being defined (which we will call the "head tag"), so we prepend the head tag to the front of the string to guarantee relevant results later on in the pipeline. We then completed preprocessing of the software questions by downcasing and tokenizing the string using a custom tokenizer compatible with special characters in software terms (e.g. ".net", "c++"). Each preprocessed excerpt was then converted to a series of cloze questions using a simple heuristic: first searching the string for mentions of other entities, then replacing each mention in turn with a placeholder string (Figure 2). | 1707.03904#16 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
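The cloze-generation heuristic above (prepend the head tag, then replace each entity mention in turn with a placeholder) can be sketched as below. This is a minimal illustration, not the dataset's actual script: the entity set, the separator, and the token-level matching are simplifying assumptions.

```python
# Sketch of cloze generation: one cloze question per entity mention,
# skipping the prepended head tag itself.
def make_clozes(head_tag, excerpt, entities):
    text = head_tag + " - " + excerpt.lower()
    tokens = text.split()
    clozes = []
    for i, tok in enumerate(tokens):
        if tok in entities and i > 0:          # position 0 is the head tag
            q = tokens[:i] + ["@placeholder"] + tokens[i + 1:]
            clozes.append((" ".join(q), tok))  # (question, answer)
    return clozes


entities = {"java", "virtual-machine", "jvm"}
clozes = make_clozes("java", "Java is used with the virtual-machine jvm", entities)
```

Each mention yields one (question, answer) pair, mirroring the three clozes shown in Figure 2 of the paper.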
1707.03743 | 17 | From this database, we extract all events describing material changes throughout every Protoss versus Terran game, including when (1) builds are produced by the player, (2) units and buildings are destroyed and (3) enemy units and buildings are observed. These events take the perspective of one player and thus maintain the concept of partially observable states in StarCraft. The set of events thus represents a more abstract version of the game, only containing information about material changes and actions that relate to macromanagement tasks. The events are then used to simulate abstract StarCraft games via the build order forward model presented in Justesen and Risi [10]. Whenever the player takes an action in these abstract games, i.e. produces something, the action and state pair is added to our dataset. The state describes the player's own material in the game: the number of each unit, building, technology, and upgrade present and under construction, as well as enemy material observed by the player.
The entire state vector consists of a few sub-vectors described here in order, in which the numbers represent the indexes in the vector: | 1707.03743#17 | Learning Macromanagement in StarCraft from Replays using Deep Learning |
1707.03904 | 17 | This heuristic is noisy, since the software domain often overloads existing English words (e.g. "can" may refer to a Controller Area Network bus; "swap" may refer to the temporary storage of inactive pages of memory on disk; "using" may refer to a namespacing keyword). To improve precision we scored each cloze based on the relative incidence of the term in an English corpus versus in our StackOverflow one, and discarded all clozes scoring below a threshold. This means our dataset does not include any cloze questions for terms which are common in English (such as "can", "swap" and "using", but also "image", "service" and "packet"). A more sophisticated entity recognition system could make recall improvements here. | 1707.03904#17 | Quasar: Datasets for Question Answering by Search and Reading |
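The precision filter described above (compare a term's incidence in the StackOverflow corpus against a general English corpus and discard low-scoring clozes) can be sketched as follows. The counts, smoothing, and threshold here are made-up illustrative values, not the paper's actual numbers.

```python
# Sketch of the relative-incidence filter: keep a cloze answer only if the
# term is markedly more frequent in the domain corpus than in English.
def domain_score(term, so_counts, en_counts):
    # Add-one smoothing so unseen terms do not divide by zero.
    return (so_counts.get(term, 0) + 1) / (en_counts.get(term, 0) + 1)


so_counts = {"jvm": 900, "swap": 40, "can": 30}       # StackOverflow corpus
en_counts = {"jvm": 2, "swap": 500, "can": 90000}     # general English corpus

kept = [t for t in so_counts if domain_score(t, so_counts, en_counts) > 1.0]
# 'jvm' passes; the overloaded English words 'swap' and 'can' are discarded
```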
1707.03743 | 18 | [Figure 2 content: a diagram of the data-preprocessing pipeline, replay files -> replay file parser (BWAPI) -> SQL database -> event files -> forward model -> state-action files, followed by excerpts of an event file (protoss_build, protoss_lost, terran_spotted and terran_lost entries) and of a state-action file with normalized feature values.] | 1707.03743#18 | Learning Macromanagement in StarCraft from Replays using Deep Learning |
1707.03904 | 18 | QUASAR-T: The trivia question set was built from a collection of just under 54,000 trivia questions collected by Reddit user 007craft and released in December 2015³. The raw dataset was noisy, having been scraped from multiple sources with variable attention to detail in formatting, spelling, and accuracy. We filtered the raw questions to remove unparseable entries as well as any True/False or multiple choice questions, for a total of 52,000 free-response style questions remaining. The questions range in difficulty,
³ https://www.reddit.com/r/trivia/comments/3wzpvt/free_database_of_50000_trivia_questions/
from straightforward ("Who recorded the song 'Rocket Man'" "Elton John") to difficult ("What was Robin Williams paid for Disney's Aladdin in 1982" "Scale $485 day + Picasso Painting") to debatable ("According to Earth Medicine what's the birth totem for march" "The Falcon")⁴
# 3.2 Context Retrieval
The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question. | 1707.03904#18 | Quasar: Datasets for Question Answering by Search and Reading |
1707.03904 | 19 | # 3.2 Context Retrieval
The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.
Context documents for each query were generated in a two-phase fashion, first collecting a large pool of semirelevant text, then filling a temporary index with short or long pseudodocuments from the pool, and finally selecting a set of N top-ranking pseudodocuments (100 short or 20 long) from the temporary index. | 1707.03904#19 | Quasar: Datasets for Question Answering by Search and Reading |
1707.03743 | 20 | Fig. 2: An overview of the data preprocessing that converts StarCraft replays into vectorized state-action pairs. (a) shows the process of extracting data from replay files into an SQL database, which was done by Robertson et al. [22]. (b) shows our extended data processing that first extracts events from the database into files (c) containing builds, kills and observed enemy units. All events are then run through a forward model to generate vectorized state-action pairs with normalized values (d).
1) 0-31: The number of units/buildings of each type present in the game controlled by the player.
2) 32-38: The number of each technology type researched in the game by the player.
3) 39-57: The number of each upgrade type researched in the game by the player. For simplicity, upgrades are treated as a one-time build and our state description thus ignores level 2 and 3 upgrades.
4) 58-115: The number of each build in production by the player.
5) 116-173: The progress of each build in production by the player. If a build type is not in production it has a value of 0. If several builds of the same type are under construction, the value represents the progress of the build that will be completed first. | 1707.03743#20 | Learning Macromanagement in StarCraft from Replays using Deep Learning |
1707.03904 | 20 | for 50+ each from question-and-answer http://stackoverflow.com. StackOverflow keeps a running tally of the top-voted questions for each tag in their knowledge base; we used Scrapy⁵ to pull the top 50 question posts for each tag, along with any answer-post responses and metadata (tags, authorship, comments). From each thread we pulled all text not marked as code, and split it into sentences using the Stanford NLP sentence segmenter, truncating sentences to 2048 characters. Each sentence was marked with a thread identifier, a post identifier, and the tags for the thread. Long pseudodocuments were either the full post (in the case of question posts), or the full post and its head question (in the case of answer posts), comments included. Short pseudodocuments were individual sentences.
To build the context documents for QUASAR-S, the pseudodocuments for the entire corpus were loaded into a disk-based lucene index, each annotated with its thread ID and the tags for the thread. This index was queried for each cloze using the following lucene syntax:
SHOULD(PHRASE(question text)) SHOULD(BOOLEAN(question text)) MUST(tags:$headtag) | 1707.03904#20 | Quasar: Datasets for Question Answering by Search and Reading |
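Assembling such a query for one cloze can be sketched as below. This only builds the query string; the lucene client API, the exact operator spelling, and the placeholder stripping are assumptions for illustration, not the paper's actual code.

```python
# Sketch of building a lucene-style query: an exact-phrase SHOULD clause, a
# token-level SHOULD clause, and a MUST restriction to the head tag.
def build_query(question_text, head_tag):
    # Drop the placeholder token and collapse whitespace.
    text = " ".join(question_text.replace("@placeholder", "").split())
    phrase = f'"{text}"'            # SHOULD: exact phrase match scores highly
    tokens = f'({text})'            # SHOULD: partial token matches also score
    must = f'+tags:{head_tag}'      # MUST: only pseudodocuments with this tag
    return f"{phrase} {tokens} {must}"


q = build_query("java is a @placeholder language", "java")
```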
1707.03743 | 21 | 6) 174-206: The number of enemy units/buildings of each type observed.
7) 207-209: The number of supply used by the player and the maximum number of supplies available. Another value is added which is the supply left, i.e. the difference between supply used and maximum supplies available.
All values are normalized into the interval [0, 1]. The preprocessed dataset contains 2,005 state-action files with a total of 789,571 state-action pairs. Six replays were excluded because the Protoss player used the rare mind control spell on a Terran SCV that allows the Protoss player to produce Terran builds. While the data preprocessing required for training is a relatively long process, the same data can be gathered directly by a playing (or observing) bot during a game.
# B. Network Architecture | 1707.03743#21 | Learning Macromanagement in StarCraft from Replays using Deep Learning |
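The seven sub-vectors enumerated above concatenate to the 210-dimensional state vector. The sketch below checks those boundaries; the sub-vector widths are taken from the text, while the concrete feature ordering inside each sub-vector is an assumption.

```python
# Sketch of assembling the 210-dimensional normalized state vector:
# own material (32) + technologies (7) + upgrades (19) + builds in
# production (58) + production progress (58) + enemy material (33)
# + supply used/max/left (3) = 210 features, each in [0, 1].
def encode_state(own, tech, upg, prod_counts, prod_progress, enemy, supply):
    vec = own + tech + upg + prod_counts + prod_progress + enemy + supply
    assert len(vec) == 210
    assert all(0.0 <= v <= 1.0 for v in vec), "features must be normalized"
    return vec


state = encode_state([0.0] * 32, [0.0] * 7, [0.0] * 19,
                     [0.0] * 58, [0.0] * 58, [0.0] * 33,
                     [9 / 200, 17 / 200, 8 / 200])  # e.g. supply 9 of 17, 8 left
```

Note the index boundaries match the listing: production progress ends at index 173, enemy material at 206, and supply occupies 207-209.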
1707.03904 | 21 | SHOULD(PHRASE(question text)) SHOULD(BOOLEAN(question text)) MUST(tags:$headtag)
where "question text" refers to the sequence of tokens in the cloze question, with the placeholder
⁴ In Earth Medicine, March has two birth totems, the falcon and the wolf.
⁵ https://scrapy.org
Java is a general-purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM).
# Preprocessed Excerpt
java → java is a general-purpose object-oriented programming language designed to be used in conjunction with the java virtual-machine jvm .
# Cloze Questions
java → java is a general-purpose object-oriented programming language designed to be used in conjunction with the @placeholder virtual-machine jvm .
java → java is a general-purpose object-oriented programming language designed to be used in conjunction with the java @placeholder jvm .
java → java is a general-purpose object-oriented programming language designed to be used in conjunction with the java virtual-machine @placeholder .
Figure 2: Cloze generation | 1707.03904#21 | Quasar: Datasets for Question Answering by Search and Reading |
1707.03743 | 22 | # B. Network Architecture
Since our dataset contains neither images nor sequential data, a simple multi-layered network architecture with fully-connected layers is used. Our game state contains all the material produced and observed by the player throughout the game, unless it has been destroyed, and thus there is no need for recurrent connections in our model. The network that obtained the best results has four hidden layers. The input layer has 210 units, based on the state vector described in Section IV-A, which is followed by four hidden layers of 128 units with the ReLU activation function. The output layer has one output neuron for each of the 58 build types a Protoss player can produce and uses the softmax activation function. The output of the network is thus the probability of producing each build in the given state.
# C. Training
The dataset of 789,571 state-action pairs is split into a training set of 631,657 pairs (80%) and a test set of 157,914 pairs (20%). The training set is exclusively used for training the network, while the test set is used to evaluate the trained network. The state-action pairs, which come from 2,005 different Protoss versus Terran games, are not shuffled prior to the division of the data, to avoid actions from the same game ending up in both the training and test set. | 1707.03743#22 | Learning Macromanagement in StarCraft from Replays using Deep Learning |
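A forward pass through the architecture described above (210 inputs, four hidden layers of 128 ReLU units, 58-way softmax output) can be sketched in plain Python to keep the example dependency-free; a real implementation would use a deep-learning framework, and the weight initialization here is only a stand-in for illustration.

```python
# Toy forward pass for the 210 -> 4x128 (ReLU) -> 58 (softmax) network.
import math
import random


def linear(x, w, b):
    # w: one weight row per output unit; b: one bias per output unit.
    return [sum(xi * wij for xi, wij in zip(x, row)) + bj
            for row, bj in zip(w, b)]


def relu(x):
    return [max(0.0, v) for v in x]


def softmax(x):
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]


random.seed(0)
sizes = [210, 128, 128, 128, 128, 58]
layers = [([[random.gauss(0, 0.05) for _ in range(n_in)] for _ in range(n_out)],
           [0.0] * n_out)                       # (weights, zero biases)
          for n_in, n_out in zip(sizes, sizes[1:])]

x = [0.0] * 210                                 # an (empty) game state
for i, (w, b) in enumerate(layers):
    x = linear(x, w, b)
    if i < len(layers) - 1:                     # ReLU on hidden layers only
        x = relu(x)
probs = softmax(x)                              # P(next build) over 58 types
```

With an all-zero input and zero biases, every pre-activation is zero, so the output is the uniform distribution over the 58 build types; any real state would produce a non-uniform distribution.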
1707.03904 | 22 | Figure 2: Cloze generation
removed. The first SHOULD term indicates that an exact phrase match to the question text should score highly. The second SHOULD term indicates that any partial match to tokens in the question text should also score highly, roughly in proportion to the number of terms matched. The MUST term indicates that only pseudodocuments annotated with the head tag of the cloze should be considered.
The top 100N pseudodocuments were re- trieved, and the top N unique pseudodocuments were added to the context document along with their lucene retrieval score. Any questions show- ing zero results for this query were discarded. | 1707.03904#22 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
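The three-clause retrieval query described in the QUASAR-S chunk above (an exact-phrase SHOULD clause, a token-level BOOLEAN SHOULD clause, and a MUST filter on the head tag of the cloze) can be sketched as a plain string builder. The `build_query` helper and the `SHOULD`/`BOOLEAN`/`MUST`/`TAG` clause spelling below are illustrative pseudo-syntax, not Lucene's actual query-parser grammar.

```python
def build_query(question_text, head_tag=None):
    """Assemble the three-clause query described in the text: an exact
    phrase match, a partial token match, and (for QUASAR-S only) a
    mandatory filter on the head tag of the cloze."""
    tokens = question_text.split()
    clauses = [
        'SHOULD(PHRASE("%s"))' % question_text,
        "SHOULD(BOOLEAN(%s))" % " ".join(tokens),
    ]
    if head_tag is not None:  # QUASAR-T queries omit the MUST tag filter
        clauses.append("MUST(TAG(%s))" % head_tag)
    return " ".join(clauses)

print(build_query("java is a programming language", head_tag="java"))
```

For QUASAR-T the same helper is called with `head_tag=None`, matching the tag-free query shown later in the dataset description.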
The network is trained on the training set, which is shuffled before each epoch. Xavier initialization is used for all weights in the hidden layers and biases are initialized to zero. The learning rate is 0.0001 with the Adam optimization algorithm [11] and a batch size of 100. The optimization algorithm uses the cross entropy loss function −∑_i y′_i log(y_i), where y is the output vector of the network and y′ is the one-hot target vector. The problem is thus treated as a classification problem, in which the network tries to predict the next build given a game state. In contrast to classical classification problems, identical data examples (states) in our dataset can have different labels (builds), as human players execute different strategies and also make mistakes while playing. Also, there is no correct build for any state in StarCraft, but some builds are much more likely to be performed by players as they are more likely to result in a win. The network could also be trained to predict whether the player is going to win the game, but how to best incorporate this in the decision-making process is an open question. Instead here we focus on predicting actions made by human players, similarly to the supervised learning step in AlphaGo [23].
D. Applying the Network to a StarCraft Bot | 1707.03743#23 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
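The training setup in the chunk above (a 210-unit input, four 128-unit ReLU hidden layers, a 58-way softmax output, Xavier-initialized weights, zero biases, and the cross-entropy loss −∑_i y′_i log(y_i)) can be sketched with NumPy. This is a forward-pass sketch only: the weights are untrained and the batch below is random toy data, not the replay dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier(fan_in, fan_out):
    # Xavier initialization, as used for the hidden-layer weights in the text
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

sizes = [210, 128, 128, 128, 128, 58]           # input, 4 hidden, output
weights = [xavier(a, b) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]       # biases initialized to zero

def forward(x):
    """Forward pass: ReLU hidden layers, softmax output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)          # ReLU
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)    # softmax

def cross_entropy(y_pred, y_true):
    # loss = -sum_i y'_i log(y_i), with y' a one-hot target vector
    return -np.sum(y_true * np.log(y_pred + 1e-12), axis=-1)

states = rng.random((100, 210))                 # a toy batch of 100 states
probs = forward(states)
targets = np.eye(58)[rng.integers(0, 58, size=100)]
loss = cross_entropy(probs, targets).mean()
```

Training would then minimize `loss` with Adam at learning rate 0.0001, per the hyperparameters stated in the text.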
the pool of text for each question was composed of 100 HTML documents retrieved from ClueWeb09. Each question-answer pair was converted to a #combine query in the Indri query language to comply with the ClueWeb09 batch query service, using simple regular expression substitution rules to remove (s/[.(){}<>:*'_]+//g) or replace (s/[,?']+/ /g) illegal characters. Any questions generating syntax errors after this step were discarded. We then extracted the plaintext from each HTML document using Jericho6. For long pseudodocuments we used the full page text, truncated to 2048 characters. For short pseudodocuments we used individual sentences as extracted by the Stanford NLP sentence segmenter, truncated to 200 characters.
To build the context documents for the trivia set, the pseudodocuments from the pool were collected into an in-memory lucene index and queried using the question text only (the answer text was not included for this step). The structure of the query was identical to the query for QUASAR-S,
6: http://jericho.htmlparser.net/docs/index.html
without the head tag filter:
SHOULD(PHRASE(question text)) SHOULD(BOOLEAN(question text)) | 1707.03904#23 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 24 | D. Applying the Network to a StarCraft Bot
Learning to predict actions in games played by humans is very similar to the act of learning to play. However, this type of imitation learning does have its limits as the agent does not learn to take optimal actions, but instead to take the most probable action (if a human was playing). However, applying the trained network as a macromanagement module of an existing bot could be an important step towards more advanced approaches.
[Figure 3 (garbled in extraction): neural network architecture with inputs (a) Own material, (b) Material under construction, (c) Progress of material under construction, (d) Opp. material, (e) Supply; an input layer with 210 units, 4 hidden layers each with 128 units (ReLU), and an output layer with 58 units (Softmax).]
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 24 | index.html
without the head tag filter:
SHOULD(PHRASE(question text)) SHOULD(BOOLEAN(question text))
The top 100N pseudodocuments were retrieved, and the top N unique pseudodocuments were added to the context document along with their lucene retrieval score. Any questions showing zero results for this query were discarded.
# 3.3 Candidate solutions
The list of candidate solutions provided with each record is guaranteed to contain the correct answer to the question. QUASAR-S used a closed vocabulary of 4874 tags as its candidate list. Since the questions in QUASAR-T are in free-response format, we constructed a separate list of candidate solutions for each question. Since most of the correct answers were noun phrases, we took each sequence of NN*-tagged tokens in the context document, as identified by the Stanford NLP Maxent POS tagger, as the candidate list for each record. If this list did not include the correct answer, it was added to the list.
# 3.4 Postprocessing | 1707.03904#24 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
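The candidate-list construction in the chunk above (each maximal run of NN*-tagged tokens in the context document becomes a candidate answer) can be sketched in a few lines. The tagged input here is a hypothetical hand-tagged example, not output of the Stanford Maxent tagger the paper actually uses.

```python
def candidate_spans(tagged_tokens):
    """Collect maximal runs of NN*-tagged tokens as candidate answers,
    mirroring the candidate-list construction described in the text."""
    candidates, run = [], []
    for token, tag in tagged_tokens:
        if tag.startswith("NN"):        # NN, NNS, NNP, NNPS ...
            run.append(token)
        elif run:                       # a non-noun tag closes the run
            candidates.append(" ".join(run))
            run = []
    if run:                             # flush a run ending at the document
        candidates.append(" ".join(run))
    return candidates

tagged = [("the", "DT"), ("Eiffel", "NNP"), ("Tower", "NNP"),
          ("is", "VBZ"), ("in", "IN"), ("Paris", "NNP")]
print(candidate_spans(tagged))  # -> ['Eiffel Tower', 'Paris']
```

Per the text, the gold answer is appended to this list whenever the extraction misses it.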
Fig. 3: Neural Network Architecture. The input layer consists of a vectorized state containing normalized values representing the number of each unit, building, technology, and upgrade in the game known to the player. Only a small subset is shown on the diagram for clarity. Three inputs also describe the player's supplies. The neural network has four hidden fully-connected layers with 128 units each using the ReLU activation function. These layers are followed by an output layer using the softmax activation function and the output of the network is the prediction of each build being produced next in the given state.
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 25 | # 3.4 Postprocessing
Once context documents had been built, we extracted the subset of questions where the answer string, excluded from the query for the two-phase search, was nonetheless present in the context document. This subset allows us to evaluate the performance of the reading system independently from the search system, while the full set allows us to evaluate the performance of QUASAR as a whole. We also split the full set into training, validation and test sets. The final size of each data subset after all discards is listed in Table 1.
| Dataset | Total (train / val / test) | Single-Token (train / val / test) | Answer in Short (train / val / test) | Answer in Long (train / val / test) |
|---|---|---|---|---|
| QUASAR-S | 31,049 / 3,174 / 3,139 | — | 30,198 / 3,084 / 3,044 | 30,417 / 3,099 / 3,064 |
| QUASAR-T | 37,012 / 3,000 / 3,000 | 18,726 / 1,507 / 1,508 | 25,465 / 2,068 / 2,043 | 26,318 / 2,129 / 2,102 |
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
In this paper, we build on the UAlbertaBot, which has a production manager that manages a queue of builds that the bots must produce in order. The production manager, which normally uses a goal-based search, is modified to use the network trained on replays instead. The production manager in UAlbertaBot is also extended to act as a web client; whenever the module is asked for the next build, the request is forwarded, along with a description of the current game state, to a web server that feeds the game state to the neural network and then returns a build prediction to the module. Since the network is only trained on Protoss versus Terran games, it is only tested in this matchup. Our approach can however easily be applied to the other matchups as well. UAlbertaBot does not handle some of the advanced units well, so these were simply excluded from the output signals of the network. The excluded units are: archons, carriers, dark archons, high templars, reavers and shuttles. After these are excluded from the output vector, values are normalized to again sum to 1. An important question is how to select one build action based on the network's outputs. Here two action selection policies are tested:
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
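The chunk above ends just before naming the two action selection policies. Purely as an illustration (an assumption here, not the paper's confirmed pair), two common ways to turn the network's softmax output into a single build are a greedy argmax and probabilistic sampling from the output distribution:

```python
import random

def greedy(probs, builds):
    # always pick the single most probable build
    return builds[max(range(len(probs)), key=probs.__getitem__)]

def probabilistic(probs, builds, rng=random):
    # sample a build in proportion to the network's output probabilities
    return rng.choices(builds, weights=probs, k=1)[0]

builds = ["probe", "zealot", "dragoon"]   # toy action set, not the full 58 outputs
probs = [0.6, 0.3, 0.1]
print(greedy(probs, builds))              # -> probe
```

A greedy policy is deterministic and exploits the most likely human choice, while sampling preserves the variety of strategies present in the replay data.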
Table 1: Dataset Statistics. Single-Token refers to the questions whose answer is a single token (for QUASAR-S all answers come from a fixed vocabulary). Answer in Short (Long) indicates whether the answer is present in the retrieved short (long) pseudo-documents.
# 4 Evaluation
# 4.1 Metrics
reduce the burden of reading on the volunteers, though we note that the long pseudo-documents have greater coverage of answers. | 1707.03904#26 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03904 | 27 | # 4 Evaluation
# 4.1 Metrics
reduce the burden of reading on the volunteers, though we note that the long pseudo-documents have greater coverage of answers.
Evaluation is straightforward on QUASAR-S since each answer comes from a fixed output vocabulary of entities, and we report the average accuracy of predictions as the evaluation metric. For QUASAR-T, the answers may be free form spans of text, and the same answer may be expressed in different terms, which makes evaluation difficult. Here we pick the two metrics from Rajpurkar et al. (2016); Joshi et al. (2017). In preprocessing the answer we remove punctuation, whitespace and definite and indefinite articles from the strings. Then, exact match measures whether the two strings, after preprocessing, are equal or not. For F1 match we first construct a bag of tokens for each string, followed by preprocessing of each token, and measure the F1 score of the overlap between the two bags of tokens. These metrics are far from perfect for QUASAR-T; for example, our human testers were penalized for entering "0" as answer instead of "zero". However, a comparison between systems may still be meaningful.
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
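The exact-match and token-F1 metrics described in the chunk above can be sketched as follows. The normalization follows the stated rules (strip punctuation, whitespace, and definite/indefinite articles); the exact punctuation set and the lowercasing step are assumptions of this sketch, as the text does not pin them down.

```python
import re
from collections import Counter

def normalize(s):
    """Lowercase, drop punctuation, articles and extra whitespace,
    per the preprocessing described in the text (details assumed)."""
    s = s.lower()
    s = re.sub(r"[^\w\s]", " ", s)         # punctuation (assumed character set)
    s = re.sub(r"\b(a|an|the)\b", " ", s)  # definite/indefinite articles
    return " ".join(s.split())

def exact_match(pred, gold):
    # strings must be identical after preprocessing
    return normalize(pred) == normalize(gold)

def f1_match(pred, gold):
    # F1 over the overlap of the two bags of preprocessed tokens
    p = Counter(normalize(pred).split())
    g = Counter(normalize(gold).split())
    overlap = sum((p & g).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(p.values())
    recall = overlap / sum(g.values())
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower!", "eiffel tower"))  # -> True
```

Note that, as the text observes, string-level metrics still miss semantic matches: `exact_match("0", "zero")` is False under any such normalization.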
The best network managed to reach a top-1 error rate of 54.6% (averaged over five runs) on the test set, which means that it is able to guess the next build around half the time, and with top-3 and top-10 error rates of 22.92% and 4.03%. For a simple comparison, a baseline approach that always predicts the next build to be a probe, which is the most common build in the game for Protoss, has a top-1 error rate of 73.9% and thus performs significantly worse. Predicting randomly with uniform probabilities achieves a top-1 error rate of 98.28%. Some initial experiments with different input layers show that we obtain worse error rates by omitting parts of the state vector described in IV-A. For example, when opponent material is excluded from the input layer the network's top-1 error increases to an average of 58.17%. Similarly, omitting the material under construction (together with the progress) increases the average top-1 error rate to 58.01%. The results are summarized in Table I with error rates averaged over five runs for each input layer design. The top-1, top-3 and top-10 error rates in the table show the networks'
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
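A top-k error rate of the kind reported in the chunk above can be computed directly from the network's output probabilities. The batch below is toy data for illustration, not the StarCraft test set.

```python
def top_k_error(prob_rows, labels, k):
    """Fraction of examples whose true label is NOT among the k
    highest-probability predictions."""
    misses = 0
    for probs, label in zip(prob_rows, labels):
        top_k = sorted(range(len(probs)), key=lambda i: -probs[i])[:k]
        misses += label not in top_k
    return misses / len(labels)

# toy batch: 3 examples over 4 classes
probs = [[0.10, 0.60, 0.20, 0.10],
         [0.40, 0.10, 0.45, 0.05],
         [0.05, 0.05, 0.10, 0.80]]
labels = [1, 2, 0]
print(top_k_error(probs, labels, k=1))  # top-1 error on the toy batch: 1/3
```

Top-1, top-3 and top-10 error as in Table I are just this function at k = 1, 3, 10, averaged over the test set.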
1707.03904 | 28 | # 4.2 Human Evaluation
We also asked the volunteers to provide annotations to categorize the type of each question they were asked, and a label for whether the question was ambiguous. For QUASAR-S the annotators were asked to mark the relation between the head entity (from whose definition the cloze was constructed) and the answer entity. For QUASAR-T the annotators were asked to mark the genre of the question (e.g., Arts & Literature)7 and the entity type of the answer (e.g., Person). When multiple annotators marked the same question differently, we took the majority vote when possible and discarded ties. In total we collected 226 relation annotations for 136 questions in QUASAR-S, out of which 27 were discarded due to conflicting ties, leaving a total of 109 annotated questions. For QUASAR-T we collected annotations for a total of 144 questions, out of which 12 we marked as ambiguous. In the remaining 132, a total of 214 genres were annotated (a question could be annotated with multiple genres), while 10 questions had conflicting entity-type annotations which we discarded, leaving 122 total entity-type annotations. Figure 3 shows the distribution of these annotations.
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
with error rates averaged over five runs for each input layer design. The top-1, top-3 and top-10 error rates in the table show the networks' ability to predict using one, three and ten guesses respectively, determined by their output. All networks were trained for 50 epochs as the error rates stagnated prior to this point. Overfitting is minimal with a difference less than 1% between the top-1 training and test errors.
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 29 | To put the difficulty of the introduced datasets into perspective, we evaluated human performance on answering the questions. For each dataset, we recruited one domain expert (a developer with several years of programming experience for QUASAR-S, and an avid trivia enthusiast for QUASAR-T) and 1-3 non-experts. Each volunteer was presented with randomly selected questions from the development set and asked to answer them via an online app. The experts were evaluated in a "closed-book" setting, i.e. they did not have access to any external resources. The non-experts were evaluated in an "open-book" setting, where they had access to a search engine over the short pseudo-documents extracted for each dataset (as described in Section 3.2). We decided to use short pseudo-documents for this exercise to
# 4.3 Baseline Systems
We evaluate several baselines on QUASAR, ranging from simple heuristics to deep neural networks. Some predict a single token / entity as the answer, while others predict a span of tokens. | 1707.03904#29 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 30 | Probabilistic action selection: Builds are selected with the probabilities of the softmax output units. In the example in Figure 3, a probe will be selected with a 5% probability and a zealot with 26% probability. With a low probability, this approach will also select some of the rare builds, and can express a wide range of strategies. Another interesting feature is that it is stochastic and therefore harder for the opponent to predict.
To gain further insights into the policy learned by the network, the best network's prediction of building a new base given a varying number of probes is plotted in Figure 4. States are taken from the test set in which the player has only one base. The network successfully learned that humans usually create a base expansion when they have around 20-30 probes. | 1707.03743#30 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
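The probabilistic vs. greedy action selection described in the StarCraft chunk above (sampling builds from the softmax output rather than always taking the argmax) can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the build names and probabilities are made up, loosely echoing the 5% probe / 26% zealot example.

```python
import random

def select_build(probs, greedy=False):
    """Pick the next build from softmax output probabilities.

    greedy=True always takes the most probable build; otherwise the
    build is sampled, so rare builds are occasionally chosen too.
    """
    if greedy:
        return max(probs, key=probs.get)
    builds = list(probs)
    weights = [probs[b] for b in builds]
    return random.choices(builds, weights=weights, k=1)[0]

# Illustrative (made-up) softmax output over four builds:
softmax_out = {"probe": 0.05, "zealot": 0.26, "dragoon": 0.60, "pylon": 0.09}
assert select_build(softmax_out, greedy=True) == "dragoon"
assert select_build(softmax_out) in softmax_out
```

The sampled variant is what makes the learned policy stochastic and able to express rare strategies such as upgrades, which the greedy variant never selects.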
1707.03904 | 30 | # 4.3.1 Heuristic Models
Single-Token: MF-i (Maximum Frequency) counts the number of occurrences of each candidate answer in the retrieved context and returns the one with maximum frequency. MF-e is the same as MF-i except it excludes the candidates present in the query. WD (Word Distance) mea-
7 Multiple genres per question were allowed.
(a) QUASAR-S relations (b) QUASAR-T genres (c) QUASAR-T answer categories
Figure 3: Distribution of manual annotations for QUASAR. Description of the QUASAR-S annotations is in Appendix A.
sures the sum of distances from a candidate to other non-stopword tokens in the passage which are also present in the query. For the cloze-style QUASAR-S the distances are measured by first aligning the query placeholder to the candidate in the passage, and then measuring the offsets between other tokens in the query and their mentions in the passage. The maximum distance for any token is capped at a specified threshold, which is tuned on the validation set. | 1707.03904#30 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
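The MF-i / MF-e heuristics described in the Quasar chunk above can be sketched roughly as follows. The tokenization and tie-breaking details are assumptions for illustration, not the paper's exact implementation.

```python
from collections import Counter

def mf_baseline(candidates, context_tokens, query_tokens=None):
    """Maximum Frequency: return the candidate occurring most often in
    the retrieved context. Passing query_tokens excludes candidates
    that appear in the query (the MF-e variant)."""
    counts = Counter(t for t in context_tokens if t in set(candidates))
    if query_tokens is not None:  # MF-e: drop candidates present in the query
        for t in query_tokens:
            counts.pop(t, None)
    return counts.most_common(1)[0][0] if counts else None

# Made-up example context and candidate entities:
context = "the jvm runs the gc and the gc frees memory inside the jvm and the jvm".split()
cands = ["jvm", "gc", "memory"]
assert mf_baseline(cands, context) == "jvm"          # MF-i: most frequent
assert mf_baseline(cands, context, ["jvm"]) == "gc"  # MF-e: "jvm" excluded
```

Despite their simplicity, such frequency heuristics give a useful lower bound on how much of the task is solvable without reading comprehension.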
1707.03743 | 31 | Top-1 error / Top-3 error / Top-10 error
54.60% ± 0.12% / 22.92% ± 0.09% / 4.03% ± 0.14%
58.17% ± 0.16% / 24.92% ± 0.10% / 4.23% ± 0.04%
58.01% ± 0.42% / 24.95% ± 0.31% / 4.51% ± 0.16%
60.81% ± 0.09% / 26.64% ± 0.11% / 4.65% ± 0.21%
73.90% ± 0.00% / 73.90% ± 0.00% / 73.90% ± 0.00% (Probe baseline)
98.28% ± 0.04% / 94.87% ± 0.05% / 82.73% ± 0.08% (Random baseline)
TABLE I: The top-1, top-3 and top-10 error rates of trained networks (averaged over five runs) with different combinations of inputs. (a) is the player's own material, (b) is material under construction, (c) is the progress of material under construction, (d) is the opponent's material and (e) is supply. The input layer is visualized in Figure 3. Probe is a baseline predictor that always predicts the next build to be a probe and Random predicts randomly with uniform probabilities. The best results (in bold) are achieved by using all the input features. | 1707.03743#31 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
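The top-1 / top-3 / top-10 error rates reported in Table I above can be computed from the network's ranked outputs. A minimal sketch, with made-up scores and labels for illustration:

```python
def top_k_error(predictions, labels, k):
    """Fraction of examples whose true label is NOT among the k
    highest-scoring outputs. predictions: list of {action: score} dicts."""
    wrong = 0
    for scores, label in zip(predictions, labels):
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        if label not in top_k:
            wrong += 1
    return wrong / len(labels)

preds = [{"probe": 0.7, "zealot": 0.2, "pylon": 0.1},
         {"probe": 0.1, "zealot": 0.3, "pylon": 0.6}]
labels = ["zealot", "zealot"]
assert top_k_error(preds, labels, 1) == 1.0  # neither top-1 guess is "zealot"
assert top_k_error(preds, labels, 2) == 0.0  # "zealot" is in both top-2 sets
```

A baseline that always predicts one action (like Probe in Table I) has the same error at every k, since its ranked list effectively contains a single guess.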
1707.03904 | 31 | Multi-Token: For QUASAR-T we also test the Sliding Window (SW) and Sliding Window + Distance (SW+D) baselines proposed in (Richardson et al., 2013). The scores were computed for the list of candidate solutions described in Section 3.2.
# 4.3.3 Reading Comprehension Models
Reading comprehension models are trained to extract the answer from the given passage. We test two recent architectures on QUASAR using publicly available code from the authors.8,9
GA (Single-Token): The GA Reader (Dhingra et al., 2017) is a multi-layer neural network which extracts a single token from the passage to answer a given query. At the time of writing it had state-of-the-art performance on several cloze-style datasets for QA. For QUASAR-S we train and test GA on all instances for which the correct answer is found within the retrieved context. For QUASAR-T we train and test GA on all instances where the answer is in the context and is a single token.
# 4.3.2 Language Models | 1707.03904#31 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
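The Sliding Window baseline mentioned in the Quasar chunk above scores each candidate by the best bag-of-words overlap between a window of passage tokens and the combined question-plus-candidate token set. A rough sketch with made-up data; Richardson et al. (2013) additionally weight tokens by inverse frequency, which is omitted here:

```python
def sliding_window_score(passage, query, candidate):
    """Best overlap between any passage window and the query+candidate set."""
    target = set(query) | set(candidate)
    size = len(target)
    best = 0
    for start in range(len(passage)):
        window = passage[start:start + size]
        best = max(best, sum(1 for tok in window if tok in target))
    return best

passage = "the capital of france is paris a large city".split()
query = "what is the capital of france".split()
assert sliding_window_score(passage, query, ["paris"]) > \
       sliding_window_score(passage, query, ["london"])
```

The SW+D variant adds the word-distance term described for the WD baseline, rewarding candidates that sit close to the query tokens in the passage.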
1707.03743 | 32 | [Figure: Nexus prediction (y-axis, 0.0-0.8) vs. # of probes (x-axis, 0-50)]
Fig. 4: The prediction of the next build being a Nexus (a base expansion) predicted by the trained neural network. Each data point corresponds to one prediction from one state. These states have only one Nexus and are taken from the test set. The small spike around 11 and 12 probes shows that the network predicts a fast expansion build order if the Protoss player has not built any gateways at this point.
# B. Playing StarCraft
UAlbertaBot is tested playing the Protoss race against the built-in Terran bot, with the trained network as production manager. Both the greedy and probabilistic action selection strategies are tested in 100 games on the two-player map Astral Balance. The results, summarized in Table II, demonstrate that the probabilistic strategy is clearly superior, winning 68% of all games. This is significant at p ≤ 0.05 according to the two-tailed Wilcoxon signed-rank test. The greedy approach, which always selects the action with the highest probability, does not perform as well. While the probabilistic strategy is promising, it is important to note that an UAlbertaBot playing as Protoss and following a powerful hand-designed strategy (dragoon rush) wins 100% of all games against the built-in Terran bot. | 1707.03743#32 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 32 | # 4.3.2 Language Models
For QUASAR-S, since the answers come from a fixed vocabulary of entities, we test language model baselines which predict the most likely entity to appear in a given context. We train three n-gram baselines using the SRILM toolkit (Stolcke et al., 2002) for n = 3, 4, 5 on the entire corpus of all Stack Overflow posts. The output predictions are restricted to the output vocabulary of entities.
BiDAF (Multi-Token): The BiDAF model (Seo et al., 2017) is also a multi-layer neural network which predicts a span of text from the passage as the answer to a given query. At the time of writing it had state-of-the-art performance among published models on the Squad dataset. For QUASAR-T we train and test BiDAF on all instances where the answer is in the retrieved context.
# 4.4 Results | 1707.03904#32 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
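The n-gram baselines in the Quasar chunk above pick, for the cloze position, the entity the language model scores highest given the preceding context, with predictions restricted to the entity vocabulary. A toy sketch with hand-built trigram counts; the tokens and entities are made up, and SRILM's smoothing is omitted:

```python
from collections import Counter, defaultdict

class TrigramEntityLM:
    """Count-based trigram LM whose predictions are restricted to entities."""

    def __init__(self, entity_vocab):
        self.entity_vocab = set(entity_vocab)
        self.counts = defaultdict(Counter)  # (w1, w2) -> next-word counts

    def train(self, tokens):
        for w1, w2, w3 in zip(tokens, tokens[1:], tokens[2:]):
            self.counts[(w1, w2)][w3] += 1

    def predict(self, left_context):
        """Most frequent *entity* following the last two context tokens."""
        hist = tuple(left_context[-2:])
        cands = {w: c for w, c in self.counts[hist].items()
                 if w in self.entity_vocab}
        return max(cands, key=cands.get) if cands else None

lm = TrigramEntityLM(["java", "python"])
lm.train("code runs on the java virtual machine so it runs on the java vm".split())
assert lm.predict("it runs on the".split()) == "java"
```

The BiRNN baseline improves on this by conditioning on both the left and right context of the cloze rather than only the preceding tokens.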
1707.03743 | 33 | To further understand the difference between the two approaches, the builds selected by each selection strategy are analyzed. A subset of these builds is shown in Table III. The probabilistic strategy clearly expresses a more varied strategy than the greedy one. Protoss players often prefer a good mix of zealots and dragoons as it creates a good
Action selection (win % vs. built-in Terran):
Probabilistic: 68%
Probabilistic (blind): 59%
Greedy: 53%
Random: 0%
UAlbertaBot (dragoon rush): 100%
TABLE II: The win percentage of UAlbertaBot with the trained neural network as a production manager against the built-in Terran bot. The probabilistic strategy selects actions with probabilities equal to the outputs of the network, while the greedy network always selects the action with the highest output, and random always picks a random action. The blind probabilistic network does not receive information about the opponent's material (inputs are set to 0.0). UAlbertaBot playing as Protoss with the scripted dragoon rush strategy wins 100% of all games against the built-in Terran bot. | 1707.03743#33 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 33 | # 4.4 Results
We also train a bidirectional Recurrent Neural Network (RNN) language model (based on GRU units). This model encodes both the left and right context of an entity using forward and backward GRUs, and then concatenates the final states from both to predict the entity through a softmax layer. Training is performed on the entire corpus of Stack Overflow posts, with the loss computed only over mentions of entities in the output vocabulary. This approach benefits from looking at both sides of the cloze in a query to predict the entity, as compared to the single-sided n-gram baselines.
Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy. The performance of the baseline among these instances is referred to as the Reading Accuracy, and the overall performance (which is a product of the two) is referred to as the Overall Accuracy. In Figure 4 we compare how these three vary as the number of context documents is var-
8 https://github.com/bdhingra/ga-reader
9 https://github.com/allenai/bi-att-flow | 1707.03904#33 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
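The decomposition in the Quasar chunk above, overall accuracy as the product of Search Accuracy and Reading Accuracy, can be sketched as follows; the results list is made up for illustration:

```python
def accuracy_breakdown(results):
    """results: list of (answer_in_context, answered_correctly) pairs,
    where a correct answer is only possible when it is in the context.
    Returns (search_acc, read_acc, overall_acc)."""
    n = len(results)
    in_ctx = [r for r in results if r[0]]
    search_acc = len(in_ctx) / n
    read_acc = sum(1 for r in in_ctx if r[1]) / len(in_ctx)
    overall = sum(1 for r in results if r[1]) / n
    # Overall accuracy factorizes into search * reading accuracy.
    assert abs(overall - search_acc * read_acc) < 1e-9
    return search_acc, read_acc, overall

# 8 of 10 queries have the answer retrieved; the reader gets 6 of those right.
res = [(True, True)] * 6 + [(True, False)] * 2 + [(False, False)] * 2
assert accuracy_breakdown(res) == (0.8, 0.75, 0.6)
```

This factorization is why retrieving more documents alone does not help: it raises the first factor while lowering the second.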
1707.03743 | 34 | dynamic army, and the greedy strategy clearly fails to achieve this. Additionally, with the greedy approach the bot never produces any upgrades, because they are too rare in a game to ever become the most probable build. The blind probabilistic approach (which ignores knowledge about the opponent by setting these inputs to zero) reached a lower win rate of just 59%, further corroborating that the opponent's units and buildings are important for macromanagement decision making. We also tested the probabilistic approach against UAlbertaBot with the original production manager configured to follow a fixed marine rush strategy, which was the best opening strategy for UAlbertaBot when playing Terran. Our approach won 45% of 100 games, demonstrating that it can play competitively against this aggressive rush strategy, learning from human replays alone. | 1707.03743#34 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 34 | [Figure 4 panels: GA on QUASAR-S (short), GA on QUASAR-S (long), GA on QUASAR-T (short), GA on QUASAR-T (long); accuracy vs. # sentences in context]
Figure 4: Variation of Search, Read and Overall accuracies as the number of context documents is varied.
ied. Naturally, the search accuracy increases as the context size increases; however, at the same time reading performance decreases, since the task of extracting the answer becomes harder for longer documents. Hence, simply retrieving more documents is not sufficient: finding the few most relevant ones will allow the reader to work best. | 1707.03904#34 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 35 | Figure 5 visualizes the learned opening strategy with greedy action selection. While the probabilistic strategy shows a better performance in general (Table II), the strategy performed by the greedy action selection is easier to analyze because it is deterministic and has a one-sided unit production. The learned build order shown in Figure 5 is a One Gate Cybernetics Core opening with no zealots before the cybernetics core. This opening was performed regularly against the built-in Terran bot, which does not vary much in its strategy. The opening is followed by a heavy production of dragoons and a few observers. A base expansion usually follows the first successful confrontation. Some losses of the greedy approach were caused by UAlbertaBot not being able to produce more buildings, possibly because there was no more space left in the main base. A few losses were also directly caused by some weird behavior in the late game, where the bot (ordered by the neural network) produces around 20 pylons directly after each other. Generally, the neural network expresses a behavior that often prolongs the game, as it prefers expanding bases when leading the game. This is something human players also tend to do, but since UAlbertaBot does not handle the late game very well, it is not a good strategy for this particular bot. | 1707.03743#35 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
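The top-1 and top-3 error rates quoted in the paper's abstract (54.6% and 22.9%) measure how often the true next build is missing from the network's k most probable predictions. A minimal sketch of that metric, with hypothetical build names and illustrative scores (not the network's actual outputs):

```python
def top_k_error(predictions, labels, k):
    """Fraction of examples whose true label is not among the
    k highest-scoring predicted actions."""
    misses = sum(
        1 for scores, label in zip(predictions, labels)
        if label not in sorted(scores, key=scores.get, reverse=True)[:k]
    )
    return misses / len(labels)

# Two toy prediction distributions over hypothetical build actions.
preds = [{"probe": 0.6, "pylon": 0.3, "zealot": 0.1},
         {"probe": 0.2, "pylon": 0.5, "zealot": 0.3}]
labels = ["pylon", "pylon"]

print(top_k_error(preds, labels, 1))  # 0.5: first example's top-1 is "probe"
print(top_k_error(preds, labels, 2))  # 0.0: "pylon" is in both top-2 lists
```

By construction top-k error is monotonically non-increasing in k, which is why the reported top-3 error is much lower than top-1.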
1707.03904 | 35 | In Tables 2 and 3 we compare all baselines when the context size is tuned to maximize the overall accuracy on the validation set¹⁰. For QUASAR-S the best performing baseline is the BiRNN language model, which achieves 33.6% accuracy. The GA model achieves 48.3% accuracy on the set of instances for which the answer is in context, however, a search accuracy of only 65% means its overall performance is lower. This can improve with improved retrieval. For QUASAR-T, both the neural models significantly outperform the heuristic models, with BiDAF getting the highest F1 score of 28.5%.
# 5 Conclusion | 1707.03904#35 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
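The interaction described in this chunk (the GA reader's 48.3% in-context accuracy being capped by a 65% search accuracy) reduces to a simple product, under the assumption that the reader can only answer correctly when retrieval surfaces the answer. A sketch; the helper name is ours, not from the paper:

```python
def overall_accuracy(search_accuracy, reading_accuracy):
    """Estimated end-to-end accuracy when the reader can only answer
    correctly on instances where retrieval found the answer in context."""
    return search_accuracy * reading_accuracy

# GA reader: 48.3% accurate when the answer is in context,
# but retrieval surfaces the answer only 65% of the time.
print(round(overall_accuracy(0.65, 0.483), 3))  # 0.314, below BiRNN's 0.336
```

This is why a weaker reader with better retrieval (the BiRNN at 33.6%) can still win overall, and why the chunk notes the GA result "can improve with improved retrieval."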
1707.03743 | 36 | The behavior of the probabilistic strategy is more difficult to analyze, as it is stochastic. It usually follows the same opening as the greedy approach, with small variations, but then later in the game, it begins to mix its unit production between zealots, dragoons and dark templars. The timings of base expansions are very different from game to game as well as the use of upgrades.
[Figure 5 data, garbled in extraction: per-build probability values (including Assimilator and Cybernetics Core builds) plotted along the frame timescale, roughly frames 1322 to 2037] | 1707.03743#36 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 36 | # 5 Conclusion
We have presented the QUASAR datasets for promoting research into two related tasks for QA – searching a large corpus of text for relevant passages, and reading the passages to extract answers. We have also described baseline systems for the two tasks which perform reasonably but lag behind human performance. While the searching performance improves as we retrieve more context, the reading performance typically goes down. Hence, future work, in addition to improving these components individually, should also focus on joint approaches to optimizing the two on end-task performance. The datasets, including the documents retrieved by our system and the human annotations, are available at https://github.com/bdhingra/quasar.
# Acknowledgments | 1707.03904#36 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 37 | Fig. 5: The opening build order learned by the neural network when playing against the built-in Terran bot (the build order also depends on the enemy units observed). The number next to each build icon represents the probability of the build being produced next, and points on the timescale indicate when the bot requests the network for the next build. In this example the network follows the greedy strategy, always picking the build with the highest probability.
Unit type          Probabilistic  Greedy
Probe                      50.84   70.12
Zealot                     14.62    1.46
Dragoon                    17.3    32.75
Dark templar                1.00    0.00
Observer                    3.56    2.40
Scout                       0.11    0.00
Corsair                     0.13    0.00
Leg enhancements            0.32    0.00
Ground weapons              0.03    0.00
Ground armor                0.07    0.00
Plasma shields              0.01    0.00
TABLE III: The average number of different unit types produced by the two different action selection strategies against the built-in Terran bot. The results show that the greedy strategy executes a very one-sided unit production while the probabilistic strategy is more varied.
# VI. DISCUSSION | 1707.03743#37 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
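The two action-selection strategies compared in Table III differ only in how a build is drawn from the network's output distribution. A minimal sketch, assuming a softmax-style probability dict (the values here are illustrative, not the network's actual outputs):

```python
import random

def greedy_action(probs):
    """Deterministic: always take the most probable build."""
    return max(probs, key=probs.get)

def probabilistic_action(probs, rng=random):
    """Stochastic: sample a build in proportion to its probability."""
    builds, weights = zip(*probs.items())
    return rng.choices(builds, weights=weights, k=1)[0]

build_probs = {"probe": 0.50, "dragoon": 0.30, "zealot": 0.15, "pylon": 0.05}
print(greedy_action(build_probs))         # always "probe"
print(probabilistic_action(build_probs))  # varies from call to call
```

This mirrors the pattern in Table III: greedy selection concentrates production in a few high-probability units (probes, dragoons), while sampling spreads it across the whole distribution.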
1707.03904 | 37 | # Acknowledgments
The best performing baselines, however, lag behind human performance by 16.4% and 32.1% for QUASAR-S and QUASAR-T respectively, indicating the strong potential for improvement. Interestingly, for human performance we observe that non-experts are able to match or beat the performance of experts when given access to the background corpus for searching the answers. We also emphasize that the human performance is limited by either the knowledge of the experts, or the usefulness of the search engine for non-experts; it should not be viewed as an upper bound for automatic systems which can potentially use the entire background corpus. Further analysis of the human and baseline performance in each category of annotated questions is provided in Appendix B.
This work was funded by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google.
# References
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. ACM, pages 1247–1250.
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 38 | # VI. DISCUSSION
This paper demonstrated that macromanagement tasks can be learned from replays using deep learning, and that the learned policy can be used to outperform the built-in bot in StarCraft. In this section, we discuss the shortcomings of this approach and give suggestions for future research that could lead to strong StarCraft bots by extending this line of work. The built-in StarCraft bot is usually seen as a weak player compared to humans. It gives a sufficient amount of competition for new players but only until they begin to learn established opening strategies. A reasonable expectation would be that UAlbertaBot, using our trained network, would defeat the built-in bot almost every time. By analyzing the games played, it becomes apparent that the performance of UAlbertaBot decreases in the late game. It simply begins to make mistakes as it takes weird micromanagement decisions when it controls several bases and groups of units. The strategy learned by our network further enforces this faulty behavior, as it prefers base expansions and heavy unit production (very similar to skilled human players) over early and risky aggressions. The trained network was also observed to make a few faulty decisions, but rarely and only in the very late game. The reason for these faults might be because some outputs are excluded, since UAlbertaBot does not handle these builds well. | 1707.03743#38 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 38 | Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. 2009. Clueweb09 data set.
Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. ACL.
¹⁰The Search Accuracy for different baselines may be different despite the same number of retrieved context documents, due to different preprocessing requirements. For example, the SW baselines allow multiple tokens as answer, whereas WD and MF baselines do not.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Association for Computational Linguistics (ACL).
Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. | 1707.03904#38 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |